My Ideal Processor, Part Foo+4

Posted by cafebabe on Sunday April 22 2018, @08:35PM (#3174)
4 Comments
Software

This is part four of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part four covers the compiler.

So far, I've given an outline for a minimal trustworthy micro-coded mini-computer where every component can be sourced, tested and is otherwise open to inspection. I've also given an outline for a card bus system which allows cards made from stripboard and chips which can be manually soldered. Again, this is open to inspection. The card system also provides a bridge to contemporary networking and storage. This requires some cheating with micro-controllers to keep the part count down to a reasonable level. Use of a micro-controller is obviously not trustworthy and therefore encryption and striping across redundant channels are required to ensure that no untrusted component gains a sufficient stranglehold on any data.

However, all of this is wasted if a trustworthy computer cannot self-host an operating system and compiler. We have the luxury of starting from an untrusted computer environment and therefore we can use any number of facilities to obtain a beach-head into a trustworthy environment. Conceptually, this requires one or more paper tapes which are inspected before transfer into the trustworthy environment. In practice, it will require a uni-directional serial link and a grudgingly trusted EPROM programmer. I argue that it is difficult (but not impossible) to compromise the EPROM programmer on the basis that all EPROM programmers may be sourced prior to micro-code or machine code patterns being finalized. In the absence of a network connection to a (very determined) attacker, malicious corruption is probably the best attack.

A quick recap on the current state of computing. We got away from boot-strapping every machine from its toggle-switches and sighed with relief. However, in the years that followed, computer security has become a quagmire. To get out of this problem, I propose a fairly drastic, unconventional approach. I hate to be a green-field developer but when computer security becomes an insurance category, that's because the details of systems - systems that people created - have become unknowable. Specifically, I propose writing a C compiler in Lisp and then using the C compiler to write an operating system kernel. At this point, the typical approach is to expand until it is possible to self-host gcc or, more recently, clang. However, before we reach this point, we rapidly encounter a Turing tar-pit. This is where we lose the provenance of each file and this is where the security quagmire begins. Specifically, in the untrusted domain, it has become commonplace for binaries to depend upon more than 100 files from any of 19000 packages. These packages are typically downloaded and deployed without inspection. Furthermore, coupling between packages has become so tight that it is only possible to compile any piece if the remainder of a system is invariant. There are two problems with this arrangement and neither is solved with repeatable builds. The first problem is that we do not have Christmas light divisibility. We cannot sub-divide a system because the coupling is too tight. In BSD systems, we have:-

# make kernel
# make world

This provides separation between kernel-space and user-space. We can build a new kernel with the existing user-space software. Using the new kernel, we can build all of the user-space software. But don't ask how that process works because it doesn't follow the layered approach recommended by theory. This world-readable, compile-one-piece-at-a-time approach is also highly vulnerable to privilege escalation. How many routes are there for malicious code to obtain global influence after one, two, three or more global re-compilations? Unknown, but there are probably very many. Do all of these paths obtain more scrutiny than OpenSSL? Definitely not. This cannot make a trustworthy system. The system is open but the dependencies are numerous. Therefore, it is not possible to inspect a system in a timely manner.

I wish to change this sloppy practice. I propose writing a C compiler in Lisp and then using the C compiler to write an operating system kernel. I also propose writing the C compiler in Lisp and writing the Lisp interpreter in C. The current practice of writing the C compiler in C (or, more recently, writing the C++ compiler in C++) allows quines to be trivially propagated in the compiler. This can be overcome with the use of three compilers. However, for this, you will have to exclude all commercial compilers for which you do not have the source code. Likewise for trusting any third party who has access to the source of three compilers. And that's the situation. I wish you good luck finding three C or C++ compilers which can compile each other.

I wish to raise the difficulty of writing a quine to that of recognizing when the (compiled) Lisp interpreter is running the (interpreted) C compiler. When this occurs, the quine must modify the C compiler parse tree. The task of writing a quine remains possible but it is substantially complicated because each compilation phase is separated by interpreted code.

Returning to the kernel, it is possible to compile a kernel and supporting programs with very few dependencies. For example, a hypothetical POSIX login.c (a known target of attack) would depend upon the C compiler written in Lisp, system headers, source input and the drivers and utilities required to make a kernel functional. The output of each compilation will be binaries of historical size. It is hoped that each binary can be inspected manually, especially if correctness is placed ahead of speed.

PerlPowerTools and similar efforts comprehensively show that a subset of utilities may skip the compilation process and be implemented with an interpreter. In practice, more than 2/3 of utilities may be interpreted, although this fraction may be significantly reduced if launch delay or historical compatibility is an issue. (Much of the historical compatibility arises from pointless tests inside GNU build scripts and the assumed functionality thereof.)

The obvious question is why not use gcc or clang? I'll mention gcc first. Ignoring the extended mutual loop of dependencies across multiple software licences, gcc is a really good example of Greenspun's Tenth Rule ("Any sufficiently complex program contains an ad hoc implementation of Common Lisp" to which a wag added "including Common Lisp.") On a single core Raspberry Pi, each of the four stages of gcc compilation requires more than 10 hours and ideally requires more than 700MB RAM. On a homebrew mini-computer, this may require more than 4000 hours. For repeatable builds, each compilation stage would require zero bit errors over a period of six weeks. The worrying part is that gcc depends heavily upon GIMPLE which is a DSL [Domain Specific Language] with Lisp syntax. This is for parse tree manipulation. Specifically, architecture independent optimizations followed by architecture dependent optimizations. The verbosity of GIMPLE explains why LTO [Link-Time Optimisation] offers GZip compression.

Clang, by contrast, dumps Lisp syntax in favor of C++ templates. It also trades memory for speed. With a suitable infrastructure of processor caches, it takes about half of the duration. However, with the default compiler flags, gcc compiling clang exceeds the 31 bit writable address-space of a Raspberry Pi. On a homebrew mini-computer, it would take longer to self-host clang than gcc.

Obviously, a simpler compiler is required. Access to the source of such a compiler is also required. Where are they? Most compilers are proprietary or extensions (branded or unbranded) of gcc and clang. Even if we go back to an ancient version of gcc, we still have the notorious mutual dependency with gmake. That takes us back to the security quagmire.

It is for these reasons that I suggest a mutual dependency of compiler and interpreter. The (interpreted) compiler has similar functionality to gcc but may be written in a much more compact and expressive form. At this stage, we would be writing for correctness rather than speed. This is on the basis that slow runs on amateur hardware will be lucky to complete. It would be counter-productive to get tricksy when lower layers are in question. On this basis, the size of source code should be minimized without compromising legibility. It should be as short as possible but no shorter. If we do not have the processing power to implement an optimizing compiler, correctness and compactness become the only choices.

The next consideration is implementation conformance of compiler and interpreter. The laziest implementation of a C compiler may have very low conformance with other implementations. However, in the long-term, low conformance is a false economy. It is undesirable to have a language dialect which is incompatible with standard tools. For example, standard lint utilities catch trivial errors. However, if the language dialect has unusual constructs then it is more difficult to avoid predictable blunders. Increasingly, compilers have integrated lint functionality. However, we don't have that luxury. Regardless, it may be desirable to perform linting on untrusted computers but only perform compilation on trustworthy hardware.

It does not help that C is mostly defined by implementation rather than a formal definition. It was not always like this. Unfortunately, most of the drift occurred when gcc became almost a strict superset of proprietary Unix compilers from the 1990s. That includes the horrible compilers, often sold as an optional extra, for HP-UX, SunOS and Irix. A formal definition of C goes back to Kernighan & Ritchie's book: The C Programming Language from the 1970s. More recent definitions include ISO C 1999. A further complication is that embedded programmers write an eclectic mix of C where features prior to the 1990 standard are mixed with features after the 1999 standard. This created a feedback loop which encouraged dependence on gcc.

More recently, there are efforts to nudge C toward Algol. This is achieved by restricting the grammar. My preference is towards Algol derivatives, such as Pascal or Jovial. Both have array bound checks at run-time. This alone eliminates a common cause of critical bugs: buffer overflow. Bruce Schneier agrees. It is better to have 10% of the processing power of a trustworthy computer rather than 100% of an untrusted computer. One method to implement this is to keep assertions (such as bound checks) in production code. One method to guarantee such safety checks is to make them part of the language specification. Hence, my inclination towards Pascal and Jovial. These choices are not arbitrary. The initial target hardware (a trustworthy micro-coded mini-computer) has a passing resemblance to the Apollo Guidance Computer. One of its successors for aerospace navigation, the obsolete MIL-STD-1750A, was typically programmed in Jovial. It remains easy to write Jovial because Algol, Pascal and Jovial are typically converted to C and then compiled as C. However, in the general case, it is difficult to convert C to an Algol derivative due to array bound checks and other differences. It would be possible to implement a byte-code interpreter in Algol which circumvents bound checks of the native language. However, this would incur a significant speed penalty. Although C derivatives and Algol derivatives are broadly similar, C is the lowest common denominator. Regardless, it is possible to write good code in Jovial and translate it to C with safeguards intact. This may be compiled using the same process as legacy code (which does not have the same protections).
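
To illustrate what "safeguards intact" means after translation to C, here is a minimal sketch of my own (not part of the proposal) in which a bound check survives as an ordinary run-time test rather than an assert() that disappears in production builds. The buffer name and size are arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_LEN 64  /* hypothetical fixed bound carried over from the source language */

    static uint8_t buf[BUF_LEN];

    /* Bound-checked store: the check is an ordinary run-time test, not an
     * assert() that vanishes when NDEBUG is defined, so it stays in
     * production code in the way an Algol-family compiler would emit it. */
    static void checked_store(size_t index, uint8_t value)
    {
        if (index >= BUF_LEN) {
            fprintf(stderr, "bound check failed: index %zu >= %d\n", index, BUF_LEN);
            abort();
        }
        buf[index] = value;
    }

    int main(void)
    {
        checked_store(10, 0xAA);   /* within bounds */
        checked_store(64, 0x55);   /* trapped here: a silent buffer overflow in raw C */
        return 0;
    }

The cost is one comparison per access, which is exactly the trade described above: a fraction of the speed in exchange for a whole class of bugs removed.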

In the general case, a C compiler is sufficiently flexible to provide the back-end for C++, Objective C, Fortran (which has extensive libraries), Pascal, Jovial and other compiled languages. Indeed, the use of a common compiler allows functions written in these languages to be statically linked with relatively little difficulty. We also have the option of compiling raw C (with no safeguards), MISRA C or similar (with retrospective safeguards) or languages which always had safeguards. Unfortunately, people fail to understand these options. This may accelerate the move away from "difficult" "low-level" languages, such as C. However, rather than moving to Algol languages or languages which can be statically linked with C, programmers now skip over interpreted languages (or JIT compiled languages) which are written in C derivatives (Perl, PHP, Python, Ruby, Java derivatives, Haskell, JavaScript, Lua) and settle upon languages with multiple modes and back-ends (Rust, Swift, Dart, Meteor, Flutter) which offer mutually exclusive benefits and make solved problems into lucrative busywork.

My ultimate objection to a language which supposedly spans everything from below C to everything above Ruby is that it cannot be achieved with one language. A good interpreter has eval and a good compiler doesn't. These are mutually exclusive goals. This can only be fudged by giving the same name to multiple things. There are benefits for a compiler and interpreter to have the same grammar in the majority of cases. But that doesn't make a compiler and an interpreter the "same" language. If you can see past the grammar, it would be more accurate to describe C and Pascal as the "same" language.

Actually, I've comprehensively convinced myself that it would be useful to have a Lisp interpreter with a default dialect similar to C. The syntax and grammar of the compiler are fairly fixed but the syntax of the interpreter is a completely free variable. There may be cases where the overlap of grammar may be a hindrance. For example, when attempting to debug eval in the interpreter. However, this is greatly outweighed by the benefits.

My Ideal Processor, Part Foo+3

Posted by cafebabe on Sunday April 22 2018, @08:32PM (#3173)
0 Comments
Hardware

This is part three of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part three covers networking.

I propose a fairly minimal trustworthy computer implementation which consists of a micro-coded mini-computer using one 8 bit ROM. It has performance which makes a Commodore 64 look racy. However, the redeeming feature is that it meshes well with a card bus system which scales from stripboard and DIP chips to PCI-X performance.

It would be extremely useful and practical if such a computer system could have a trustworthy network interface. Unfortunately, I'm not sure this is possible. Implementing a trustworthy UART with logic chips is easy if data integrity is secondary. Implementing a good, trustworthy UART is difficult. Implementing a cell network link layer is infeasible and implementing Ethernet requires VLSI and/or a dedicated processor. Unfortunately, VLSI and dedicated processors are not trustworthy.

Fortunately, it is possible to side-step this problem in a manner which is similar to trusting compilers. In the case of networking, data may be split and sent over two or more separate wired networks. This prevents untrusted network interfaces from ever obtaining sufficient information for a MITM attack. Untrusted network interfaces may maliciously drop packets or delay packets but they cannot inject information into communication. In the trivial case, this covers link layer security over a LAN. This can be extended to higher layers of communication and/or WAN communication.
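
As a rough sketch of the splitting idea (my illustration, not a finished protocol), one share can be a random pad and the other the data XORed with that pad, so neither untrusted interface ever carries usable plaintext. The use of /dev/urandom and the buffer names are merely illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* Split a message into two shares so that neither network interface ever
     * sees usable plaintext: share_a is a random pad, share_b is the message
     * XORed with that pad.  Recombining is a second XOR at the far end. */
    static int split_message(const unsigned char *msg, size_t len,
                             unsigned char *share_a, unsigned char *share_b)
    {
        FILE *rng = fopen("/dev/urandom", "rb");   /* illustrative entropy source */
        if (rng == NULL || fread(share_a, 1, len, rng) != len) {
            if (rng) fclose(rng);
            return -1;
        }
        fclose(rng);
        for (size_t i = 0; i < len; i++)
            share_b[i] = msg[i] ^ share_a[i];      /* share_a and share_b go out on separate wires */
        return 0;
    }

    int main(void)
    {
        const unsigned char msg[] = "trustworthy";
        unsigned char a[sizeof msg], b[sizeof msg], out[sizeof msg];

        if (split_message(msg, sizeof msg, a, b) != 0)
            return 1;
        for (size_t i = 0; i < sizeof msg; i++)
            out[i] = a[i] ^ b[i];                  /* receiver recombines the shares */
        printf("%s\n", out);                       /* prints "trustworthy" */
        return 0;
    }

An attacker holding only one wire sees either pure noise or noise-masked data; as noted above, the attacker can still drop or delay traffic but cannot inject anything useful.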

How do we implement this? We borrow an idea from the counterfeit FTDI serial communication chips. Yes, we use one abuse of trust to counter another abuse of trust. The counterfeit serial chips use a more advanced scale of integration and use a micro-controller to simulate a bank of hardware registers. This is fairly trivial to implement if the parallel or serial bus timings are sufficiently generous. I give a worked example using commonly available hardware. In the untrusted (legacy computer) domain, we may program an 84MHz Arduino Due to use its dormant and relatively undocumented 100Mb/s Ethernet interface and work as a bank of registers within a 10MHz card bus system. The network card may specify 256 × 8 bit registers in which the bottom 64 registers may be a page-banked view of one filtered Ethernet packet in a queue of multiple packets. The card bus parameters are defined such that a wait state must be asserted within 100ns and timeout occurs before 2000ns. Device registers may be simulated with perfect fidelity if the firmware always keeps a 256 byte array in a consistent state. Specifically, the 84MHz micro-controller is able to respond to line level changes and assert a wait signal within 12 cycles. Page-banking is a little more difficult. However, with the reasonable assumption that packet data is always aligned on four byte boundaries and data is moved in multiples of four, a fixed length unrolled loop allows 64 bytes to be copied into the "registers" within 40 cycles (476ns). If line level interrupts for the card bus interface have interrupt priority which is higher than the Ethernet interface then it is possible for an 84MHz micro-controller to work correctly on a 10MHz, asynchronous, parallel bus interface with either:-

  • A suitably generous number of bus cycles prior to timeout.
  • A suitably constrained register interface.
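
A rough sketch of the register-bank firmware described above follows. The pin helpers and packet queue are placeholders rather than a real Arduino Due API; the point is the shape of the interrupt handler, not the names.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define REG_COUNT 256
    #define PAGE_SIZE 64
    #define FRAME_MAX 1536

    static uint8_t registers[REG_COUNT];            /* simulated device registers; volatile in real firmware */
    static uint8_t packet_queue[8][FRAME_MAX];      /* queue of filtered Ethernet frames */

    /* Placeholder pin helpers: real firmware would toggle the WAIT line here. */
    static void gpio_assert_wait(void)  { /* drive WAIT active within 100ns */ }
    static void gpio_release_wait(void) { /* release WAIT before the 2000ns timeout */ }

    /* Invoked from the card bus line-level interrupt (higher priority than the
     * Ethernet interrupt) when the host writes the page-select register. */
    static void on_page_select(uint8_t packet_index, uint8_t page)
    {
        if (packet_index >= 8 || (size_t)page * PAGE_SIZE + PAGE_SIZE > FRAME_MAX)
            return;                                 /* ignore out-of-range selections */
        gpio_assert_wait();
        /* Copy one 64 byte, four-byte-aligned page of the selected packet into
         * the bottom of the register window; a fixed unrolled copy of sixteen
         * 32 bit words fits in roughly 40 cycles at 84MHz. */
        memcpy(registers, &packet_queue[packet_index][(size_t)page * PAGE_SIZE], PAGE_SIZE);
        gpio_release_wait();
    }

    int main(void)
    {
        packet_queue[0][0] = 0xAB;                  /* pretend a frame has arrived */
        on_page_select(0, 0);
        printf("register 0 = 0x%02X\n", registers[0]);
        return 0;
    }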

In general:-

  • Any micro-controller may be intercepted before delivery and contain hidden radio links and any amount of malicious code.
  • Any micro-controller development environment may have malicious functionality.
  • Regardless, an untrusted computer may be used to program an untrusted network interface.
  • An untrusted micro-controller of 80MHz or so may operate on a card bus of 10MHz or so and maintain one or more Ethernet interfaces of any speed.
  • Multiple micro-controllers connected to separate switches should not be able to collude.
  • This requires screened cards, signal isolation along cables, separate power distribution and a split back-plane in which trusted cards are separated from untrusted cards - and each other. Untrusted cards do not belong in the same trust domain. Each untrusted card must have its own trust domain.
  • If suitable conditions are met, trustworthy computers may establish link layer security without prior trust. Current systems do not meet these conditions.
  • With suitable key management, trustworthy computers may extend trust to the application layer.
  • If trustworthy computers provide WAN routing and tunneling, it is possible to extend trusted networking over single runs of cable without distance restriction.

What is lacking in contemporary systems? Parallel buses have an unacceptable level of trust where any dubious card may perform bus mastering and/or specify a vector interrupt address. Serial buses offer some improvement but some provide downward compatibility to parallel bus systems and none prevent covert communication over common wires. The next problem is key management. It would be preferable if trustworthy computers had access to a trustworthy filing system. This requires pieces of keys to be stored across multiple magnetic disks or multiple flash storage units which, again, have to be placed on separate storage buses and electrically isolated from each other. We're not even considering the case of a trustworthy network filing system. How would you retrieve the keys to access the network without being subject to MITM attack?

We know that almost every magnetic harddisk made over the last 10 years has sector DRM. That's the unwanted functionality which is advertised. What other functionality was included by the manufacturer? What other functionality was added by parties, such as the Equation Group, who are able to re-flash harddisks of all major manufacturers? Solid state storage is worse. As an example, it is fairly trivial to re-flash a Micro SD card to work as an SPI host rather than an SPI client. Samsung "smartphone" firmware fixes provide alarming insight into the internals of ARM micro-controller firmware used in Samsung flash storage. From this, it is fairly trivial to set Samsung flash storage into debug mode, extract firmware, decode bad block maps, extract dormant data and then re-flash the micro-controller so that it may keep aside information of interest. However, that's tame compared to information learned from my local makerspace. Some Micro SD cards are publicly declared to have a wi-fi interface. (Others may have a dormant wi-fi interface.) A wi-fi interface allows a card to work in an infinite storage mode when placed in a camera. However, in other scenarios, malicious firmware may snoop keywords or copy documents into a hidden pool and then crack a wi-fi connection so that documents can be uploaded to interested parties. How do you know that you're not a victim already? If you use strong encryption for every file on every storage card then you are only safe from data egress. A scenario of "How did that get there?" combined with wi-fi and horse-porn becomes increasingly likely without a user feigning ignorance.

Magnetic storage security is screwed. Flash storage security is screwed. What about booting and key management on ROM? Well, that's screwed too. A 250ns "ROM" implemented as a 100MHz micro-controller has 25 clock cycles to respond to any signal level changes. Nowadays, this can be achieved without writing assembly. However, the expected response time provides less opportunity for malicious activity. For the truly paranoid, it is possible to stripe data across ROMs (in the range of bits or bytes) and implement parity checks. Although this is far from ideal, it provides options which are not available with magnetic storage or flash storage.
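
As an illustration of striping with parity (my sketch, with made-up sizes), a boot image can be interleaved across two ROMs plus a parity ROM so that a corrupted or tampered byte is at least detected before boot continues.

    #include <stdint.h>
    #include <stdio.h>

    #define IMAGE_SIZE 16

    /* Byte-level striping: even bytes go to ROM A, odd bytes to ROM B, and the
     * parity ROM holds the XOR of each pair. */
    static void stripe(const uint8_t *image, uint8_t *rom_a, uint8_t *rom_b, uint8_t *rom_p)
    {
        for (int i = 0; i < IMAGE_SIZE / 2; i++) {
            rom_a[i] = image[2 * i];
            rom_b[i] = image[2 * i + 1];
            rom_p[i] = rom_a[i] ^ rom_b[i];
        }
    }

    static int verify(const uint8_t *rom_a, const uint8_t *rom_b, const uint8_t *rom_p)
    {
        for (int i = 0; i < IMAGE_SIZE / 2; i++)
            if ((rom_a[i] ^ rom_b[i]) != rom_p[i])
                return 0;                     /* mismatch: refuse to boot */
        return 1;
    }

    int main(void)
    {
        uint8_t image[IMAGE_SIZE] = "trusted boot??";
        uint8_t a[IMAGE_SIZE / 2], b[IMAGE_SIZE / 2], p[IMAGE_SIZE / 2];

        stripe(image, a, b, p);
        printf("parity %s\n", verify(a, b, p) ? "ok" : "FAILED");
        b[3] ^= 0x01;                         /* simulate one flipped bit in ROM B */
        printf("parity %s\n", verify(a, b, p) ? "ok" : "FAILED");
        return 0;
    }

This only detects tampering; with more ROMs (or wider parity) the same idea extends to reconstruction, but detection alone is enough to refuse to boot.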

My Ideal Processor, Part Foo+2

Posted by cafebabe on Sunday April 22 2018, @08:28PM (#3172)
0 Comments
Hardware

This is part two of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part two covers the memory interface.

The previous section describes the outline of a discrete hardware implementation of a processor where the virtual machine is 32 bits or more and the native machine has:-

  • 16 bit program counter for 8 bit micro-code.
  • 8 bit internal accumulator.
  • 128 byte internal register bus where values can be read into the accumulator.
  • 128 byte internal register bus where values can be written from the accumulator.
  • Locations on the internal buses where it is possible to read or write to the ALU, virtual processor registers and main memory.

Understandably, this requires a large number of clock cycles to do anything and the clock rate is likely to be less than 1MHz. However, this design works astoundingly well when used with a multiplexed card bus system. This covers multiple use cases:-

  • Micro-controller uses card bus system to interface with one or more other micro-controllers on the same circuit board.
  • FPGA uses card bus system as main memory interface.
  • Micro-coded mini-computer uses card bus system as main memory.
  • Legacy host computer accesses card bus system via parallel port. Access may be directed from native applications or software running in a virtual machine.

Where:-

  • One or two peripherals do not use an edge connector.
  • A desktop card frame provides 16 slots or less.
  • A card frame provides 40 slots or so in a 19 inch rack. Using a trick popularized by the Sun Microsystems E10000 Server, it is possible to have an equal number of slots on the reverse of a back-plane without increasing the wave-length of the back-plane.

Where a card may be:-

  • Homebrew cards made from strip-board and DIP chips.
  • Etched, single sided circuit board.
  • Double sided circuit board.
  • Multiple layer circuit board.

Where each card may:-

  • Operate at 1MHz or less.
  • Have the throughput of PCI-X.
  • Also provide a PCI, PCI-X or PCI Express connector.

It may also be possible to implement generic bridges to and from PCI variants. However, the design is most likely to be used as an 8 bit interface running at less than 1MHz.

The card bus system was originally envisioned to be retro-fitted to 8 bit and 16 bit computers but I quickly realised that portable software would be required to cover multiple use cases. This led to the development of the virtual machine which complements this card bus system. Therefore, the card bus may be driven via library code running on a computer which has its own address-space. Or it may be the only (visible) address-space of a hard-wired or trustworthy micro-coded mini-computer. During my absence, I began reading about legacy bus implementations from VAX, Apple 2, PCI and many others. This has led to making the design significantly more lax about power and signal tolerances.

Firstly, borrowing from the Apple 2 and elsewhere, supply Voltage should be unregulated. It is nominally 5 Volts but may be anything over the automotive range. This may exceed 15 Volts and may include transient spikes above 60 Volts. It is expected that each card performs buck converter regulation to 3.3V or similar but that all common signals are at 5V only.

Secondly, one open drain interrupt line may be common to all cards. Alternatively, the interrupt of each card may be fed into a priority encoder and/or switch fabric for delivery to multiple processor cores. If the host is a micro-controller then interrupts (which may be detected as edge triggered or level triggered) may or may not produce unique interrupt vectors. Similarly, for a trustworthy micro-coded mini-computer, unique interrupts may or may not be available.

Thirdly, signal lines are intended to allow a fairly minimal implementation of 3 × 74244 8 bit uni-directional buffer chips, 3 × 74245 8 bit bi-directional buffer chips, 2 × 74138 de-multiplex chips, two or more latches and XOR parity. Each card consumes a maximum of one TTL load on each signal line.

Fourthly, there are nominally 256 cards in a 64 bit address-space. Each card nominally implements a 56 bit address space. In the baseline specification, cards do not perform bus mastering or dynamic address allocation. Borrowing from PCI, cards may perform multiple, unrelated functions and therefore the address space on each card is divided into four equal sections. Each section is divided into four equal segments: ROM, I/O Segment, Manufacturer ID and Device ID. These segments can be decoded uniquely with 1 × 74138. In the absence of ROM, Manufacturer ID and Device ID may be hard-wired via buffer chips. A ROM format will be devised in which card capabilities will be defined. These will include wide data-paths, bus mastering, timings, Voltage ranges, current draw, larger numerical ranges for Manufacturer ID and Device ID, byte-code device drivers and native device driver implementations.
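
For illustration, here is my reading of the nominal address layout as a C decode routine. The exact field positions are an assumption on my part and would only be fixed by the ROM format when it is devised.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed layout of a 64 bit bus address:
     *   bits 63..56  card select (256 cards)
     *   bits 55..54  section within the card (four equal sections)
     *   bits 53..52  segment within the section (ROM, I/O, Manufacturer ID, Device ID)
     *   bits 51..0   offset within the segment
     */
    struct card_address {
        uint8_t  card;      /* which of the 256 cards */
        uint8_t  section;   /* 0..3 */
        uint8_t  segment;   /* 0..3 */
        uint64_t offset;    /* offset within the selected segment */
    };

    static struct card_address decode(uint64_t addr)
    {
        struct card_address a;
        a.card    = (uint8_t)(addr >> 56);
        a.section = (uint8_t)((addr >> 54) & 0x3);
        a.segment = (uint8_t)((addr >> 52) & 0x3);
        a.offset  = addr & ((1ULL << 52) - 1);
        return a;
    }

    int main(void)
    {
        struct card_address a = decode(0x05D0000000000400ULL);
        printf("card %u section %u segment %u offset 0x%llX\n",
               (unsigned)a.card, (unsigned)a.section, (unsigned)a.segment,
               (unsigned long long)a.offset);
        return 0;
    }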

Fifthly, borrowing from QBus, if a card cannot accept or respond to a request within 100ns, it must assert a wait signal. A trivial host implementation may wait indefinitely. However, a host is expected to receive a bus error within 2000ns.

Sixthly, cards nominally have a 28 pin or so, single sided edge connector at 0.1 inch pitch where signals are: ground, power, ground, control signals, ground, interrupt, wait, ground, least significant four bits, ground, four bit even parity, ground, next four significant bits, ground, four bit even parity, ground, power, ground. A single height card is 8 inch. A single length card is 6 inch. Single height, quad length cards should fit within a 5U frame (8.75 inch) and should therefore be 8 inch high and 24 inch deep. Double height cards should fit within a 10U (17.5 inch) frame. It must be possible to fit three, half height, half length cards plus back-plane into 2 inch × 4 inch × 4 inch.

The memory of the card bus system is arranged so that a card in slot zero may be a mini-computer boot ROM. Furthermore, the bit patterns on the boot ROM may be compatible with multiple processor architectures. For example, for Z80, the first two memory locations are a vector for the start address. For x86, the first four memory locations are a vector for start address in 8086 segment format. For 680x0, the first four memory locations are a 32 bit start address in big endian order. Therefore, it is possible to choose representations which are distinct for all three architectures. For the virtual processor implemented by the mini-computer, address zero is the execution address. Fortunately, representations can be chosen for vectors which are also harmless, unprivileged instructions for the virtual processor. Specifically, it is possible to choose Z80 vectors, x86 vectors and 680x0 vectors which all lead to distinct addresses while also being a NOP sledge for the virtual processor. In the general case, architectures may be similar to the point that differences during boot are immaterial. More commonly, architectures differ to the extent that a dummy MOV in one instruction set is a jump in another instruction set. This allows bit patterns to be chosen such that execution for each architecture branches in one or two steps. The major consideration is that any card configuration held in ROM should be offset by 1KB or so. This avoids needless (though surmountable) complications of card configuration clashing with boot vectors.

In practice, connecting the multiplexed card bus as the main memory of Z80, x86, 680x0 or similar would be troublesome. Connecting a virtual ARM processor in an FPGA would be easier. In general, it is good to keep options open when a broad solution is relatively easy to achieve. More specifically, choose any two or more architectures to provide cross-architecture support. For example: MIPS64, ARMv8, Xtensa, OpenRISC, RISC-V and my virtual processor.

Consideration of card access patterns is useful. In the trivial case where a card has not indicated a wider data-path, a 64 bit address would be specified in eight pieces. Each card latches chunks of interest. For I/O expanders without ROM, this may be the bottom 8 bits of the address-space (or fewer). Latches may be set in ascending order or descending order. However, descending order has the advantage of setting card select on the first cycle. It is also the most logical order when sequential addresses are accessed. This works very much like DRAM static column paging and is intended to facilitate such usage when used in conjunction with low density DRAM. This is not applicable when using high density DRAM but the option remains. Indeed, given typical bus bandwidth, use of high density DRAM may be impractical.
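
A sketch of the host side of this latch sequence is shown below. bus_latch() and bus_data_read() are placeholders for real bus cycles; the descending loop sets card select on the first cycle and a sequential follow-on access only re-latches the bottom byte, in the static column style just mentioned.

    #include <stdint.h>
    #include <stdio.h>

    static void bus_latch(int chunk, uint8_t value)
    {
        printf("latch %d <- 0x%02X\n", chunk, value);   /* stand-in for a real address-latch cycle */
    }

    static uint8_t bus_data_read(void)
    {
        return 0xFF;                                    /* stand-in for the data cycle */
    }

    /* Full random access: eight address-latch cycles (most significant byte
     * first, so card select happens immediately) followed by one data cycle. */
    static uint8_t read_byte(uint64_t addr)
    {
        for (int chunk = 7; chunk >= 0; chunk--)
            bus_latch(chunk, (uint8_t)(addr >> (chunk * 8)));
        return bus_data_read();
    }

    /* Sequential follow-on access: only the bottom latch changes, so the cost
     * is one latch cycle plus one data cycle. */
    static uint8_t read_next_byte(uint64_t addr)
    {
        bus_latch(0, (uint8_t)addr);
        return bus_data_read();
    }

    int main(void)
    {
        (void)read_byte(0x0500000000001000ULL);
        (void)read_next_byte(0x0500000000001001ULL);
        return 0;
    }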

Borrowing from SCSI and Micro SD, host and card may be capable of wider transfers. For a trustworthy micro-coded mini-computer implementation, this requires moving the top 8 bits of a candidate address to an internal register then jumping to the micro-coded bus mastering routine for each case. In the trivial case, the extra check slows access to main memory. However, where a wider bus is common, latch operations and transfer operations may be reduced. Although the trivial case of a nine-way multiplexed bus seems slow, sequential 8 bit read/write operations approach 50% of 8 bit bus cycle throughput. In the frequent case of sequential, variable length instruction read, retrieval of longer instructions is considerably amortized. For wider buses, sequential 16 bit read/write operations closely approach 100% of 8 bit bus cycle throughput, 32 bit read/write operations are effectively 200% of 8 bit bus cycle throughput, 64 bit read/write operations are exactly 400% of 8 bit bus cycle throughput. Larger bus widths also perform better on random access. Specifically, 8 bit bus obtains 11% throughput, 16 bit bus obtains 20% throughput, 32 bit bus obtains 33% throughput and 64 bit bus obtains 50% throughput. However, for trivial cases, figures can also be considerably improved if partial address decode is used and bus cycles are correspondingly omitted. Given the bandwidth of the bus, this may be a practical default.
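
For what it's worth, the figures above appear to follow from a simple cycle count: a random access costs eight address-latch cycles plus one cycle per byte transferred, while a sequential access costs one bottom-byte latch plus one transfer cycle per bus word. The accounting below is my own reading, so treat it as an assumption; it does reproduce the percentages quoted.

    #include <stdio.h>

    /* Throughput relative to one byte per cycle (the ideal 8 bit bus):
     * random     = width_bytes / (8 + width_bytes)
     * sequential = width_bytes / 2                       */
    int main(void)
    {
        for (int width_bytes = 1; width_bytes <= 8; width_bytes *= 2) {
            double random_pct     = 100.0 * width_bytes / (8 + width_bytes);
            double sequential_pct = 100.0 * width_bytes / 2;
            printf("%2d bit bus: random %.0f%%, sequential %.0f%%\n",
                   width_bytes * 8, random_pct, sequential_pct);
        }
        return 0;
    }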

Unfortunately, the next section demonstrates that the desirable and trivial case of a passive back-plane is not sufficiently robust against malicious parties.

My Ideal Processor, Part Foo+1

Posted by cafebabe on Sunday April 22 2018, @08:25PM (#3171)
0 Comments
Hardware

This is part one of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part one concerns ALU and registers.

It is possible to prototype processors using FPGA [Field Programmable Gate Array] development boards. They have numerous advantages and disadvantages. They are small, relatively cheap, relatively fast and can be re-purposed for other projects but they typically incorporate DRM and require the use of untrusted software. If the purpose of the project is to create a trustworthy computer, an untrusted FPGA would not be the final solution. Therefore, we have to "keep it real" and devise a solution which can be implemented without an FPGA. It may be common for the solution to be deployed on an FPGA but a solution which does not specifically require an FPGA has numerous advantages. For example, any solution which vastly under-utilizes an FPGA may be implemented as a multi-core processor. In contrast, any solution which is crammed into an FPGA may be expensive or impractical to implement in other forms. So, the intended scale of integration is DIP chips in sockets which can be hand soldered. However, trade-offs of other implementations are acknowledged.

There are numerous techniques to make minimal processors. It is possible to apply the design principles of the Apollo Guidance Computer to make something which can be programmed like a 6502 or similar. Unfortunately, it will be significantly slower than a 6502 because it will use 64KB EPROM or similar with 350ns access time. Further circuitry, such as counter chips, may require a system to run below 1MHz. A lack of instruction pipe-lining and 64 bit registers will also require significantly more clock cycles per instruction. It would be unrealistic to assume 10% of the performance of a Commodore 64. Regardless, in the same manner that it is possible to get a 6502 to add 64 bit numbers, a smaller data bus and ALU does not prevent 64 bit data from being processed on minimal hardware. Likewise, memory bank switching allows a very large amount of memory to be addressed by a processor. It would be faster and more efficient to not use multiplexing of an ALU, data bus or address bus. However, the minimal implementation is cheaper, more tractable and may be replaced with more substantial implementations.
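
For anyone who hasn't seen it done, here is the usual trick, sketched in C: a 64 bit addition performed eight bits at a time with the carry propagated from byte to byte, which is all the narrow ALU has to support.

    #include <stdint.h>
    #include <stdio.h>

    /* 64 bit addition carried out byte by byte, the way a 6502 (or the
     * proposed micro-coded machine) would do it. */
    static void add64_bytewise(const uint8_t a[8], const uint8_t b[8], uint8_t sum[8])
    {
        unsigned carry = 0;
        for (int i = 0; i < 8; i++) {                /* least significant byte first */
            unsigned t = (unsigned)a[i] + b[i] + carry;
            sum[i] = (uint8_t)t;
            carry  = t >> 8;                         /* carry into the next byte */
        }
    }

    int main(void)
    {
        /* 0x00000000FFFFFFFF + 1 = 0x0000000100000000, as little endian byte arrays */
        uint8_t a[8] = {0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0};
        uint8_t b[8] = {1, 0, 0, 0, 0, 0, 0, 0};
        uint8_t s[8];

        add64_bytewise(a, b, s);
        for (int i = 7; i >= 0; i--)
            printf("%02X", s[i]);                    /* prints 0000000100000000 */
        printf("\n");
        return 0;
    }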

The system will be based around one 8 bit ROM. One output from the ROM determines read or write. The other seven outputs determine which internal register to read or write into an internal accumulator. This has the obvious deficiency, described in NAND2Tetris, in which instructions will typically be a series of read then write operations. This can obviously be improved with the use of two ROMs and no intermediate accumulator. However, this incurs Amdahl's law because there are also cases where efficiency does not double.
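
A toy simulation of this single-ROM scheme may make it clearer. Bit 7 of each micro-code byte selects read or write and the remaining seven bits select a register location; the register numbers and the two-word program below are made up for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define MC_WRITE 0x80              /* bit 7 set: write accumulator to register */

    static uint8_t read_regs[128];     /* read-only register bus  */
    static uint8_t write_regs[128];    /* write-only register bus */
    static uint8_t accumulator;

    /* One micro-code step: either load the accumulator from a read location
     * or store the accumulator to a write location. */
    static void step(uint8_t microcode_word)
    {
        uint8_t reg = microcode_word & 0x7F;
        if (microcode_word & MC_WRITE)
            write_regs[reg] = accumulator;
        else
            accumulator = read_regs[reg];
    }

    int main(void)
    {
        /* The bottom 64 read locations supply immediate constants 0..63. */
        for (int i = 0; i < 64; i++)
            read_regs[i] = (uint8_t)i;

        /* Hypothetical two-word sequence: load the constant 42, then write it
         * to write-only location 5 (say, an ALU operand). */
        const uint8_t rom[] = { 42, MC_WRITE | 5 };
        for (size_t pc = 0; pc < sizeof rom; pc++)
            step(rom[pc]);

        printf("write register 5 = %u\n", (unsigned)write_regs[5]);   /* prints 42 */
        return 0;
    }

As the NAND2Tetris comparison suggests, useful work is mostly a read step followed by a write step, which is why two ROMs and no intermediate accumulator is the obvious (if not doubly efficient) upgrade.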

The native machine has 128 × 8 bit read-only registers and 128 × 8 bit write-only registers. Of these, half of the read-only registers provide immediate constants. Therefore, any value from zero to 63 can be loaded into the accumulator. From this, it is possible to:-

  • Short jump within a 64 instruction page.
  • Medium jump within a 4096 instruction block.
  • Long jump across the whole ROM.

Some of the registers provide access to ALU functionality. In particular, it is possible to set or clear a carry flag by writing to specific locations. Within the native machine, the window to main memory may be eight bytes or less. However, the virtual machine is not aware of this restriction. Anyhow, a minimal system proposal requires:-

  • 5 × 4 bit binary counter chips for the native program counter. This is arranged into six bits, six bits and four bits to allow page, block and global jumps.
  • 2 × 8 bit latch chips so that the parts of a medium jump or long jump can be applied synchronously. These are not essential but they avoid a large number of trampolines in the micro-code.
  • One 64KB EPROM or similar.
  • One 8 bit latch for the internal accumulator.
  • Numerous latches for all of the general purpose registers.
  • Latches for general register selection.
  • Latches for temporary registers.
  • Latches and counters for the virtual program counter.
  • Logic chips for the ALU. This includes internal flags which are not available via the virtual machine.
  • Logic chips for register decode.
  • Logic chips for virtual instruction decode.

Within the 128 × 8 bit write-only registers:-

  • Three addresses allow the native program counter to be set.
  • A range of addresses allow ALU flags to be cleared or set.
  • A range of addresses allow ALU operands to be written.
  • Three addresses allow register operands to be selected.
  • Eight addresses allow writes to a general purpose register.
  • Four or more addresses allow main memory address to be specified.
  • One or more addresses allow writes to main memory.
  • Four or more addresses allow the virtual program counter to be set.
  • One address allows the virtual program counter to be incremented.
  • Four or more addresses allow virtual stack pointer to be set.
  • Virtual instructions may be written into specific addresses for the purpose of splitting byte-code fields. This functionality is copied directly from the design of the Apollo Guidance Computer.

Within the 128 × 8 bit read-only registers:-

  • The first half of the address-space allows constants from zero to 63 to be loaded into the internal accumulator.
  • A range of addresses allow ALU operations to be queried.
  • 24 addresses allow reads from general purpose registers.
  • One or more addresses allow reads from main memory.
  • Four or more addresses allow the virtual program counter to be queried.
  • Four or more addresses allow virtual stack pointer to be queried.
  • A range of addresses conditionally provide a subset of known values. This allows bit-field decode, conditional execution and jump tables to be implemented within the native machine.

The native machine does not have subroutines or conditional instructions. It is not a general purpose 8 bit micro-processor. It is micro-code for the purpose of implementing one specific computer architecture. Conditional sequences, such as loading variable length instructions into a bit-field decoder, require a terminating jump at the end of the sequence and multiple jump tables for opcode and addressing mode. These conditional addresses can be obtained from dedicated register locations.

It is also possible to return to the beginning of the fetch-execute cycle via the use of a dedicated register. This allows an interrupt to be handled without incurring additional clock cycles to test an interrupt flag. Given that there are no conditional instructions, this is the most logical implementation. The read of the interrupt vector also allows atomic sampling of an interrupt signal and therefore it avoids the obscure corner case where a transitioning interrupt signal level causes indeterminate behavior across the hardware.

It is not strictly necessary to implement the virtual machine's program counter with dedicated counter chips. However, omission would require a significant number of clock cycles to perform an increment operation via an ALU. This differs from the Apollo Guidance Computer in which all hardware counter increments stole bus cycles to perform increment via the ALU. However, in the case of the Apollo Guidance Computer, all counters were the width of the ALU. In the proposed design, all counters are significantly wider than the ALU and therefore increment operations require dedicated hardware or micro-code. While increment of the virtual machine program counter is an optional quick win, bi-directional increment/decrement of the stack occurs far less frequently and is more difficult to implement as a dedicated unit. In both cases, the virtual machine's architecture constrains read and write operations within one instruction cache line and data cache line. Within a micro-coded implementation, this allows the bottom byte of the address to be incremented without concern for ripple carry into other bytes of the address. To achieve this, unaligned read and write operations are split into multiple virtual machine instructions. You'll see why this is important in the next section.

Many native instruction sequences will be read, write, read, write and the occasional poke to clear a flag or suchlike. This could be implemented with a pair of micro-coded EPROMs and no intermediate accumulator. However, this increases cost, slows development and doesn't double performance. Regardless, it is a trivial option to increase performance after initial development.

My Ideal Processor, Part Foo+0

Posted by cafebabe on Sunday April 22 2018, @08:18PM (#3170)
0 Comments
Hardware

In response to CID634544 from UID791:-

I have no interest in price/performance anymore. Whatever it is, it is. I'm extremely interested in price/security/openness. How open is the architecture? Are there any blobs anywhere? Is there a security chip like the Intel ME? If so, do I have access to the source code? Can I compile my own security engine to run on this dedicated security processor?

Those are the questions I have now. I would build a system today with far less power than the bleeding edge processors out there, but all the security we want and need. I firmly believe now that security can ONLY be obtained with full and absolute transparency. No security is obtained through obscurity of the methods and processes.

Intel and AMD can both do whatever the hell they want, but the first company to deliver the security we truly need will start getting a lot of orders. Even if they're lower on the performance/feature totem pole.

I have been working on a practical solution to your problem. It would have been preferable to have a solution prior to Dec 2017, when Spectre and Meltdown became widely known. However, a solution remains urgent. I have an outline design in which all components are open to inspection. This will be described in several parts.

The first part is a minimal, micro-coded system which implements a processor with 64 bit registers but, to reduce costs, not a correspondingly dimensioned ALU. You said that you would accept this solution irrespective of its speed. I am concerned that you may wish to place further constraints on your specification because my proposal is likely to be significantly slower than a Commodore 64 or Apple 2. Obviously, if computers with a 16 bit address bus were suitable, you'd be using them already. However, if you compare, for example, the market viability of Tandem's high-availability, 16 bit, stack processors against my concerns about transistor switching speed and ALU latency and system integrity, you'll see that a native, flat, 32 bit address-space is a beguiling false economy from which I have freshly emerged.

The second part is the memory interface and card bus system. This also covers physical considerations, such as enclosures.

The third part covers contemporary expectations about network interfaces. Unfortunately, this may require a workaround involving bit-banging with an Arduino or similar. Allowing such an untrusted unit into a system requires all data to be distributed across multiple network interfaces. In the trivial case, this only provides link layer security across redundantly wired LANs. In restricted cases, this may also work across WANs.

The fourth part concerns writing a compiler for such a system. I have a viable proposal which allows a complete bootstrap from an insecure environment. However, all of the work on trustworthy hardware is moot if trustworthy software cannot be maintained.

Make Your Own Boxes With Rounded Corners

Posted by cafebabe on Sunday April 22 2018, @08:12PM (#3169)
0 Comments
Hardware

(The tools and materials lists look like a Dave Barry homage but this is not a spoof.)

Tools: bandsaw and all associated safety equipment, tape measure, coarse file, sandpaper.

Materials: two or more ball pit balls, drain-pipe, sheets of wood or plastic, glue, (optional) piano hinge, (optional) suitcase latches, (optional) aluminium strips, (optional) Meccano pitch 5/32 inch nuts and bolts, (optional) yoga mat or child safety mat, (not required) 60000 feet of tram cable.

In the 1970s, rounded corners were a statement of finesse and sophistication. In the television series, UFO, Colonel Edward Straker's office has a lava-lamp style mood screen and is white with rounded corners with one of the rounded corners being a mini-bar. Totally shagadelic. It also helped that they dressed like Sgt. Pepper's Lonely Hearts Club Band. The white, rounded corners are copied in UFO's submarine cabins. When the third season of UFO was cancelled and work started on the inferior Space 1999, Commander John Koenig's office had more than a passing resemblance to Ed Straker's office. These design elements and others are often mis-attributed to a Steve Jobs wet dream but since the 1970s, audio and computer equipment has cycled around box colors of white, beige, brown, gray, silver or black. Everything except white, silver and black has been dropped from this rotation but rounded corners on physical products remain a necessity when consumers are a danger to themselves. For example, consumers are barely able to operate a micro-wave oven safely. It is from this situation that website styling follows physical products. If all of your products have rounded corners then, for consistency, web site style may be similar.

For a very long time, I've wanted to make boxes of arbitrary size with rounded corners and I feel like a dumb-ass for arriving at a solution after so long. The answer came about two months ago and, since then, I've been loudly telling people about it. The answer is to use ball pit balls for corners, drain-pipe for the edges and arbitrary sheets for flat sections. If a ball has radius (or diameter) which is the same or larger than the drain-pipe then the ball can be trimmed to size. I was able to find ball pit balls and drain-pipe with 68mm diameter. Unfortunately, the palettes are horribly mis-matched. Drain-pipe is typically available in white, gray, brown, black and (rarely) green. Unfortunately, gray and brown tend to be large diameter sewage pipes. My local supplier only had 50mm and 68mm pipe in black. To contrast with edge color, ball pit balls are only available in garish kid colors, such as bright purple, lurid pink and lime green. So, edges and corners are likely to be in clashing colors. Regardless, it is relatively trivial to manufacture boxes of arbitrary size.

I assumed that ball pit balls are ridiculously cheap because they are required in such large quantities. With a two inch diameter or similar, a hypothetical 10×10×10 block of balls is quite small and 10000 balls would barely fill a closet shower. A typical ball pool must have hundreds of thousands of balls and therefore they must be cheap in volume. Indeed, they are. 200 are less than US$20 and volume pricing becomes increasingly favorable. I was also able to get 10 for £0.99 but many retailers don't stock them during winter. The plastic is presumably ABS, BPA or similar plastic commonly used for food containers.

Unfortunately, the balls are sold by diameter and not thickness. They are incredibly thin to the extent that two or more layers should be glued together. Given that they are incredibly thin and flexible, any two sections of ball effectively have the same radius. Therefore, two or more layers can be easily glued together. The balls are presumably manufactured with injection molding. The balls have two smoothed injection points at the "poles" and a seam at the "equator". This facilitates cutting one ball into eight demi-semi-hemi-spheres when only using blunt scissors. With a little practice, this can be achieved with tolerance of 3mm (1/8 inch) or better. However, the balls are manufactured very cheaply and the molds are not spun before the plastic sets. Therefore, one hemi-sphere may have notably thicker plastic. To compensate, place the four heavier pieces in one pile and the four lighter pieces in another pile. Then match a piece from each pile when gluing layers together. This achieves the most consistent thickness after gluing. If cutting tolerances are bad and a piece from the top of one pile doesn't match a piece from the top of another pile then rotate one pile around with the intention of improving tolerances. There is a permutation explosion of possible matches but rotating one pile should obtain fairly optimal matches.

I tried three types of glue and there is no particular preference. I tried unbranded crazy glue which is presumably an aroma compound because it smells of acrid mint. I also tried water-proof silicone sealant and kiddie PVA glue. All work well. However, all leak from the sides, all glue corners badly and all take a long time to set. Presumably, this is due to ingress of air mostly occurring between the thin seam between the layers of plastic and the liquid glue being quite deep behind this seam. Predictably, crazy glue makes the most mess. Silicone takes more than one week to set. PVA gives the brightest finish. I am concerned that layers may flake due to ingress of water. In which case, silicone may be preferable.

My efforts with a hand saw have been completely laughable. The cheapest saw was completely impractical. However, a really good hand saw makes an unbearable resonance when sawing a plastic tube. This is completely impractical in my apartment at any time of day. Unfortunately, I discovered this after purchasing a 2m drain-pipe. I didn't want to take the full length of drain-pipe to my local makerspace. This led to the humorous situation of me, wearing a Hello Kitty tshirt, in a quiet street, sawing drain-pipe. This allowed me to take six small sections to my local makerspace. This was enough to make two boxes or one disaster. Unfortunately, the bandsaw was unavailable and a wooden bench vice was unsuitable without use of rags to hold pipe in one place.

Anyhow, the plan is to cut pipe lengthways into 1/4 sections. This provides four edges with a 90° curve. Admittedly, this requires very accurate sawing. In volume, a jig to hold the pipe would aid such accuracy. The best part is that it is possible to cut all of the pipes without measurement because sections of the same length are always used in multiples of four. Sections of a particular length are always parallel to each other. If it aids visualization, each dimension (length, width, height) may be regarded as one section of pipe exploded into four complementary curves of the same length. From this, it is possible (or even desirable) to mix pipe colors to aid identification and assembly of pieces. It is only after cutting edges that accurate measurements are required. This is to make the flat sections of the box to the correct sizes. These may be a mix of wood or plastic of any color and of varying thickness.

So far, I have only described the outer surface of the box. An unseen layer is required to hold the outer layer together. For this, I considered purchasing a US$50 flight case and disassembling it. However, a thought experiment of case assembly is sufficient. A flight case typically has eight molded plastic corners. Each of these corners has three tongues. Cut aluminium can be slotted over each tongue. The extruded aluminium also has flat edges which allows the visible external panels to be glued. The tongues and edges are all missing from my design. I've only described the external shell but it can be held together with a second layer of flat (or curved) pieces. This may be aided with foam. A thick and solid type of foam may be sourced from yoga mats or child play mats. The thickest, dense foam is intended to be used as gymnasium flooring or play area flooring. [Insert Mark Twain "but I repeat myself" joke.] This thick foam is available in continuous rolls or inter-locking sections.

A box can be made with two sections which are hinged together. This may be achieved with piano hinge which is, conveniently, available in piano length sections. It may also be cut to length without the hinge falling apart. (Indeed, it is possible to make unrounded boxes through the use of piano hinge. A friend made transparent cuboid computer cases where all 12 edges used piano hinge bolted to plastic sheets. Where all eight corners are formed from freely hinged edges, none of the edges have any freedom of movement and the box is relatively rigid. On this basis, you may want to order surplus piano hinge.)

Alternatively, strips of aluminium can be used to make a card frame system. I originally envisioned a system which is a multiple of 2 inch × 4 inch × 4 inch (5cm×10cm×10cm) in a flat orientation with one inch or so of padding and rounding on all sides (left, right, front, back, top, bottom). Within one unit, it is possible to fit three or more separate circuit boards. This would be a minimum of 12 square inch (300cm2) of circuit board. Nowadays, across two units, it is possible to fit nine credit card computers. Some of those credit card computers are quad-core, and 64 bit, 16 core processors are foreseeable. That would be 144 cores in 2 inch × 4 inch × 8 inch (5cm×10cm×20cm). It is also possible to make units where the external dimensions approximate 19 inch rack boxes. For example, 4 inch × 16 inch × 16 inch (plus 68mm diameter tubing) easily fits within 4U. (Note that this is entirely compatible with the car dash-board design and allows equipment to be used during journeys and then removed without drawing blood.)

I considered making a rounded box, stereo audio amplifier for a partially-sighted friend. We previously bought speakers to watch Breaking Bad. However, these speakers used impressively hair-thin wires and drivers which required the molded box to hold the magnet against the coil and cone. Cable strain relief was a knot in the wire. This only cost £1 (approximately US$1.50) and had no amplifier but I was unaware of the construction until I tried repairing a strained wire at my local makerspace. The response to my laughter was akin to "What are you laughing at? Have you not seen inside recent Chinese exports?" I was unable to repair the cable strain or my friend's attempt to fix it. So, we watched the end of Season 4 and all of Season 5 audibly squinting in monophonic. Now I can make a small box with a headphone jack, two speakers, one of 10 PAM8403 stereo audio amplifiers purchased for robotics and a 5 Volt USB charger. Unfortunately, I've only got black drain-pipe, which isn't particularly good for my partially-sighted friend, who regularly loses his black Casio F-91W terrorist watch and his black Dell mouse. Regardless, it is possible to make four of the corners red [port, left audio channel] and four of the corners green [starboard, right audio channel]. Actually, I wonder why stereo phono leads don't follow international shipping standards.

I can also make a box for the alarm clock project. A friend kindly ordered 15 large, red, seven segment digits. That's two sets of six for HH:MM:SS (or YY/MM/DD) plus spares. However, this is a case where I received significantly more than I expected. The description said one inch *segments* but I assumed that the entire seven segment module would be a metric inch tall. Oh, no. The segments really are one inch and therefore the digits are more than two inches tall. That should definitely be readable when I'm drowsy and bleary. However, digits this size don't fit into the hummus tubs which I use for credit card computers and prototyping. I now require corresponding lengths of drain-pipe to be cut without incurring serious injury.

My attempts at hardware hacking may be slow and abysmal but, to one of my friends, I look like a fricking wizard; especially after a friend stepped on a Casio F-91W watch and I repaired the catch with a blue paper-clip (to match the watch's blue trim).

Make Your Own Camera Tripod

Posted by cafebabe on Sunday April 22 2018, @07:56PM (#3168)
0 Comments
Hardware

Tools: scissors, (optional) drill.

Materials: six bamboo sticks, string, (optional) washers, (optional) nuts and (optional) bolt.

A friend suggested making a camera tripod from bamboo and the result pleasantly surprised both of us. I only wanted three bamboo sticks for testing and maybe a few spares. After some comedy at my local hydroponic shop, I purchased one pack of 25 bamboo sticks rather than three packs of 25 sticks. Optimistically, this was enough to make two or more tripods with plenty of opportunity to make repairs.

Bamboo sticks taper and therefore sticks can be used in pairs such that the cross-sectional area of the bamboo remains fairly constant along the length of each tripod leg. At the splayed end of the tripod legs, string should be knotted around each pair of sticks and the knots should be spaced so that sticks lean by about 15°. At the center, all sticks should be closely knotted in pairs and then knotted together and/or held together with elastic bands.

The platform for a camera can be made from an off-cut of wood. Cut three tapered ridges which take into account the off-centeredness of the stick pairs. With sticks wedged into the wood block, this platform is sufficiently stable to rest a camcorder. Although the picture may not be level, it is more than sufficient for motion capture. A level picture can be obtained with the use of a Gorilla Grip or similar. Alternatively, it is possible to add a standard camera clamp by drilling a hole and adding one bolt, one or two washers and a nut. Apparently, there is one standard bolt size for still cameras (1/4 inch) and a larger one for motion cameras (3/8 inch); the extra sturdiness was required for the weight of film canisters. However, as digital cameras decrease in size, the smaller bolt size is becoming increasingly common for both still cameras and motion cameras.

So! You refuse to shake hands with me, eh?

Posted by fustakrakich on Friday April 20 2018, @11:24PM (#3162)
15 Comments

Hardware And Software For Lucid Dreaming, Part 1

Posted by cafebabe on Thursday April 19 2018, @08:25PM (#3158)
6 Comments
/dev/random

Over the last few months, my access to the Internet has been very restricted. I reverted to reading heavily. In particular, after a few iterations of selecting interesting web pages from Wikipedia.Org, I have more than 7000 web pages to read. I thought that I could read through the bulk of long pages with the aid of text-to-speech software. Unfortunately, I fall asleep while pages are being spoken. It is also difficult to pause. This isn't a huge productivity gain and it also leads to less rested sleep. It is fairly similar to sleeping with a radio switched on.

After bothering to implement a script which pipes text to espeak or similar, it seemed like a waste not to use this functionality. I wasn't sure what should be spoken but large quantities of text aren't received verbatim and are likely to be detrimental. Perhaps something in a loop would be more effective? Or timed? Ah! It is possible to make lucid dreaming hardware with a micro-controller and this was one of many possible projects for when my ability with micro-controllers improved. However, this project has now been reduced to a script for a laptop or desktop computer and the optional use of headphones. (Indeed, it appears that I've had the ability to do this for many years but only had the impetus due to a lack of bandwidth leading to a huge backlog of text. If I had more bandwidth, I'd probably be listening to podcasts or similar.)
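
For illustration, a minimal sketch of such a reading script might look like the following. It assumes espeak is installed and that pages have already been saved as plain text; the filename and speech rate are arbitrary and this is not the exact script in use:-

#!/usr/bin/perl

# Minimal sketch: pipe a plain text file to espeak, one paragraph at a time.
# Assumes espeak is installed; 'page.txt' is a placeholder filename.

use strict;
use warnings;

my $file = shift || 'page.txt';
open(my $in, '<', $file) or die "Cannot open $file: $!";
local $/ = '';                            # paragraph mode: read blank-line separated chunks

while (my $paragraph = <$in>) {
  open(my $speech, '|-', 'espeak -s 140') or die "Cannot run espeak: $!";
  print $speech $paragraph;
  close($speech);                         # blocks until espeak finishes the paragraph
}
close($in);

Speaking one paragraph per espeak invocation makes it marginally easier to stop between paragraphs, although it does nothing to solve the falling-asleep problem.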

In my local makerspace, there were a few issues of Make magazine. One issue had instructions for making a lucid dream machine from sunglasses, a micro-controller, red LEDs and a momentary switch. Before going to sleep, wear the sunglasses with the micro-controller and push the switch. The micro-controller is programmed to do precisely nothing for four hours. It then blinks the LEDs a few times every five minutes. If a person is dreaming during one of these blinks, it may be interpreted within the dream as car brake lights or similar. After several such incidents over many nights, this should be sufficient to prompt "Ah! I'm dreaming!" and, in the long term, this should be sufficient to bootstrap lucid dreaming without a micro-controller contraption. The micro-controller (or script) aids the relatively difficult first step of lucid dreaming.

The article in the magazine noted an unusual side-effect which is definitely worth repeating. I have not encountered this problem but it is entirely plausible. People using lucid dream aids are more likely to experience false awakenings and this greatly increases the chance of urinating during sleep. Apparently, this is worse among the type of people who keep a dream diary. After a lucid dream, a person "wakes", writes in their dream journal, goes to the bathroom, "wakes", writes in the dream journal, goes to the bathroom, "wakes", skips the journal because urination becomes more urgent, goes to the bathroom, "wakes", bathroom, and suchlike. This cycle can occur eight times or more and you only have to fail a reality check once with a full bladder before waking in a urine-soaked bed. Welcome to reality. Make sure that you note the incident in your dream journal.

With downsides noted, it is possible to obtain similar functionality with a small script to sequence text-to-speech messages. From experiences in dreams, I suspected this would be more effective than LEDs. One night, I fell asleep with 24 hour news on television. In the dream, I was in an attic with other people. I attempted to watch the television in the attic in the dream but the view was repeatedly obstructed by items in the attic or other people. Despite continual obstruction, I did not leave the attic. This was how a brain integrated an audio channel without its video channel. It could not fake the video channel nor mask the audio channel and so was in a situation where it required the presence of a plausible audio source without video. From this, I know that it is possible to convey more information than blinking LEDs - up to 100% fidelity with zero feedback. However, interpretation is extremely random.

The micro-controller design stays dormant for four hours. In Perl or similar, this would be sleep(4*60*60). The LED blink is replaced with echo "This is a test." | espeak or similar. I recommend that the rate of speech is slowed from the default. Despite this, I've found that framing errors occur with a repetitive prefix. For example, "Alert! Alert! Alert! Alert! This is a dream!" gets interpreted as "Lerta! Lerta! Lerta!" and the message is missed while you ponder "What's 'Lerta'?" This may be a semi-deliberate action from a brain which is attempting to hold together a coherent experience.

I mentioned my project to a friend. My friend suggested writing a phone app because accelerometers can be used to estimate a dream period. I may have further conversations with my friend because I have more ambitious plans. Said friend introduced me to the SCP Foundation, which is a mix of Cthulhu mythos and a warehouse of artefacts; possibly inspired by a scene from an Indiana Jones film. My plan is to make stateful messages which can be picked up at any point and prompt a dream narrative akin to SCP: Containment Breach or Five Nights At Freddy's. You may ask "Are you insane? Deliberately inducing nightmares?" and I would answer "People watch horror films and play zombie games. Bang for buck, this may be much more effective." At the very least, it should be obvious that lucid dream software (or any closed source accelerometer app) should be inspected very thoroughly; in a manner which does not apply to other software.

Since mentioning the project to my friend, I've conducted four nights of testing. The first was a complete failure due to incorrect insertion of a headphone jack. The third night was unsuccessful due to timing being completely wrong. I suspect this experiment makes me more sensitive to auditory input from other sources. On the third night, I may have interpreted some drama among house-mates. However, making enquiries about events which may or may not have occurred may induce more drama.

The second night was quite good. In the dream, I was in my local makerspace despite it not looking like my local makerspace. After receiving one of the messages, I recall being slouched over a chair, with headphones around my neck, talking to a person in the dream about lucid dreaming software. There are numerous logical faults with this situation. Most significantly, if I hear a message from the software, it is because an instance of the software is running and the reason it is running is to provide auditory prompts while I am dreaming. The fourth night was a long science-fiction dream. At one point, I was Captain Janeway and Neelix told me that I looked ill. When I gained some notion that it was a narrative, the dream shifted to an office dream where the text-to-speech was interpreted as a hateful door entry system and therefore ignored. It then shifted to a scenario where a house-mate who creates drama was attacked by a giant with Thor's hammer.

Overall, I've made more progress in four nights than would be expected with blinking LEDs. Unfortunately, I may not have anything further to report on this topic for an extended period. Regardless, here is some example code which can be adapted:-

#!/usr/bin/perl

# Example Prompts For Lucid Dreaming
# (C)2018 The Consortium.
# 20180417 finish

# requires espeak to be installed.

use strict;
use warnings;

my $wait = 3*60*60;   # stay silent for roughly the first three hours
my $rand = 90;        # up to 90 seconds of jitter added to each wait

my $pre = "Alert!";
my @say = (
  'This is a dream.',
  'You are dreaming.',
  'This is not real.'
);

while (1) {
  sleep($wait + rand($rand));
  $wait = $wait/6 + 30;                  # prompts become progressively more frequent
  open(my $out, '|-', 'espeak -s 60') or die "Cannot run espeak: $!";
  print $out join(' ', $pre, $pre, $pre, $pre, $say[rand(scalar(@say))]), "\n";
  close($out);
}
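
The stateful narrative idea mentioned above could be layered onto the same loop. The following is only a sketch of the concept; the prompts, timing and wrap-around structure are invented for illustration and have not been tested:-

#!/usr/bin/perl

# Sketch of stateful prompts: each prompt advances an index into a narrative
# so that any single message heard mid-dream drops the listener into the story.
# The story text is invented for illustration.

use strict;
use warnings;

my @story = (
  'You are dreaming. A containment door has been left open.',
  'You are dreaming. Something is moving in the east corridor.',
  'You are dreaming. Find the breach and seal it.'
);

my $index = 0;
my $wait  = 3*60*60;

while (1) {
  sleep($wait + rand(90));
  $wait = $wait/6 + 30;                  # prompts become progressively more frequent
  open(my $out, '|-', 'espeak -s 60') or die "Cannot run espeak: $!";
  print $out $story[$index], "\n";
  close($out);
  $index = ($index + 1) % @story;        # advance the narrative, wrapping at the end
}

Keeping the state in a simple index means the sequence survives being heard out of order; whether a sleeping brain follows the plot is another matter.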

Addendum 1: The micro-controller implementation may induce photo-sensitive epilepsy. Risk may be reduced by avoiding flashing sequences from 2Hz to 55Hz and only using LEDs which are either red, green or blue. Risk of epilepsy can be eliminated by using the audio implementation.

Slayer on TV

Posted by turgid on Wednesday April 18 2018, @08:05PM (#3156)
14 Comments
Topics

When I was a kid I used to really hate all the "old" people droning on and on about how wonderful the 1960s were. The TV was full of nostalgia programmes, especially music, and even the radio had seemingly endless programmes of tinny and inane pop songs. Then there were the hippies. They had sideburns and flared trousers! Argh! What's more, adverts on TV all seemed to have 1960s pop songs as soundtracks. There was no escape.

At about that time in the 1980s I discovered Bay Area Thrash. That was my thing. One of my favourite bands of all time is Slayer, who are doing their farewell tour this year. A couple of years ago, I had my hair cut short. I still love the music. Mrs Turgid and I went to see Testament playing in London a couple of weeks ago.

Last week I was watching TV in the evening and I was most pleasantly surprised when an advert came on for a company (OVO Energy) which apparently sells electricity from only renewable sources, and which used Raining Blood by Slayer as the soundtrack! Ladies and gentlemen, Slayer are in an advert on mainstream TV in the UK for renewable energy! The advert starts with a load of clips of politicians and the like stating that they do not believe in climate change.

I am now my parents. I have short hair and my favourite music, frequently accused of being Satanic and antisocial, is now used to sell things on TV. I am the Establishment. I have arrived.

And while I'm at it, allow me to VIRTUE SIGNAL loud and clear: I just got myself a hybrid car. You should see the mileage I'm getting. My dirty old turbo diesel is off to the breaker's yard.