Trump is Lying More Now than at Beginning of his Presidency

Posted by DeathMonkey on Wednesday May 02 2018, @06:43PM (#3204)
17 Comments
News

President Donald Trump and the truth have grown more distant in recent months, according to a new analysis.

The Washington Post has been tracking the president’s false or misleading claims since he took office in January of last year.

In total, he has averaged 6.5 false or misleading claims a day, but the number of those claims has crept up since the beginning of his presidency. In the first 100 days of his administration, Trump averaged just 4.9 of those claims a day. In the last two months, that rate has almost doubled to 9 false or misleading claims a day, according to the Post. However, that number is bolstered by Trump’s rally in Michigan last week, where he lied 44 times during an 80-minute speech.

DONALD TRUMP IS LYING MORE NOW THAN HE WAS AT THE BEGINNING OF HIS PRESIDENCY

Bully Hunters

Posted by takyon on Monday April 30 2018, @10:53PM (#3198)
4 Comments

White House Correspondents' Dinner - Trump Era #2 of 8

Posted by takyon on Sunday April 29 2018, @03:49PM (#3196)
8 Comments

America's Rapey Dad

Posted by takyon on Thursday April 26 2018, @10:46PM (#3185)
5 Comments
Career & Education

Bill Cosby Found Guilty of Sexual Assault in Retrial

Bill Cosby Was Found Guilty on 3 Counts of Indecent Assault. Here's How Much Time He Could Serve

Cosby, who is 80, faces a maximum of 30 years in state prison.

The verdict prompted an outburst from Cosby in the Montgomery County, Pa., courthouse on Thursday, as he called District Attorney Kevin Steele an “a—hole.”

The three counts each carry a sentence of up to 10 years in state prison, but it remains to be seen how much time Cosby will actually be sentenced to serve, and whether he could serve those sentences at the same time.

The Latest: Cosby's Alma Mater to Reconsider Honorary Degree

Janice Dickinson at Cosby Trial: ‘Here Was America’s Dad on Top of Me’

Methaqualone (Redirected from Quaaludes)

Trump's attorney to plead the fifth, a few amusing quotes.

Posted by DeathMonkey on Thursday April 26 2018, @06:37PM (#3184)
26 Comments
News

Trump's longtime personal lawyer/fixer Michael Cohen has now indicated that he intends to plead the Fifth Amendment in the civil case involving his hush-money payment to porn star Stormy Daniels, citing the fast-materializing criminal case stemming from that same payment.

"When you have your staff taking the Fifth Amendment, taking the Fifth so they are not prosecuted, when you have the man that set up the illegal server taking the Fifth, I think it is disgraceful." - Donald Trump.

“The mob takes the Fifth. If you're innocent, why are you taking the Fifth Amendment?” - Donald Trump

“Did you see her IT specialist? He's taken the Fifth. The word is he's ratting her out like you wouldn't believe it.” - Donald Trump

"I am no fan of Bill Cosby but never-the-less some free advice - if you are innocent, do not remain silent. You look guilty as hell!" - Donald Trump

Peaceful Toronto Van Suspect Arrest "Stuns" U.S.

Posted by takyon on Wednesday April 25 2018, @05:22PM (#3179)
32 Comments
Career & Education

Toronto van attack: Calm actions of police stun US

The calm actions of a police officer who arrested the Toronto van suspect without firing a shot have prompted praise and, in some quarters, astonishment.

Video from the scene shows suspect Alek Minassian pointing an object at the officer and shouting: "Kill me!" The officer tells the man to "get down" and when the suspect says he has a gun, the officer repeats: "I don't care. Get down." Videos on social media show Mr Minassian lying down as the officer arrests him.

Many in North America are asking how the suspect did not end up dead in a hail of police gunfire. It contrasts with incidents in the US where police have shot and killed unarmed people.

"Research has shown that Canadian police are reluctant users of deadly force," says Rick Parent, a criminologist at Simon Fraser University in Canada's British Columbia. "An analysis of police shooting data over many years revealed, that in comparison to their American counterparts, Canadian police officers discharge their firearms far less, per capita than US police. However, like American police officers they take many risks in protecting the public."

One US-based academic told the BBC that the officer would have had a "duty" to kill the suspect if the object he was pointing was a gun.

Mitt Rmoney Forced to Participate in Utah Primary

Posted by takyon on Sunday April 22 2018, @09:38PM (#3175)
7 Comments
Career & Education

Mitt Romney Fails to Bypass Utah Primary for U.S. Senate

Mitt Romney was forced on Saturday into a Republican primary for a United States Senate seat in Utah as he looks to restart his political career by replacing Orrin G. Hatch, a longtime senator who is retiring.

Mr. Romney, a former governor of Massachusetts and the Republican candidate for president in 2012, remains the heavy favorite to win the Senate seat in November. But he could have bypassed a primary altogether by earning a majority of votes on Saturday at the state’s G.O.P. convention.

Instead, the far-right party delegates preferred State Representative Mike Kennedy, who got 51 percent of the vote to Mr. Romney’s 49 percent.

Voters will decide between the candidates in a June 26 primary. Mr. Romney had previously secured his spot on the ballot by collecting 28,000 voter signatures, but he said on Saturday that the choice was partly to blame for his loss.

Gathering signatures is unpopular among many conservative delegates in the state who say it dilutes their ability to choose a candidate. The issue prompted hours of debate, shouting and booing at the convention.

[...] At the convention, Mr. Romney faced 11 other candidates, mostly political newcomers who questioned his criticism of President Trump and the depth of his ties to Utah. He had spent two months on the campaign trail visiting dairy farms, taking photos with college students and making stump speeches in small towns.

Utah Republican delegates force Mitt Romney into a primary election with state lawmaker Mike Kennedy in the race for the U.S. Senate

Romney's woes: No fun for mainstream GOP in Trump era

My Ideal Processor, Part Foo+4

Posted by cafebabe on Sunday April 22 2018, @08:35PM (#3174)
4 Comments
Software

This is part four of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part four covers the compiler.

So far, I've given an outline for a minimal trustworthy micro-coded mini-computer where every component can be sourced, tested and is otherwise open to inspection. I've also given an outline for a card bus system which allows cards made from stripboard and chips which can be manually soldered. Again, this is open to inspection. The card system also provides a bridge to contemporary networking and storage. This requires some cheating with micro-controllers to keep the part count down to a reasonable level. Use of a micro-controller is obviously not trustworthy and therefore encryption and striping across redundant channels are required to ensure that no untrusted component gains a sufficient stranglehold on any data.

However, all of this is wasted if a trustworthy computer cannot self-host an operating system and compiler. We have the luxury of starting from an untrusted computer environment and therefore we can use any number of facilities to obtain a beach-head into a trustworthy environment. Conceptually, this requires one or more paper tapes which are inspected before transfer into the trustworthy environment. In practice, it will require a uni-directional serial link and a grudgingly trusted EPROM programmer. I argue that it is difficult (but not impossible) to compromise the EPROM programmer on the basis that all EPROM programmers may be sourced prior to micro-code or machine code patterns being finalized. In the absence of a network connection to a (very determined) attacker, malicious corruption is probably the best attack.

A quick recap on the current state of computing. We got away from boot-strapping every machine from its toggle-switches and sighed with relief. However, in the years that followed, computer security has become a quagmire. To get out of this problem, I propose a fairly drastic, unconventional approach. I hate to be a green-field developer but when computer security becomes an insurance category, that's because the details of systems - systems that people created - have become unknowable. Specifically, I propose writing a C compiler in Lisp and then using the C compiler to write an operating system kernel. At this point, the typical approach is to expand until it is possible to self-host gcc or, more recently, clang. However, before we reach this point, we rapidly encounter a Turing tar-pit. This is where we lose the provenance of each file and this is where the security quagmire begins. Specifically, in the untrusted domain, it has become commonplace for binaries to depend upon more than 100 files from any of 19000 packages. These packages are typically downloaded and deployed without inspection. Furthermore, coupling between packages has become so tight that it is only possible to compile any piece if the remainder of a system is invariant. There are two problems with this arrangement and neither is solved with repeatable builds. The first problem is that we do not have Christmas light divisibility. We cannot sub-divide a system because the coupling is too tight. In BSD systems, we have:-

# make kernel
# make world

This provides separation between kernel-space and user-space. We can build a kernel using user-space software; running the new kernel, we can then rebuild all of the user-space software. But don't ask how that process works because it doesn't follow the layered approach recommended by theory. The second problem is that this world-readable, compile-one-piece-at-a-time approach is highly vulnerable to privilege escalation. How many routes are there for malicious code to obtain global influence after one, two, three or more global re-compilations? Unknown, but there are probably very many. Do all of these paths obtain more scrutiny than OpenSSL? Definitely not. This cannot make a trustworthy system. The system is open but the dependencies are numerous. Therefore, it is not possible to inspect a system in a timely manner.

I wish to change this sloppy practice. I propose writing a C compiler in Lisp and then using the C compiler to write an operating system kernel. The Lisp interpreter itself is written in C. The current practice of writing the C compiler in C (or, more recently, writing the C++ compiler in C++) allows quines to be trivially propagated in the compiler. This can be overcome with the use of three compilers. However, for this, you will have to exclude all commercial compilers for which you do not have the source code. Likewise for trusting any third party who has access to the source of three compilers. And that's the situation. I wish you good luck finding three C or C++ compilers which can compile each other.
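
To make the shape of this mutual dependency concrete, here is a throwaway sketch of the smaller half: a fragment of a Lisp-style expression evaluator written in C. It is not the proposed interpreter, only an illustration at toy scale. It reads one s-expression built from integers and the operators +, - and *, and evaluates it recursively. A real interpreter would add symbols, conses, lambda and eval, but even this fragment shows how little C is needed before the remaining logic can live in interpreted code.

/* Toy s-expression evaluator: the "Lisp interpreter written in C" half of
 * the proposed mutual dependency, shrunk to a few dozen lines. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p;                     /* cursor into the input string */

static void skip_spaces(void)
{
    while (isspace((unsigned char)*p))
        p++;
}

static long eval_expr(void)
{
    skip_spaces();
    if (*p == '(') {                      /* compound form: (op arg1 arg2 ...) */
        p++;                              /* consume '(' */
        skip_spaces();
        char op = *p++;                   /* single-character operator */
        long acc = eval_expr();           /* first argument */
        skip_spaces();
        while (*p != ')') {               /* fold the remaining arguments */
            long v = eval_expr();
            if (op == '+') acc += v;
            else if (op == '-') acc -= v;
            else if (op == '*') acc *= v;
            skip_spaces();
        }
        p++;                              /* consume ')' */
        return acc;
    }
    char *end;                            /* atom: a decimal integer */
    long v = strtol(p, &end, 10);
    p = end;
    return v;
}

int main(void)
{
    p = "(+ 1 2 (* 3 4))";
    printf("(+ 1 2 (* 3 4)) => %ld\n", eval_expr());   /* prints 15 */
    return 0;
}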

I wish to raise the attacker's task from writing a quine to recognizing when the (compiled) Lisp interpreter is running the (interpreted) C compiler and, only at that point, modifying the C compiler's parse tree. The task of writing a quine remains possible but it is substantially complicated because each compilation phase is separated by interpreted code.

Returning to the kernel, it is possible to compile a kernel and supporting programs with very few dependencies. For example, a hypothetical POSIX login.c (a known target of attack) would depend upon the C compiler written in Lisp, system headers, source input and the drivers and utilities required to make a kernel functional. The output of each compilation will be binaries of historical size. It is hoped that each binary can be inspected manually, especially if correctness is placed ahead of speed.

PerlPowerTools and similar efforts comprehensively show that a subset of utilities may skip the compilation process and be implemented with an interpreter. In practice, more than 2/3 of utilities may be interpreted, although this proportion may be significantly reduced if launch delay or historical compatibility is an issue. (Much of the historical compatibility arises from pointless tests inside GNU build scripts and the assumed functionality thereof.)

The obvious question is why not use gcc or clang? I'll mention gcc first. Ignoring the extended mutual loop of dependencies across multiple software licences, gcc is a really good example of Greenspun's Tenth Rule ("Any sufficiently complex program contains an ad hoc implementation of Common Lisp", to which a wag added "including Common Lisp"). On a single core Raspberry Pi, each of the four stages of gcc compilation requires more than 10 hours and ideally requires more than 700MB RAM. On a homebrew mini-computer, this may require more than 4000 hours. For repeatable builds, each compilation stage would require zero bit errors over a period of six weeks. The worrying part is that gcc depends heavily upon GIMPLE, which is a DSL [Domain Specific Language] with Lisp syntax. This is used for parse tree manipulation: architecture independent optimizations followed by architecture dependent optimizations. The verbosity of GIMPLE explains why LTO [Link-Time Optimisation] offers GZip compression.

Clang, by contrast, dumps Lisp syntax in favor of C++ templates. It also trades memory for speed. With a suitable infrastructure of processor caches, it takes about half as long to compile. However, with the default compiler flags, gcc compiling clang exceeds the 31 bit writable address-space of a Raspberry Pi. On a homebrew mini-computer, it would take longer to self-host clang than gcc.

Obviously, a simpler compiler is required. Access to the source of such a compiler is also required. Where are they? Most compilers are proprietary or extensions (branded or unbranded) of gcc and clang. Even if we go back to an ancient version of gcc, we still have the notorious mutual dependency with gmake. That takes us back to the security quagmire.

It is for these reasons that I suggest a mutual dependency of compiler and interpreter. The (interpreted) compiler has similar functionality to gcc but may be written in a much more compact and expressive form. At this stage, we would be writing for correctness rather than speed. This is on the basis that slow runs on amateur hardware will be lucky to complete. It would be counter-productive to get tricksy when lower layers are in question. On this basis, the size of source code should be minimized without compromising legibility. It should be as short as possible but no shorter. If we do not have the processing power to implement an optimizing compiler, correctness and compactness become the only choices.

The next consideration is implementation conformance of compiler and interpreter. The laziest implementation of a C compiler may have very little conformance with other implementations. However, in the long-term, low conformance is a false economy. It is undesirable to have a language dialect which is incompatible with standard tools. For example, standard lint utilities catch trivial errors. However, if the language dialect has unusual constructs then it is more difficult to avoid predictable blunders. Increasingly, compilers have integrated lint functionality. However, we don't have that luxury. Regardless, it may be desirable to perform linting on untrusted computers but only perform compilation on trustworthy hardware.

It does not help that C is mostly defined by implementation rather than a formal definition. It was not always like this. Unfortunately, most of the drift occurred when gcc became almost a strict superset of proprietary Unix compilers from the 1990s. That includes the horrible compilers, often sold as an optional extra, for HP-UX, SunOS and Irix. A formal definition of C goes back to Kernighan & Ritchie's book: The C Programming Language from the 1970s. More recent definitions include ISO C 1999. A further complication is that embedded programmers write an eclectic mix of C where features prior to the 1990 standard are mixed with features after the 1999 standard. This created a feedback loop which encouraged dependence on gcc.

More recently, there are efforts to nudge C toward Algol. This is achieved by restricting the grammar. My preference is towards Algol derivatives, such as Pascal or Jovial. Both have array bound checks at run-time. This alone eliminates a common cause of critical bugs: buffer overflow. Bruce Schneier agrees. It is better to have 10% of the processing power of a trustworthy computer rather than 100% of an untrusted computer. One method to implement this is to keep assertions (such as bound checks) in production code. One method to assert safety checks is to make them part of the language specification. Hence, my inclination towards Pascal and Jovial. These choices are not arbitrary. The initial target hardware (a trustworthy micro-coded mini-computer) has a passing resemblance to the Apollo Guidance Computer. One of its successors for aerospace navigation, the obsolete MIL-STD-1750A, was typically programmed in Jovial. It remains easy to write Jovial because Algol, Pascal and Jovial are typically converted to C and then compiled as C. However, in the general case, it is difficult to convert C to an Algol derivative due to array bound checks and other differences. It would be possible to implement a byte-code interpreter in Algol which circumvents the bound checks of the native language. However, this would incur a significant speed penalty. Although C derivatives and Algol derivatives are broadly similar, C is the lowest common denominator. Regardless, it is possible to write good code in Jovial and translate it to C with safeguards intact. This may be compiled using the same process as legacy code (which does not have the same protections).
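
As a small illustration of keeping bound checks in production code, here is what a translated-to-C check might look like. This is a sketch rather than part of the proposal, and the names are invented; the point is only that the check is an ordinary run-time test, not an assert(), so it cannot be compiled out with NDEBUG.

/* Bound-checked array access that survives release builds.  buf_at() is a
 * hypothetical helper; a Jovial-to-C translator would emit something
 * similar around every subscript. */
#include <stdio.h>
#include <stdlib.h>

#define BUF_LEN 16

static unsigned char buf[BUF_LEN];

static unsigned char *buf_at(size_t i)
{
    if (i >= BUF_LEN) {                   /* hard run-time test, never elided */
        fprintf(stderr, "bound check failed: index %zu >= %d\n", i, BUF_LEN);
        abort();
    }
    return &buf[i];
}

int main(void)
{
    *buf_at(3) = 'A';                     /* fine */
    printf("%c\n", *buf_at(3));           /* prints A */
    *buf_at(99) = 'B';                    /* caught at run-time, no overflow */
    return 0;
}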

In the general case, a C compiler is sufficiently flexible to provide the back-end for C++, Objective C, Fortran (which has extensive libraries), Pascal, Jovial and other compiled languages. Indeed, the use of a common compiler allows functions written in these languages to be statically linked with relatively little difficulty. We also have the option of compiling raw C (with no safeguards), MISRA C or similar (with retrospective safeguards) or languages which always had safeguards. Unfortunately, people fail to understand these options. This may accelerate the move away from "difficult" "low-level" languages, such as C. However, rather than moving to Algol languages or languages which can be statically linked with C, programmers now skip over interpreted languages (or JIT compiled languages) which are written in C derivatives (Perl, PHP, Python, Ruby, Java derivatives, Haskell, JavaScript, Lua) and settle upon languages with multiple modes and back-ends (Rust, Swift, Dart, Meteor, Flutter) which offer mutually exclusive benefits and make solved problems into lucrative busywork.

My ultimate objection to a language which supposedly spans everything from below C to above Ruby is that this cannot be achieved with one language. A good interpreter has eval and a good compiler doesn't. These are mutually exclusive goals. This can only be fudged by giving the same name to multiple things. There are benefits for a compiler and interpreter to have the same grammar in the majority of cases. But that doesn't make a compiler and an interpreter the "same" language. If you can see past the grammar, it would be more accurate to describe C and Pascal as the "same" language.

Actually, I've comprehensively convinced myself that it would be useful to have a Lisp interpreter with a default dialect similar to C. The syntax and grammar of the compiler are fairly fixed but the syntax of the interpreter is a completely free variable. There may be cases where the overlap of grammar is a hindrance, for example, when attempting to debug eval in the interpreter. However, this is greatly outweighed by the benefits.

My Ideal Processor, Part Foo+3

Posted by cafebabe on Sunday April 22 2018, @08:32PM (#3173)
0 Comments
Hardware

This is part three of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part three covers networking.

I propose a fairly minimal trustworthy computer implementation which consists of a micro-coded mini-computer using one 8 bit ROM. It has performance which makes a Commodore 64 look racy. However, the redeeming feature is that it meshes well with a card bus system which scales from stripboard and DIP chips to PCI-X performance.

It would be extremely useful and practical if such a computer system could have a trustworthy network interface. Unfortunately, I'm not sure this is possible. Implementing a trustworthy UART with logic chips is easy if data integrity is secondary. Implementing a good, trustworthy UART is difficult. Implementing a cell network link layer is infeasible and implementing Ethernet requires VLSI and/or a dedicated processor. Unfortunately, VLSI and dedicated processors are not trustworthy.

Fortunately, it is possible to side-step this problem in a manner which is similar to trusting compilers. In the case of networking, data may be split and sent over two or more separate wired networks. This prevents untrusted network interfaces from ever obtaining sufficient information for a MITM attack. Untrusted network interfaces may maliciously drop packets or delay packets but they cannot inject information into communication. In the trivial case, this covers link layer security over a LAN. This can be extended to higher layers of communication and/or WAN communication.
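
As a sketch of the data-splitting idea, one simple construction is to send a one-time random pad over one interface and the payload XORed with that pad over the other, so that neither untrusted interface alone ever sees the plaintext. The code below only demonstrates the arithmetic of splitting and recombining; a real implementation would use a proper random source and authenticate both shares.

/* Split a payload into two shares for two physically separate links.
 * share_a is a random pad; share_b is payload XOR pad.  Either share on
 * its own is indistinguishable from noise. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void split(const unsigned char *payload, unsigned char *share_a,
                  unsigned char *share_b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        share_a[i] = (unsigned char)rand();   /* use a real CSPRNG in practice */
        share_b[i] = payload[i] ^ share_a[i];
    }
}

static void recombine(const unsigned char *share_a, const unsigned char *share_b,
                      unsigned char *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = share_a[i] ^ share_b[i];
}

int main(void)
{
    const unsigned char msg[] = "trusted payload";
    unsigned char a[sizeof msg], b[sizeof msg], out[sizeof msg];

    srand((unsigned)time(NULL));
    split(msg, a, b, sizeof msg);        /* a goes out on one NIC, b on another */
    recombine(a, b, out, sizeof msg);    /* receiving end XORs the shares */
    printf("%s\n", out);                 /* prints: trusted payload */
    return 0;
}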

How do we implement this? We borrow an idea from the counterfeit FTDI serial communication chips. Yes, we use one abuse of trust to counter another abuse of trust. The counterfeit serial chips use a more advanced scale of integration: a micro-controller simulates a bank of hardware registers. This is fairly trivial to implement if the parallel or serial bus timings are sufficiently generous. I give a worked example using commonly available hardware (and a rough software simulation of the register bank follows the list below). In the untrusted (legacy computer) domain, we may program an 84MHz Arduino Due to use its dormant and relatively undocumented 100Mb/s Ethernet interface and work as a bank of registers within a 10MHz card bus system. The network card may specify 256 × 8 bit registers in which the bottom 64 registers may be a page-banked view of one filtered Ethernet packet in a queue of multiple packets. The card bus parameters are defined such that a wait state must be asserted within 100ns and timeout occurs before 2000ns. Device registers may be simulated with perfect fidelity if the firmware always keeps a 256 byte array in a consistent state. Specifically, the 84MHz micro-controller is able to respond to line level changes and assert a wait signal within 12 cycles. Page-banking is a little more difficult. However, with the reasonable assumption that packet data is always aligned on four byte boundaries and data is moved in multiples of four, a fixed length unrolled loop allows 64 bytes to be copied into the "registers" within 40 cycles (476ns). If line level interrupts for the card bus interface have higher priority than the Ethernet interface then it is possible for an 84MHz micro-controller to work correctly on a 10MHz, asynchronous, parallel bus interface with either:-

  • A suitably generous number of bus cycles prior to timeout.
  • A suitably constrained register interface.
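
The following is the simulation sketch promised above. It is plain, host-independent C rather than Arduino firmware, and the register map (a 64 byte packet window in registers 0..63 plus a page-select register at 64) is invented purely to illustrate how the firmware keeps the visible 256 byte array consistent.

/* Simulation of a micro-controller pretending to be a bank of hardware
 * registers.  regs[] is what the card bus actually sees; packet[] is one
 * queued Ethernet frame held in the micro-controller's own memory. */
#include <stdio.h>
#include <string.h>

#define PACKET_BYTES  1536
#define WINDOW_BYTES  64
#define REG_PAGE_SEL  64                  /* hypothetical page-select register */

static unsigned char packet[PACKET_BYTES];
static unsigned char regs[256];

/* Copy the selected 64 byte page into the visible window.  On real hardware
 * this is the fixed-length unrolled copy that must finish inside the bus
 * wait-state budget. */
static void refresh_window(unsigned page)
{
    page %= PACKET_BYTES / WINDOW_BYTES;
    memcpy(regs, packet + page * WINDOW_BYTES, WINDOW_BYTES);
}

/* One bus write cycle: the host wrote 'value' to register 'reg'. */
static void bus_write(unsigned reg, unsigned char value)
{
    regs[reg & 0xff] = value;
    if ((reg & 0xff) == REG_PAGE_SEL)
        refresh_window(value);            /* page flip */
}

/* One bus read cycle: the host read register 'reg'. */
static unsigned char bus_read(unsigned reg)
{
    return regs[reg & 0xff];
}

int main(void)
{
    for (int i = 0; i < PACKET_BYTES; i++)
        packet[i] = (unsigned char)i;     /* fake frame contents */
    bus_write(REG_PAGE_SEL, 2);           /* host selects page 2 */
    printf("%u\n", bus_read(0));          /* byte 128 of the frame: prints 128 */
    return 0;
}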

In general:-

  • Any micro-controller may be intercepted before delivery and contain hidden radio links and any amount of malicious code.
  • Any micro-controller development environment may have malicious functionality.
  • Regardless, an untrusted computer may be used to program an untrusted network interface.
  • An untrusted micro-controller of 80MHz or so may operate on a card bus of 10MHz or so and maintain one or more Ethernet interfaces of any speed.
  • Multiple micro-controllers connected to separate switches should not be able to collude.
  • This requires screened cards, signal isolation along cables, separate power distribution and a split back-plane in which trusted cards are separated from untrusted cards - and each other. Untrusted cards do not belong in the same trust domain. Each untrusted card must have its own trust domain.
  • If suitable conditions are met, trustworthy computers may establish link layer security without prior trust. Current systems do not meet these conditions.
  • With suitable key management, trustworthy computers may extend trust to the application layer.
  • If trustworthy computers provide WAN routing and tunneling, it is possible to extend trusted networking over single runs of cable without distance restriction.

What is lacking in contemporary systems? Parallel buses have an unacceptable level of trust where any dubious card may perform bus mastering and/or specify a vector interrupt address. Serial buses offer some improvement but some provide downward compatibility to parallel bus systems and none prevent covert communication over common wires. The next problem is key management. It would be preferable if trustworthy computers had access to a trustworthy filing system. This requires pieces of keys to be stored across multiple magnetic disks or multiple flash storage units which, again, have to be placed on separate storage buses and electrically isolated from each other. We're not even considering the case of a trustworthy network filing system. How would you retrieve the keys to access the network without being subject to a MITM attack?

We know that almost every magnetic hard disk made over the last 10 years has sector DRM. That's the unwanted functionality which is advertised. What other functionality was included by the manufacturer? What other functionality was added by parties, such as the Equation Group, who are able to re-flash hard disks of all major manufacturers? Solid state storage is worse. As an example, it is fairly trivial to re-flash a Micro SD card to work as an SPI host rather than an SPI client. Samsung "smartphone" firmware fixes provide alarming insight into the internals of ARM micro-controller firmware used in Samsung flash storage. From this, it is fairly trivial to set Samsung flash storage into debug mode, extract firmware, decode bad block maps, extract dormant data and then re-flash the micro-controller so that it may keep aside information of interest. However, that's tame compared to information learned from my local makerspace. Some Micro SD cards are publicly declared to have a wi-fi interface. (Others may have a dormant wi-fi interface.) A wi-fi interface allows a card to work in an infinite storage mode when placed in a camera. However, in other scenarios, malicious firmware may snoop keywords or copy documents into a hidden pool and then crack a wi-fi connection so that documents can be uploaded to interested parties. How do you know that you're not a victim already? If you use strong encryption for every file on every storage card then you are only safe from data egress. A "How did that get there?" scenario involving wi-fi and horse-porn becomes increasingly likely, and the user would not even be feigning ignorance.

Magnetic storage security is screwed. Flash storage security is screwed. What about booting and key management on ROM? Well, that's screwed too. A 250ns "ROM" implemented as a 100MHz micro-controller has 25 clock cycles to respond to any signal level changes. Nowadays, this can be achieved without writing assembly. However, the tight expected response time provides less opportunity for malicious activity. For the truly paranoid, it is possible to stripe data across ROMs (in the range of bits or bytes) and implement parity checks. Although this is far from ideal, it provides options which are not available with magnetic storage or flash storage.
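
As a rough sketch of that striping idea, the layout below puts low nibbles in one ROM image, high nibbles in a second and per-byte even parity in a third, so corruption of any single image is detectable. This layout is only one of many possible arrangements, chosen here for brevity.

/* Stripe boot data across two ROM images plus a parity image, then detect
 * a corrupted bit on read-back. */
#include <stdio.h>

static unsigned char rom_a[256];          /* low nibbles */
static unsigned char rom_b[256];          /* high nibbles */
static unsigned char rom_p[256];          /* parity of each reconstructed byte */

static unsigned parity8(unsigned char v)  /* 1 if v has an odd number of set bits */
{
    unsigned p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (v >> i) & 1;
    return p;
}

static void stripe(const unsigned char *src, int n)
{
    for (int i = 0; i < n; i++) {
        rom_a[i] = src[i] & 0x0f;
        rom_b[i] = (src[i] >> 4) & 0x0f;
        rom_p[i] = (unsigned char)parity8(src[i]);
    }
}

static int fetch(int i)                   /* reconstructed byte, or -1 on error */
{
    unsigned char v = (unsigned char)((rom_b[i] << 4) | rom_a[i]);
    return (parity8(v) == rom_p[i]) ? v : -1;
}

int main(void)
{
    const unsigned char boot[] = { 0xde, 0xad, 0xbe, 0xef };
    stripe(boot, 4);
    rom_a[2] ^= 0x01;                     /* simulate a flipped bit in ROM A */
    for (int i = 0; i < 4; i++)
        printf("%d: %d\n", i, fetch(i));  /* index 2 reports -1 */
    return 0;
}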

My Ideal Processor, Part Foo+2

Posted by cafebabe on Sunday April 22 2018, @08:28PM (#3172)
0 Comments
Hardware

This is part two of a four part proposal for a trustworthy computer consisting of ALU, registers, memory interface, network cards and compiler. Part two covers the memory interface.

The previous section describes the outline of a discrete hardware implementation of a processor where the virtual machine is 32 bits or more and the native machine has:-

  • 16 bit program counter for 8 bit micro-code.
  • 8 bit internal accumulator.
  • 128 byte internal register bus where values can be read into the accumulator.
  • 128 byte internal register bus where values can be written from the accumulator.
  • Locations on the internal buses where it is possible to read or write to the ALU, virtual processor registers and main memory.

Understandably, this requires a large number of clock cycles to do anything and the clock rate is likely to be less than 1MHz. However, this design works astoundingly well when used with a multiplexed card bus system. This covers multiple use cases:-

  • Micro-controller uses card bus system to interface with one or more other micro-controllers on the same circuit board.
  • FPGA uses card bus system as main memory interface.
  • Micro-coded mini-computer uses card bus system as main memory.
  • Legacy host computer accesses card bus system via parallel port. Access may be directed from native applications or software running in a virtual machine.

Where:-

  • One or two peripherals do not use an edge connector.
  • A desktop card frame provides 16 slots or less.
  • A card frame provides 40 slots or so in a 19 inch rack. Using a trick popularized by the Sun Microsystems E10000 Server, it is possible to have an equal number of slots on the reverse of a back-plane without increasing the wave-length of the back-plane.

Where a card may be:-

  • Homebrew cards made from strip-board and DIP chips.
  • Etched, single sided circuit board.
  • Double sided circuit board.
  • Multi-layer circuit board.

Where each card may:-

  • Operate at 1MHz or less.
  • Have the throughput of PCI-X.
  • Also provide a PCI, PCI-X or PCI Express connector.

It may also be possible to implement generic bridges to and from PCI variants. However, the design is most likely to be used as an 8 bit interface running at less than 1MHz.

The card bus system was originally envisioned to be retro-fitted to 8 bit and 16 bit computers but I quickly realised that portable software would be required to cover multiple use cases. This led to the development of the virtual machine which complements this card bus system. Therefore, the card bus may be driven via library code running on a computer which has its own address-space. Or it may be the only (visible) address-space of a hard-wired or trustworthy micro-coded mini-computer. During my absence, I began reading about legacy bus implementations from VAX, Apple 2, PCI and many others. This has led to making the design significantly more lax about power and signal tolerances.

Firstly, borrowing from the Apple 2 and elsewhere, the supply voltage should be unregulated. It is nominally 5 Volts but may be anything over the automotive range. This may exceed 15 Volts and may include transient spikes above 60 Volts. It is expected that each card performs buck regulation to 3.3V or similar but that all common signals are at 5V only.

Secondly, one open drain interrupt line may be common to all cards. Alternatively, the interrupt of each card may be fed into a priority encoder and/or switch fabric for delivery to multiple processor cores. If the host is a micro-controller then interrupts (which may be detected as edge triggered or level triggered) may or may not produce unique interrupt vectors. Similarly, for a trustworthy micro-coded mini-computer, unique interrupts may or may not be available.

Thirdly, signal lines are intended to allow a fairly minimal implementation of 3 × 74244 8 bit unidirectional buffer chips, 3 × 74245 8 bit bi-directional buffer chips, 2 × 74138 de-multiplexer chips, two or more latches and XOR parity. Each card consumes a maximum of one TTL load on each signal line.

Fourthly, there are nominally 256 cards in a 64 bit address-space. Each card nominally implements a 56 bit address-space. In the baseline specification, cards do not perform bus mastering or dynamic address allocation. Borrowing from PCI, cards may perform multiple, unrelated functions and therefore the address-space on each card is divided into four equal sections. Each section is divided into four equal segments: ROM, I/O Segment, Manufacturer ID and Device ID. These segments can be decoded uniquely with 1 × 74138. In the absence of ROM, Manufacturer ID and Device ID may be hard-wired via buffer chips. A ROM format will be devised in which card capabilities will be defined. These will include wide data-paths, bus mastering, timings, voltage ranges, current draw, larger numerical ranges for Manufacturer ID and Device ID, byte-code device drivers and native device driver implementations.
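
Taken literally, that layout decodes as 8 bits of card select, 2 bits of section, 2 bits of segment and a 52 bit offset within the segment. The field positions in the sketch below are my reading of the paragraph above rather than a fixed specification.

/* Decode a 64 bit card bus address into card, section, segment and offset. */
#include <stdint.h>
#include <stdio.h>

struct decoded {
    unsigned card;      /* 0..255: one of the nominal 256 cards */
    unsigned section;   /* 0..3: one of four equal sections per card */
    unsigned segment;   /* 0 ROM, 1 I/O, 2 Manufacturer ID, 3 Device ID */
    uint64_t offset;    /* 52 bit offset within the segment */
};

static struct decoded decode(uint64_t addr)
{
    struct decoded d;
    d.card    = (unsigned)(addr >> 56) & 0xff;
    d.section = (unsigned)(addr >> 54) & 0x3;
    d.segment = (unsigned)(addr >> 52) & 0x3;
    d.offset  = addr & ((UINT64_C(1) << 52) - 1);
    return d;
}

int main(void)
{
    struct decoded d = decode(UINT64_C(0x0310000000000400));
    printf("card %u section %u segment %u offset 0x%llx\n",
           d.card, d.section, d.segment, (unsigned long long)d.offset);
    return 0;                             /* card 3 section 0 segment 1 offset 0x400 */
}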

Fifthly, borrowing from QBus, if a card cannot accept or respond to a request within 100ns, it must assert a wait signal. A trivial host implementation may wait indefinitely. However, a host is expected to receive a bus error within 2000ns.

Sixthly, cards nominally have a 28 pin or so, single sided edge connector at 0.1 inch pitch where signals are: ground, power, ground, control signals, ground, interrupt, wait, ground, least significant four bits, ground, four bit even parity, ground, next four significant bits, ground, four bit even parity, ground, power, ground. A single height card is 8 inch. A single length card is 6 inch. Single height, quad length cards should fit within a 5U frame (8.75 inch) and should therefore be 8 inch high and 24 inch deep. Double height cards should fit within a 10U (17.5 inch) frame. It must be possible to fit three, half height, half length cards plus back-plane into 2 inch × 4 inch × 4 inch.

The memory of the card bus system is arranged so that a card in slot zero may be a mini-computer boot ROM. Furthermore, the bit patterns on the boot ROM may be compatible with multiple processor architectures. For example, for Z80, the first two memory locations are a vector for the start address. For x86, the first four memory locations are a vector for the start address in 8086 segment format. For 680x0, the first four memory locations are a 32 bit start address in big endian order. Therefore, it is possible to choose representations which are distinct for all three architectures. For the virtual processor implemented by the mini-computer, address zero is the execution address. Fortunately, representations can be chosen for vectors which are also harmless, unprivileged instructions for the virtual processor. Specifically, it is possible to choose Z80 vectors, x86 vectors and 680x0 vectors which all lead to distinct addresses while also being a NOP sledge for the virtual processor. In the general case, architectures may be similar to the point that differences during boot are immaterial. More commonly, architectures differ to the extent that a dummy MOV in one instruction set is a jump in another instruction set. This allows bit patterns to be chosen such that execution for each architecture branches in one or two steps. The major consideration is that any card configuration held in ROM should be offset by 1KB or so. This avoids needless, though surmountable, complications of card configuration clashing with boot vectors.

In practice, connecting the multiplexed card bus as the main memory of Z80, x86, 680x0 or similar would be troublesome. Connecting a virtual ARM processor in an FPGA would be easier. In general, it is good to keep options open when a broad solution is relatively easy to achieve. More specifically, choose any two or more architectures to provide cross-architecture support. For example: MIPS64, ARMv8, Xtensa, OpenRISC, RISC-V and my virtual processor.

Consideration of card access patterns is useful. In the trivial case where a card has not indicated a wider data-path, a 64 bit address would be specified in eight pieces. Each card latches the chunks of interest. For I/O expanders without ROM, this may be the bottom 8 bits of the address-space (or less). Latches may be set in ascending order or descending order. However, descending order has the advantage of setting card select on the first cycle. It is also the most logical order when sequential addresses are accessed. This works very much like DRAM static column paging and is intended to facilitate such usage when used in conjunction with low density DRAM. This is not applicable when using high density DRAM but the option remains. Indeed, given typical bus bandwidth, use of high density DRAM may be impractical.
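
To show what the descending latch order buys for sequential access, here is a small host-side sketch. The assumption that only changed address bytes need to be re-latched is my reading of the static column analogy, and latch_cycle()/data_cycle() are stand-ins for the real bus signalling.

/* Issue 8 bit accesses on the nine-way multiplexed bus, latching address
 * bytes from most significant (card select) down to least significant and
 * skipping bytes that have not changed since the previous access. */
#include <stdint.h>
#include <stdio.h>

static uint64_t last_addr;
static int first_access = 1;

static void latch_cycle(int byte_index, unsigned char value)
{
    printf("latch byte %d = %02x\n", byte_index, value);
}

static void data_cycle(void)
{
    printf("data cycle\n");
}

static void access(uint64_t addr)
{
    for (int i = 7; i >= 0; i--) {
        unsigned char now  = (unsigned char)(addr >> (8 * i));
        unsigned char prev = (unsigned char)(last_addr >> (8 * i));
        if (first_access || now != prev)
            latch_cycle(i, now);
    }
    data_cycle();
    last_addr = addr;
    first_access = 0;
}

int main(void)
{
    access(UINT64_C(0x0300000000001000));   /* random access: 8 latches + data */
    access(UINT64_C(0x0300000000001001));   /* sequential: 1 latch + data */
    return 0;
}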

Borrowing from SCSI and Micro SD, host and card may be capable of wider transfers. For a trustworthy micro-coded mini-computer implementation, this requires moving the top 8 bits of a candidate address to an internal register then jumping to the micro-coded bus mastering routine for each case. In the trivial case, the extra check slows access to main memory. However, where a wider bus is common, latch operations and transfer operations may be reduced. Although the trivial case of a nine-way multiplexed bus seems slow, sequential 8 bit read/write operations approach 50% of 8 bit bus cycle throughput. In the frequent case of sequential, variable length instruction reads, the cost of retrieving longer instructions is considerably amortized. For wider buses, sequential 16 bit read/write operations closely approach 100% of 8 bit bus cycle throughput, 32 bit read/write operations are effectively 200% of 8 bit bus cycle throughput and 64 bit read/write operations are exactly 400% of 8 bit bus cycle throughput. Larger bus widths also perform better on random access. Specifically, an 8 bit bus obtains 11% throughput (eight address cycles plus one data cycle gives one byte per nine cycles), a 16 bit bus obtains 20% throughput, a 32 bit bus obtains 33% throughput and a 64 bit bus obtains 50% throughput. However, for trivial cases, figures can also be considerably improved if partial address decode is used and bus cycles are correspondingly omitted. Given the bandwidth of the bus, this may be a practical default.

Unfortunately, the next section demonstrates that the desirable and trivial case of a passive back-plane is not sufficiently robust against malicious parties.