Russian opposition leader Alexey Navalny in coma after suspected poisoning: spokeswoman
Russian opposition leader and outspoken Kremlin critic Alexey Navalny is in a coma after falling ill from suspected poisoning, his spokeswoman said Thursday.
Navalny, 44, started feeling unwell while on a return flight to Moscow from the Siberian city of Tomsk, his spokeswoman, Kira Yarmysh, said on Twitter. The plane later made an urgent landing in Omsk, she added.
Loud groaning can be heard in video footage apparently filmed on the flight taken by Navalny, which was shared on the Baza Telegram channel. More video apparently filmed through the airplane window shows an immobile man being taken by wheeled stretcher to a waiting ambulance.
He only drank black tea in an airport cafe before takeoff, Yarmysh told Russian radio station Echo of Moscow. "We assume that Alexey was poisoned with something mixed into the tea. It was the only thing that he drank in the morning. Doctors say the toxin was absorbed faster through the hot liquid," Yarmysh tweeted.
Alexei Navalny: 'Poisoned' Russian opposition leader in a coma
The Kremlin said that it wished Mr Navalny a "speedy recovery".
[...] Ms Yarmysh said later that Mr Navalny was on a ventilator and in a coma, and that the hospital was now full of police officers. All of his belongings were being confiscated, she added.
She also said that doctors were initially ready to share information, but later claimed the toxicology tests had been delayed and were "clearly playing for time, and not saying what they know". A diagnosis would come "towards evening", she was told.
Both Mr Navalny's wife, Yulia Navalnaya, and doctor, Anastasia Vasilyeva, have arrived at the hospital. Mrs Navalnaya was initially denied access to her husband because authorities said the patient had not agreed to the visit, Ms Yarmysh said, although she was later allowed on to the ward.
Dr Vasilyeva said they were seeking to transfer the opposition leader to a specialist poison control centre in Europe, but hospital doctors were refusing to provide records of his condition.
Alexei Navalny: Plane to bring 'poisoned' Russian critic to Germany
A German peace foundation is hoping to send an air ambulance to bring Russian opposition figure Alexei Navalny to Berlin for treatment following his suspected poisoning.
SpaceX installs orbital Starship heat shield prototype with robots
As with all SpaceX programs, the company began Starship heat shield installation development as soon as possible, installing a handful of tiles (presumably early-stage prototypes) on Starhopper as far back as H1 2019. This continued with small hexagonal tile installation tests on Starships SN1, SN3, SN4, SN5, and SN6 throughout 2020. While those coupon tests obviously didn’t involve orbital-class reentry heating or buffeting, they were still useful to characterize the mechanical behavior of heat shield tiles under the stress of cryogenic propellant loading, Raptor static fires, and hop tests.
In 2019, SpaceX even tested a few ceramic Starship heat shield tiles on an orbital Cargo Dragon mission for NASA. The fact that no more orbital Cargo or Crew Dragon tests were acknowledged seems to suggest that the demonstration was a success, proving that the tiles can stand up to the stresses of reentry from low Earth orbit (LEO).
Hopefully, this is just to speed up construction and not a sign that the tiles need to be removed and inspected after every launch.
‘Top Cop’ Kamala Harris’s Record of Policing the Police
Indeed, an examination of that record shows how Ms. Harris was far more reticent in another time of ferment a half-decade ago.
Since becoming California’s attorney general in 2011, she had largely avoided intervening in cases involving killings by the police. Protesters in Oakland distributed fliers saying: “Tell California Attorney General Kamala Harris to prosecute killer cops! It’s her job!”
Then, amid the national outrage stoked by the 2014 killing of Michael Brown in Ferguson, Mo., came pleas for her to investigate a series of police shootings in San Francisco, where she had previously been district attorney. She did not step in. Except in extraordinary circumstances, she said, it was not her job.
Still, her approach was subtly shifting. During the inaugural address for her second term as attorney general, Ms. Harris said the nation’s police forces faced a “crisis of confidence.” And by the end of her tenure in 2016, she had proposed a modest expansion of her office’s powers to investigate police misconduct, begun reviews of two municipal police departments and backed a Justice Department investigation in San Francisco.
Critics saw her taking baby steps when bold reform was needed — a microcosm of a career in which she developed a reputation for taking cautious, incremental action on criminal justice and, more often than not, yielding to the status quo.
[...] In her 2009 book, “Smart on Crime,” she wrote that “if we take a show of hands of those who would like to see more police officers on the street, mine would shoot up,” adding that “virtually all law-abiding citizens feel safer when they see officers walking a beat.”
Earlier this summer, in the wake of the police killing of George Floyd in Minneapolis, she told The New York Times that “it is status-quo thinking to believe that putting more police on the streets creates more safety. That’s wrong. It’s just wrong.”
How Kamala Harris Fought to Keep Nonviolent Prisoners Locked Up
Biden’s notes: ‘Do not hold grudges’ against Kamala Harris
Kamala Harris is clearly the official candidate of white guilt, but her pick is not going down well with at least some BLM supporters. The "cop" label isn't in vogue right now. It probably doesn't matter, as the election will likely be a referendum on Trump's handling of the coronavirus and the economy, possibly with an October surprise thrown into the mix.
Previously: Kamala Harris Endorses White Segregationist
Onyx Boox Poke2 Color eReader Launched for $299
Manga and comics fans, rejoice! After years of black & white eReaders, the first commercial color eReaders are coming to market, starting with the Onyx Boox Poke2 Color eReader sold for $299 (but sadly sold out at the time of writing).
The eReader comes with a 6-inch, 1448 x 1072 E-Ink display that supports up to 4096 colors, and runs Android 9.0 on an octa-core processor coupled with 2GB RAM and 32GB storage.
I think I would rather have an OLED/microLED tablet.
I've been working on a 64 bit extension to the 6502 processor architecture. The purpose is to implement a secure computer which also has a hope of working after a post-industrial collapse.
Along the way, I have found a practical use for 8 bit floating point numbers. Floating point representations were historically used for scientific calculations. The two components of a floating point number - the exponent and mantissa - work in a manner similar to logarithms, slide rules and the scientific representation of numbers. For example, 1.32×10^4 = 13,200. Why not just write the latter? Scientific notation works over a *very* large scale and is therefore useful for cosmology, biology and nanofabrication. For computing, floating point may use binary in preference to decimal. Also, it is not typical to store both the exponent and mantissa within 8 bits.
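For concreteness, here is a minimal C sketch of that exponent/mantissa split using the standard frexp() from <math.h>; the value 13,200 is just the example above, and the base-2 split shown is what a binary float actually stores rather than the base-10 version:-

#include <math.h>
#include <stdio.h>

/* Split a value into a mantissa and a base-2 exponent, much like writing
 * 13,200 as 1.32 x 10^4, except in binary: value = m * 2^e, 0.5 <= m < 1. */
int main(void)
{
    int e;
    double m = frexp(13200.0, &e);          /* standard C, from <math.h> */
    printf("13200 = %f * 2^%d\n", m, e);    /* prints 0.805664 * 2^14    */
    return 0;
}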
8 bit computers were known for parsimony and, in particular, for a complete absence of an FPU [Floating Point Unit]. This is also true for the numerous extensions of 6502 and Z80. Extending such systems to 64 bits falls within a particularly trivial case where data, address, SIMD and float may all fit into a small number of general registers. This reduces to two cases: SIMD integers and SIMD floats. However, there is an asymmetry in the system which has a semi-serious use. Integers may be 8 bit, 16 bit, 32 bit or 64 bit. If the data is smaller than the maximum size, there is little penalty for hardware to process multiple pieces of data in parallel, possibly using a different configuration of the same hardware. For example, it is possible to process 2-4 pieces of 16 bit integer data in parallel.
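As a rough illustration of packing several small integers into one wide register, here is a minimal C sketch of a SWAR ("SIMD within a register") addition of four 16 bit lanes in a 64 bit word; this is a generic software technique and is not tied to the 6502 extension described here:-

#include <stdint.h>
#include <stdio.h>

/* SWAR sketch: add four 16 bit lanes packed in one 64 bit word without
 * letting carries leak between lanes. */
static uint64_t add16x4(uint64_t a, uint64_t b)
{
    const uint64_t LOW15 = 0x7FFF7FFF7FFF7FFFULL;  /* low 15 bits of each lane  */
    const uint64_t MSB   = 0x8000800080008000ULL;  /* top bit of each lane      */
    uint64_t sum = (a & LOW15) + (b & LOW15);      /* carries stay inside lanes */
    return sum ^ ((a ^ b) & MSB);                  /* add the top bits mod 2    */
}

int main(void)
{
    uint64_t a = 0x0001000200030004ULL;
    uint64_t b = 0x0010002000300040ULL;
    printf("%016llx\n", (unsigned long long)add16x4(a, b));  /* 0011002200330044 */
    return 0;
}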
However, there are commonly fewer options for floating point SIMD. In part, this is due to mismatched historical sizes which include 16 bit, 32 bit, 40 bit, 45 bit, 48 bit, 64 bit, 80 bit, 96 bit, 128 bit, 192 bit and 256 bit. After 10 or more drafts of the IEEE754 floating point standard, this settled down to powers of two; commonly 32 bit or 64 bit. However, multiples and submultiples of 32 bit are becoming increasingly common. Historically, FPUs and GPUs have commonly used single precision 32 bit, maybe double precision 64 bit and maybe a more detailed internal representation to minimize error between steps of a calculation. However, half precision 16 bit is becoming common. This is sufficient for some graphical applications. It also reduces energy and computation when working with neural networks. Unfortunately, some manufacturers differentiate and market segment their "gaming" GPUs and their "datacenter" GPUs by, for example, not allowing a single step conversion from 64 bit double precision to 16 bit half precision. This can be performed via the intermediate 32 bit representation but it requires more energy and more time using suboptimal code.
Much of the detail of SIMD, FPU and GPU is obscured by historical detail. If you have 56 minutes spare, I highly recommend a lecture by Danny Hillis which explains the multiple iterations of the Connection Machine. (Not a trivial matter because it attracted the mercurial attention of Richard Feynman.) In addition to explaining how to make and test a highly reliable cluster with thousands of nodes, it explains how to start with 1 bit processors, maintain downward compatibility while introducing 32 bit single precision floating point support, how to interface high bandwidth storage, how to adapt and migrate to generic hardware and how to introduce concurrent access. Towards the end, it resembles a GPU. CUDA, possibly the most successful parallel version of the C programming language, continues the trend to the present. In particular, the use of bitmasks to gang 32 "threads" into a "warp" is consistent with the second generation of the Connection Machine while also explaining why GPU support for 64 bit double precision has been relatively sparse. Many of CUDA's remaining limitations are due to downward compatibility of a PCI GPU using 32 bit addressing.
Anyhow, I'm taking a processor architecture with no hardware support for floating point and skipping everything prior to the fifth generation of the Connection Machine, Intel 80487, Intel AVX and ARM Neon. This is a blessing and a curse. In part, it is a blessing because it alleviates any requirement for separate registers or unusual data sizes. However, whether or not 16 bit half precision float is supported, it should be blindingly obvious that there is an asymmetry between 8 bit, 16 bit, 32 bit and 64 bit integers versus 32 bit and 64 bit floats. In particular, there is no standard for 8 bit floats.
Initially, I rejected 8 bit floating point representation as an absurd reduction. Perhaps I could find some other use for the unused instructions. However, there is some use for this limited precision. In the preferred implementation, a 5 bit exponent and 3 bit mantissa approximate positive integers up to about 35 bits with very coarse accuracy. It is possible to adjust the scale, reduce the error or include negative numbers. However, a four bit exponent (or less) is of particularly marginal utility when 16 bit float is more widely supported. In all cases, the use of greater precision may be preferred. If you remain committed to 8 bit floats, it is possible to use 2^8, 2^16 or 2^24 bytes to hold a table with one, two or three inputs. An example would be the cube of the input or the sum of squares. Historically, this would have been a huge overhead. However, if you want a 64KB or 16MB table, it is because you have a vastly larger pool of data to process.
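A minimal C sketch of a decoder for one plausible layout (5 bit exponent, 3 bit mantissa with an implicit leading bit, unsigned, no reserved codes) follows; the exact bit layout is my assumption, chosen to be consistent with the palette generator later in this article and with the roughly 35 bit range mentioned above:-

#include <stdint.h>
#include <stdio.h>

/* Decode an 8 bit "TinyFloat": 5 bit exponent, 3 bit mantissa with an
 * implicit leading bit, unsigned, no reserved codes.  The layout is an
 * assumption; the maximum value is 15 << 31, roughly a 35 bit integer. */
static uint64_t tinyfloat_decode(uint8_t f)
{
    unsigned exponent = f >> 3;               /* top 5 bits             */
    unsigned mantissa = (f & 0x07u) | 0x08u;  /* 1.mmm, scaled to 8..15 */
    return (uint64_t)mantissa << exponent;
}

int main(void)
{
    /* Codes 0..39 reproduce the 8, 9, ..., 240 sequence of the GIMP
     * palette generated at the end of this article. */
    for (unsigned f = 0; f < 40; f++)
        printf("%u ", (unsigned)tinyfloat_decode((uint8_t)f));
    printf("\n");
    return 0;
}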
It may be desirable to reserve one or more values to represent values such as overflow or infinity. I could get fancy and explain this with ⊤ (top), ⊥ (bottom) and more obtuse notation. However, it is best explained with a semi-numerate example from Terry Pratchett. Take a bridge troll who understands "one", "two" and "many". This gives us:-
Repeat for 8 bit, 16 bit, 32 bit or 64 bit floating point values but using a much longer sequence of finite values before we hit the end.
I'll finish with a practical example. Specifically, it is possible to convert an image to indexed color. This is typically available in programs such as Adobe Photoshop and GIMP. The palette may be chosen dynamically based upon the content of the image, taken from a previous reduction, or from a pre-defined palette. Unfortunately, GIMP's algorithm is limited to a maximum of 256 palette entries. (I presume that Adobe Photoshop has a similar format and limitations but I don't have an example handy.) The given example is monochrome to fit within the 256 limit. Regardless, it should give an empirical feel for 8 bit TinyFloat for the representation of images, although without the full dynamic range which may exceed 10,000,000,000:1. Optionally, run the following program:-
perl -e 'print "GIMP Palette\nName: TinyFloat\n#\n";for($e=0;$e<5;$e++){for($m=8;$m<16;$m++){$v=$m<<$e;printf("%3i %3i %3i\n",$v,$v,$v)}}'
to obtain:-
GIMP Palette
Name: TinyFloat
#
8 8 8
9 9 9
10 10 10
11 11 11
12 12 12
13 13 13
14 14 14
15 15 15
16 16 16
18 18 18
20 20 20
22 22 22
24 24 24
26 26 26
28 28 28
30 30 30
32 32 32
36 36 36
40 40 40
44 44 44
48 48 48
52 52 52
56 56 56
60 60 60
64 64 64
72 72 72
80 80 80
88 88 88
96 96 96
104 104 104
112 112 112
120 120 120
128 128 128
144 144 144
160 160 160
176 176 176
192 192 192
208 208 208
224 224 224
240 240 240
Install as /usr/share/gimp/2.0/palettes/TinyFloat.gpl or similar. Have fun converting arbitrary images.
Raspberry Pi Release Japanese Keyboard Variant
The Japanese keyboard is the latest layout available. Last month we saw the release of Swedish, Portuguese, Danish and Norwegian layouts of the official keyboard.
WHEEL REINVENTED:
Simon Martin, Senior Principal Engineer at Raspberry Pi Trading, explains some of the challenges they faced: “We ended up reverse-engineering generic Japanese keyboards to see how they work, and mapping the keycodes to key matrix locations. We are fortunate that we have a very patient keyboard IC vendor, called Holtek, which produces the custom firmware for the controller.”
SpaceX Starship rocket’s flight debut set for Monday
For the first time ever, a full-scale SpaceX Starship prototype could be less than a day away from an inaugural flight test in Boca Chica, Texas.
Expected to target the same 150m (~500 ft) maximum altitude as Starhopper’s final flight, the hop will see the full-scale tank section of Starship SN5 attempt to follow in the footsteps of its odd predecessor. Starhopper was essentially a back-of-the-envelope proof of concept, demonstrating that a large rocket could technically be built out of common steel with facilities so spartan and basic that it defied belief.
[...] SpaceX initially wanted to turn Starship SN5 around for its hop debut on Sunday, August 2nd – barely two days [after the static fire test]. Yesterday’s window came and went, though, and SpaceX ultimately pushed its hop test plans back by 24 hours and added a new backup window on August 4th.
Launch window is from 8am to 8pm CDT (13:00-01:00 UTC).
In 2019, there was a flurry of interest in a post-apocalyptic operating system called Collapse OS. Since then, the primary author has been overwhelmed, especially after a pandemic and global disruption to supply chains. Even in the absence of peak oil, resource collapse and some of the more dystopian conspiracy theories about population control and reduction, we have fragile systems and a growing mountain of technical debt. In particular, computer security is worsening. This is a problem when computers are increasingly critical and inter-connected.
A common problem arises when a company founder (or a random power user) writes a poor program out of ignorance or convenience. This software becomes central to the organization and is often supported by ancillary programs. Documentation is poor and the original authors move on. Whatever the cause, bad software is bad because it accretes and becomes an archaeology expedition. The situation is worsened by the relatively short career of a competent programmer. It is often 15 years or less. Some people get stale and/or promoted. Some people quit the industry. Some people turn to hard drugs. Others turn to fantasy, the hypothetical or neo-retro-computing.
There are definitely things that we can learn from the past. While I would be amenable and adaptable to a lifestyle modeled wholesale on somewhere around the 1950s-1980s, I'm probably in the minority. Considering only piecemeal changes, the parsimony of 8 bit computing is a welcome contrast to the technical debt of 2020.
I have an interest in micro-processor design and this has led me to research the precursors to the golden age of 8 bit home computing (1980-1984). Of particular interest is improvement to instruction density because this pays dividends as systems scale. In particular, it magnifies system effectiveness as caching (and cache tiers) grow.
I've enjoyed tracing the path of various architectures and seeing successive iterations of design. For example, the path from DEC's 12 bit PDP-8 to Intersil's 12 bit 6100 to PIC (with 12 bit, 14 bit, 16 bit and 18 bit instructions) to AVR. Likewise, from DataPoint 2200 to Intel 8080 to Zilog's products and x86. The shrink from mini-computer to micro-computer was more often inspiration than binary compatibility. Even when components were shared, incompatibility abounded. For example, it was common practice to dual boot a mini-computer. During the day, it would run an interactive operating system. Overnight, it would run a batch processing system. Commodore, which bought MOS Technology and had the rights to the 6502 micro-processor, couldn't make two computer designs compatible with each other. And it was common for people using computers from the same vendor to use incompatible floppy disk formats.
I'm astounded by the scattershot development of 1970s mini-computers, early micro-processors and the systems built around them. During the 1970s and 1980s, it was common for teams to be assembled - to design a mini-computer, micro-processor or micro-computer - and then disband at the end of the project. As noted in Tracy Kidder's book, The Soul Of A New Machine, a product could fail even if there was sufficient staff to cover attrition. However, downward compatibility wasn't a huge issue. Even in the 1980s, computers were mostly sold using analog processes. That included fully optical photo-copying and tallying sales manually, on paper. Since antiquity, calculations have been assisted with an abacus. Romans advanced the technology with the nine digit decimal pocket abacus - which had approximately the same form factor and utility as a smartphone running a calculator application - and was probably used in a similar manner to split restaurant bills The Roman Way. In the 17th century, this was partially replaced with logarithms and slide-rules. Although mechanical calculators pre-date Blaise Pascal and Charles Babbage (an early victim of the Second System Effect), the ability to make reliable, commercial components began in the 20th century. Since the 1930s, there has been an increasingly rapid diffusion of mechanical and electronic systems. By the 1960s, large companies could afford a computer. By the 1980s, accountants could afford a computer. By the 1990s, relatively poor people could afford a computer for entertainment. By the 2010s, it was possible to shoot and edit digital films using pocket devices. Only relatively recently has it become normal to sell computers using computers - and it is much more recent to retail computers over the Internet.
In the 1970s, downward compatibility wasn't a concern and clean-sheet designs were the norm. A napkin sketch and 500 lines of assembly could save five salaries. Every doubling of transistor count added scope for similar projects. By 1977, there was the very real fear that robots would take all of our jobs and that there would be rioting in the streets. This was perhaps premature by 40 years. However, when Fred Brooks wrote The Mythical Man-Month, the trope of the boy genius bedroom programmer had already been established and debunked. By 1977, autonomous tractors had already been prototyped. By 1980, people like Clive Sinclair were already planning fully electric, autonomous highway vehicles guided by nothing more than an 8 bit Z80. (Perhaps this is possible but requires an exaflop of processing power to refine the algorithm?)
Mini-computer Operating Systems were in a huge state of flux in the 1970s. Unfortunately, technical merits were unresolved during the transition to micro-computers. This has created a huge amount of turbulence over more than 40 years. Even in 2001, it was not obvious that more than one billion people would use a pocket Unix system as a primary computer. Unfortunately, GUI in 2020 is as varied as OS in 1980 - and I doubt that we'll have anything sane and consistent before instruction sets fragment.
Every decade or so, process shrink spawns another computer market. However, the jump from transistor mini-computers to silicon chips was different because it slowly ate everything that came before. Initially it was the workstations and the departmental servers. Nowadays, the average mainframe is an Intel Xeon with all of the error checking features enabled. And, until recently, supercomputing had devolved into an international pissing match of who could assemble and run the largest x86 Linux cluster. Unfortunately, that has only been disrupted by another diffusion of industrialization. In the 1980s, the firehose of incompatible systems from the US overspilled and mixed with the firehose of incompatible systems from the UK and then flooded into other countries. While the saga of Microsoft and Apple in the US is relatively well known, a similar situation occurred in the UK with Acorn and Sinclair. Meanwhile, exports from the UK led to curious products, such as a ZX Spectrum with BASIC keywords translated into Spanish. Or a Yugoslavian design inspired by the ZX Spectrum but with 1/4 of the ROM. (The optional, second 4KB EPROM provided advanced functions.) Indeed, anything which strayed near the Iron Curtain was grabbed, cloned or thoroughly inspected for ideas. This has led to an unusual number of Russian and Chinese designs which are based on Californian RISC designs. If the self-reported figures from China are believed, then the most powerful computer in the world consisted of 10 million cores, loosely inspired by DEC Alpha.
1980s computing was heavily characterized by MOS Technology's 6502 and Zilog's Z80 which were directly copied from Motorola's 6800 and Intel's 8080. Zilog's design was a superset of an obsolete but familiar design. However, the 6502 architecture was intended to be a cheaper, nastier, almost pin compatible design which undercut Motorola and stole goodwill. Regardless, 6502 is a fine example of parsimony. Instructions were removed to the extent that it is the leading example of a processor architecture which does not define 2/3 of the opcodes in the first release. The intention was to make each chip smaller and therefore more numerous at the same scale as Motorola's 6800. It was also 1/6 of the price because Motorola was handling technical pre-sales for an almost identical product. There is also the matter that the 6502 designers had abandoned their work on the 6800 before defecting to MOS Technology. This considerably hobbled Motorola. Well, Motorola sued and won on a technicality. The financial impact allowed acquisition by Commodore where the design was milked until it was obsolete. And then it was milked further by price gouging, economies of scale and vertical integration. Zilog actively invested in design improvement with Z180, Z280 and Z380 extensions and mutually incompatible Z800, Z8000 and Z80000. However, 6502 and Z80 were largely cheap, ersatz, one-hit-wonders before customers migrated to Motorola and Intel. During this period, it was Motorola - not Intel - which was known for the craziest heatsinks and chip packages. Likewise, it was Microsoft - not Apple - which had the most precarious finances. The rôles change but the actors don't.
In 1982, the UK had an estimated 400 micro-computer companies. The US had thousands. Many had vaporware, zero sales or only software. Even the successful companies had numerous failures. Apple failed with the Apple III and Apple Lisa before success with the Apple Macintosh. Acorn was left with an inordinate number of unsold, cost-reduced Acorn Electrons. Quite infamously, Atari dumped an inordinate number of game cartridges in the New Mexico desert. By 1984, home computing had become a tired fad. In 1979, it was common for a home computer to have 128 bytes of RAM and a 256 byte monitor program. By 1984, 128KB RAM and text menu directory browsing was common. Casual customers were bored with Space Invaders, Pac-Man and Centipede but the economies of scale aided academia, industry and the development of 16 bit systems.
1979-1983 was a period of economic trouble and Reagan/Thatcher economics. It also overlapped with an economic bubble in the computer industry. The end of that tech bubble was fueled by the Great DRAM Fire Of 1983. A manufacturing disaster caused a shortage. That stimulated demand. That kept the fad running. In the 2010s, DRAM was often sold in powers of two: one gigabyte, two gigabytes, four gigabytes. In the 1980s, DRAM was often sold in powers of four: four kilobits, 16 kilobits, 64 kilobits. A shortage of the newly introduced 256 kilobit chips caused a comical shortage elsewhere. Imagine a shortage of $1 bills causing people to use $0.25 quarters, the shortage of quarters causing people to use $0.05 nickels and the shortage of nickels causing people to use $0.01 pennies. This type of lunacy is entirely normal in the computer industry. A system with 512KB RAM would ordinarily require 16 chips. However, the shortage led to bodge-boards with 64 (or considerably more) chips. The hard disk shortage of 2011 was minor compared to this comedy. Although, depressingly, the cause was similar.
Moore's law of computing is the observation that transistor count doubles every two years. A re-statement of this law is that computing requires an additional bit of addressing every two years. With the full benefit of hindsight, the obtuse 20 bit addressing scheme of the 8086 processor architecture gave Intel an extra four spins of Moore's law: four more address bits than a 16 bit scheme bought roughly eight years of headroom. Meanwhile, every 16 bit addressing scheme became increasingly mired in bank switching. Bank switching is perfectly acceptable within a virtual machine or micro-coded processor implementation. However, if every idiot programmer has to handle every instance of bank switching in every part of a program then the result is going to be bloated and flaky. Unfortunately, in the 1980s, the typical compiler, interpreter or virtual machine added an overhead of at least 10:1. That put any sane implementation at least three generations (six years) behind. To quote Tim Cook, "No-one buys sour milk." As I noted in Jul 2018:-
With the exception of some market leaders, the majority of organizations grow slower than Moore's law and the related laws for bandwidth and image quality until their function becomes trivial. As an example, it is difficult to write a spell check within 128KB RAM. A dictionary of words is typically larger than 128KB and word stemming was quite awkward to implement at speed. For this reason, when Microsoft Word required a spell check function, Microsoft merely acquired a company with a working implementation. It seems outrageous to acquire a company to obtain a spell check. It can be written very concisely in a scripting language but that doesn't work on an 8MHz system with 128KB RAM. Likewise, it is difficult to write a search engine within 16MB RAM but trivial to write in a scripting language with 1GB RAM.
While Acorn introduced people to the joys of "Sideways RAM", Intel had the luxury of repeatedly failing with the 80186, 80286 and 432 (and subsequently the 860, 960, Itanium and probably others). Indeed, the gap from 8086 to 80386 is about eight years and the gap to 80586 is also about eight years. Meanwhile, Microsoft scooped up the DataPoint 2200, Intel 8080 and CP/M weenies and consolidated a monopoly while giving customers every insecure or easy to implement feature. We can speculate about IBM buying micro-processors from Intel with AMD as a hypothetical second-source. However, a possible consideration was avoiding high-drama characters, such as Clive Sinclair, Federico Faggin, Chuck Peddle, Jack Tramiel, Steve Jobs and Gary Kildall.
There were so many opportunities for x86 to not dominate. For example, the 6502 architecture was released in 1975. By 1977, Atari began work on the 6516: a clock-cycle accurate, downward compatible, 16 bit extension. Unfortunately, the project was abandoned. When Apple received the (unrelated) 65816, which met these criteria, it was deliberately underclocked to ensure that it was slower than the first Apple Macintosh. Acorn could have made ARM binary compatible with 6502. However, I've previously noted that such compatibility is easiest when starting from ARMv6 with Thumb extensions - which itself cribs from every iteration of Intel Pentium MMX. And Commodore? Which owned MOS Technology? This is the same Commodore which subsequently spent USD0.5 million to reverse engineer its own Amiga chips because it lost the plans. Similar opportunities were missed with Z80, RCA1802, TMS9900, MC68000, NS32000 and others, although possibly not as numerous. It is also possible that IBM could have chosen another architecture, although it was unlikely to be one from a direct competitor, such as RCA.
Boot-strapping a computer is a crucial consideration. It is usually performed with the previous generation of hardware. Specifically, early 8 bit home computers couldn't self-host. Work outsourced to Microsoft was initially assembled on a rented mini-computer. Work outsourced to Shepardson Microsystems, such as Apple's 8 bit ProDOS and Atari BASIC, was assembled on a Data General mini-computer. Perhaps the latter systems could self-host but I am unaware of any serious effort to attempt it. Acorn and Apple, who are in many ways trans-Atlantic fraternal twins, both started with a 6502 and a 256 byte monitor program. However, that doesn't imply that either system was fully self-hosted. For example, when a Commodore PET production delay led to delayed royalties to Microsoft, Apple switched from Steve Wozniak's Integer BASIC to Microsoft's floating point BASIC. From that point onwards, many Apple customers relied upon software which had been cross-assembled from a mini-computer. Likewise, the first version of Apple's ProDOS was written in 35 days on a mini-computer using punch cards. It was a similar situation for Z80 systems. For example, Japanese MSX computers used various extensions of Microsoft BASIC.
Like Russia and China, Japan has its own twist on technology. That includes its own fork of ICL mainframes, its own fork of 8086 and numerous Z80 systems from Casio, Sega, Sharp and others. It is a minor sport in Japan to port NetBSD to yet another Z80 system. However, the proliferation of Z80 systems within Japan does not explain the widespread availability of Z80 systems outside of Japan. This is due to the history of Japan's industrialization. Japan was particularly committed to exporting quality electronics after World War 2. Moreover, Japan's electricity grid has aided export to rich consumers in the developed world. Specifically, Japan's first two public electrical generators were a European 50Hz generator and a US 60Hz generator. As a result, Japan doesn't use a single mains frequency throughout the country. This has the advantage that domestic electronics products are invariably suitable for global export. This has contributed to Japanese games consoles from multiple manufacturers being common in Europe and North America. The disadvantage of mixed 50Hz/60Hz mains came after the Fukushima nuclear disaster. Relatively little power can be transferred over DC grid ties. Ordinarily, this is sufficient to balance power. However, it was insufficient to prevent power cuts in Tokyo despite surplus generator capacity.
Anyhow, when Collapse OS started, Z80 was the most common and workable micro-processor which could be adapted with a 15 Watt soldering iron. Unlike many other designs, such as 6502, Z80 remains in production. Should this change due to industrial collapse, numerous examples are available at DIP scale. Unfortunately, programming the things is *horrible*. Worse than 8086. Most significantly, everything takes a long time. Like the RCA1802 and early PIC, Z80 uses a four phase clock. Despite the Z80 being released in 1976, the cycle-efficient eZ80 was only released in 2001. In general, 4MHz Z80 has similar bus bandwidth to 2MHz 6502. However, despite the Z80 having at least four times as many registers and instructions, there are places where Z80 is inferior to 6502 or other choices.
Connectivity between Z80 registers is poor. Transfer via the stack covers all cases. However, that's inane. It is particularly slow due to stack operations which only work on register pairs. One of these pairs is the accumulator and flags. This arrangement is not upwardly compatible with 16 bit, 32 bit or 64 bit extensions. It is for this reason that Z800, Z8000, Z80000 and x86 separate these fused registers. When not using the stack, instruction encodings allow reference to seven registers and a memory reference. One memory reference. Which is terrible for traversing data structures. A linked list is the most trivial case. There are idioms and workarounds. However, they have pointless limitations. For example, there are index registers which escape the memory reference. However, they are not downwardly compatible with 8080, nor do they work independently of the alternate register set, which may be reserved for interrupts. Furthermore, handling the index register upper and lower bytes separately explicitly breaks upward compatibility with Z180. So, it is possible to use the escaped index registers in a manner which is neither upward compatible, downward compatible nor interrupt compatible.
I mention such detail because I admire the sheer bloody-mindedness of self-hosting a Z80 Operating System on a Sega Master System, in 8KB RAM. This is an art which has fallen out of fashion since the late 1970s but could be urgently needed.
In its current form, Collapse OS optionally uses a PS/2 keyboard or Sega joypad. It optionally runs on an RC2014 Z80 system. It implements software SPI to maintain its own storage format on MicroSD cards. I appreciate the quirky storage format. It has been a historical problem. Indeed, when working on a quiz game buzzer, a feature request to play sound samples led to an investigation of playing WAV or MP3 from MicroSD. Read only access to FAT32 is by far the most difficult part. That's more difficult than decoding and playing MP3 without dropping sound samples.
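For readers unfamiliar with software SPI, the sketch below shows the general bit-banging technique in C (mode 0, MSB first). The pin macros are hypothetical stubs of my own invention rather than Collapse OS code; the real driver is written for the Z80 and talks to actual GPIO lines:-

#include <stdint.h>

/* Software (bit-banged) SPI byte exchange, mode 0, MSB first.  The pin
 * macros are hypothetical stubs; on real hardware they would poke GPIO or
 * port registers. */
#define SCK_HIGH()   ((void)0)          /* drive clock pin high (stub)    */
#define SCK_LOW()    ((void)0)          /* drive clock pin low (stub)     */
#define MOSI_SET(b)  ((void)(b))        /* drive data-out pin to b (stub) */
#define MISO_READ()  (0)                /* sample data-in pin (stub)      */

uint8_t spi_xfer(uint8_t out)
{
    uint8_t in = 0;
    for (int i = 7; i >= 0; i--) {
        MOSI_SET((out >> i) & 1);       /* present the next bit, MSB first  */
        SCK_HIGH();                     /* slave samples on the rising edge */
        in = (uint8_t)((in << 1) | (MISO_READ() & 1));
        SCK_LOW();
    }
    return in;                          /* byte clocked in from the card */
}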
There are three major components in the core of Collapse OS: monitor, line editor and assembler. These components and all of the lesser components can be assembled in two passes while accumulating no more than 8KB of state. Likewise for linking. Obviously, such a system can be expanded outwards. Maybe a better text editor, a cross-assembler or compiler. A stated intention is to migrate to AVR. However, this does not have to be self-hosting. It is sufficient to self-host on Z80 and use such a system to program AVR micro-controllers, such as commonly used in Arduino hardware. Again, I admire the intention to program an Arduino from a Sega Master System.
In support of Collapse OS, I attempted to port Steve Wozniak's SWEET16 to Z80. Perhaps this is not a beginner's project for Z80. However, it is possible that SWEET16 and a 6502 emulator are the first and last things I ever write in Z80. Outside of Collapse OS, I may write other Z80 assembly. For example, a Z80 to AVR cross-assembler has considerable merit but requires extensive test programs. Indeed, a chain of interpreters and cross-assemblers gains a network effect. Specifically, Forth interpreted on SWEET16 interpreted on 6502 interpreted on Z80 cross-assembled to AVR. SWEET16 on 6502 on AVR is publicly available. Likewise for Forth on 6502. SWEET16 on Z80 offers more options. In all cases, it limits execution to one layer of interpreter overhead. Given the relative efficiency of 4MHz Z80 versus 20MHz AVR, many of these options equal or exceed native Z80 for speed, size, legibility and portability.
Collapse OS is broadly aligned with my effort to implement a 64 bit extension to 6502. Firstly, there is the shared goal of undoing a large amount of technical debt. Secondly, there is the shared goal of self-hosting on DIP scale hardware with considerably less than 2^31 bytes of state. Thirdly, there is the shared goal of AVR as a possible target architecture. However, we differ as much as we agree. This includes storage format, instruction set, implementation language and user interface. I encourage old 6502 applications on new hardware. Collapse OS encourages new applications on old Z80 hardware. My vaporware is a multi-core, pre-emptive, networking, graphical system with a dedicated card bus system. Whereas, Collapse OS is a working, single-tasking, command line system which runs on legacy hardware.
Regardless, we both strongly agree that the current level of bloat is unmanageable. It manifests as seemingly unrelated problems, such as RowHammer, buffer overflow or the inability to repair hardware. However, it is a symptom of laziness and externalized cost which has been ongoing for decades. For example, there was a period in the 1990s where multiple platforms attempted the stretch goals of implementing downward compatibility, upward compatibility, modular re-use of software and migration to new hardware at minimal cost while entering new markets. This includes Apple's Copland, Microsoft's Chicago (the full implementation, not Windows95) and Sega's 32X. It also includes 3DO, a vaporware games console which could be expanded into a USD10,000 Virtual Reality system. Much of this is deprecated. However, some of this technical debt has been inflated but not paid in full. For example, the Java Applets and ActiveX which remain in use can be traced back to this era. Much of the vendor bloat in POSIX, SNMP and BGP also began in this era.
I've previously mentioned that a Linux system with 512MB RAM is typically regarded as an embedded system because it typically doesn't self-host a C compiler. Specifically, a Raspberry Pi running Raspbian is unable to compile GCC or LLVM with the default compiler settings because the result exceeds the per-process limit of 2GB of virtual memory. With a mere change of compiler settings, it is possible to get this down to 360MB. However, it still requires 30-45 hours to compile. For similar reasons, experts recommend that compiling the optional 22,000 packages of FreeBSD should not be attempted without 48 hardware threads and a minimum of 2GB RAM per thread. Obviously, a system with a minimum of 96GB RAM is overwhelmingly likely to be 64 bit. Regardless, such a system will still get snagged on heavyweight packages, such as GCC, LLVM, MySQL Server, Postgres, Firefox, Chrome, OpenOffice, GIMP, QEMU, JBoss and ffmpeg.
Wirth's law of software bloat is such that 2^31 bytes of memory is dicey for compiling a popular application. I target 2^24 bytes - in part so I don't have to solder more than 3×8 bit latches or 32×128KB static RAM chips. 2^16 bytes was a threshold avoided in the 1980s because it risked delay or failure. And yet, Collapse OS happily self-hosts in 2^13 bytes. This leads to multiple questions.
Is it possible to self-host in less than 2^13 bytes? Unknown but systems from the 1980s suggest yes.
An alternative question is: how much software fits into a micro-controller with a 16 bit address-space? Empirically, I've discovered that within an Atmel AVR ATmega328P with 32KB of flash there is space for six or more dialects of 6502 interpreter. I've also discovered that my own implementation of Arduino digital I/O and time delay requires less than 2KB. A trivial bytecode interpreter requires 2-4KB. Cell networking with error correction requires a maximum of 6.5KB. If a system uses a combination of native functions and Forth style bytecode, it is possible to fit Arduino style tutorial programs, a buzzer game, an alarm clock with I2C RTC, analog servo control, digital LEDs, power control, a cell networking protocol and multiple user applications. With consideration for embedded programming standards, this is all suitable for automotive and medical use. Furthermore, this can be self-hosting and accessed via serial from Windows, Mac or Linux with no software install.
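To give a feel for why a trivial bytecode interpreter fits in 2-4KB, here is a minimal C sketch of a stack-based dispatcher; the opcode set and encoding below are invented for illustration and are not the format used on the AVR:-

#include <stdint.h>
#include <stdio.h>

/* A deliberately tiny stack-based bytecode interpreter.  The opcode set
 * and encoding are invented for illustration. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const uint8_t *code)
{
    int16_t stack[16];
    int sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH:  stack[sp++] = (int16_t)*code++;        break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];      break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];      break;
        case OP_PRINT: printf("%d\n", stack[--sp]);           break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Computes (2 + 3) * 10 and prints 50. */
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                             OP_PUSH, 10, OP_MUL, OP_PRINT, OP_HALT };
    run(prog);
    return 0;
}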
Obviously, there are disadvantages. The suggested micro-controller's flash is only rated for 10,000 write cycles. Therefore, to work around this limitation, a wear leveling and/or bad block scheme should be considered. However, the major disadvantage is that anything like Forth is a hugely impenetrable mess which makes Perl look pleasant. It could be made more similar to BASIC or Lua. Alternatively, it could be more like Java. From the outside, Java looks like a simplified version of C++ without pointers or multiple inheritance. This is a con. From the inside, Java looks like Forth had a fight with 6502, Z80 and 8086 and they all lost. Specifically, it has a stack with 16 bit alignment, zero page and four register references. This raises many questions about bytecode, block structure and graph-coloring which I may summarize separately.
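As a hedge against that write limit, here is a minimal C sketch of one naive wear leveling approach: records rotate round-robin through a set of slots, each tagged with a sequence byte, so no single cell takes every write. The eeprom_read()/eeprom_write() primitives and slot layout are my own invention for illustration, backed here by a RAM array so the sketch runs on a PC:-

#include <stdint.h>
#include <stdio.h>

#define SLOTS      16
#define SLOT_SIZE  8                 /* 1 sequence byte + 7 payload bytes */

/* RAM stand-in for the real EEPROM/flash so the sketch runs on a PC. */
static uint8_t fake_eeprom[SLOTS * SLOT_SIZE];
static uint8_t eeprom_read(uint16_t a)             { return fake_eeprom[a]; }
static void    eeprom_write(uint16_t a, uint8_t v) { fake_eeprom[a] = v; }

/* Find the slot holding the highest sequence number (wrap-safe compare). */
static int newest_slot(void)
{
    int best = 0;
    uint8_t best_seq = eeprom_read(0);
    for (int i = 1; i < SLOTS; i++) {
        uint8_t seq = eeprom_read((uint16_t)(i * SLOT_SIZE));
        if ((uint8_t)(seq - best_seq) < 128) { best = i; best_seq = seq; }
    }
    return best;
}

/* Write a record into the next slot so wear is spread over all slots. */
static void save_record(const uint8_t payload[SLOT_SIZE - 1])
{
    int cur = newest_slot();
    int next = (cur + 1) % SLOTS;
    uint16_t base = (uint16_t)(next * SLOT_SIZE);
    for (int i = 0; i < SLOT_SIZE - 1; i++)
        eeprom_write((uint16_t)(base + 1 + i), payload[i]);
    /* Sequence byte written last so a partial write is never "newest". */
    eeprom_write(base, (uint8_t)(eeprom_read((uint16_t)(cur * SLOT_SIZE)) + 1));
}

int main(void)
{
    const uint8_t rec[SLOT_SIZE - 1] = { 'h', 'e', 'l', 'l', 'o', 0, 0 };
    for (int i = 0; i < 40; i++)      /* 40 writes touch each slot 2-3 times */
        save_record(rec);
    printf("newest slot: %d\n", newest_slot());
    return 0;
}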
A limitation common to many programming languages is the cluttered name-space of function names. Object oriented languages typically arrange this into a strict tree hierarchy. Forth with a loose class hierarchy would be a considerable advantage. In addition to an explicit named tree structure, it partially solves the twin problems of line editing and execution order. BASIC typically solves this with line numbers and an absence of block structure. This is a particular problem because GOTO in Microsoft BASIC performs a linear scan for its target line, so a program of n lines built around GOTO loops degrades towards O(n^2) overhead. This can be replaced with a directory structure of block structured one-liners and a significantly faster interpreter.
Forth has a traditional representation in which token names are stored as a linked list and each element begins with a one byte value (five bits for length of name, three bits for flags). This could be replaced with a tree of names with very little overhead, if any. (In both cases, a name is a 16 bit pointer.) If not using the traditional representation, it may also be desirable to use a variation of DEC Radix-50 encoding. Specifically, 40^3 = 64,000. Therefore, it is possible to store three alpha-numeric characters in two bytes. There will be a little overhead for encode and decode routines. However, in aggregate, it'll save space. (With hindsight, I am unsure why this is not implemented more widely.) It may be worthwhile to take further ideas from Jackpot, one of James Gosling's lesser known projects to edit bytecode. Or perhaps Jack, the simplified Java-like language used in some of the exercises in From NAND to Tetris.
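A minimal C sketch of that packing idea follows; the 40-character table below is an assumption (DEC's actual RADIX-50 table and ordering differ slightly), but it demonstrates how three characters of a token name, such as Forth's DUP, squeeze into one 16 bit word:-

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A 40-character alphabet gives 40*40*40 = 64,000 combinations, so three
 * characters fit in one 16 bit word.  This table is an assumption. */
static const char charset[] = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.$_";

/* Pack three characters (all assumed to be in the table) into 16 bits. */
static uint16_t pack3(const char *s)
{
    uint16_t v = 0;
    for (int i = 0; i < 3; i++) {
        const char *p = strchr(charset, s[i]);
        v = (uint16_t)(v * 40 + (p ? (int)(p - charset) : 0));
    }
    return v;
}

/* Unpack 16 bits back into three characters plus a terminating NUL. */
static void unpack3(uint16_t v, char out[4])
{
    out[3] = '\0';
    for (int i = 2; i >= 0; i--) { out[i] = charset[v % 40]; v /= 40; }
}

int main(void)
{
    char buf[4];
    uint16_t v = pack3("DUP");
    unpack3(v, buf);
    printf("DUP -> %u -> %s\n", (unsigned)v, buf);  /* round trip in two bytes */
    return 0;
}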
When I suggest that it is possible to make a network switch with 1KB RAM or to remote desktop into a micro-controller with all the finesse of Commodore 64 graphics, I also suggest that functionality can be modified without reboot or loss of data. Obviously, that's a dangerous proposition for deployment. However, it provides significant options for development and testing in a field where feedback is notoriously poor.
I finally manage time for a fun weekend for myself for the first time in nearly a year and you pull this shit while I'm gone? You ever, and I mean fucking ever, automate your spam bullshit again and your welcome will officially and permanently be worn out here. If you want to be a shithead, go for it, but be a manual shithead.
Quibi’s recipe for winning an Emmy without really trying
Television’s most important award might be going to its least important streaming service: Quibi. The service was nominated for 10 Emmy Awards, of which it’s almost certain to win at least one.
Quibi — which one estimate claims retained only 8 percent of people who signed up for its three-month free trial — hasn’t suddenly started putting out content on the same level as Breaking Bad or Game of Thrones. Instead, the short-form streaming service is competing in a game that no one else is playing.
At face value, Quibi’s nominations and near-certain win seem impossible: Quibi only has 16 original drama or comedy shows, has been around for just over three months, and has made about the same impact on the media landscape as a water balloon has on an Abrams tank.
[...] Quibi’s nominations are exclusively in the short-form-specific Emmy categories. Its competition is a few web-series spinoffs of larger shows and a YouTube series. Unless things go very badly for the mobile-focused streaming service, it’ll be walking away with at least one award come September.
[...] A report from Sensor Tower earlier in July claimed that the company was only able to convert about 72,000 of its initial 910,000 users into paid customers when the three-month free trial offer expired.
Assuming random selection of winners, Quibi has just a 2.4% chance of winning zero Emmy Awards.
Previously: Fox Could Buy Tubi While NBCUniversal Eyes Vudu
Meg Whitman-Run Streaming Service "Quibi" Launches, Reception Mixed
The Fall of Quibi: How Did a Starry $1.75bn Netflix Rival Crash So Fast?