Self-Hosting In 2GB Or Less? Collapse OS Requires 8KB

Posted by cafebabe on Sunday August 02 2020, @11:35PM (#5780)
9 Comments
Hardware

In 2019, there was a flurry of interest in a post-apocalyptic operating system called Collapse OS. Since then, the primary author has been overwhelmed by interest, especially after a pandemic and global disruption to supply chains. Even in the absence of peak oil, resource collapse and some of the more dystopian conspiracy theories about population control and reduction, we have fragile systems and a growing mountain of technical debt. In particular, computer security is worsening. This is a problem when computers are increasingly critical and inter-connected.

A common problem arises when company founders (or a random power user) write a poor program out of ignorance or convenience. This software becomes central to the organization and is often supported by ancillary programs. Documentation is poor and the original authors move on. Whatever the cause, bad software is bad because it accretes and becomes an archeology expedition. The situation is worsened by the relatively short career of a competent programmer. It is often 15 years or less. Some people get stale and/or promoted. Some people quit the industry. Some people turn to hard drugs. Others turn to fantasy, the hypothetical or neo-retro-computing.

There are definitely things that we can learn from the past. While I would be agreeable and adaptable to a lifestyle modeled wholesale somewhere around the 1950s-1980s, I'm probably in the minority. Considering only piecemeal changes, the parsimony of 8 bit computing is a welcome antidote to the technical debt of 2020.

I have an interest in micro-processor design and this has led me to research the precursors to the golden age of 8 bit home computing (1980-1984). Of particular interest are improvements to instruction density because these pay dividends as systems scale. In particular, they magnify system effectiveness as caches (and cache tiers) grow.

I've enjoyed tracing the path of various architectures and seeing successive iterations of design. For example, the path from DEC's 12 bit PDP-8 to Intersil's 12 bit 6100 to PIC (with 12 bit, 14 bit, 16 bit and 18 bit instructions) to AVR. Likewise, from DataPoint 2200 to Intel 8080 to Zilog's products and x86. The shrink from mini-computer to micro-computer was more often inspiration than binary compatibility. Even when components were shared, incompatibility abounded. For example, it was common practice to dual boot a mini-computer. During the day, it would run an interactive operating system. Overnight, it would run a batch processing system. Commodore, which bought MOS Technology and had the rights to the 6502 micro-processor, couldn't make two computer designs compatible with each other. And it was common for people using computers from the same vendor to use incompatible floppy disk formats.

I'm astounded by the scattershot development of 1970s mini-computers, early micro-processors and the systems built around them. During the 1970s and 1980s, it was common for teams to be assembled - to design a mini-computer, micro-processor or micro-computer - and then disband at the end of the project. As noted in Tracy Kidder's book, The Soul Of A New Machine, a product could fail even if there was sufficient staff to cover attrition. However, downward compatibility wasn't a huge issue. Even in the 1980s, computers were mostly sold using analog processes. That included fully optical photo-copying and tallying sales manually, on paper. Since antiquity, calculations have been assisted with an abacus. Romans advanced the technology with the nine digit decimal pocket abacus - which had approximately the same form factor and utility as a smartphone running a calculator application - and which was probably used in a similar manner to split restaurant bills The Roman Way. In the 17th century, this was partially replaced with logarithms and slide-rules. Although mechanical calculators pre-date Blaise Pascal and Charles Babbage (an early victim of the Second System Effect), the ability to make reliable, commercial components began in the 20th century. Since the 1930s, there has been an increasingly rapid diffusion of mechanical and electronic systems. By the 1960s, large companies could afford a computer. By the 1980s, accountants could afford a computer. By the 1990s, relatively poor people could afford a computer for entertainment. By the 2010s, it was possible to shoot and edit digital films using pocket devices. Only relatively recently has it become normal to sell computers using computers - and it is much more recent to retail computers over the Internet.

In the 1970s, downward compatibility wasn't a concern and clean-sheet designs were the norm. A napkin sketch and 500 lines of assembly could save five salaries. Every doubling of transistor count added scope for similar projects. By 1977, there was the very real fear that robots would take all of our jobs and that there would be rioting in the streets. This was perhaps premature by 40 years. However, when Fred Brooks wrote The Mythical Man-Month, the trope of the boy genius bedroom programmer had already been established and debunked. By 1977, autonomous tractors had already been prototyped. By 1980, people like Clive Sinclair were already planning fully electric, autonomous highway vehicles guided by nothing more than an 8 bit Z80. (Perhaps this is possible but requires an exaflop of processing power to refine the algorithm?)

Mini-computer Operating Systems were in a huge state of flux in the 1970s. Unfortunately, technical merits were unresolved during the transition to micro-computers. This has created a huge amount of turbulence over more than 40 years. Even in 2001, it was not obvious that more than one billion people would use a pocket Unix system as a primary computer. Unfortunately, GUI in 2020 is as varied as OS in 1980 - and I doubt that we'll have anything sane and consistent before instruction sets fragment.

Every decade or so, process shrink spawns another computer market. However, the jump from transistor mini-computers to silicon chips was different because it slowly ate everything that came before. Initially it was the workstations and the departmental servers. Nowadays, the average mainframe is an Intel Xeon with all of the error checking features enabled. And, until recently, supercomputing had devolved into an international pissing match of who could assemble and run the largest x86 Linux cluster. Unfortunately, that has only been disrupted by another diffusion of industrialization. In the 1980s, the firehose of incompatible systems from the US overspilled and mixed with the firehose of incompatible systems from the UK and then flooded into other countries. While the saga of Microsoft and Apple in the US is relatively well known, a similar situation occurred in the UK with Acorn and Sinclair. Meanwhile, exports from the UK led to curious products, such as a ZX Spectrum with BASIC keywords translated into Spanish. Or a Yugoslavian design inspired by the ZX Spectrum but with 1/4 of the ROM. (The optional, second 4KB EPROM provided advanced functions.) Indeed, anything which strayed near the Iron Curtain was grabbed, cloned or thoroughly inspected for ideas. This has led to an unusual number of Russian and Chinese designs which are based on Californian RISC designs. If the self-reported figures from China are believed then the most powerful computer in the world consisted of 10 million cores; loosely inspired by DEC Alpha.

1980s computing was heavily characterized by MOS Technology's 6502 and Zilog's Z80 which were directly copied from Motorola's 6800 and Intel's 8080. Zilog's design was a superset of an obsolete but familiar design. However, the 6502 architecture was intended to be a cheaper, nastier, almost pin compatible design which undercut Motorola and stole goodwill. Regardless, 6502 is a fine example of parsimony. Instructions were removed to the extent that it is the leading example of a processor architecture which does not define 2/3 of the opcodes in the first release. The intention was to make each chip smaller and therefore more numerous at the same scale as Motorola's 6800. It was also 1/6 of the price because Motorola was handling technical pre-sales for an almost identical product. There is also the matter that the 6502 designers had abandoned their work on the 6800 before defecting to MOS Technology. This considerably hobbled Motorola. Well, Motorola sued and won on a technicality. The financial impact allowed acquisition by Commodore where the design was milked until it was obsolete. And then it was milked further by price gouging, economies of scale and vertical integration. Zilog actively invested in design improvement with Z180, Z280 and Z380 extensions and mutually incompatible Z800, Z8000 and Z80000. However, 6502 and Z80 were largely cheap, ersatz, one-hit-wonders before customers migrated to Motorola and Intel. During this period, it was Motorola - not Intel - which was known for the craziest heatsinks and chip packages. Likewise, it was Microsoft - not Apple - which had the most precarious finances. The rôles change but the actors don't.

In 1982, the UK had an estimated 400 micro-computer companies. The US had thousands. Many had vaporware, zero sales or only software. Even the successful companies had numerous failures. Apple failed with the Apple 3 and Apple Lisa before success with Apple Macintosh. Acorn was left with an inordinate number of unsold, cost-reduced Acorn Electrons. Quite infamously, Atari dumped an inordinate number of game cartridges in the New Mexico desert. By 1984, home computing had become a tired fad. In 1979, it was common for a home computer to have 128 bytes RAM and a 256 byte monitor program. By 1984, 128KB RAM and text menu directory browsing was common. Casual customers were bored with Space Invaders, Pac-Man and Centipede but the economies of scale aided academia, industry and the development of 16 bit systems.

1979-1983 was a period of economic trouble and Reagan/Thatcher economics. It also overlapped with an economic bubble in the computer industry. The end of that tech bubble was delayed by the Great DRAM Fire Of 1983. A manufacturing disaster caused a shortage. That stimulated demand. That kept the fad running. In the 2010s, DRAM was often sold in powers of two: one gigabyte, two gigabytes, four gigabytes. In the 1980s, DRAM was often sold in powers of four: four kilobits, 16 kilobits, 64 kilobits. A shortage of the newly introduced 256 kilobit chips caused a comical shortage elsewhere. Imagine a shortage of $1 bills causing people to use $0.25 quarters, the shortage of quarters causing people to use $0.05 nickels and the shortage of nickels causing people to use $0.01 pennies. This type of lunacy is entirely normal in the computer industry. A system with 512KB RAM would ordinarily require 16 chips. However, the shortage led to bodge-boards with 64 (or considerably more) chips. The harddisk shortage of 2011 was minor compared to this comedy. Although, depressingly, the cause was similar.

Moore's law of computing is the observation that transistor count doubles every two years. A re-statement of this law is that computing requires an additional bit of addressing every two years. With the full benefit of hindsight, the obtuse 20 bit addressing scheme of the 8086 processor architecture gave Intel an extra four spins of Moore's law. Meanwhile, every 16 bit addressing scheme became increasingly mired in bank switching. Bank switching is perfectly acceptable within a virtual machine or micro-coded processor implementation. However, if every idiot programmer has to handle every instance of bank switching in every part of a program then the result is going to be bloated and flaky. Unfortunately, in the 1980s, the typical compiler, interpreter or virtual machine added an overhead of at least 10:1. That put any sane implementation at least three generations (six years) behind. To quote Tim Cook, "No-one buys sour milk." As I noted in Jul 2018:-

With the exception of some market leaders, the majority of organizations grow slower than Moore's law and the related laws for bandwidth and image quality until their function becomes trivial. As an example, it is difficult to write a spell check within 128KB RAM. A dictionary of words is typically larger than 128KB and word stemming was quite awkward to implement at speed. For this reason, when Microsoft Word required a spell check function, Microsoft merely acquired a company with a working implementation. It seems outrageous to acquire a company to obtain a spell check. It can be written very concisely in a scripting language but that doesn't work on an 8MHz system with 128KB RAM. Likewise, it is difficult to write a search engine within 16MB RAM but trivial to write in a scripting language with 1GB RAM.
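
To make the bank switching burden mentioned above concrete, here is a minimal C sketch assuming a hypothetical machine with a 16 bit address bus, a 16KB banked window and a software bank latch; the names and sizes are illustrative rather than any specific home computer's scheme.

```c
#include <stdint.h>
#include <stdio.h>

#define WINDOW_BASE 0x8000u                 /* banked window occupies 0x8000-0xBFFF */
#define WINDOW_SIZE 0x4000u                 /* 16KB visible at any one time */

static uint8_t physical_ram[256 * 1024];    /* 256KB fitted, far beyond the 64KB bus */
static uint8_t bank_latch;                  /* on real hardware: an I/O port write */

/* Every access beyond 64KB needs this two-step dance: select the bank, then
 * address within the window. This is the bookkeeping that leaks into every
 * part of an application when the programmer has to do it by hand. */
static uint8_t far_read(uint32_t physical_addr)
{
    bank_latch = (uint8_t)(physical_addr / WINDOW_SIZE);
    uint16_t cpu_addr = (uint16_t)(WINDOW_BASE + (physical_addr % WINDOW_SIZE));
    return physical_ram[(uint32_t)bank_latch * WINDOW_SIZE + (cpu_addr - WINDOW_BASE)];
}

int main(void)
{
    physical_ram[200000] = 42;
    printf("%d\n", far_read(200000));       /* prints 42 after switching to bank 12 */
    return 0;
}
```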

While Acorn introduced people to the joys of "Sideways RAM", Intel had the luxury of repeatedly failing with 80186, 80286 and 432 (and subsequently 860, 960, Itanium and probably others). Indeed, the gap from 8086 to 80386 is about eight years and the gap to 80586 is also about eight years. Meanwhile, Microsoft scooped up the DataPoint 2200, Intel 8080 and CP/M weenies and consolidated a monopoly while giving customers every insecure or easy to implement feature. We can speculate about why IBM bought micro-processors from Intel with AMD as a second-source. However, a possible consideration was avoiding high-drama characters, such as Clive Sinclair, Federico Faggin, Chuck Peddle, Jack Tramiel, Steve Jobs and Gary Kildall.

There were so many opportunities for x86 to not dominate. For example, the 6502 architecture was released in 1975. By 1977, Atari began work on the 6516: a clock-cycle accurate, downward compatible, 16 bit extension. Unfortunately, the project was abandoned. When Apple received the (unrelated) 65816, which met these criteria, it was deliberately underclocked to ensure that it was slower than the first Apple Macintosh. Acorn could have made ARM binary compatible with 6502. However, I've previously noted that such compatibility is easiest when starting from ARMv6 with Thumb extensions - which itself cribs from every iteration of Intel Pentium MMX. And Commodore? Which owned MOS Technology? This is the same Commodore which subsequently spent USD0.5 million to reverse engineer its own Amiga chips because it had lost the plans. Similar opportunities were missed with Z80, RCA1802, TMS9900, MC68000, NS32000 and others, although possibly not as numerous. It was also possible that IBM could have chosen another architecture, although it was unlikely to be from a direct competitor, such as RCA.

Boot-strapping a computer is a crucial consideration. It is usually performed with the previous generation of hardware. Specifically, early 8 bit home computers couldn't self-host. Work outsourced to Microsoft was initially assembled on a rented mini-computer. Work outsourced to Shepardson Microsystems, such as Apple's 8 bit DOS and Atari BASIC, was assembled on a Data General mini-computer. Perhaps the latter systems could self-host but I am unaware of any serious effort to attempt it. Acorn and Apple, who are in many ways trans-Atlantic fraternal twins, both started with a 6502 and a 256 byte monitor program. However, that doesn't imply that either system was fully self-hosted. For example, when a Commodore PET production delay led to delayed royalties to Microsoft, Apple switched from Steve Wozniak's Integer BASIC to Microsoft's floating point BASIC. From that point onwards, many Apple customers relied upon software which had been cross-assembled on a mini-computer. Likewise, the first version of Apple's DOS was written in 35 days on a mini-computer using punched cards. It was a similar situation for Z80 systems. For example, Japanese MSX computers used various extensions of Microsoft BASIC.

Like Russia and China, Japan has its own twist on technology. That includes its own fork of ICL mainframes, its own fork of 8086 and numerous Z80 systems from Casio, Sega, Sharp and others. It is a minor sport in Japan to port NetBSD to yet another domestic system. However, the proliferation of Z80 systems within Japan does not explain the widespread availability of Z80 systems outside of Japan. This is due to the history of Japan's industrialization. Japan was particularly committed to exporting quality electronics after World War 2. Furthermore, Japan's electricity grid has aided export to rich consumers in the developed world. Specifically, Japan's first two public electrical generators were a European 50Hz generator and a US 60Hz generator. Consequently, Japan doesn't use a single mains frequency throughout the country. This has the advantage that domestic electronics products are invariably suitable for global export. This has contributed to Japanese games consoles from multiple manufacturers being common in Europe and North America. The disadvantage to mixed 50Hz/60Hz mains came after the Fukushima nuclear disaster. Relatively little power can be transferred over DC grid ties. Ordinarily, this is sufficient to balance power. However, it was insufficient to prevent power cuts in Tokyo despite surplus generator capacity.

Anyhow, when Collapse OS started, Z80 was the most common and workable micro-processor which could be adapted with a 15 Watt soldering iron. Unlike many other designs, such as 6502, Z80 remains in production. Should this change due to industrial collapse, numerous examples are available at DIP scale. Unfortunately, programming the things is *horrible*. Worse than 8086. Most significantly, everything takes a long time. Like the RCA1802 and early PIC, Z80 uses a four phase clock. Despite the Z80 being released in 1976, the cycle-efficient eZ80 was only released in 2001. In general, 4MHz Z80 has similar bus bandwidth to 2MHz 6502. However, despite the Z80 having at least four times as many registers and instructions, there are places where Z80 is inferior to 6502 or other choices.

Connectivity between Z80 registers is poor. Transfer via stack covers all cases. However, that's inane. It is particularly slow due to stack operations which only work on register pairs. One of these pairs is the accumulator and flags. This arrangement is not upwardly compatible with 16 bit, 32 bit or 64 bit extensions. It is for this reason that Z800, Z8000, Z80000 and x86 separate these fused registers. When not using stack, instruction encodings allow reference to seven registers and a memory reference. One memory reference. Which is terrible for traversing data structures. A linked list is the most trivial case. There are idioms and workarounds. However, they have pointless limitations. For example, there are index registers which escape the memory reference. However, they are not downwardly compatible with 8080, nor do they work independently of the alternate register set which may be reserved for interrupts. Furthermore, handling index register upper and lower bytes separately explicitly breaks upward compatibility with Z180. So, it is possible to use the escaped index registers in a manner which is neither upward compatible, downward compatible nor interrupt compatible.

I mention such detail because I admire the sheer bloody-mindedness of self-hosting a Z80 Operating System on a Sega Master System, in 8KB RAM. This is an art which has fallen out of fashion since the late 1970s but could be urgently needed.

In its current form, Collapse OS optionally uses a PS/2 keyboard or Sega joypad. It optionally runs on an RC2014 Z80 system. It implements software SPI to maintain its own storage format on MicroSD cards. I appreciate the quirky storage format; storage formats have been a historical problem. Indeed, when working on a quiz game buzzer, a feature request to play sound samples led to an investigation of playing WAV or MP3 from MicroSD. Read only access to FAT32 is by far the most difficult part. That's more difficult than decoding and playing MP3 without dropping sound samples.
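
For readers unfamiliar with bit-banged buses, here is a minimal sketch of software SPI (mode 0, MSB first), the general technique for reaching a MicroSD card without an SPI peripheral. The pin names and the fake GPIO layer are hypothetical stand-ins for whatever port I/O the target provides; this is not Collapse OS's actual code.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical pin numbers and a fake GPIO layer so the sketch runs on a PC;
 * on real hardware these would poke the port registers instead. */
enum { PIN_SCK, PIN_MOSI, PIN_MISO, PIN_COUNT };
static int pins[PIN_COUNT];
static void gpio_write(int pin, int level) { pins[pin] = level; }
static int  gpio_read(int pin)             { return pins[pin]; }

/* One byte of SPI mode 0: drive MOSI, raise the clock, sample MISO, lower the clock. */
static uint8_t spi_exchange(uint8_t out)
{
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        gpio_write(PIN_MOSI, (out >> bit) & 1);
        gpio_write(PIN_SCK, 1);                        /* slave samples on rising edge */
        in = (uint8_t)((in << 1) | (gpio_read(PIN_MISO) & 1));
        gpio_write(PIN_SCK, 0);                        /* slave shifts on falling edge */
    }
    return in;
}

int main(void)
{
    /* With nothing driving MISO, the reply is all zeros. */
    printf("sent 0x95, read back 0x%02X\n", spi_exchange(0x95));
    return 0;
}
```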

There are three major components in the core of Collapse OS: monitor, line editor and assembler. These components and all of the lesser components can be assembled in two passes while accumulating no more than 8KB of state. Likewise for linking. Obviously, such a system can be expanded outwards. Maybe a better text editor, a cross-assembler or compiler. A stated intention is to migrate to AVR. However, this does not have to be self-hosting. It is sufficient to self-host on Z80 and use such a system to program AVR micro-controllers, such as those commonly used in Arduino hardware. Again, I admire the intention to program an Arduino from a Sega Master System.
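
The two-pass discipline is what keeps the working set bounded: pass one only tracks the location counter and records labels, pass two re-reads the source and emits bytes, so the only state carried across the whole program is the symbol table. Here is a toy C illustration of that idea, with invented names and a hard-coded example rather than anything from Collapse OS.

```c
#include <stdio.h>
#include <string.h>

struct symbol { char name[8]; unsigned addr; };
static struct symbol table[64];           /* the bounded state carried between passes */
static int nsyms;

static void define(const char *name, unsigned addr)
{
    strncpy(table[nsyms].name, name, sizeof table[nsyms].name - 1);
    table[nsyms++].addr = addr;
}

static unsigned lookup(const char *name)
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(table[i].name, name) == 0) return table[i].addr;
    return 0xFFFF;                        /* undefined symbol */
}

int main(void)
{
    /* Pass 1: walk the source, advance the location counter, record labels. */
    define("loop", 0x8003);               /* e.g. "loop:" seen at address 0x8003 */

    /* Pass 2: walk the source again and emit; forward references now resolve. */
    printf("JP loop -> C3 %02X %02X\n", lookup("loop") & 0xFF, lookup("loop") >> 8);
    return 0;
}
```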

In support of Collapse OS, I attempted to port Steve Wozniak's SWEET16 to Z80. Perhaps this is not a beginner's project for Z80. However, it is possible that SWEET16 and a 6502 emulator are the first and last things I ever write in Z80. Outside of Collapse OS, I may write other Z80 assembly. For example, a Z80 to AVR cross-assembler has considerable merit but requires extensive test programs. Indeed, a chain of interpreters and cross-assemblers gains a network effect. Specifically, Forth interpreted on SWEET16 interpreted on 6502 interpreted on Z80 cross-assembled to AVR. SWEET16 on 6502 on AVR is publicly available. Likewise for Forth on 6502. SWEET16 on Z80 offers more options. In all cases, it limits execution to one layer of interpreter overhead. Given the relative efficiency of 4MHz Z80 versus 20MHz AVR, many of these options equal or exceed native Z80 for speed, size, legibility and portability.

Collapse OS is broadly aligned with my effort to implement a 64 bit extension to 6502. Firstly, there is the shared goal of undoing a large amount of technical debt. Secondly, there is the shared goal of self-hosting on DIP scale hardware with considerably less than 2^31 bytes of state. Thirdly, there is the shared goal of AVR as a possible target architecture. However, we differ as much as we agree. This includes storage format, instruction set, implementation language and user interface. I encourage old 6502 applications on new hardware. Collapse OS encourages new applications on old Z80 hardware. My vaporware is a multi-core, pre-emptive, networking, graphical system with a dedicated card bus system. Whereas, Collapse OS is a working, single-tasking, command line system which runs on legacy hardware.

Regardless, we both strongly agree that the current level of bloat is unmanageable. It manifests as seemingly unrelated problems, such as RowHammer, buffer overflows or the inability to repair hardware. However, it is a symptom of laziness and externalized cost which has been ongoing for decades. For example, there was a period in the 1990s where multiple platforms attempted the stretch goals of implementing downward compatibility, upward compatibility, modular re-use of software and migration to new hardware at minimal cost while entering new markets. This includes Apple's Copland, Microsoft's Chicago (the full implementation, not Windows 95) and Sega's 32X. It also includes 3DO, a games console whose promised expansion into a USD10,000 Virtual Reality system remained vaporware. Much of this is deprecated. However, some of this technical debt has been inflated but not paid in full. For example, the Java Applets and ActiveX which remain in use can be traced back to this era. Much of the vendor bloat in POSIX, SNMP and BGP also began in this era.

I've previously mentioned that a Linux system with 512MB RAM is typically regarded as an embedded system because it typically doesn't self-host a C compiler. Specifically, a Raspberry Pi running Raspbian is unable to compile GCC or LLVM with the default compiler settings because the result exceeds the per-process limit of 2GB of virtual memory. With a mere change of compiler settings, it is possible to get this down to 360MB. However, it still requires 30-45 hours to compile. For similar reasons, experts recommend that compiling the optional 22,000 packages of FreeBSD should not be attempted without 48 hardware threads and a minimum of 2GB RAM per thread. Obviously, a system with a minimum of 96GB RAM is overwhelmingly likely to be 64 bit. Regardless, such a system will still get snagged on heavyweight packages, such as GCC, LLVM, MySQL Server, Postgres, Firefox, Chrome, OpenOffice, GIMP, QEMU, JBoss and ffmpeg.

Wirth's law of software bloat is such that 2^31 bytes of memory is dicey for compiling a popular application. I target 2^24 bytes - in part so I don't have to solder more than 3×8 bit latches or 32×128KB static RAM chips. 2^16 bytes was a threshold avoided in the 1980s because it risked delay or failure. And yet, Collapse OS happily self-hosts in 2^13 bytes. This leads to multiple questions.

Is it possible to self-host in less than 2^13 bytes? Unknown but systems from the 1980s suggest yes.

  • A Sinclair ZX Spectrum has a Z80, 16KB ROM and 16KB or 48KB RAM. This is a minimum of 2^15 bytes.
  • A Jupiter Ace makes an exceptionally poor first impression. It could be mistaken for a ZX81 if it didn't have a Forth compiler. It has a Z80. It has 8KB ROM of which 3KB is Operating System and 5KB is Forth keywords. It also has two synchronous banks of RAM: 2KB for monochrome display and 1KB for application. If functionality is trimmed, this would be 2^14 bytes.
  • The 4KB ROM B of a Z80 Galaksija is broadly equivalent to the functionality of Collapse OS. Including state for assembly, this is 2^13 bytes.
  • TinyBASIC is 4KB on 8080 and Z80. However, traditional implementations use two layers of interpreter and have terrible performance. Regardless, state remains 2^13 bytes.
  • Older systems self-host with less state. However, they may require, for example, micro-code which exceeds the complexity of Z80.

An alternative question is: How much software fits into a micro-controller with a 16 bit address-space? Empirically, I've discovered that within an Atmel AVR ATmega328P with 32KB of flash there is space for six or more dialects of 6502 interpreter. I've also discovered that my own implementation of Arduino digital I/O and time delay requires less than 2KB. A trivial bytecode interpreter requires 2-4KB. Cell networking with error correction requires a maximum of 6.5KB. If a system uses a combination of native functions and Forth style bytecode, it is possible to fit Arduino style tutorial programs, a buzzer game, an alarm clock with I2C RTC, analog servo control, digital LEDs, power control, a cell networking protocol and multiple user applications. With consideration for embedded programming standards, this is all suitable for automotive and medical use. Furthermore, this can be self-hosting and accessed via serial from Windows, Mac or Linux with no software install.
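
As a sense of scale for that 2-4KB estimate, here is a minimal sketch of a stack-machine dispatch loop in C. The opcodes and the example program are invented for illustration; they are not the author's or Collapse OS's actual bytecode.

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const uint8_t *pc)
{
    int16_t stack[16];                                 /* tiny operand stack */
    int sp = 0;
    for (;;) {
        switch (*pc++) {
        case OP_PUSH:  stack[sp++] = *pc++;               break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[--sp]);       break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (3 + 4) * 5 = 35 */
    const uint8_t prog[] = { OP_PUSH, 3, OP_PUSH, 4, OP_ADD,
                             OP_PUSH, 5, OP_MUL, OP_PRINT, OP_HALT };
    run(prog);
    return 0;
}
```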

Obviously, there are disadvantages. The suggested micro-controller's flash is only rated for 10,000 write cycles. Therefore, to work around this limitation, a wear leveling and/or a bad block scheme should be considered. However, the major disadvantage is that anything like Forth is a hugely impenetrable mess which makes Perl look pleasant. It could be made more similar to BASIC or Lua. Alternatively, it could be more like Java. From the outside, Java looks like a simplified version of C++ without pointers or multiple inheritance. This is a con. From the inside, Java looks like Forth had a fight with 6502, Z80 and 8086 and they all lost. Specifically, it has a stack with 16 bit alignment, zero page and four register references. This raises many questions about bytecode, block structure and graph-coloring which I may summarize separately.
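
One hedged sketch of the wear leveling suggested above, assuming a ring of fixed-size record slots and a sequence number to identify the newest copy; the sizes, names and the in-memory stand-in for the flash region are illustrative only.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define SLOTS     32                   /* ring of 32 record slots */
#define SLOT_SIZE 16

struct record { uint16_t seq; uint8_t payload[SLOT_SIZE - 2]; };

static struct record ring[SLOTS];      /* stands in for a flash/EEPROM region */

/* Find the slot holding the highest sequence number (the current record). */
static int newest_slot(void)
{
    int best = 0;
    for (int i = 1; i < SLOTS; i++)
        if (ring[i].seq > ring[best].seq) best = i;
    return best;
}

/* Write each new version into the following slot, so any single slot is
 * erased roughly 1/SLOTS as often as a fixed location would be.
 * (Sequence wraparound and bad-block handling are ignored for brevity.) */
static void save(const uint8_t *data, int len)
{
    int cur = newest_slot();
    struct record r;
    r.seq = (uint16_t)(ring[cur].seq + 1);
    memset(r.payload, 0xFF, sizeof r.payload);
    memcpy(r.payload, data, (size_t)len);
    ring[(cur + 1) % SLOTS] = r;       /* on real hardware: erase + program this slot */
}

int main(void)
{
    save((const uint8_t *)"hello", 5);
    save((const uint8_t *)"world", 5);
    printf("newest record in slot %d, seq %u\n",
           newest_slot(), (unsigned)ring[newest_slot()].seq);
    return 0;
}
```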

A limitation common to many programming languages is the cluttered name-space of function names. Object oriented languages typically arrange this into a strict tree hierarchy. Forth with a loose class hierarchy would be a considerable advantage. In addition to an explicit named tree structure, it partially solves the twin problems of line editing and execution order. BASIC typically solves this with line numbers and an absence of block structure. This is a particular problem because GOTO in Microsoft BASIC typically has O(n^2) overhead. This can be replaced with a directory structure of block structured one-liners and a significantly faster interpreter.
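
The O(n^2) claim follows from the classic implementation: the program is a linked list of numbered lines and every GOTO re-scans it from the head, so each jump costs O(n) and a program dominated by jumps costs roughly O(n^2) overall. The following toy model in C (not Microsoft's actual interpreter) makes the scan explicit; the proposed directory of named, block-structured one-liners replaces this scan with a direct lookup.

```c
#include <stddef.h>
#include <stdio.h>

struct line { int number; const char *text; struct line *next; };

/* Historical behaviour: GOTO walks the whole program from the start. */
static struct line *find_line(struct line *program, int target)
{
    for (struct line *l = program; l != NULL; l = l->next)
        if (l->number == target)
            return l;
    return NULL;                          /* "Undefined line number" */
}

int main(void)
{
    struct line l30 = { 30, "GOTO 10",    NULL };
    struct line l20 = { 20, "PRINT I",    &l30 };
    struct line l10 = { 10, "LET I=I+1",  &l20 };
    struct line *hit = find_line(&l10, 10);   /* repeated on every loop iteration */
    printf("GOTO 10 -> %s\n", hit ? hit->text : "error");
    return 0;
}
```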

Forth has a traditional representation in which token names are stored as a linked list and each element begins with a one byte value (five bits for length of name, three bits for flags). This could be replaced with a tree of names with very little overhead, if any. (In both cases, a name is a 16 bit pointer.) If not using the traditional representation, it may also be desirable to use a variation of DEC Radix50 encoding. Specifically, 40^3 = 64,000. Therefore, it is possible to store three alpha-numeric characters in two bytes. There will be a little overhead for encode and decode routines. However, in aggregate, it'll save space. (With hindsight, I am unsure why this is not implemented more widely.) It may be worthwhile to take further ideas from Jackpot, one of James Gosling's lesser known projects to edit bytecode. Or perhaps Jack, a simplified version of Java used in some of the exercises in From NAND To Tetris.
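
A minimal sketch of that packing in C, assuming an arbitrary 40-character alphabet rather than DEC's exact RAD50 table: 40^3 = 64,000 combinations fit in a 16 bit word, so three name characters cost two bytes.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* 40 symbols: space, a-z, 0-9 and three punctuation marks (ordering is arbitrary). */
static const char alphabet[41] = " abcdefghijklmnopqrstuvwxyz0123456789.$%";

static uint16_t rad40_pack(const char *s)        /* s: up to three characters */
{
    uint16_t v = 0;
    for (int i = 0; i < 3; i++) {
        const char *p = (i < (int)strlen(s)) ? strchr(alphabet, s[i]) : alphabet;
        v = (uint16_t)(v * 40 + (p ? (p - alphabet) : 0));   /* unknown chars map to space */
    }
    return v;                                    /* at most 39*1600 + 39*40 + 39 = 63999 */
}

static void rad40_unpack(uint16_t v, char out[4])
{
    out[3] = '\0';
    for (int i = 2; i >= 0; i--) { out[i] = alphabet[v % 40]; v /= 40; }
}

int main(void)
{
    char buf[4];
    uint16_t packed = rad40_pack("dup");
    rad40_unpack(packed, buf);
    printf("\"dup\" packs to %u and unpacks to \"%s\"\n", (unsigned)packed, buf);
    return 0;
}
```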

When I suggest that it is possible to make a network switch with 1KB RAM or remote desktop into a micro-controller with all the finesse of Commodore 64 graphics, I also suggest that functionality can be modified without reboot or loss of data. Obviously, that's a dangerous proposition for deployment. However, it provides significant options for development and testing in a field where feedback is notoriously poor.

Quibi Floods Short Form Emmy Nominations

Posted by takyon on Wednesday July 29 2020, @01:47AM (#5750)
2 Comments
/dev/random

Quibi’s recipe for winning an Emmy without really trying

Television’s most important award might be going to its least important streaming service: Quibi. The service was nominated for 10 Emmy Awards, of which it’s almost certain to win at least one.

Quibi — which one estimate claims retained only 8 percent of people who signed up for its three-month free trial — hasn’t suddenly started putting out content on the same level as Breaking Bad or Game of Thrones. Instead, the short-form streaming service is competing in a game that no one else is playing.

At face value, Quibi’s nominations and near-certain win seem impossible: Quibi only has 16 original drama or comedy shows, has been around for just over three months, and has made about the same impact on the media landscape as a water balloon has on an Abrams tank.

[...] Quibi’s nominations are exclusively in the short-form-specific Emmy categories. Its competition is a few web-series spinoffs of larger shows and a YouTube series. Unless things go very badly for the mobile-focused streaming service, it’ll be walking away with at least one award come September.

[...] A report from Sensor Tower earlier in July claimed that the company was only able to convert about 72,000 of its initial 910,000 users into paid customers when the three-month free trial offer expired.

Assuming random selection of winners, Quibi has just a 2.4% chance of winning zero Emmy Awards.
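
(For instance, with 10 nominations and each winner drawn at random from a field of three to four nominees, the chance of losing every category would be between (2/3)^10 ≈ 1.7% and (3/4)^10 ≈ 5.6%; the quoted 2.4% presumably reflects the actual size of each field.)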

Previously: Fox Could Buy Tubi While NBCUniversal Eyes Vudu
Meg Whitman-Run Streaming Service "Quibi" Launches, Reception Mixed
The Fall of Quibi: How Did a Starry $1.75bn Netflix Rival Crash So Fast?

China Takes Control of U.S. Consulate in Chengdu

Posted by takyon on Monday July 27 2020, @01:24PM (#5738)
3 Comments
News

Flag lowered at US consulate in Chengdu as China takes control

Chinese authorities have taken over the US consulate general in Chengdu, marking the diplomatic mission’s official closure and a new low point in ties between the world’s largest economies.

At dawn on Monday, the American flag outside the consulate was lowered while police held back crowds that had gathered over the weekend to watch. At 10am, the mission was closed, according to China’s foreign ministry.

Chinese soldiers goose-stepped in front of the consulate while teams of workers in hazmat suits and officials dressed in white short-sleeved dress shirts and carrying black briefcases entered the mission. Workers draped grey cloths over signs bearing the consulate’s name.

“Competent Chinese authorities entered through the front entrance and took it over,” the foreign ministry said.

U.S.-China engagement is over. Is military conflict next?


There is now a bipartisan consensus in the United States on the need for a tougher China policy. Even longtime China scholars and policymakers who have spent their lives building closer ties with China in the belief that such engagement would induce democratic reform have grown disillusioned.

At the same time, they say the Trump administration’s “sledgehammer” approach, which seems intent on starting another cold war and leaves no room for dialogue, is counterproductive and disingenuous in its purported concern for Chinese people. It is also dangerous and could lead to outright conflict, they say.

“There are ways to handle the relationship without blasting through it,” said Deborah Seligsohn, who served as a U.S. diplomat for more than two decades, mostly in Asia. “There are ways to weigh the pluses and minuses. It doesn’t have to be this antagonistic.”

How the Cold War Between China and U.S. Is Intensifying (archive)


The Trump administration has increasingly challenged China’s assertions of sovereignty and control over much of the South China Sea, including vital maritime shipping lanes. Just last week, Secretary of State Mike Pompeo, who has described China as a major security threat, decreed that most of China’s claims in the South China Sea are “completely unlawful,” setting up potential military confrontations between Chinese and U.S. naval forces in the Pacific.

[...] The New York Times, concerned about the possibility of further limitations on journalists working in China, announced last week that it was relocating much of its major news hub in Hong Kong to Seoul, South Korea.

Previously: U.S. Scoops Up Chinese Spies; "Friendship" Ended

U.S. Scoops Up Chinese Spies; "Friendship" Ended

Posted by takyon on Sunday July 26 2020, @01:05AM (#5734)
24 Comments
Career & Education

US arrests three Chinese nationals for visa fraud


The US has charged four Chinese nationals with visa fraud for allegedly lying about their membership of China's armed forces.

Three are under arrest while the FBI is seeking to arrest the fourth, who is said to be in China's San Francisco consulate.

FBI agents have also interviewed people in 25 US cities who have an "undeclared affiliation" with China's military.

Prosecutors say it is part of a Chinese plan to send army scientists to the US.

Singapore man admits being Chinese spy in US


A Singaporean man has pleaded guilty in the US to working as an agent of China, the latest incident in a growing stand-off between Washington and Beijing.

Jun Wei Yeo was charged with using his political consultancy in America as a front to collect information for Chinese intelligence, US officials say.

Separately, the US said a Chinese researcher accused of hiding her ties to China's military was detained.

China earlier ordered the closure of the US consulate in Chengdu.

The move to shut down the diplomatic mission in the south-western city was in response to the US closing China's consulate in Houston.

FBI arrests Chinese researcher for visa fraud after she hid at consulate in San Francisco


A researcher who took refuge in the Chinese consulate in San Francisco after allegedly lying to investigators about her Chinese military service was arrested and will appear in court on Monday, according to a senior Justice Department official.

According to court documents unsealed earlier this week in the Eastern District of California, Juan Tang, a researcher at the University of California, Davis, applied for a nonimmigrant J1 visa in October 2019. The visa was issued in November 2019 and Tang entered the United States a month later.

Tang allegedly made fraudulent statements on her visa application by concealing that she served in the Chinese military. The FBI concluded that Tang was a uniformed officer of the People’s Liberation Army Air Force after photographs of her were uncovered on electronic media seized in accordance with a search warrant.

Officials Push U.S.-China Relations Toward Point of No Return


Top aides to President Trump want to leave a lasting legacy of ruptured ties between the two powers. China’s aggression has been helping their cause.

[...] China’s leader, Xi Jinping, has inflamed the fight, brushing aside international concern about the country’s rising authoritarianism to consolidate his own political power and to crack down on basic freedoms, from Xinjiang to Hong Kong. By doing so, he has hardened attitudes in Washington, fueling a clash that at least some in China believe could be dangerous to the country’s interests.

The combined effect could prove to be Mr. Trump’s most consequential foreign policy legacy, even if it’s not one he has consistently pursued: the entrenchment of a fundamental strategic and ideological confrontation between the world’s two largest economies.

Ukraine Prez: Everyone should watch the 2005 film Earthlings

Posted by takyon on Thursday July 23 2020, @04:36AM (#5708)
21 Comments
Career & Education

Ukraine Hostage Standoff Ends After President Agrees To Promote Joaquin Phoenix Film

A hostage standoff on a bus in western Ukraine ended Tuesday after a bizarre demand from the captor was met when the country's president publicly recommended a 15-year-old animal rights documentary narrated by Joaquin Phoenix.

Just before the end of the 12-hour standoff in Lutsk, a city located some 250 miles west of Kyiv, President Volodymyr Zelenskiy posted a video clip to his Facebook page stating: "Everyone should watch the 2005 film Earthlings."

The post has since been deleted.

Earthlings (film)
IMDB

5-port 2.5GbE Switch

Posted by takyon on Friday July 17 2020, @07:49PM (#5680)
12 Comments
Hardware

At Last, a 2.5Gbps Consumer Network Switch: QNAP Releases QSW-1105-5T 5-Port Switch

After entirely too long of a delay, the wait for faster consumer-grade network switches appears to be coming to an end. This week QNAP launched its QSW-1105-5T switch, one of the industry’s first unmanaged 2.5Gbps (2.5GBASE-T) switches. The 5-port switch supports 2.5GbE operation on all five of its RJ45 Ethernet ports, and along with being unmanaged it is also fanless, allowing the switch to work maintenance-free and installed virtually anywhere. The QSW-1105-5T is already on sale in Taiwan for roughly $100, meaning that we’re looking at a price-per-port of about $20.

[...] As the first of what will undoubtedly be many 2.5G switches over the coming months, the QSW-1105-5T also gives us our first real look at what we can expect from this generation of switches as far as footprints and power consumption goes. Since it’s not carved from a pro-grade switch, the 18 cm x 14.5 cm switch is significantly smaller than earlier NBASE-T switches. And with a maximum power consumption rating of 12 W, we’re looking at power consumption of just a bit over 2 Watts per port, which is also a significant improvement over admittedly far more powerful switches.

All of which sounds unremarkable, and indeed that’s exactly what makes the QSW-1105-5T so interesting. The biggest barrier to wide consumer adoption over the last few years has been the cost – both in regards to the core technology and added frills – so we’ve been waiting for quite a while to see NBASE-T technology transition from pro-grade switches to cheap, consumer-grade gear.

PinePhone Gets New Version With 3GiB RAM and 32GB Storage

Posted by takyon on Thursday July 16 2020, @05:57PM (#5675)
6 Comments
Mobile

Pinephone “Community Edition: PostmarketOS” Launched with 3GB RAM, 32GB Flash, USB-C Hub

After PinePhone “BraveHeart Edition” without any OS pre-installed was introduced at the end of last year, Pine64 launched PinePhone “Community Edition: UBports” with Ubuntu Touch last April, and now the company is taking pre-orders for Pinephone “Community Edition: PostmarketOS with Convergence Package”.

Besides using a different operating system, the new PinePhone also got a hardware upgrade with 3GB RAM and 32GB flash instead of the 2GB/16GB configuration from earlier models. Due to the changes and the addition of a USB-C dock for convergence, the price has also gone up from $149.99 to $199.99 with shipping scheduled to start at the end of August. If you don’t need the extra memory, storage, and convergence package, you can still pre-order PinePhone with postmarketOS for $149.99.

Got a PinePhone!
Adventures in PinePhone-land

Two Chinese Companies Working on Discrete GPUs?

Posted by takyon on Tuesday July 14 2020, @09:58PM (#5665)
15 Comments
Hardware

Previously:

Semiconductor Manufacturing International Corporation (SMIC) Starts "14nm" FinFET Volume Production

Look out Nvidia and AMD… Chinese GPU maker has a GTX 1080-level card in development

The new story:

Asia based Zhaoxin has plans for a dedicated graphics card series

A 70 Watt GPU wouldn't be as interesting as the 200 Watt "1080-level" GPU w/HBM concept from Jingjia Micro, but it could be good enough for cheap office PCs. It's also just a start: note that Zhaoxin's CPUs are on "16nm" while the GPU is on "28nm".

Holographic Optics for Lightweight VR, and Volumetric Video

Posted by takyon on Thursday July 09 2020, @11:24PM (#5645)
7 Comments
Hardware

Facebook reveals holographic optics for thin and light VR headsets

Now it’s revealing a holographic optical architecture designed for thinner, lighter VR headsets, which it expects will appear in future “high performance AR/VR” devices.

Discussed in a Siggraph 2020 research paper titled “Holographic Optics for Thin and Lightweight Virtual Reality,” the system uses flat films to create a VR display only slightly thicker than today’s typical smartphones. Facebook’s “pancake optics” design combines several thin layers of holographic film with a laser projection system and directional backlights, delivering either flat imagery or volumetric holograms depending on the sophistication of the design. Depending on how many color, lighting, and alignment-enhancing components a prototype contains, the thickness of the optical system can range from 11mm to just under 9mm.

In wearable prototype form, each eye display features a resolution of roughly 1,200 by 1,600 pixels — comparable to current VR goggles — with a field of view that’s either a 93-degree circle or a 92-by-69-degree rectangle. That’s roughly comparable to the display specs of a 571-gram Oculus Quest, but in a glasses-like form factor that weighs less than 10 grams in total, albeit with only a single eye display in the prototypes. The researchers note they could cut parts and change materials to achieve a 6.6 gram weight equivalent to plastic aviator sunglasses, but would compromise performance by doing so.

Google Takes a Step Closer to Making Volumetric VR Video Streaming a Thing

Google unveiled a method of capturing and streaming volumetric video, something Google researchers say can be compressed down to a lightweight format capable of even being rendered on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head completely within a 3D area; you can rotationally look up, down, left, right, and side to side (3DOF), but you can’t positionally lean back or forward, stand up or sit down, or move your head’s position to look around something (6DOF). Even seated, you’d be surprised at how often you move in your chair, or make micro-adjustments with your neck, something that when coupled with a standard 360 video makes you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.

8-Channel Threadripper Rumor Back From the Dead

Posted by takyon on Wednesday July 08 2020, @04:00PM (#5636)
7 Comments
Hardware

AMD Ryzen Threadripper PRO 3995WX Workstation CPU Spotted, Lots of Zen 2 Cores & Increased I/O on WRX80 Platform

They will be supported by AMD's new WRX80 platform which is actively being worked on by several board partners right now. Main features include 8-channel DDR4-3200 support in UDIMM, RDIMM, LRDIMM flavors, 96-128 Gen4 PCIe lanes with 32 switchable lanes to SATA and some PRO features which will allow these chips to be the ultimate workstation solution in the market. In another tweet, Videocardz reported that AMD will be introducing its Ryzen Threadripper PRO lineup on the 14th of July which is next week [Tuesday].

I think 256 GiB RDIMMs (Samsung) are still the highest capacity out there, so it could support up to 4 TiB of memory.

That's also a greater number of PCIe 4.0 lanes; the TR 3990X only supports 64.