
The Fine print: The following are owned by whoever posted them. We are not responsible for them in any way.

Journal by cafebabe

(This is the 59th of many promised articles which explain an idea in isolation. It is hoped that ideas may be adapted, linked together and implemented.)

I defined a provisional instruction set. The final instruction set will be defined after empirical testing. Fields may be split, moved, removed and/or expanded. Opcodes may be re-arranged. However, there is enough to begin implementation and testing.

The basic concept for implementing a virtual machine is a set of case statements within a while loop. This implements the fetch-execute cycle: one instruction is fetched at the top of the while loop, decoded via a switch statement (or nested switch statements), and then each case implements an instruction or related functionality. Instructions can be as abstract as desired and may include "make robot walk", "set light to half brightness" or "fetch URL". In my case, I'm trying to define something which could be implemented as a real micro-processor. Even here, people have implemented hardware with instructions such as "calculate factorial", "find string length", "copy string" or "add arbitrary precision decimal number". Even this is too complicated. I'm intending to implement a virtual machine which would compare favorably with hardware from 25 years ago.

This is partly for my sanity, given how computers have become encapsulated to the extent that almost no one understands how a serial stream is negotiated or encoded over USB. There is also the security consideration. I cannot secure my desktop. I certainly cannot secure a larger system. Anyone who claims that they can secure a network is ignorant or a liar. I'm not even sure that my storage or network interfaces run their supplied firmware.

I'd like something which targets Arduino, FPGA, GPU or a credit card size computer (or smaller). We've got quad-core watches, and a watch with RAID is imminent. With this level of miniaturization, we can apply mainframe reliability techniques to micro-controllers. For example, we can run computation twice and check the results; insert idempotent, hot-failover checkpoints between all observable changes of micro-controller state; or implement parity checking when it is not available on the target hardware. These techniques have obvious limitations. However, embedded systems often have surplus processing power. When EPROM cost more than a week's wages, micro-controllers would decode Huffman-compressed Forth bytecode or similar. Now we can use the surplus to increase reliability. The alternative is too awful to contemplate.

It is possible to have standardized object code which resumes on replacement hardware. This is like VMware for micro-controllers. In a trivial case, an LCD panel may run a clock application. The clock application checkpoints to a house server. When the panel fails, it would be possible to purchase a replacement panel and restore the application to the new panel. It may now run on a slower system with a smaller display but, within a minute or so, it should find its time source, adjust to the new display size and otherwise display time to your preferences.

A more practical example would be a hydroponic controller. People are developing I/O libraries which allow relay control over Ethernet with minimal authentication, no error checking (between devices with no memory integrity) and no hardware interlocks. For your own safety, please don't do this. A more sensible approach is to run two instances of the firmware. One instance runs locally in a harsh environment where humidity may reach 100% and temperature may fall below 0°C. The other instance runs in a controlled environment which is always dry and at room temperature. Both instances run integrity checks but no relays get triggered unless both instances compute the same results. Alternatively, the instance local to the relays may continue with less oversight while the server instance sends alerts about the lack of monitoring. For a large environment, it is possible to use the standard database technique of a two-phase or three-phase commit to ensure that servers have a consistent state prior to server hot-failover. This can work in conjunction with console access, graphical remoting and centralized OLAP over low-bandwidth networks.

Sun Microsystems said "The network is the computer." and then gave us Java. This only provided hot-failover within the Enterprise Java Bean environment. I'm proposing a system where low-bandwidth process control runs in two or more places, has the convenience of Android, the reliability of an IBM mainframe, the openness of p-code and the security of erm, erm. That part has been lacking for quite a while.
