I've been a Debian user since playing with Knoppix on a friend's ThinkPad circa 2001 while he was on break from uni. That extended to Ubuntu back in the 6.06 LTS days, and then on to Linux Mint and Cinnamon after hating Unity/GNOME 3. But I've been craving something more lightweight and systemd-free, and had yet to find an interesting enough alternative until reading this reply (Thanks urza9814 and Azuma Hazuki!).
I really liked what I saw in Void, and it felt like a cross between a modern Linux distro and a lightweight OpenBSD. There's the BSD license on their from-scratch package manager; I don't dislike the GPL, but BSD is much more permissive and allows giving back to the BSD folks. Then we have runit for init, which uses the very slick idea of a managed process tree (very Unix). Plus the distro is rolling, which is something I've always been interested in for desktop use. LibreSSL is a big plus, along with a myriad of ports to other architectures like the Raspberry Pi and other ARM boards (like OpenBSD and NetBSD have).
I first downloaded the musl libc version to see how much it could support, and surprisingly the musl libc version is plenty stable and usable. But I wanted to run it on a serious hardware box to see how it performs and, more importantly, how simple it is to install and manage, so I opted for the standard glibc version. Runs like a champ. The xbps package manager is easy to use: a few simple commands keep the system up to date, install packages, and resolve dependencies just like apt or rpm. The install process is also very simple: you boot into a live desktop, open a terminal, and run the installer as root using sudo. After a very quick configuration the installer does its thing and you're ready to reboot. So far I have my Plan 9 tools set up, such as drawterm and plan9port. Getting a usable system up and running is honestly pretty damn easy. One word that sums it up: refreshing.
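For reference, the handful of xbps commands that cover day-to-day use (from memory, so double-check against the Void docs):
'$ sudo xbps-install -Su' syncs the repos and upgrades the whole system.
'$ xbps-query -Rs NAME' searches the remote repos for a package.
'$ sudo xbps-install NAME' installs a package; '$ sudo xbps-remove -R NAME' removes it along with its now-unneeded dependencies.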
My next step is some sort of mdadm torture test to see how stable it runs between reboots, since I'm not yet sure whether the issue I've seen is systemd, kernel, or hardware related. If it turns out to be crap software, then my workstation/big file server gets Voided too. I'm also interested in switching my laptop.
== All aboard the 9grid Part 1.1 ==
So I've been eager to write a second chapter to my Plan 9 experience, and so far progress has been slow and steady. I have a much better grasp of the concept of namespaces and how the OS works, but I don't yet feel qualified to write a guide. I have been tinkering with my setup: a Celeron J1900 board with a 256GB SSD running the most recent version of 9ants configured as a CPU server. The box sits next to my router plugged into the network and happily hums along at about 11 watts, which costs me about a buck sixty a month here in NYC to run 24/7. I don't allow direct access, instead relying on an ssh tunnel via my ageing Debian box, an old Wyse terminal that sips power at 8W (Void conversion in the future). I can pop onto my CPU server from work on Windows 10 using drawterm built under Cygwin.
The feel of Plan 9 is quite interesting. The GUI feels obtuse at first, but after getting to know mouse chording and the terminal, it becomes a pleasure. The whole idea behind the lack of a command history is that the text buffer of the window is your history, and it's fully editable. If you see a command example in a man page you can edit it on screen, highlight it, and "send" it, which runs that command.
The plumber is a message server. It receives plumb requests, which are text strings that the plumber parses and matches against a set of rules in your plumber config; a matching rule then performs the appropriate action, such as running an associated command. This is similar to the right-click context menu in MS Windows: instead of open/open-with entries buried deep within the registry, you just have a message-decoding server. If it's a URL, open the web browser with it. If it's a PNG, open page to display it. If it's a text file, which includes scripts and source files, open it in acme. And so on. A very neat little service that gives you a lot of interesting functionality.
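To give a flavor, here is roughly what one rule in that config looks like (a simplified sketch from memory, so check the plumb(6) man page for the real pattern syntax):
type is text
data matches '[a-zA-Z0-9_\-./]+\.png'
arg isfile $0
plumb to image
plumb start page $file
Read it as: if the plumbed text looks like a .png path and names a real file, send it to the image port, and start page on it if nothing is listening there.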
The acme text editor is quite interesting and quickly shows off its powerful ease of use. Open a session and you have a listing of your home directory. Right click a file to open it in a new window. If the file is an image, media, or whatever, the plumber will forward the request to page, which will open the appropriate viewer. Right clicking a directory opens that directory in a new window. Want to create a directory? Type the shell command mkdir newdir, highlight it, and middle click it to run that command. Then click Get to refresh the directory listing and you will see newdir. Acme use example:
Create a new script file from the current directory window:
Type touch newfile, highlight it, middle click.
Middle click Get to relist the directory contents and observe newfile.
Type chmod +x newfile, highlight it, middle click.
Right click newfile to open an edit buffer in a new window.
Add some stuff to the file, e.g. #!/bin/rc followed by echo 'Hello world!' on the next line, then click Put (save).
In the directory window, middle click newfile.
A new window opens with the output of the script we just executed: Hello world!
All of the above commands and editing are done within acme windows. The idea is that the editor works with your tools, not against them. The only issue I have had is skipped input events causing infinite loops; that happened when using fgets from stdio rather than native Plan 9 input via the draw libraries.
I have to organize my notes and writings pertaining to my tinkering with Plan 9. The architecture is quite interesting and simple overall. I plan on writing a basic intro to the OS internals: how the kernel works, booting, networking, graphics and more. I'm still a ways off, but I'm having a ton of fun learning by working in the OS. I have been fooling around with this guide and building the examples in Plan 9 using acme and as many native Plan 9 libraries as possible. Fun learning experience.
So my last post was a bit of an intro to my newly ignited flame of curiosity, fueled by the discovery of a Plan 9 based cloud called 9gridchan. In this post I'm going to help you get up and running with the ants iso and get drawterm connected to a live Plan 9 session. It's a bit lengthy and jumps around a bit, but it's a crash course and my first guide, so hold on tight...
First off, grab the latest iso from http://ants.9gridchan.org/
Second, grab and build drawterm: http://drawterm.9front.org/ (we NEED the 9front version which supports p9sk1 security.)
Boot it in your favorite VM using a bridged adapter so the system gets its own IP, accessible from any machine on your LAN. I'm using VirtualBox under Linux Mint 18.3. If bridging doesn't work you may need to install libvdeplug-dev; that was an issue I ran into.
Next up is booting the ants ISO. You have to enter a few settings manually during boot, but the defaults are fine. Aside from perhaps a different video mode, you can just keep hitting enter at every single prompt until the GUI terminal starts. If you want a larger terminal, use 1280x1024x24 (or whatever) for video mode and leave it set to VESA mode. But take note: we are going to be doing the rest of our exploring in drawterm (more on that below).
Right after entering the video mode, the GUI starts with a big white terminal asking you one final question: what kind of mouse are you using? If you want the scroll wheel enabled, type ps2intellimouse at the prompt and hit enter. Otherwise the default is ps2 which is fine for now.
You should now see your mouse cursor come to life and rio, the window manager, is started and two windows are opened for you. The small one in the upper left is your system monitor and the other is a terminal window with a little intro. If you want to jump right into the grid, type gridstart at the prompt. But not just yet, keep reading. If you're really hardcore: '% man intro'
If you've never used the plan 9 GUI before here is a crash course:
Right click an empty space (not a window), select New, then right click and drag to draw a new shell window. (Mouse chording and button menus are core to rio, so read up on them.) To delete a window, right click an empty space, select Delete, then right click the window you want gone. To cancel a delete, left click anywhere, or right click an empty space (meaning not over a window). Since everything is a file, including our terminal, deleting is how we close them. Resizing works like any other OS: grab an edge or corner, left click and drag. Moving a window is a little more tricky: go to an edge until the cursor turns into the resize glyph, right click, and the cursor should turn into a little square while the window border turns red, so drag away! Window taking up too much screen and you want to "minimize" it? Right click an empty space, select Hide, then right click the window to hide it. That window is now listed in the right click menu; just select it there to "maximize" it again. In addition, any windows buried under other windows are also listed in that menu, so you don't lose them.
Since Plan 9 was inspired directly by Unix it too has a shell, rc to be exact, which looks and somewhat behaves like a Unix shell. Many familiar commands are present, such as cp, ls, grep, and awk, but be warned: while they offer the same basic functionality, the details are often very different. man pages are your friend.
And I want to be clear, the shell and terminal are two different things. The shell is what we use to run commands, the terminal is what we sit in front of.
The shell experience is not unlike the primitive terminals of old *nix systems, where accidental input can cause errors or leave you smashing the keyboard trying to figure out why it doesn't work anymore. You probably moved the cursor to another line and pressed enter, sending garbage to the command interpreter, or backspaced over the prompt. Press delete and you're back to the prompt.
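A couple of quick rc examples to show where it diverges from sh (written from memory, so check the rc(1) man page for details):
'% for(f in *.c) echo $f' is rc's loop syntax: C-like parentheses, no do/done.
'% x=`{date}' is rc's command substitution: backquote-brace instead of sh's backticks.
'% echo $path' shows that variables like $path are true lists in rc, not colon-separated strings.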
The Plan 9 kernel is a kind of file system router that is transparent over the network. All devices, processes, and resources are implemented as files. Plan 9 also separates the functions of a computer into pieces you can arrange across one or more systems on a network, transparently, as needed: CPU, auth, and disk. CPU is what runs user processes; without it you can't run stuff. Disk is a service that handles everything storage and disk related; it can run alone on a machine dedicated to serving up storage any way you want. Auth is how security is implemented. Want an Active Directory-like domain controller? Just point your terminals at a single auth server and you're done. And finally, one last component: the terminal, or where your butt is parked and doing work. That can be anything from a drawterm session to a full blown workstation running its own local CPU, disk, auth, and terminal (in fact, that is what our VM is doing!). At this point you're probably wondering: can I spin up a bunch of CPU servers to make a cluster? YUP! In fact, this is part of nix, another fork, which is focused on distributed grid computing. And yes, workstations can also be part of distributed networks, because Plan 9 doesn't make a distinction between configurations; it's just how that machine is set up. You can also link multiple auth servers to form more complex networks with multiple domains.
Namespaces. I'm still fuzzy on this subject, but they implement per-process isolation. Quick demo: draw two shell windows and run '% cat /dev/mouse' in each. Notice that only when the mouse cursor is inside a window do you see the coordinates appear in it (also: your mouse is a file that simply contains the current XY coordinates as human readable text). This demonstrates per-process namespace isolation. The /dev/mouse in each window is different and bound only to that window; outside of that window, the process has no clue where the mouse is. Each process has its own namespace and file system you can change at any time. What happens in that namespace stays in that namespace. To see what binds and mounts are in a window's namespace, run '% ns'. Want to kill those cat /dev/mouse processes? Hit ctrl-c and watch as nothing happens. That's because in Plan 9 the interrupt key is delete, not ctrl-c. Why? Remember, in Plan 9 processes are files too: you delete the process to kill it. It's simple when everything is a file. Lastly, cut/copy/paste works easily: highlight with a left click and drag, then use the middle mouse button menu. Copy is called snarf. Why? ThunderCats fan? Who knows.
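If you want to play with namespaces directly, bind is the tool (these commands are from memory, so double check the bind(1) man page):
'% bind -a /usr/glenda/bin /bin' union-mounts a directory onto /bin, for this window's namespace only.
'% ns | grep bin' shows the new bind; run ns in another window and it won't be there.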
In Plan 9, a terminal is the device you sit in front of to do actual work; it means a GUI running rio. There is no text-only interface to Plan 9. I know that doesn't sound Unix, but this is Unix 2.0: the GUI is for moar terminals! And this is where it gets weird: all windows are terminals. A GUI application starts and runs in its parent terminal, and the window is resized as necessary. This is how Unix should work. Everything is unified. No duality of terminal vs GUI. In Plan 9, they are one and the same.
Now, if the VM networking is working correctly, we should have an IP address accessible from our LAN. Find your IP address by running '% cat /net/ndb' in a terminal. ndb is the network database (still learning that one), a collection of network configuration data as well as other machines you wish to "dial". An alternative command is '% cat /net/ipselftab', which is a bit terse.
Can you ping? Try '% ping'. Why is the command not found? Because Plan 9 breaks things up: /bin has a few directories in it, like ip, to group tools by category. So to run ping, type '% ip/ping -n 4 HOST'. But wait, one more demo: let's make sure we can talk to the VM from localhost or another host on the LAN. Run this in a Plan 9 terminal: '% aux/listen1 -tv tcp!*!9999 /bin/echo HELLO!' Then on another box run '$ telnet HOST 9999', which should print HELLO! and exit. Congrats, your Plan 9 system is talking to the world!
Okay. Networking is running. Now we need to get drawterm up and running. Why the emphasis on drawterm? The fork of Plan 9 that ants is based on does not yet support virtio (Harvey, another fork, does, but that's out of scope for now), so in the VM graphics are sluggish and there is no mouse cursor integration. Drawterm is similar to an X client but uses 9p to mount/bind remote resources directly, without any intermediate protocols. So you get a more native user experience: smooth graphics, and the mouse cursor is not captured. But there are two more major bonuses: drawterm serves up your local file system to your login namespace, meaning you can seamlessly access your local files from your Plan 9 session :-). And copy/paste works between the host OS and your Plan 9 session. That is how we share things easily between systems. The VM is used as a CPU and disk server and we leave its GUI alone.
Okay, enough nonsense, let's get drawterm running. The live CD doesn't configure factotum for authentication, so a one-liner is needed to get that set up:
% echo 'key proto=dp9ik user=glenda dom=whatever !password=something' >/mnt/factotum/ctl
You can change the password to whatever you want. You'll also notice there was no output; that could mean all is well, or that nothing worked. To check it's configured properly, run '% cat /mnt/factotum/ctl', which should print the configuration string minus the password (it should look like this: key proto=dp9ik user=glenda dom=whatever !password?). factotum, the authentication agent, is a file server mounted at /mnt/factotum, and we just configured it by writing to its control file. Cool eh? Repeat to yourself: "Everything is a file."
By this time you may or may not have drawterm built; if not, get to it! You might need to grab a few libs to get it building, but on *nix it's pretty simple; run '$ CONF=unix make'
Okay. Now we need to dial our CPU server in the VM from drawterm:
'$ ./drawterm -h xxx.xxx.xxx.xxx -u glenda -a xxx.xxx.xxx.xxx'
h - specifies the host (or CPU server in plan 9 jargon)
u - is the user, always glenda for this tutorial
a - is the auth server, which is running on the same machine, so it has the same IP as the CPU (host).
If everything is still working as configured, drawterm should open up to a screen similar to the one you saw when the VM first started the GUI: a big white window with a bluish border. It might not display anything immediately; give it a second. If nothing appears, type in the password and hit enter anyway; I noticed it sometimes doesn't render the login text for some reason (could be my system or a bug). After typing the password and hitting enter, you should see a command prompt: gnot%
Type rio and hit enter. You should now be staring at a blank grey screen! Congrats! http://9front.org/img/1334088281.jpg
Open a new terminal (right click menu) and run gridstart. If everything is working you should see "post" printed a few times, and a rio session from the 9grid is started. The 9grid session presents you with four windows opened by default:
upper left is the hubchat window, enter a nick and say hello!
To the right is the Acme text editor. Acme is *THE* text editor of plan 9 so learn it if you plan to do anything code related. (draw a window in your cpu and run acme)
Under acme is a window displaying a bitmap for what I assume is decoration.
To the lower left, under the chat window, is the wiki web page rendered in mothra, a very primitive web browser (try running % mothra http://www.soylentnews.com in a window on your CPU).
If you want the resource monitor running in your drawterm, draw a little window and run stats. Right click the stats window to add or remove resources, and resize it as needed. NOTE: the remote rio session is not local, so make sure you draw new windows in YOUR rio session, outside of the remote rio window, for local use. (Remember you can hide the remote session if you need it out of the way.)
That's all for now. Stay tuned. My next entry should be a guide to install the OS to a VM as a CPU and disk server leaving the gui to drawterm.
If anyone is interested in doing a meetup in the grid, let me know and we will work out a date and time. I should be on tonight around 9pm eastern USA time. I hope to get a few of you on board. We're going to build our own internet, with blackjack and hookers.
This is hopefully going to be one of many new journal entries I'm going to write. I'm at a point in life where I'm thoroughly bored and frustrated. My problem is one of isolation, so I've been trying to find something technologically and socially oriented. I did the maker space thing but found it full of IoT wankers and webtards: if it wasn't Arduino, Raspberry Pi and/or JavaScript you were SOL. I'm the technological rebel type. I hate the direction computing is headed. Boring, neutered walled gardens: the septic white suburbia of computing. No thanks. So last night I made a discovery that has reignited the dim flame of intrigue, so sit back and read on.
I've been very interested in Plan 9 for quite a while. Its concepts are intriguing, yet abstract and confusing. Remember the old Unix mantra: "everything is a file!" Well, we all know that's crap, because networking (and more) is accomplished via syscalls, not files. This is where Unix broke. The creators knew this and decided that Unix was broken and it was time to replace it with something more Unix than Unix itself. This is how Plan 9 came to be. It was Unix 2.0, designed to fix all the ugly hacks and shortcomings of Unix (they even eschewed dynamic linking and shared libraries). The first thing that came to my mind was "wow! I can build a cloud without a cloud." Now just remember, Plan 9 was started in the 80's. We've had distributed cloud computing since the fucking 80's and no one noticed.
But like the early days of my exploring Linux/Unix, it is abstract and confusing. The big hurdle I always face is "okay, I got this damn thing booted. Now what?" So my problem is one of utility, not technical understanding. Now that I have a working system, what do I do with it? With no goal beyond installation and poking around, there is little motivation to explore further (I attribute that to my ADD/Asperger's/whatever, where I need continuous reward or I crash and abandon. Rinse, wash, repeat.)
This past month I've taken a hard run at learning Plan 9 and started tinkering with it more and more. I downloaded and built Harvey, got a version of drawterm running, and tried connecting, but got nothing but frustration and failure. I then got Inferno (Plan 9's sibling, designed by the same Bell Labs people) built and running on my Raspberry Pi, but beyond playing with the demos I couldn't figure it out. So I set about looking for more Plan 9 guides, trying to find an activity or application I could get into so that the motivation to continue would follow. In my search I stumbled on http://ants.9gridchan.org/.
I realized I had found exactly what I was hoping to find: a decentralized grid of Plan 9 computers on the internet to explore. So I downloaded the iso, popped it into VirtualBox and fired it up. Once booted, I followed the directions, ran gridstart, and was greeted with a new rio session (rio is the Plan 9 window manager). In the new rio session a chat window, wiki, acme editor, and a png are loaded automatically. I was prompted to enter a handle for an irc-like chat, punched in my handle, and was promptly greeted by a user, mycroftiv. That person kindly helped this n00b get on his feet and get things running. It was my teenage years all over again, discovering the internet; I swear it brought a tear to my eye. Being lost in a strange digital place was both exhilarating and intimidating, and there were people helping. I'm home again!
Think of the grid as a cloud of Plan 9 systems linked together using nothing more than Plan 9's 9p protocol and utilities. There is no special software needed to accomplish the networking; it's all built into the OS. The grid offers chat, wikis, radio, file sharing, and more. So now I'm hooked. The idea that a radio station is just a remote file system you mount and play like a regular audio file is pure bliss. No protocols, clients, servers, ports, etc. You are directly served the damn file using the same protocol the entire OS is built on. This is how the cloud should work: distributed operating systems sharing their resources, not walled gardens hidden behind proprietary clients, servers and yet another protocol (YAP). And in keeping with the spirit of Unix, discrete tools and scripts are the foundation of userland. mycroftiv even pointed out that the connection is served up by just a small script; that script is the server, no special software necessary.
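As a concrete taste of how simple that is, dialing and mounting a remote file server is just two commands (sketched from memory, see the srv(4) and bind(1) man pages; the host name here is made up):
'% srv tcp!fs.example.org!564 gridfs' dials the remote 9p server and posts the connection as /srv/gridfs.
'% mount /srv/gridfs /n/remote' attaches that connection to your namespace, and now /n/remote is their file tree.
A radio station, a wiki, or a chat hub on the grid is just a variation of those two lines.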
My goal is to not only learn and use plan 9, but to also become part of the plan 9 community, develop the system, and get more people to help. I also have an idea for getting others involved: I would like to propose creating a soylent 9gridchan community with the ultimate goal of somehow bridging soylent news to the 9 cloud. We can offer services via 9p to those interested and possibly run our own CPU and disk servers to host everything.
I hope a few soylentils will join me on this journey. Stay tuned for my next post!
This is apparently the UI used by the government employee that set off the missile warning in Hawaii this past Saturday: https://twitter.com/CivilBeat/status/953127542050795520
I'm sure they paid good money for it too.
Spotted on reddit:
Someone might have just cracked Intel's ME via JTAG. See this Twitter post:
https://twitter.com/h0t_max/status/928269320064450560
Game over man! Game over!
If true, then we now have ring -3 access into the ME with the possibility of permanently disabling it. I am also a little excited to hear what Intel has to say about this and what their next anti-consumer move will be. Though the more exciting prospect might be hacking it and using it for good, possibly allowing systems to run libre ME software! Perhaps we might see a proper open source Minix ME project spawn.
Next up! AMD's PSP.
So I was browsing around and stumbled upon this very interesting IoT platform: https://www.grisp.org/. Not into IoT but as an embedded platform it sounds cool as hell.
From what I gather, GRiSP is the Erlang VM ported to run on bare metal using the RTEMS RTOS, a library OS that makes your program look like a Unix process running on bare metal. So you have an open source POSIX RTOS, proven on military hardware and space-based systems, running a high-reliability concurrent functional language VM designed for communications systems, on a cheap Atmel ARM CPU. I like the balance of a high level language running close to the metal without miles of OS in between needing hundreds of megs of RAM. This video is a good intro and shows the hardware in action: https://www.youtube.com/watch?v=W0P-l7dBGJk&feature=youtu.be
Speaking of Erlang, I've yet to look into functional languages, so here I am, learning me some Erlang while listening to some newer Polish black metal from a few different bands. Anyone else work with Erlang? What are your thoughts on the language, and on the alternative BEAM language, Elixir (which can also run on GRiSP)?
And does anyone else listen to this racket while coding or doing other engineering work? A few great finds below (sorry, all YouTube links). Enjoy:
Odraza - Esperalem tkane (Warning! NSFW cover art) https://www.youtube.com/watch?v=zZHb0yAPHyI
MOROWE - Piekło.Labirynty.Diabły https://www.youtube.com/watch?v=WsmLo1VN5Zg
Kriegsmaschine - Enemy of Man https://www.youtube.com/watch?v=7iY5RxusESg&t=13s
Batushka - Litourgiya https://www.youtube.com/watch?v=xgfa5UlZAL8
Biesy - Noc lekkich obyczajów https://www.youtube.com/watch?v=R9JDs4GDJsg&t=1887s
Medico Peste - א: Tremendum et Fascinatio https://www.youtube.com/watch?v=MevYwfa_pvg&t=1736s
So I was writing a post in the comment section of this story, and it got me thinking about a few bat-shit crazy ideas I've had rolling around in my head and how they would be a perfect fit for a modern desktop/laptop RISC-V SoC. Keep in mind these are what I consider pipe-dream, paper-napkin ideas that I would like to share with others to see what they think. This is my first journal entry as well. Any references to RIO or RapidIO are from my original post, which I'll probably paste as a comment here later on. It might be a little messy and chopped up, but I'm pressed for time; I've probably skipped a few things or left some stuff out. I just want some opinions and feedback, if any.
* RISC-V APU. Something like an IBM Cell or Larrabee with a few general purpose RISC-V cores and tens of grouped, modular, lightweight RISC-V micro cores. The system can be built as a heterogeneous setup so data loaded into RAM by a user program is directly addressable by the micro cores without copying from main memory. It would be even better if we wrote GPGPU code the same way we write CPU code: since the instructions are the same, a single compiler builds all of the code in one go. Code for the micro cores is restricted to certain instructions, or has access to special instructions for texture sampling, video decode or whatever else is needed. A preprocessor keyword can tag code sections that are to be run on the micro cores so the compiler knows what to do with them, and GPU instructions and associated data can be tagged so they are loaded into HBM and not slower main memory.
Perhaps the HBM is some kind of massive L3 or L4 hybrid cache that is part system memory and part cache. Maybe it works similarly to ARM's tightly coupled memory, where the CPU bypasses cache for the HBM and GPU instructions, reducing latency by eliminating cache searches. Now you have a system that reduces GPGPU programming complexity by using the same CPU instructions and design. In a way, this removes the need for a GPU driver, as the GPU is just a bunch of CPUs. No video driver means easier development of graphics libraries and APIs; you're no longer bound to OpenGL/Vulkan/DirectX just because that's what the GPU maker supports in their proprietary driver. The code could also be statically linked or compiled right into the application binary, meaning the application can contain the entire 2D/3D rendering stack.
I also think this is a great system for co-processing all sorts of other things such as 3D sound, physics, simulation, etc. The idea is to eliminate externally programmed coprocessors or accelerators which require drivers. It remains to be seen if this approach is practical hardware-wise, as both Cell and Larrabee failed, but I think the design might be useful for high performance computing outside of graphics, or as an excellent companion to a pure GPU. For example, standard cores handle game loop/logic/input, the micro cores run AI and physics, and the GPU renders video. I would also expect the host OS to directly handle scheduling, making them fully managed cores: you can set affinity, execution caps, and loading, and see which threads or processes are running on them, memory used, etc. Though it remains to be seen if this is any less complex, coding wise, than current GPGPU/shader programming.
* Two-way trust security processor. this is one I've been pondering for a while but I'm not sure how practical or secure it can be. Big companies like making money off of their IP. They don't like open platforms with CPU's that can't hide their precious unencrypted bits. We don't like platforms containing secret code running on secret CPU's doing secret things which includes full hardware access outside of the OS kernel. So how about we come to a compromise and build a system both parties can trust?
It would work like this: limited secure VMs, created by a processor extension, allow black box modules to run inside a black box VM, out of system view. The VM and its associated memories and I/O paths are completely isolated from the system, including the host kernel. The key part is that the user maintains full control over what the VM can access, through control flags set before the VM is spun up; it's like installing a mobile app that lists the permissions it needs. A shared memory segment can be used as a FIFO or DPRAM to let the VM communicate with the host OS using mmap() and the like. An SPI or i2c port can then be dedicated to a user-added commercial security chip that is trusted by the IP holders, containing keys or another crypto CPU that does verification but has zero access to the system's hardware; it's just a secure peripheral. Everything is controlled by the initiating application, including memory needed, CPU core affinity, and execution cap. If a program wants to run secure code, a special OS flag triggers a prompt asking the computer owner's permission to start the VM AND listing what memory and I/O devices it needs to commandeer or share (CPU, memory, sound, video, framebuffer, GPU). If the initiating application is terminated, its secure VM is also terminated. Another mechanism is a memory scrubber in the IOMMU which zeroes out the VM's memory once it is killed, before returning that memory to the host OS.
I should also note that these secure VMs are not like regular VMs, in the sense that they are limited to a certain set of resources. For example, the secure VMs should run entirely from memory and have zero access to certain hardware like disk controllers, bus controllers, and certain I/O hardware like USB or input devices. Certain output devices, though, can be securely mapped, like audio and even secure frame buffers or segments of a frame buffer. The idea is: if you want to securely process data, go ahead; just be aware you can only do it in a limited yet secure environment. The user can't see the inside of the VM, and the VM can't see outside of its designated secure paths, save for the shared memory segment and perhaps two-way interrupt lines. And of course, all black box secure VMs and their associated resources are fully visible and auditable by the user from the outside. The user also has the ability to kill any of those VMs at *any* time.
My only problem is I don't know how to deliver the secure code to the VM, or how it gets decrypted and run without the user being able to extract the keys or hack a decrypted module to run in a simulator or insecure VM (though I'm sure this is already a known problem). Perhaps some bootstrap process between the secure VM and the external security processor, which verifies the VM and then lets the initiating application know it's safe to copy the encrypted binary blob to the VM via shared memory. Then the code is decrypted and run inside the VM.
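One possible shape of that bootstrap handshake, sketched very loosely: `hashlib` stands in for the security chip's attestation check, and a trivial XOR stands in for real decryption. Both are placeholders for illustration only; a real design would use signed measurements and proper key exchange.

```python
# Loose sketch of the attest-then-deliver handshake. SHA-256 stands in
# for the chip's measurement check; XOR stands in for real crypto.
import hashlib

def chip_attests(vm_image, expected_hash):
    """External security chip: verify the VM image before releasing keys."""
    return hashlib.sha256(vm_image).hexdigest() == expected_hash

def xor_cipher(blob, key):
    # Placeholder "encryption"; symmetric, so the same call decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

VM_IMAGE = b"secure-vm-image-v1"
EXPECTED = hashlib.sha256(VM_IMAGE).hexdigest()
KEY = b"\x42\x17"

# 1. Chip attests the VM.  2. Initiator copies the encrypted blob over
#    the shared segment.  3. Code is decrypted and run inside the VM.
plaintext = None
if chip_attests(VM_IMAGE, EXPECTED):
    encrypted = xor_cipher(b"proprietary module", KEY)  # delivered blob
    plaintext = xor_cipher(encrypted, KEY)              # inside the VM only
```

The key never leaves the chip/VM pair in this flow; the host only ever handles the encrypted blob.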
The user can also create their own secure VMs, so it's a secure computing system for everyone, including the owner. This could be useful for creating isolated VMs for secure computing, banking, communications, web servers, etc. I also see it being used for secure soft-dongles: commercial software runs by stuffing certain critical components into the secure VM and operating over shared memory. To me that's a fair way to do this, and the extra work to build the secure VM is worth it. Can this system be abused? Of course. But I think the key part of the idea is that the user/owner is in full control of the VMs from the outside. Even if my idea is dumb and won't work, the two-way trust is the key point and might be practical in some other form. It sounds nuts, but I want to be able to watch Netflix on my Linux-powered RISC-V system without secret software running on secret CPUs, and SMIs running unchecked, all while having FULL access to every bit of hardware in the system including main memory. No thank you.
* Infinite framebuffers and displays. The idea is this: a display is just a bitmap that is updated constantly to produce a moving picture, so why should framebuffers be fixed in size and number? The system's IOMMU can be linked to the graphics system to create a dynamic display system. Let's say we have our pipe-dream RISC-V SoC in a mini case with 8 mini RapidIO ports (a USB3/Thunderbolt replacement) and we have 8 RIO monitors (or active RIO->HDMI/DP/DVI/VGA dongles).
As you plug in each monitor, the RIO switch/controller interrupts the host OS and the RIO management system sees that a new device is plugged in. The RIO manager queries the device, sees that it's of type monitor, and passes that info to a display manager. The display manager queries the monitor for the modes it can handle, sees that it's a 1920x1200 monitor, and tells the user a new display is ready; the user decides what to do with it. Once the user picks what they want (say a 1920x1200x32 60Hz workspace), the display manager asks the kernel for a new framebuffer, and the kernel tells the IOMMU to create a ~10MB framebuffer and map it to the GPU. The display manager then presents this to the display server, which creates the workspace and writes it to that framebuffer.
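The "~10MB" above checks out. A quick helper for the size the kernel would ask the IOMMU to map for a newly hot-plugged monitor:

```python
# Framebuffer sizing for a hot-plugged display: width x height x bytes
# per pixel. 1920x1200 at 32bpp comes to 9,216,000 bytes, about 8.8 MiB.
def fb_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

size = fb_bytes(1920, 1200, 32)
```

Double buffering, as most display servers would want, doubles that to roughly 18 MB per monitor, which is worth remembering when eight of them can be plugged in at once.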
These framebuffers are also written to by the GPU, meaning all accelerated 2D/3D graphics can be written to any framebuffer the user program specifies. This decouples the display from the rendering, allowing for more flexibility. The same system can also create virtual framebuffers that can be mapped into I/O space and read by other hardware devices, or read into software like the Linux virtual framebuffer. Perhaps multiple displays could be mapped to one buffer, so a video wall could be created without the OS or its components knowing how. Stream 3D-rendered video directly over the internet as a compressed video stream? No problem!
Another idea is that this framebuffer system can tie into the whole two-way-trust VM by allowing overlapping framebuffers, so secure video can be overlaid on top of the user's desktop. A special hardware component would know the coordinate mapping between the two, and as the data is copied to the actual display, the hardware would flip between the two buffers, keeping them separate at all times. As the user moves the video window, the host OS updates those coordinates and the hardware does the rest securely. This prevents the user from reading the framebuffer back into the system to record the video. A bit like the old PC TV tuners, which fed video directly to the video card and used a magenta chroma key to tell the GPU where to map the video in the framebuffer. Another method would be a write-only port to a secure framebuffer that behaves like a DPRAM with a mask: as the desktop display system writes data to the buffer, it gets copied except where it hits the mask, which represents the secure video window and is simply not written to. If the secure video is full screen, a flag would tell the OS not to bother writing to that framebuffer at all, saving bandwidth. Of course, any outside paths would have to be secure, so more secure hardware must be added to the system. But as I stated earlier, this can also benefit the user, since the user has the exact same access to the security facilities.
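The masked write-only copy is easy to sketch. Here one byte stands in for one pixel and a plain list for the hardware mask; the point is that desktop writes falling inside the secure window are silently dropped, so the secure plane never mixes into anything the host can read back.

```python
# Sketch of the DPRAM-with-mask idea: the desktop writes its frame
# through the mask, and pixels inside the secure video window (mask == 1)
# are never touched, so the secure overlay can't leak back out.
def masked_blit(dest, src, mask):
    """Copy src into dest except where mask marks the secure window."""
    for i, m in enumerate(mask):
        if m == 0:          # outside the secure window: desktop pixel lands
            dest[i] = src[i]
        # m == 1: hardware keeps the secure pixel; the write is dropped

# One 8-pixel scanline: pixels 3-5 belong to the secure video window.
display = bytearray(b"\xff" * 8)   # secure plane already holds video (0xff)
desktop = bytearray(range(8))      # desktop frame being written in
mask    = [0, 0, 0, 1, 1, 1, 0, 0]
masked_blit(display, desktop, mask)
```

In real hardware this would happen in the scan-out path, per pixel, with the mask rectangle updated by the host OS as the window moves.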
All of this requires a crazy IOMMU and memory system, and it's most likely impractical in terms of die size. Some of it may also decrease throughput as a result of the extra memory operations. Fun to think about, though.