Researchers at Delft University of Technology have developed a sensor that is only 11 atoms in size. The sensor is capable of capturing magnetic waves and consists of an antenna, a readout capability, a reset button and a memory unit. The researchers hope to use their atomic sensor to learn more about the behaviour of magnetic waves, so that hopefully such waves can one day be used in green ICT applications.
In theory, engineers can make electronic data processing much more efficient by switching to spintronics. Instead of using electrical signals, this technology makes use of magnetic signals to transmit data. Unfortunately, magnetism tends to get incredibly complicated, especially at the tiny scale of computer chips. A magnetic wave can be viewed as millions of compass needles performing a complex collective dance. Not only do the waves propagate extremely quickly, causing them to vanish in mere nanoseconds, the tricky laws of quantum mechanics also allow them to travel in multiple directions at the same time. This makes them even more elusive.
Elbertse, R.J.G., Coffey, D., Gobeil, J., et al. Remote detection and recording of atomic-scale spin dynamics, (DOI: 10.5281/zenodo.3759448)
The sensor is intended to help make progress with spintronics.
Today for Arm's 2020 TechDay announcements, the company is not just releasing a single new CPU microarchitecture, but two. The long-expected Cortex-A78 is indeed finally making an appearance, but Arm is also introducing its new Cortex-X1 CPU as the company's new flagship performance design. The move is not only surprising, but marks an extremely important divergence in Arm's business model and design methodology, finally addressing some of the company's years-long product line compromises.
[...] The new Cortex-A78 pretty much continues Arm's traditional design philosophy: it's built with a stringent focus on the balance between performance, power, and area (PPA). PPA is the name of the game for the wider industry, and here Arm is pretty much the leading player on the scene, having been able to provide extremely competitive performance with low power consumption and small die areas. These design targets are the bread & butter of Arm, as the company has an incredible range of customers who aim for very different product use-cases – some favoring performance while others have cost as their top priority.
All in all (we'll get into the details later), the Cortex-A78 promises a 20% improvement in sustained performance under an identical power envelope. This figure is meant to be a product performance projection, combining the microarchitecture's improvements as well as the upcoming 5nm node advancements. The IP should represent a pretty straightforward successor to the already big jumps that were the A76 and A77.
[...] The Cortex-X1 was designed within the frame of a new program at Arm, which the company calls the "Cortex-X Custom Program". The program is an evolution of what the company had previously done with the "Built on Arm Cortex Technology" program released a few years ago. As a reminder, that license allowed customers to collaborate early in the design phase of a new microarchitecture, and to request customizations to the configurations, such as a larger re-order buffer (ROB), differently tuned prefetchers, or interface customizations for better integration into their SoC designs. Qualcomm was the predominant beneficiary of this license, fully taking advantage of the core re-branding options.
[...] At the end of the day, what we're getting are two different microarchitectures – both designed by the same team, and both sharing the same fundamental design blocks – but with the A78 focusing on maximizing the PPA metric and having a big focus on efficiency, while the new Cortex-X1 is able to maximize performance, even if that means compromising on higher power usage or a larger die area.
While Cortex-A78 will only improve performance by around 7% from microarchitectural changes alone, Cortex-X1 will improve performance by up to 30% due to a wider design, doubling of most cache sizes, and other changes. Cortex-X1 cores are also expected to reach 3 GHz on a "5nm" node, delivering even more performance. The Cortex-X1 cores could use up to 50-100% more power than Cortex-A77/A78. Cores could be arranged in a 1+3+4 or 2+2+4 setup of Cortex-X1, Cortex-A78, and Cortex-A55 cores.
Jintide Montage might sound like the name of a punk group, but it's not. In fact, the Montage is an x86 processor with PrC (Pre-Check) and DSC (Dynamic Security Check) technologies that can be used in Jintide or other server platforms.
It shares common DNA with Intel, AMD and VIA and uses Skylake Xeon silicon at its core - and has already entered mass production.
According to the marketing materials, Jintide uses Tsinghua University's DSC technology to achieve "high-speed IO tracing, memory tracing and CPU behavioral checking via its built-in security check engine."
From the available information, we can tell this is not a consumer processor and there's no Core CPU coming from Jintide any time soon. The Montage is also aimed exclusively at the Chinese market, perhaps extending to the country's close allies.
Arthur T Knackerbracket has found the following story:
Revolutionary 'green' types of bricks and construction materials could be made from recycled PVC, waste plant fibers or sand with the help of a remarkable new kind of rubber polymer discovered by Australian scientists.
The rubber polymer, itself made from sulfur and canola oil, can be compressed and heated with fillers to create construction materials of the future, says a new paper unveiling a promising new technique just published in Chemistry—A European Journal.
"This method could produce materials that may one day replace non-recyclable construction materials, bricks and even concrete replacement," says organic chemistry researcher Flinders University Associate Professor Justin Chalker.
[...] "This new recycling method and new composites are an important step forward in making sustainable construction materials, and the rubber material can be repeatedly ground up and recycled," says lead author Flinders Ph.D. Nic Lundquist. "The rubber particles also can be first used to purify water and then repurposed into a rubber mat or tubing."
"This is also important because there are currently few methods to recycle PVC or carbon fiber," he says, with collaborators from Flinders, Deakin University and University of WA.
[...] The new manufacturing and recycling technique, called reactive compression molding, applies to rubber material that can be compressed and stretched, but one that doesn't melt. The unique chemical structure of the sulfur backbone in the novel rubber allows for multiple pieces of the rubber to bond together.
More information: Nicholas Lundquist et al. Reactive compression molding post‐inverse vulcanization: A method to assemble, recycle, and repurpose sulfur polymers and composites, Chemistry – A European Journal (2020). DOI: 10.1002/chem.202001841
So as it turns out, being cooped up due to COVID-19 causes your local resident NCommander to go on a retro computing spree. Last time, I dug into the nuts and bolts of Hello World for Windows 1.0. Today, the Delorean is ready to take us to 1993 — the height of the "Network Wars" between Microsoft, Novell NetWare, and many other companies competing to control your protocols — to take a closer look at one of Microsoft's offerings: Windows for Workgroups.
As with the previous article, there's a YouTube video covering most of the content, as well as a live demonstration of Windows for Workgroups in action, my personal experiences, and the best startup chimes of the early 90s.
If the summary doesn't make you relive all those General Protection Faults, then slip past the fold to see what all the hubbub was about for Windows for Workgroups compared to its younger brother, Windows 3.1.
The 16-bit family tree of Microsoft Windows can be a tangled beast, especially when we get to the topic of Windows for Workgroups. You may have noticed I haven't used its more common version number: Windows 3.11. That's because there was, in fact, a free-standing Windows 3.11. In truth, the following all existed at one point or another:
For clarity's sake, if I speak of Windows 3.1, I'm speaking of the original release, while Windows for Workgroups (WfW) refers to the final release unless I specify otherwise.
The versioning and numbering is extremely misleading. At the earliest end of the chart, clocking in from 1992, is the basic Windows 3.1 that most users are familiar with. Windows 3.1 required an 80286 or better, and had no integrated network stack (although one could be added). When people talk about 16-bit Windows, this is generally the version they're referring to. Operation on an 80386 would bring Enhanced Mode, which brought better performance and the possibility of 32-bit applications to consumer Windows.
Windows for Workgroups 3.11, meanwhile, emerged in mid-1993 and supplanted the original release. Requiring an 80386, this version brought 32-bit driver access and boasted faster performance and better stability. In addition, Workgroups 3.11 came with built-in networking support in the form of Microsoft's homegrown Windows Sockets implementation, with IPX/SPX, NetBIOS, and ARCnet included on the installation disks. TCP/IP was available as a free add-on.
Having had to pop open the kernel debugger due to system crashes (also detailed below), I can tell you that Windows for Workgroups 3.11 is far closer to Windows 95 than it might appear at first glance; many of the foundations of what would become the 9x series of Windows were laid here, rather than being introduced with Windows 95 as is commonly believed.
That leaves the two remaining versions, Windows 3.11, and Windows for Workgroups 3.1.
Windows 3.11 is something of a mystery to me. The freestanding upgrade has been archived, and I even went as far as to install it to examine differences. In short, I found one:
Notably, Windows 3.11 still supported the 80286 and Standard Mode, and still brands itself as Windows 3.1 on the installer and splash screen. As such, it's actually a distinct kernel from the later WfW 3.11. This, of course, leaves the last version on the list: Windows for Workgroups 3.1.
As for Windows for Workgroups 3.1, what I can tell you is that it was a bundle of Microsoft Windows 3.1 and the Microsoft Workgroups Add-on for Windows, which still depended primarily on DOS for networking and notably also supported the 80286. This might not sound like a big deal, but Windows for Workgroups actually played a starring role in making networking, and eventually Internet connectivity, something we take for granted as part of the operating system.
Some broader context is needed to understand the implications here. As far back as PC-DOS 3.x, the need for networking support in the base operating system was well understood. PC-DOS 3.1 formally introduced the network redirector API, a mechanism where add-on software could attach network drives to DOS. The network redirector API was so well designed that it was later (ab)used for various non-network devices such as MSCDEX to provide CD-ROM support for DOS and Windows.
What DOS didn't provide, however, was a standard network API. Instead, Novell, IBM, Microsoft, Sun Microsystems, and many other companies provided their own packet interfaces and network stacks through a variety of APIs that were mutually incompatible. Microsoft itself had its own hat in the ring with its LAN Manager Server for OS/2 and the Microsoft Workgroups Add-on for DOS.
Initially, this wasn't a problem. Most shops would have a single network (most commonly Novell NetWare) and a single set of programs. Windows 1.0 and 2.0 furthermore ran in real mode, which meant that Windows applications could simply talk to DOS APIs without a middleman. Problems began to emerge with Windows 286/386/3.0.
Internally, Microsoft was beginning to move towards Protected Mode. While the 80286 had a crippled ability to use memory above 1 MiB, 640 kilobytes was clearly becoming cramped. With OS/2 still struggling and Windows NT still far from shipping, Microsoft pivoted toward extending the useful lifespan of DOS-based Windows by embracing the then-new world of 32-bit computing. By having the core of the operating system run in Protected Mode, Windows could theoretically use up to 4 GiB of memory, with each application having a 16 MiB chunk to itself.
Networking was also becoming more important in corporate environments. Ethernet (in both thicknet and thinnet flavors) and Token Ring emerged as front-runners, and Novell's IPX/SPX competed with the TCP/IP used on UNIX workstations. IBM and Microsoft, meanwhile, were backing NetBIOS for use on small LANs while supporting NetBIOS-over-IPX and NetBIOS-over-TCP in larger corporate networks.
Something was going to have to give, so Microsoft collaborated with Sun Microsystems and JSB Software to write the Windows Sockets API, later and more commonly known as Winsock. The intention of Winsock was to give Windows a standardized mechanism for interfacing with network cards (NDIS), network protocols, and programming interfaces – in short, the ability to embrace all the competing network technologies at once.
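Winsock borrowed its calling conventions from Berkeley sockets, which is why code written against it still looks familiar today. As a rough illustration (in Python, whose socket module wraps Winsock on Windows and BSD sockets elsewhere, rather than the C API an early-90s application would have used), the portable call sequence Winsock standardized looks like this:

```python
import socket

# The call sequence Winsock standardized is the familiar BSD one:
# socket() -> bind()/listen()/accept() on the server side,
# socket() -> connect() -> send()/recv() on the client side.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the stack pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, _ = server.accept()

client.sendall(b"hello winsock")
received = conn.recv(1024)
print(received.decode())             # -> hello winsock

for s in (conn, client, server):
    s.close()
```

The only Winsock-specific additions in C were housekeeping calls such as WSAStartup and WSACleanup wrapped around this sequence, which is exactly what made porting BSD networking code to Windows practical.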
Windows 3.1 was the first version that could support a Winsock stack natively, but Microsoft didn't provide one in the box. Instead, Microsoft initially left this entire market to third-party developers, leading to one of the most pirated pieces of software of the era: Trumpet Winsock.
Many other vendors and even ISPs shipped their own versions of Windows Sockets, which powered the first Internet applications on Windows 3.1. For example, AOL for Windows became notable on the basis that it provided Winsock and gave those enjoying dial-up the ability to use applications like Netscape Navigator. This is in contrast to CompuServe and Prodigy Classic, which gave you their own walled gardens. CompuServe would eventually embrace the standard PPP protocol via the famous !go pppconnect to join the broader Internet.
Although most vendors simply shipped TCP/IP, a few vendors supported their own Layer 3 protocols, including DEC whose PATHWORKS product included full support for DECnet on Windows!
That state of affairs was going to change. With the famous Microsoft-IBM divorce of the 90s, OS/2 wasn't going to become the operating system of the future, and the DOS-based versions of Windows got an unexpected stay of execution. Microsoft had developed its own version of Winsock, which had premiered on Windows NT with IPX and NetBIOS support as standard, as well as a BSD STREAMS-derived TCP/IP stack.
Windows for Workgroups 3.11 was Microsoft's first widely-distributed, network-enabled Windows. Although most home users had little use for it, it would also prove to be a widespread technological test: Windows for Workgroups 3.11 marked the start of using 32-bit components in a fundamental way.
Compared to the rest of the system, the network stack was entirely 32-bit. Its implementation of the Windows Sockets API, known internally as Shoebill, was ported from Windows NT. Existing as a set of 386 VxD drivers and userland libraries, Shoebill would be the first version of Winsock shipped in the box for home and small office users. Internally, Shoebill provided the NDIS3 interface, a standardized model for writing network card drivers. NDIS has continued to be supported by Microsoft, and incidentally is the same technology that allowed Windows network drivers to be used by Linux and FreeBSD via ndiswrapper.
In the context of the era, Windows for Workgroups 3.11 was also an important stepping stone. Most notably, it was the first major real-world test of thunking, a technology Microsoft used to support 16-bit and 32-bit Windows side by side for decades. While the home user may not have gotten much from Windows for Workgroups 3.11, it was an important milestone on the road to what would become Chicago, and then Windows 95.
It would normally be at this point I'd start showing demonstrations of this cutting edge technology, but before we get there, I need to detour into the nightmare I had in actually getting Windows for Workgroups running.
Under normal circumstances, I like to use VirtualBox for running older versions of Windows, DOS, and Linux, as it has excellent compatibility with especially oddball systems like Xenix. It also has the ability to do internal networking and NAT Networks, which avoids the pain of having to set up and configure TAP.
Furthermore, VirtualBox emulates an AMD PCNet PCI card, and works correctly with Super VGA modes so I could easily get high resolution and networking support in one easy package. I also knew Windows for Workgroups ran successfully as I had used it for testing the Windows 1.x binaries.
One problem: Windows suffered an EIOIO error and bought the farm when I installed the networking stack.
In truth, I didn't actually even get that far initially. Windows would just crash to a flashing cursor, and creating a bootlog gave me an empty file. All I knew was that I was going belly up very early in the startup process. Some Googling suggested that this was a problem with VirtualBox 6.1 (the version shipped with Ubuntu 20.04) removing the ability to run without Intel VT-x. The problem was more subtle than that, but I initially accepted the explanation and tried other emulators.
My next GOTO was QEMU, but I had more problems here initially. Instead of a PCnet card, which needs an add-on driver, I chose to emulate the more common and compatible NE2000 card, whose drivers were shipped on the Workgroups disks. This sorta worked, but I kept getting lockups in both DOS and Windows. I eventually traced the lockups to the fact that QEMU's NE2000 by default sits on IRQ 9. This is a sane default for ISA-based machines, but any PC with PCI uses IRQ 9 for the PCI bus. As such, the network adapter and the emulated PCI backplane conflicted and led to lockups.
Annoyingly, support for ISA mode in QEMU (-M isapc) appears to have bitrotted out of the codebase, and I couldn't convince QEMU to move the NE2K card to a different interrupt or iobase. QEMU does, however, support the RTL8139 and the PCnet card, the latter of which I eventually got working. Of course, it wasn't that simple. My initial full-system hang was replaced with GDI packing it up.
Trial and error showed that this was a problem with running networking combined with more than 16 colors at the same time. I eventually determined that using a Super VGA 800x600 16-color driver partially solved the issue. I could get Windows usable, but I still had screen corruption artifacts on startup. I could, fortunately, get around these by forcing the screen to redraw.
At this point, I was getting rather frustrated, and so I resorted to desperate measures. How desperate?
Desperate enough to set up a DEBUG build of Windows, which in and of itself was its own set of fails.
Throughout the 16-bit era, it wasn't uncommon to need two computers (or at least a serial terminal) to debug crashes, as it was common for a system crash to take out the entire operating system. I alluded to this in the Windows 1.0 article, but I honestly didn't expect to go down the rabbit hole of setting up a debug build of Windows to check under the hood. My first stop was Visual C++ 1.52, which was the last 16-bit version and included the Windows 3.1 SDK.
Setting up a debug build of Windows is a bit of an experience. Replacement files for the core system are shipped in the SDK in the aptly named DEBUG folder, and the scripts n2d and d2n can convert your copy of Windows from RELEASE to DEBUG and back. Furthermore, there were WIN31 and WIN311 folders. Perfect. Or so I thought.
As it turns out, the debug Windows kernel, win386.exe is not included on the disk. Without it, I couldn't get any useful debug information out of Windows. Some Googling pointed me to the Windows 16-bit DDK. This had a version of the debug WIN386 but Windows tapped out saying it couldn't load VMM and VMD when I tried it. A quick look at the dates made it clear that this WIN386.EXE was for Windows 3.1, and not for Windows for Workgroups.
At this point, I decided to use my "Phone a Friend" lifeline and called Michal Necasek, curator of the OS/2 Museum. Long-time readers of SoylentNews might remember him from my Xenix rebuild series. Michal was able to point out that the necessary bits I needed were shipped as part of the free-standing Windows 3.1 SDK.
Much more disk searching later, I finally got my hands on the DEBUG kernel, and I could boot to Program Manager with the /N switch. Setting up an emulated serial port gave me some basic debug messages, but didn't give a clear hint on why we were going belly up.
WARNING: Device failed to initialize (DDB = 80060800) VPD
BAD FAULT 0000 from VMM!
Client Frame:
AX=00100000 CS=0028 IP=80299DFF FS=0030
BX=80481000 SS=0030 SP=000000B0 GS=0030
CX=00000000 DS=0030 SI=80402090 BP=8004CE78
DX=00000000 ES=0030 DI=54006001 FL=00010246
WARNING: About to crash VM 80481000.
FATAL ERROR: Attempt to crash System VM
Windows protection error. You need to restart your computer.
Windows/386 kernel reentered 0000 times
Critical section claim count = 000186A0
VM handle      = 80481000
Client pointer = 8004CE78
VM Status      = 00000000
Stopped while VM executing
OOOOPPPPSSSS! V86 CS = 0000. Probably not valid!
BAD FAULT 0008 from VMM!
Client Frame:
AX=00000000 CS=3202 IP=00000000 FS=0000
BX=00000000 SS=0030 SP=00002600 GS=0000
CX=00000000 DS=0000 SI=00000000 BP=00000000
DX=00000000 ES=0000 DI=00000000 FL=0000036A
Setting VM specific error on 80481000, error already set
GetSetDetailedVMError
WARNING: About to crash VM 80481000.
FATAL ERROR: Attempt to crash System VM
At this point, I needed to use the WDEB386 kernel debugger. Now, I've worked in software engineering for over a decade. I've dealt with crappy programs, shitty debuggers, and much more. I've had the dubious honor of having to run "gdb /usr/bin/gdb". I thought I was immune to "shitty vendor software" surprises.
I feel like I need to qualify this. The previous "world's worst debugger" prize had belonged to GNU Hurd's Mach kernel debugger, which was so terrible that I was more productive putting in inline assembly breakpoints and printf calls than actually using it to debug anything. WDEB386 stole the show.
The first problem is that WDEB386 is not well documented, and its built-in help is very bare-bones. What little documentation I could find related to using it on Windows 95. Almost every USENET post ended with a sentence like "Use SoftICE." This was an encouraging first sign. More warning signs cropped up when I had to apply a hex patch just to run WDEB386, as it refers to Intel's long-removed tr6 and tr7 test registers, which cause vintage debuggers like this one to tap out on modern processors. Unlike most debuggers, symbol files have to be manually loaded one-by-one as command-line switches, and I ran into DOS's command-line length limit. Even OS/2's debugger was less bad than this. In short, the number of symbols isn't limited by memory, but by how short your file paths are.
Secondly, WDEB386 wants a DOS PC, or something very close to it, on the other end. No amount of terminal fiddling could get it to properly recognize keypresses. PuTTY, which is normally "close enough" to work, was better, but I still couldn't execute multicharacter commands.
By this point, I had already figured out a set of QEMU settings that worked well enough to film my video, and I should have realized I had already opened Pandora's box. Instead, I doubled down.
This wasn't exactly an ideal solution to the problem, but it got the job done. I also managed to get actual crash messages, at which point WDEB386's terribleness struck further. I couldn't get a stack trace out of it, and I had trouble setting breakpoints.
Unlike most other debuggers, WDEB386 actually depends on the Windows kernel to give it hints about what modules are loaded and what VxD operations are occurring. It's more of a command-line interface for the debug kernel. As such, most of the "extended" commands failed; I couldn't use .VM to see what, if any, drivers had loaded, and the message BAD FAULT 0000 sent me down another rabbit hole.
Through experimentation, Michal and I determined that Windows was tapping out somewhere in NDIS.SYS. My original theory was conflicts relating to the PCI bus, but I was able to reproduce the crash with the Remote Access Service Serial Driver and no network driver. Furthermore, forcing Windows to use a real mode driver also averted the crash.
Eventually Michal realized that BAD FAULT 0000 wasn't the debugger reporting a null pointer exception. Instead, it was an Intel processor exception code! What was the exception?
Divide by Zero.
Michal was able to isolate this to a timing loop that tries to determine the number of milliseconds between operations. As it turned out, my desktop runs so fast that when combined with VT-x, the whole thing goes *bang*. We haven't worked out a binary patch for the issue, but at least we know where and what the root cause is, and it shouldn't be THAT hard to fix for someone with a version of IDA that can disassemble linear executables.
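For illustration, here is the general shape of that class of bug (a hypothetical calibration routine, not NDIS.SYS's actual code): work done divided by elapsed millisecond ticks, which is fine right up until the hardware finishes inside a single tick.

```python
def loops_per_ms(iterations, start_tick, end_tick):
    """Hypothetical calibration in the style of period driver code:
    divide the work done by the elapsed millisecond tick count."""
    return iterations // (end_tick - start_tick)

# On 1993-era hardware the timing loop always spanned at least one tick:
print(loops_per_ms(100_000, 5, 9))   # -> 25000

# On a fast modern CPU (plus VT-x), both tick reads can land inside the
# same millisecond, the denominator is zero, and the processor raises
# the divide-by-zero exception that WDEB386 reported as BAD FAULT 0000:
try:
    loops_per_ms(100_000, 5, 5)
except ZeroDivisionError:
    print("BAD FAULT 0000 (divide by zero)")
```

The fix for this class of bug is usually either a coarser unit that can't truncate to zero, or a floor of one tick on the denominator.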
The lesson to take away from this is that while Intel platforms have outstanding backwards compatibility, a lot of legacy software has bugs when running on hardware tenfold faster than what it was designed for. A lot of full PC emulators like PCem have limited support for emulating network cards, and it's becoming harder and harder to document these legacies of the past.
Anyway, having segfaulted my way back to the original topic, let's actually get down and dirty with Windows for Workgroups 3.11!
Now, if you were lucky, you didn't have to install the debug build to actually experience Windows Networking. Assuming you're one of the anointed few, the network setup application is your key to the best of early 90s networking. Simply click the icon, insert your floppy disks, and away you go.
What isn't obvious, though, is what's missing. The first quirk comes from the fact that TCP/IP (as previously noted) is nowhere to be found. This actually wasn't a big deal back in 1993. At the time, neither RFC1918 network space nor DHCP existed, so (properly) setting up TCP/IP networking involved calling ARIN/RIPE/etc. and getting a Class C network or three. IPX is instead used as the default transport, primarily because it's both plug and play and almost all network equipment of the era could easily route it. For those who needed it, TCP/IP was available as a free download on ftp.microsoft.com.
Secondly, Shoebill had almost no support for dial-up networking. As such, the Network group would be an oddity for most home users. We'll come back to that later though.
The next step was getting QEMU TAP networking sorted. I'll spare you my pain and misery and just leave a Wireshark trace showing that we were in fact talking on the network, complete with NetBIOS over IPX.
After Network Setup does its thing, we're left with a Network group with a lot of icons, and some additional functionality throughout the operating system.
Network Setup actually does quite a bit under the hood. First, it writes PROTOCOL.INI and configures the "For Workgroups" product so it can run under DOS. One thing that isn't well documented is that you can actually run the networking parts of Windows for Workgroups independently of Windows itself. This still uses real mode drivers, and I suspect it is identical to Workgroups for DOS.
Once set up, though, many applications gained a large variety of network-aware options that were previously hidden. File Manager gained file sharing and network drive functionality that is very close to what shipped in Windows 95. Print Manager likewise got the same upgrade. A less obvious one was that Clipbook Viewer now allowed you to share Clipbooks between users. This used NetDDE and essentially acted like a shared network database. It only worked while the application was open, however.
Hearts, which also appeared in this version of Windows, could be played with three other players over the network. The application was called "The Microsoft Hearts Network" to emphasize this fact.
Most of this functionality persisted across multiple Windows versions, although as of Windows 10, only the file and printer sharing has more or less survived as-is. Of course, we still have a large suite of applications to look at. As a note, because I was working across multiple machines and some of these screenshots are from my original Twitter thread, the colors vary depending on which machine I was using at a given time.
Microsoft Mail itself is an interesting topic that will likely get its own video and article at a later point. As the name suggests, it is a simple mail client capable of LAN email. Originally, Microsoft Mail was an independent product for DOS, Windows, Mac, and OS/2. The version shipped in Windows for Workgroups 3.11 was a stripped-down version that only supported Windows, and the product itself would eventually morph into Exchange. As a bit of trivia, this is why the first version of Exchange was 4: the previous Mail release was 3.2.
On a fresh installation, Microsoft Mail asks if you wish to create a Workgroup Postoffice. This is a shared mail database written to the file system. The idea is that the postoffice is shared across the network, and clients could directly read and write to the post office. That also meant Mail could be used with a NetWare server, or any product that could map a network drive.
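Since the postoffice is nothing more than files in a shared directory, the idea can be sketched in a few lines (a hypothetical layout for illustration; the real WGPO on-disk format is different):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical miniature "postoffice": one subdirectory per user, one
# JSON file per message. The real WGPO layout differs; the point is
# that delivery and pickup are nothing but shared-file reads and writes.
def deliver(postoffice, to, sender, body):
    mailbox = Path(postoffice) / to
    mailbox.mkdir(parents=True, exist_ok=True)
    msg_id = len(list(mailbox.glob("*.json"))) + 1
    (mailbox / f"{msg_id:04d}.json").write_text(
        json.dumps({"from": sender, "body": body}))

def read_mail(postoffice, user):
    mailbox = Path(postoffice) / user
    return [json.loads(p.read_text()) for p in sorted(mailbox.glob("*.json"))]

wgpo = tempfile.mkdtemp()          # stands in for the shared WGPO folder
deliver(wgpo, "alice", "bob", "Lunch at noon?")
print(read_mail(wgpo, "alice")[0]["body"])   # -> Lunch at noon?
```

The design consequence is the same as in the real product: clients need nothing beyond a mapped drive and ordinary file access, which is exactly why Mail worked over NetWare or any other file server.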
That being said, since that functionality was included in the default install, inter-office mail was entirely possible with just Windows for Workgroups 3.11 by itself with no additional add-on software. For communication with other systems, a daemon called EXTERNAL was available for OS/2 which could bridge Mail to UUCP networks and more standard SMTP. This was not tested for this article.
Rather notably, passwords are shown in the clear. In truth, there's no actual security in this product as any user could access the shared mailbox directory and download all the mail on the server. Preferences also have an interesting radio box for selecting your security level:
I leave it to the readers to comment on why this is such an inane switch!
Moving on, user mailboxes could be stored locally or on the backend server, and the Mail client easily allows one to move from place to place. This is especially important if multiple users were using one computer as they all end up sharing a single MMF mailbox on a local disk.
Once set up, the software is simple enough to use, although it's slightly quirky compared to modern clients: mail is only sent and received at timed intervals, and I sometimes had to close and re-open the client to get it to actually work. Setup on a client machine is equally straightforward, with one simply needing to select the network share the WGPO folder was shared on.
A global address book and mailing lists are also available. All this functionality was also integrated into our next program, Schedule+. Finally, once setup, Mail would add an icon to File Manager's toolbar, allowing you to easily attach files. (The toolbar itself was also added in this version of File Manager).
Rather hilariously, Schedule+ suffers from a Y2020 problem, so I had to turn back the system clock to use it at all.
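Date pickers with a hard-capped year range fail in exactly this fashion. As a sketch of the class of bug (the 2019 cutoff here is an assumption for illustration, not Schedule+'s verified internal limit):

```python
# Hypothetical validation of the kind that produces a "Y2020" bug: the
# selectable range was capped a comfortable quarter-century past the
# product's release, and the cap eventually arrived.
VALID_YEARS = range(1920, 2020)

def accepts(year):
    return year in VALID_YEARS

print(accepts(1993))   # -> True
print(accepts(2020))   # -> False: time to turn the system clock back
```

Turning the system clock back works precisely because it moves "today" back inside the validated window.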
Once reality had been suitably rewritten, Schedule+ fired up without issue. Integration with Mail is quickly apparent as it immediately prompts to log into your Mail account, although it is possible to use Schedule+ in a local-only mode.
Schedule+ supported creating invites and had an integrated messaging function that worked off the Network Postoffice. Notably, these messages weren't shown in Mail at all, making meeting invites a bit haphazard. Other users' schedules could be shared and loaded off the postoffice, suitable for secretaries needing to manage and manipulate their boss's schedule.
For a product of this time, Schedule+ has a fairly elaborate access control function. I've heard rumors that Microsoft used Schedule+ pretty extensively in-house, so this feature probably existed to help prevent developers from deleting meetings they didn't want to attend.
Beyond that, I really don't have much to say. It's a scheduling app. As a standalone application, it was included until Office 97 entirely absorbed it into Outlook.
Remote Access is the only application I can't actually demonstrate, as it requires a working modem. In short, it let you dial into a LAN Manager, Windows NT, or Windows 95 server and run NetBIOS applications between the two machines. Note that I didn't say network applications: Remote Access only supported the NetBIOS protocol, so it was useless for accessing the Internet even if a TCP/IP stack was installed.
NetWatcher, as the name suggests, gave you a look at the network stack on your machine. It could show who was connected to your PC, as well as any files they had an exclusive lock on. If enabled in the Control Panel, the Event Log could also be viewed here to see who had accessed your machine, along with a record of recent startups and shutdowns.
I don't even have a good screenshot for this one. Log On/Off was the dedicated utility for signing in and out of the network; this is the login asked for by Windows at startup and not shared with Mail or Schedule+. Windows Network logins were used primarily by WinPopup and for accessing authenticated shares on LAN Manager and Windows NT machines (Windows 3.1 only supported a global password for sharing).
Ah, the great annoyance of the '90s and early 2000s. WinPopup is a simple broadcast message facility that could send a message to a user, a computer, or an entire workgroup at once. This feature survived in one form or another until Windows XP, when the then-named "Messenger Service" was disabled by default; it was removed entirely in Vista.
Compared to later versions, this version comes with a GUI interface and also does not run by default. It has to be enabled to do so in the Network control panel.
Chat, on the other hand, is a keyboard-to-keyboard chat application. It acts more like a TTY device or the old-school UNIX talk program, in that keyboard input is relayed in real time. Like the other programs, it needs to be open on both computers to actually work.
WinMeter rounds out our collection of Windows for Workgroups 3.11 applications with simple performance statistics.
Windows for Workgroups provides a fascinating look at the dawn of the office network. While Windows wouldn't wrest control of the market from Novell for several more years, a lot of what we take for granted first appeared here, and many of the core technologies of Windows 95 first debuted in this oft-forgotten page of Windows history.
Windows for Workgroups was also one of the major instances of Microsoft beginning to incorporate the functionality of its competitors directly into the base product. Just like DR-DOS and MS-DOS, Windows for Workgroups gave you everything Personal NetWare did for no added charge. This behavior would eventually lead to the anti-trust suits of the later 90s, albeit over the inclusion of Internet Explorer instead of network and file-sharing capabilities.
As my emulation stories show, we're also beginning to lose the ability to virtualize and run these ancient versions of Windows (and other software) due to emulation bugs and the onward march of technology. This emphasizes the importance of documenting this technology while we still can. There's still plenty to cover, so if you liked this article, give my video a like, subscribe (either to my channel or SoylentNews, both are appreciated!), and let me know your thoughts below!
I've got another topic in the works, so I'll leave you with this teaser screenshot to tide you over until the next time!
73 de NCommander
P.S.: Since recording the video and doing this write-up, I've come to learn that Windows for Workgroups 3.1 (not 3.11) actually has some notable changes. This is likely due to being based on the Microsoft Workgroup Add-on for Windows instead of the NT-based Shoebill. I may come back to this topic and re-address it, especially if I can get my hands on the add-on.
Skoltech and MIPT scientists have predicted and then experimentally confirmed the existence of exotic hexagonal thin films of NaCl on a diamond surface. These films may be useful as gate dielectrics for field effect transistors in electric vehicles and telecommunication equipment. The research, supported by the Russian Science Foundation, was published in The Journal of Physical Chemistry Letters.
After graphene, the famous two-dimensional carbon, was experimentally prepared and characterized in 2004 by future Nobel laureates Andre Geim and Konstantin Novoselov, scientists started looking into other 2-D materials with interesting properties. Among these are silicene, stanene and borophene—monolayers of silicon, tin, and boron, respectively—as well as 2-D layers of MoS2, CuO, and other compounds.
[...] "Initially we decided to perform only a computational study of the formation of new 2-D structures on different substrates, driven by the hypothesis that if a substrate interacts strongly with the NaCl thin film, one can expect major changes in the structure of the thin film. Indeed, we obtained very interesting results and predicted the formation of a hexagonal NaCl film on the diamond substrate, and decided to perform experiments. Thanks to our colleagues who performed the experiments, we synthesized this hexagonal NaCl, which proves our theory," says Kseniya Tikhomirova, the first author of the paper.
Researchers first used USPEX, the evolutionary algorithm developed by Oganov and his students, to predict structures with the lowest energy based on just the chemical elements involved. After predicting the hexagonal NaCl film, they confirmed its existence by performing experimental synthesis and characterization by XRD (X-ray diffraction) and SAED (selected area electron diffraction) measurements. The average thickness of the NaCl film was about 6 nanometers—a thicker film would revert from the hexagonal to the cubic structure typical of the table salt we know.
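USPEX itself is a sophisticated crystal-structure predictor, but the evolutionary idea behind it — keep the lowest-energy candidates and mutate them into the next generation — can be sketched in a toy form. The one-dimensional "energy" function below is a stand-in for illustration only, not a real interatomic potential:

```python
import random

random.seed(0)

def energy(x):
    """Toy stand-in for a structure's energy surface (minimum at x = 3)."""
    return (x - 3.0) ** 2

def evolve(pop_size=20, generations=50, mutation=0.5):
    # Start from random candidate "structures" (here just one scalar parameter).
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by energy and keep the fittest (lowest-energy) half unchanged.
        population.sort(key=energy)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [x + random.gauss(0, mutation) for x in survivors]
        population = survivors + children
    return min(population, key=energy)

best = evolve()
print(f"lowest-energy candidate found: {best:.2f}")  # converges near 3.0
```

Because the survivors are carried over unmutated, the best candidate's energy can only improve from generation to generation — the same elitist principle that lets real evolutionary searches home in on stable structures.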
[...] "Our results show that the field of 2-D materials is still very young, and scientists have discovered only a small portion of possible materials with intriguing properties. [...] This shows that this simple and common compound, seemingly well-studied, hides many interesting phenomena, especially in nanoscale. This work is our first step towards the search for new materials like NaCl but having better stability (lower solubility, higher thermal stability, and so on) which then can be effectively used in many applications in electronics," notes Alexander Kvashnin, senior research scientist at Skoltech.
Kseniya A. Tikhomirova, et al. Exotic Two-Dimensional Structure: The First Case of Hexagonal NaCl, The Journal of Physical Chemistry Letters (2020). DOI: 10.1021/acs.jpclett.0c00874
The Internet Archive is alerting users when they've clicked on some stories that were debunked or taken down on the live web, following reports that people were spreading false coronavirus information through its Wayback Machine.
As NBC reporter Brandy Zadrozny noted on Twitter, the site includes a bright banner on one popular Medium post that was removed as misinformation. Its video archive also creates friction by making users log in to see some videos containing false information, like a reposted version of the conspiracy documentary Plandemic. These videos also include critical comments from Wayback Machine director Mark Graham, who described the warnings to Zadrozny as an example of the "importance and value of context in archiving."
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
Arthur T Knackerbracket has found the following story:
The study published today in Physical Review Research describes how tools from physics and complexity theory were used to determine the level of consciousness in fruit flies.
"This is a major problem in neuroscience, where it is crucial to differentiate between unresponsive vegetative patients and those suffering from a condition in which a patient is aware but cannot move or communicate verbally because of complete paralysis of nearly all voluntary muscles in the body," said study author Dr. Kavan Modi, from the Monash University School of Physics and Astronomy.
[...] "Our technique allows us to distinguish between flies that have been anesthetized and those that have not, by calculating the time-complexity of the signals," said Dr. Modi.
[...] The research team studied the brain signals produced by 13 fruit flies both when they were awake and when they were anesthetized. They then analyzed the signals to see how complex they were.
"We found the statistical complexity to be larger when a [fly] is awake than when the same [fly] is anaesthetized," Dr. Modi said.
[...] The researchers concluded that applying a similar analysis to other datasets, in particular human EEG data, could lead to new discoveries regarding the relationship between consciousness and complexity.
Roberto N. Muñoz, et al. General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila, Phys. Rev. Research 2, 023219 (2020)
Also reported at: www.scimex.org
After more than seven years of development, testing, and preparation, Virgin Orbit reached an important moment on Monday—dropping and igniting its LauncherOne rocket over the Pacific Ocean. Unfortunately, shortly after ignition an "anomaly" occurred, the company said.
"LauncherOne maintained stability after release, and we ignited our first stage engine, NewtonThree," the company stated on Twitter. "An anomaly then occurred early in first stage flight. We'll learn more as our engineers analyze the mountain of data we collected today."
This was the company's first attempt to ignite LauncherOne. Previously, it had strapped the liquid-fueled rocket to its modified 747 aircraft, and flown out over the Pacific Ocean, but not released the booster from beneath the plane's wing.
After Monday's launch attempt, the crew on board the 747 and a chase plane made it back to the Mojave Air & Space Port safely. The company stressed that it now has plenty of data to dig into, and is "eager" to get on to its next flight.
Here's the official press release from Virgin Orbit: Virgin Orbit
Arthur T Knackerbracket has found the following story:
Even after seeing data breaches in the news, more than half of consumers are still reusing passwords.
More than half of people haven't changed their password in the last year – even after they've heard about a data breach in the news.
That’s according to a recent survey, “Psychology of Passwords: The Online Behavior That’s Putting You At Risk,” that examined the online security and password behaviors of 3,250 global respondents – and found that people still employ an alarming number of very common and very risky habits, even though they know better.
Researchers said that password reuse was the biggest security faux pas being committed by respondents. In fact, password reuse has actually gotten worse over the years: When asked how frequently they use the same password or a variation, 66 percent answered “always” or “mostly” – which is up 8 percent from the same survey in 2018.
Worse, 91 percent of respondents said they know using the same (or a variation of the same) password is a risk. They still do so anyway.
“Our survey shows that most people believe they are knowledgeable about the risks of poor password security; however, they are not using that knowledge to protect themselves from cyber threats,” said researchers with LastPass by LogMeIn, in a recent report.
[...] “People seem to be numb to the threats that weak passwords pose,” said researchers. “Technology like biometrics is making it easier for them to avoid text passwords all together and many people are simply comfortable using the ‘forgot password’ link whenever they get locked out of their accounts.”
Arthur T Knackerbracket has found the following story:
Xu Yi, assistant professor of electrical and computer engineering at the University of Virginia, collaborated with Yun-Feng Xiao's group from Peking University and researchers at Caltech to achieve the broadest recorded spectral span in a microcomb*.
Their peer-reviewed paper, "Chaos-assisted two-octave-spanning microcombs," was published May 11, 2020, in Nature Communications, a multidisciplinary journal dedicated to publishing high-quality research in all areas of the biological, health, physical, chemical and Earth sciences.
[...] The team applied chaos theory to a specific type of photonic device called a microresonator-based frequency comb, or microcomb. The microcomb efficiently converts photons from single to multiple wavelengths. The researchers demonstrated the broadest (i.e., most colorful) microcomb spectral span ever recorded. As photons accumulate and their motion intensifies, the frequency comb generates light in the ultraviolet to infrared spectrum.
"It's like turning a monochrome magic lantern into a technicolor film projector," Yi said. The broad spectrum of light generated from the photons increases its usefulness in spectroscopy, optical clocks and astronomy calibration to search for exoplanets.
The microcomb works by connecting two interdependent elements: a microresonator, which is a ring-shaped micrometer-scale structure that envelopes the photons and generates the frequency comb, and an output bus-waveguide. The waveguide regulates the light emission: only matched speed light can exit from the resonator to the waveguide. As Xiao explained, "It's similar to finding an exit ramp from a highway; no matter how fast you drive, the exit always has a speed limit."
[Ed Comment: See https://en.wikipedia.org/wiki/Frequency_comb]
More information: Hao-Jing Chen et al, Chaos-assisted two-octave-spanning microcombs, Nature Communications (2020). DOI: 10.1038/s41467-020-15914-5
Arthur T Knackerbracket has found the following story:
China is targeting a July launch for its ambitious Mars mission, which will include landing a remote-controlled robot on the surface of the red planet, the company in charge of the project has said.
Beijing has invested billions of dollars in its space programme in an effort to catch up with its rival the United States and affirm its status as a major world power.
The Mars mission is among a number of new space projects China is pursuing, including putting Chinese astronauts on the moon and having a space station by 2022.
Beijing had been planning the Mars mission for sometime this year, but China Aerospace Science and Technology Corporation (CASC) has confirmed it could come as early as July.
"This big project is progressing as planned and we are targeting a launch in July," CASC said in a statement issued on Sunday.
CASC is the main contractor for China's space programme.
Called "Tianwen", the Chinese mission will put a probe into orbit around Mars and land the robotic rover to explore and analyse the surface.
A man from Washington state was arrested in May 2019 and was indicted on several charges related to robbery and assault. The suspect, Joseph Sam, was using an unspecified Motorola smartphone. When he was arrested, he says, one of the officers present hit the power button to bring up the phone's lock screen. The filing does not say that any officer present attempted to unlock the phone or make the suspect do so at the time.
In February 2020, the FBI also turned the phone on to take a photograph of the phone's lock screen, which displayed the name "Streezy" on it. Sam's lawyer filed a motion arguing that this evidence should not have been sought without a warrant and should therefore be suppressed.
District Judge John Coughenour of the US District Court in Seattle agreed. In his ruling, the judge determined that the police looking at the phone at the time of the arrest and the FBI looking at it again after the fact are two separate issues. Police are allowed to conduct searches without a search warrant under special circumstances, Coughenour wrote, and looking at the phone's lock screen may have been permissible as it "took place either incident to a lawful arrest or as part of the police's efforts to inventory the personal effects" of the person arrested. Coughenour was unable to determine how, specifically, the police acted, and he ordered clarification to see if their search of the phone fell within those boundaries.
But where the police actions were unclear, the FBI's were both crystal clear and counter to the defendant's Fourth Amendment rights, Coughenour ruled. "Here, the FBI physically intruded on Mr. Sam's personal effect when the FBI powered on his phone to take a picture of the phone's lock screen." That qualifies as a "search" under the terms of the Fourth Amendment, he found, and since the FBI did not have a warrant for that search, it was unconstitutional.
[...] Basically, he ruled, the FBI pushing the button on the phone to activate the lock screen qualified as a search, regardless of the lock screen's nature.
U.S. regulators are moving ahead with a crackdown on scores of antibody tests for the coronavirus that have not yet been shown to work.
The Food and Drug Administration on Thursday published a list of more than two dozen test makers that have failed to file applications to remain on the market or already pulled their products.
The agency said in a statement that it expects the tests "will not be marketed or distributed." It was unclear if any of the companies would face additional penalties.
Most companies faced a deadline earlier this week to file paperwork demonstrating their tests' performance. Regulators required it after previously allowing tests to launch with minimal oversight, which critics said had created a "Wild West" of unregulated testing.
[...] Under pressure to increase testing options, the FDA in March essentially allowed companies to begin selling antibody tests as long as they notified the agency of their plans and provided disclaimers, including that they were not FDA-approved.
The FDA is now working with the National Institutes of Health and other federal health agencies to vet the accuracy of the tests and determine how they can be used to track and contain the virus.
A technology developed by researchers at the U.S. Department of Energy's Pacific Northwest National Laboratory could pave the way for increased fuel economy and lower greenhouse gas emissions as part of an octane-on-demand fuel-delivery system.
Designed to work with a car's existing fuel, the onboard separation technology is the first to use chemistry—not a physical membrane—to separate ethanol-blended gasoline into high- and low-octane fuel components. An octane-on-demand system can then meter out the appropriate fuel mixture to the engine depending on the power required: lower octane for idling, higher octane for accelerating.
Studies have shown that octane-on-demand approaches can improve fuel economy by up to 30 percent and could help reduce greenhouse gas emissions by 20 percent. But so far, the pervaporation membranes tested for octane on demand leave nearly 20 percent of the valuable high-octane fuel components in the gasoline.
In proof-of-concept testing with three different chemistries, PNNL's patent-pending onboard separation technology separated 95 percent of the ethanol out of commercial gasoline. The materials are also effective for separating butanol, a promising high-octane renewable fuel component.
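As a back-of-the-envelope illustration of the metering idea: if the two separated streams blend linearly (a simplification), the controller only needs to solve for the high-octane fraction that hits the engine's current demand. The octane numbers below are hypothetical, not figures from the PNNL work:

```python
def blend_fraction(target, octane_lo, octane_hi):
    """Fraction of the high-octane stream needed to hit a target octane,
    assuming octane numbers blend linearly (a simplifying assumption)."""
    if not octane_lo <= target <= octane_hi:
        raise ValueError("target octane outside the blendable range")
    return (target - octane_lo) / (octane_hi - octane_lo)

# Hypothetical streams: an 85-octane base fuel and a 109-octane ethanol-rich cut.
print(f"idling (87 octane):       {blend_fraction(87, 85, 109):.1%} high-octane stream")
print(f"accelerating (98 octane): {blend_fraction(98, 85, 109):.1%} high-octane stream")
```

The point of the linear model is just to show why on-demand metering saves fuel: the expensive high-octane cut is only drawn down heavily under high load, while idling runs almost entirely on the low-octane base.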
More information: Katarzyna Grubel et al. Octane-On-Demand: Onboard Separation of Oxygenates from Gasoline, Energy & Fuels (2019). DOI: 10.1021/acs.energyfuels.8b03781