
posted by Fnord666 on Tuesday June 02, @05:11PM
from the cloud-of-junk dept.

Orbital Use Fees Proposed As the Most Effective Way to Solve the Space Junk Problem:

The most effective way to solve the space junk problem, according to a new study, is not to capture debris or deorbit old satellites: it's an international agreement to charge operators "orbital-use fees" for every satellite put into orbit.

Orbital use fees would also increase the long-run value of the space industry, said economist Matthew Burgess, a CIRES Fellow and co-author of the new paper. By reducing future satellite and debris collision risk, an annual fee rising to about $235,000 per satellite would quadruple the value of the satellite industry by 2040, he and his colleagues concluded in a paper published today in the Proceedings of the National Academy of Sciences.

"Space is a common resource, but companies aren't accounting for the cost their satellites impose on other operators when they decide whether or not to launch," said Burgess, who is also an assistant professor in Environmental Studies and an affiliated faculty member in Economics at the University of Colorado Boulder. "We need a policy that lets satellite operators directly factor in the costs their launches impose on other operators."

[...] A better approach to the space debris problem, Rao and his colleagues found, is to implement an orbital-use fee — a tax on orbiting satellites. "That's not the same as a launch fee," Rao said. "Launch fees by themselves can't induce operators to deorbit their satellites when necessary, and it's not the launch but the orbiting satellite that causes the damage."

[...] "In our model, what matters is that satellite operators are paying the cost of the collision risk imposed on other operators," said Daniel Kaffine, professor of economics and RASEI Fellow at the University of Colorado Boulder and co-author on the paper.

Journal Reference:
Akhil Rao, Matthew G. Burgess and Daniel Kaffine. Orbital-use fees could more than quadruple the value of the space industry, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.1921260117


Original Submission

posted by Fnord666 on Tuesday June 02, @03:02PM
from the to-boldly-go-where-only-a-few-men-have-gone-before dept.

Third European Service Module for Artemis Mission to Land Astronauts on the Moon:

It's official: when astronauts land on the Moon in 2024 they will get there with help from the European Service Module. The European Space Agency signed a contract with Airbus to build the third European Service Module for NASA's Orion spacecraft that will ferry the next astronauts to land on the Moon.

NASA's Artemis program is returning humans to the Moon with ESA's European Service Module supplying everything needed to keep the astronauts alive on their trip in the crew module – water, air, propulsion, electricity, a comfortable temperature as well as acting as the chassis of the spacecraft.

The third Artemis mission will fly astronauts to Earth's natural satellite in 2024 – the first crewed Moon landing since Apollo 17, following a hiatus of more than 50 years.

ESA's director of Human and Robotic Exploration David Parker said: "By entering into this agreement, we are again demonstrating that Europe is a strong and reliable partner in Artemis. The European Service Module represents a crucial contribution to this, allowing scientific research, development of key technologies, and international cooperation – inspiring missions that expand humankind's presence beyond Low Earth Orbit."

[...] The first European Service Module is being handed over to NASA at their Kennedy Space Center for an uncrewed launch next year, and the second is in production at the Airbus integration hall in Bremen, Germany.


Original Submission

posted by Fnord666 on Tuesday June 02, @12:50PM
from the about-time dept.

Dangerous SHA-1 crypto function will die in SSH linking millions of computers:

Developers of two open source code libraries for Secure Shell—the protocol millions of computers use to create encrypted connections to each other—are retiring the SHA-1 hashing algorithm, four months after researchers hammered a final nail into its coffin.

The moves, announced in release notes and a code update for OpenSSH and libssh respectively, mean that SHA-1 will no longer be a means for digitally signing encryption keys that prevent the monitoring or manipulating of data passing between two computers connected by SSH—the common abbreviation for Secure Shell. (Wednesday's release notes concerning SHA-1 deprecation in OpenSSH repeated word for word what developers put in February release notes, but few people seemed to notice the planned change until now.)

Cryptographic hash functions generate a long string of characters that are known as a hash digest. Theoretically, the digests are supposed to be unique for every file, message, or other input fed into the function. Practically speaking, digest collisions must be mathematically infeasible given the performance capabilities of available computing resources. In recent years, a host of software and services have stopped using SHA-1 after researchers demonstrated practical ways for attackers to forge digital signatures that use SHA-1. The unanimous agreement among experts is that it's no longer safe in almost all security contexts.
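As a minimal illustration (my example, not from the article) using Python's standard hashlib: a one-character change produces a completely unrelated digest, and a collision attack defeats the assumption that distinct inputs yield distinct digests:

    import hashlib

    # Two messages differing by one byte produce unrelated digests.
    a = hashlib.sha1(b"pay $100 to alice").hexdigest()
    b = hashlib.sha1(b"pay $900 to alice").hexdigest()
    print(a)  # 40 hex characters = 160 bits
    print(b)

    # Digital signatures sign the digest, not the message. A practical
    # collision -- two different inputs with the same digest -- lets an
    # attacker swap one signed message for another.
    assert a != b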

"Its a chainsaw in a nursery," security researcher Kenn White said of the hash function, which made its debut in 1995.

[...] The final death knell for SHA-1 sounded in January, when researchers unveiled an even more powerful collision attack that cost as little as $45,000. Known as a chosen prefix collision, it allowed attackers to impersonate a target of their choosing, as was the case in the MD5 attack against Microsoft's infrastructure.

It was in this context that OpenSSH developers wrote in release notes published on Wednesday:

It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K. For this reason, we will be disabling the "ssh-rsa" public key signature algorithm by default in a near-future release.

This algorithm is unfortunately still used widely despite the existence of better alternatives, being the only remaining public key signature algorithm specified by the original SSH RFCs.

[...] In an email, Gaëtan Leurent, an Inria France researcher and one of the co-authors of the January research, said he didn't expect OpenSSH developers to implement the deprecations quickly. He wrote:

When they completely disable SHA-1, it will become impossible to connect from a recent OpenSSH to a device with an old SSH server, but they will probably take gradual steps (with big warnings) before that. Also, embedded systems with an SSH access that have not been updated in many years probably have a lot of security issues, so maybe it's not too bad to disrupt them...

In any case, I am quite happy with this move, this is exactly what we wanted to achieve :-)


Original Submission

posted by Fnord666 on Tuesday June 02, @10:39AM
from the how-heavy-is-a-light-ring? dept.

A new theorem predicts that stationary black holes must have at least one light ring:

Researchers at the Max Planck Institute for Gravitational Physics in Germany and Universidade de Aveiro in Portugal have recently introduced a theorem that makes predictions about the light rings around stationary black holes. Their theorem, presented in a paper published in Physical Review Letters, suggests that equilibrium black holes must, as a general rule, have at least one light ring in each sense of rotation.

"Remarkably, the properties of light rings can encode much relevant black hole information," Pedro Cunha and Carlos Herdeiro, the two researchers who carried out the study, told Phys.org via email. "Measuring these properties grants a direct window into the elusive and yet fairly uncharted regime of very strong gravity close to a black hole. At this moment it is still unclear whether Einstein's theory of general relativity remains a good description of the laws of gravity under such extreme conditions. Therefore, a key question is: does any black hole model, in any theory of gravity, need to have a light ring?"

[...] "In our paper, we introduce a generic and mathematically innovative argument that establishes that an equilibrium black hole must indeed have, as a rule, at least one standard light ring in each rotational sense," Cunha and Herdeiro said. "To analyze light rings, typically, one considers families of solutions of a given theory of gravity, like general relativity, or some particular model of modified gravity. Here, however, the argument is of a topological nature."

[...] "The prediction that black holes always have light rings and they are always outside the horizon has important consequences," Cunha and Herdeiro say. "For instance, it implies that the silhouette of a black hole, known as the black hole shadow, is generically different and usually larger than what one would expect the size of the black hole itself to be. So the shadow should always be a magnification of the black hole."

[...] "One key assumption of our theorem is that far away from the black hole there is no gravitational field," Cunha and Herdeiro said. "However, in the Universe there is a cosmological constant that drives the expansion of the Cosmos. This creates a tiny gravitational field no matter how far away from the black hole one is. It would be very interesting to understand if this slight change in assumption would change our theorem's conclusions."

Journal Reference:
Pedro V. P. Cunha, Carlos A. R. Herdeiro. Stationary Black Holes and Light Rings, Physical Review Letters (DOI: 10.1103/PhysRevLett.124.181101)

A light ring is a subset of a photon sphere, which has some interesting properties:

The photon sphere is located farther from the center of a black hole than the event horizon. Within a photon sphere, it is possible to imagine a photon that's emitted from the back of one's head, orbiting the black hole, only then to be intercepted by the person's eyes, allowing one to see the back of the head. For non-rotating black holes, the photon sphere is a sphere of radius 3/2 r_s, where r_s is the Schwarzschild radius. There are no stable free fall orbits that exist within or cross the photon sphere. Any free fall orbit that crosses it from the outside spirals into the black hole. Any orbit that crosses it from the inside escapes to infinity or falls back in and spirals into the black hole. No unaccelerated orbit with a semi-major axis less than this distance is possible, but within the photon sphere, a constant acceleration will allow a spacecraft or probe to hover above the event horizon.
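A quick back-of-the-envelope sketch in Python (my own worked example; the 3/2 factor is from the excerpt above, the solar-mass figures are standard constants):

    # Photon sphere radius for a non-rotating black hole:
    # r_ph = (3/2) * r_s, where r_s = 2GM/c^2 is the Schwarzschild radius.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    def schwarzschild_radius(m_kg: float) -> float:
        return 2 * G * m_kg / c**2

    def photon_sphere_radius(m_kg: float) -> float:
        return 1.5 * schwarzschild_radius(m_kg)

    print(schwarzschild_radius(M_sun))   # ~2.95e3 m
    print(photon_sphere_radius(M_sun))   # ~4.43e3 m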

Another property of the photon sphere is centrifugal force (nb: not centripetal) reversal. Outside the photon sphere, the faster one orbits the greater the outward force one feels. Centrifugal force falls to zero at the photon sphere, including non-freefall orbits at any speed, i.e. you weigh the same no matter how fast you orbit, and becomes negative inside it. Inside the photon sphere the faster you orbit the greater your felt weight or inward force. This has serious ramifications for the fluid dynamics of inward fluid flow.

A rotating black hole has two photon spheres. As a black hole rotates, it drags space with it. The photon sphere that is closer to the black hole is moving in the same direction as the rotation, whereas the photon sphere further away is moving against it. The greater the angular velocity of the rotation of a black hole, the greater the distance between the two photon spheres. Since the black hole has an axis of rotation, this only holds true if approaching the black hole in the direction of the equator. If approaching at a different angle, such as one from the poles of the black hole to the equator, there is only one photon sphere. This is because approaching at this angle the possibility of traveling with or against the rotation does not exist.

See also: Max-Planck-Institut für Gravitationsphysik


Original Submission

posted by martyb on Tuesday June 02, @08:28AM
from the don't-steal-the-cake dept.

Wired is reporting that Walmart employees have serious concerns about the effectiveness of the company's anti-shoplifting "AI" technology (reprint), including that its false positives force workers to unnecessarily break COVID-19 social-distancing guidelines.

[...] The employees said they were "past their breaking point" with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing.

[...] The coronavirus pandemic has given their concerns more urgency. One Concerned Home Office Associate said they worry false positives could be causing Walmart workers to break social-distancing guidelines unnecessarily. When Everseen flags an issue, a store associate needs to intervene and determine whether shoplifting or another problem is taking place. In an internal communication from April obtained by WIRED, a corporate Walmart manager expressed strong concern that workers were being put at risk by the additional contact necessitated by false positives and asked whether the Everseen system should be turned off to protect customers and workers.

Before COVID-19, "it wasn't ideal, it was a poor customer experience," the worker said. "AI is now creating a public health risk."

[...] at least 20 Walmart associates have now died after contracting the coronavirus, according to United For Respect.

[...] A spokesperson for Walmart said the company has been working diligently to protect customers and its workforce, and believes the rate at which associates have contracted Covid-19 is lower than that of the general US population.

[...] The company said it has taken a number of steps to ensure people are protected during these interactions, including regularly cleaning self-checkout kiosks and providing employees with protective equipment. In addition, workers are given handheld devices that allow them to handle most interventions from a distance, the company said.


Original Submission

posted by martyb on Tuesday June 02, @06:17AM
from the light-emitting-silly-putty dept.

New stretchable, self-healing and illuminating electronic material for wearables and soft robots:

Imagine a flexible digital screen that heals itself when it cracks, or a light-emitting robot that locates survivors in dark, dangerous environments or carries out farming and space exploration tasks. A novel material developed by a team of NUS researchers could turn these ideas into reality.

The new stretchable material, when used in light-emitting capacitor devices, enables highly visible illumination at much lower operating voltages, and is also resilient to damage due to its self-healing properties.

This innovation, called the HELIOS (which stands for Healable, Low-field Illuminating Optoelectronic Stretchable) device, was achieved by Assistant Professor Benjamin Tee and his team from the NUS Institute for Health Innovation & Technology and NUS Materials Science and Engineering.

[...] Unlike existing stretchable light-emitting capacitors, HELIOS enabled devices can turn on at voltages that are four times lower, and achieve illumination that is more than 20 times brighter. It also achieved an illumination of 1460 cd/m2 at 2.5 V/µm, the brightest attained by stretchable light-emitting capacitors to date, and is now comparable to the brightness of mobile phone screens. Due to the low power consumption, HELIOS can achieve a longer operating lifetime, be utilized safely in human-machine interfaces, and be powered wirelessly to improve portability.
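For a sense of scale, the quoted field strength converts to an absolute drive voltage once a layer thickness is assumed (the thickness below is purely illustrative; the excerpt doesn't state one):

    # Field-to-voltage conversion: V = E * d.
    field_v_per_um = 2.5     # quoted operating field, V/um
    thickness_um = 20.0      # hypothetical emissive-layer thickness
    print(field_v_per_um * thickness_um, "V")   # 50.0 V for a 20 um layer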

The researchers say the material promises durability and efficiency.

Journal Reference:
Yu Jun Tan, Hareesh Godaba, Ge Chen, et al. A transparent, self-healing and high-κ dielectric for low-field-emission stretchable optoelectronics, Nature Materials (DOI: 10.1038/s41563-019-0548-4)


Original Submission

posted by martyb on Tuesday June 02, @04:04AM
from the TSMC-has-fabs-in-Russia? dept.

Russia's Elbrus 8CB Microarchitecture: 8-core VLIW on TSMC 28nm

All of the world's major superpowers have a vested interest in building their own custom silicon processors. Doing so allows a superpower to wean itself off of US-based processors, guarantee there are no supplemental backdoors, and, if needed, add its own. As we have seen with China, custom chip designs, x86-based joint ventures, or Arm derivatives seem to be the order of the day. So in comes Russia, with its custom Elbrus VLIW design that seems to have its roots in SPARC.

Russia has been creating processors called Elbrus for a number of years now. For those of us outside Russia, it has mostly been a big question mark as to what is actually under the hood – these chips are built for custom servers and office PCs, often at the direction of the Russian government and its requirements. We have had glimpses of the design, thanks to documents from Russian supercomputing events, however these are a few years old now. If you are not in Russia, you are unlikely to ever get your hands on one at any rate. However, a new programming guide for the latest Elbrus-8CB processor design was recently listed online and came to our attention.

The latest Elbrus-8CB chip, as detailed in the new online programming guide published this week, is built on TSMC's 28nm process and is a 333 mm2 design featuring 8 cores at 1.5 GHz. According to the documents, peak throughput is 576 GFLOPs of double precision, with the chip offering four channels of DDR4-2400, good for 68.3 GB/s. The L1 and L2 caches are private, with a 64 kB L1-D cache, a 128 kB L1-I cache, and a 512 kB L2 cache. The L3 cache is shared between the cores, at 2 MB/core for a total of 16 MB. The processor also supports 4-way server multiprocessor configurations, although the guide does not say over what protocol or at what bandwidth.
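A quick sanity check of those figures (my arithmetic, not from the guide) shows how wide the VLIW design must be:

    # Peak DP throughput per core per clock implied by the quoted numbers.
    peak_gflops = 576.0   # double precision, per the programming guide
    cores = 8
    clock_ghz = 1.5

    flops_per_core_per_cycle = peak_gflops / (cores * clock_ghz)
    print(flops_per_core_per_cycle)   # 48.0 DP FLOPs per core per cycle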

It is a compiler-focused design, much like Intel's Itanium, in that most of the optimizations happen at the compiler level. If past compiler-first designs are any indication, that approach typically does not make for a successful product. Documents from 2015 state that a continuing goal of the Elbrus design is x86 and x86-64 binary translation with only a 20% overhead, allowing full support for x86 code as well as x86 operating systems, including Windows 7 (this may have been updated since 2015).

Previously: Russian Homegrown Elbrus-4C CPU Released


Original Submission

posted by martyb on Tuesday June 02, @01:52AM
from the how-do-I-convert-my-existing-files? dept.

Google Docs vs. Microsoft Word: Which works better for business?:

Have you been thinking of reassessing which word processor your business should standardize on? The obvious choices are the two best known: Microsoft Word and Google Docs. But which is better?

Several years ago, the answer to that would have been easy: Microsoft Word for its better editing, formatting and markup tools; Google Docs for its better collaboration. But both applications have been radically updated since then. Word now has live collaboration tools, and Google has added more sophisticated formatting, editing and markup features to Docs.

TFA requires free registration, but the question is an interesting one: Has Google Docs reached parity with, or surpassed, Microsoft Word for business needs? How much work is required to transition existing documents, macros, and workflows?


Original Submission

posted by martyb on Monday June 01, @11:39PM
from the could-use-a-little-pruning dept.

Plum pickings: ancient fruit ripe for modern plates:

An Indigenous fruit which is one of the earliest known plant foods eaten in Australia could be the next big thing in the bush foods industry.

The University of Queensland research team is led by bush foods researcher Associate Professor Yasmina Sultanbawa, who said the green plum not only tasted delicious but contained one of the highest known folate levels of any fruit on the commercial market.

"This is really exciting because folate is an important B-group vitamin, and what's great about the green plum is that the folate is in a natural form so the body absorbs it more easily than in a capsule," Dr Sultanbawa said.

[...] "There is recent evidence discovered in West Arnhem Land which shows the green plum was eaten by Aboriginal people as far back as 53,000 years ago."

Will mass cultivation disrupt Aboriginal communities?


Original Submission

posted by martyb on Monday June 01, @09:32PM
from the it's-a-wrap-R.I.P. dept.

Christo, the artist who wrapped the world, dies at 84:

Christo, the Bulgarian-born artist best known for his monumental installations that wrapped some of the world's most celebrated buildings and played with people's perceptions of landscape and the outdoors, died on Sunday at his home in New York.

He was 84.

[...] At the time of his death, Christo was working on a project to wrap the Arc de Triomphe in Paris in 25,000 square metres (269,100 square feet) of recyclable polypropylene fabric in silvery blue and 7,000 metres (23,000 feet) of red rope.

It will still go ahead.

Whether you liked his work or not, he was one of a kind.

Also at: The Guardian, The New York Times, and NPR.


Original Submission

posted by martyb on Monday June 01, @07:24PM
from the cow-a-bunga? dept.

Researchers control cattle microbiomes to reduce methane and greenhouse gases:

Ben-Gurion University of the Negev (BGU) researchers have learned to control the microbiome of cattle for the first time, which could inhibit the animals' methane production and therefore reduce a major source of greenhouse gases.

[...] The animal microbiome is a scientifically unexplored area. It protects against germs, breaks down food to release energy, produces vitamins, and exerts great control over many aspects of animal and human physiology. Microbes are introduced at birth and produce a unique microbiome that evolves over time.

Mizrahi and his group have been conducting a three-year experiment with 50 cows divided into two groups. One group gave birth naturally, and the other through cesarean section. That difference was enough to change the development and composition of the microbiome of the cows in each group.

Changing the birthing method changed the microbiome of the calves.

Journal Reference:
Ori Furman, Liat Shenhav, Goor Sasson, et al. Stochasticity constrained by deterministic effects of diet and age drive rumen microbiome assembly dynamics [open], Nature Communications (DOI: 10.1038/s41467-020-15652-8)

Previously:
(2019-06-19) Seaweed Feed Additive Cuts Livestock Methane but Poses Questions
(2018-09-01) Researchers Feed Seaweed To Dairy Cows To Reduce Emissions


Original Submission

posted by martyb on Monday June 01, @05:18PM
from the now-we-need-some-tiny-little-checkers dept.

Tiny Three-Dimensional Chessboards Could Lead to "Paper Electronics":

Researchers at The Institute of Scientific and Industrial Research at Osaka University introduced a new liquid-phase fabrication method for producing nanocellulose films with multiple axes of alignment. Using 3D-printing methods for increased control, this work may lead to cheaper and more environmentally friendly optical and thermal devices.

[...] Many existing optical devices, including liquid-crystal displays (LCDs) found in older flat-screen televisions, rely on long needle-shaped molecules aligned in the same direction. However, getting fibers to line up in multiple directions on the same device is much more difficult. A method that could reliably and cheaply produce aligned fibers would accelerate the manufacture of low-cost displays or even "paper electronics" — computers that could be printed from biodegradable materials on demand.

[...] In newly published research from the Institute of Scientific and Industrial Research at Osaka University, nanocellulose was harvested from sea pineapples, a kind of sea squirt. The researchers then used liquid-phase 3D-patterning, which combined the wet spinning of nanofibers with the precision of 3D-printing. A custom-made triaxial robot dispensed an aqueous nanocellulose suspension into an acetone coagulation bath.

[...] "Our findings could aid in the development of next-generation optical materials and paper electronics," says senior author Masaya Nogi. "This could be the start of bottom-up techniques for building sophisticated and energy-efficient optical and thermal materials."

Journal Reference:
Kojiro Uetani, Hirotaka Koga, Masaya Nogi. Checkered Films of Multiaxis Oriented Nanocelluloses by Liquid-Phase Three-Dimensional Patterning, Nanomaterials (DOI: 10.3390/nano10050958)


Original Submission

posted by janrinok on Monday June 01, @03:13PM
from the slip-slidin'-away-now dept.

Antarctic ice sheets capable of retreating up to 50 meters per day:

The study, led by the Scott Polar Research Institute at the University of Cambridge, used patterns of delicate wave-like ridges on the Antarctic seafloor to calculate how quickly the ice retreated roughly 12,000 years ago during regional deglaciation.

The ridges were produced where the ice sheet began to float, and were caused by the ice squeezing the sediment on the seafloor as it moved up and down with the movement of the tides. The images of these landforms are at unprecedented sub-metre resolution and were acquired from an autonomous underwater vehicle (AUV) operating about 60 metres above the seabed. The results are reported in the journal Science.

While modern satellites are able to gather detailed information about the retreat and thinning rates of the ice around Antarctica, the data only goes back a few decades. Calculating the maximum speed at which an ice sheet can retreat, using sets of these seafloor ridges, reveals historic retreat rates that are almost ten times faster than the maximum observed rates of retreat today.

"By examining the past footprint of the ice sheet and looking at sets of ridges on the seafloor, we were able to obtain new evidence on maximum past ice retreat rates, which are very much faster than those observed in even the most sensitive parts of Antarctica today," said lead author Professor Julian Dowdeswell, Director of the Scott Polar Research Institute.

[...] They calculated that the ice was retreating as much as 40 to 50 metres per day during this period, a rate that equates to more than 10 kilometres per year. In comparison, modern satellite images show that even the fastest-retreating grounding lines in Antarctica today, for example in Pine Island Bay, are much slower than these geological observations, at only about 1.6 kilometres per year.
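The unit conversion behind those figures is straightforward (a quick check of the quoted rates):

    # Convert the inferred retreat rates from metres/day to km/year.
    for m_per_day in (40, 50):
        print(m_per_day, "m/day =", m_per_day * 365 / 1000, "km/yr")
    # 40 m/day = 14.6 km/yr and 50 m/day = 18.25 km/yr, versus the
    # ~1.6 km/yr observed at Pine Island Bay today.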

"The deep marine environment is actually quite quiet offshore of Antarctica, allowing features such as these to be well-preserved through time on the seafloor," said Dowdeswell. "We now know that the ice is capable of retreating at speeds far higher than what we see today. Should climate change continue to weaken the ice shelves in the coming decades, we could see similar rates of retreat, with profound implications for global sea level rise."


Journal Reference:
J. A. Dowdeswell, C. L. Batchelor, A. Montelli, et al. Delicate seafloor landforms reveal past Antarctic grounding-line retreat of kilometers per year [$], Science (DOI: 10.1126/science.aaz3059)


Original Submission

posted by Fnord666 on Monday June 01, @01:04PM
from the OAuth2-isn't-that-hard dept.

What if I said your Email ID is all I need to take over your account on your favorite website or app? Sounds scary, right? This is what a bug in Sign in with Apple allowed me to do.

When Apple announced Sign in with Apple at the June 2019 Worldwide Developers Conference, it called it a "more private way to simply and quickly sign into apps and websites." The idea was, and still is, a good one: replace social logins that can be used to collect personal data with a secure authentication system backed by Apple's promise not to profile users or their app activity.

One of the plus points that got a lot of attention at the time was the ability for a user to sign up with third-party apps and services without needing to disclose their Apple ID email address. Unsurprisingly, it has been pushed as being a more privacy-oriented option than using your Facebook or Google account.

Fast forward to April 2020, and a security researcher from Delhi uncovered a critical Sign in with Apple vulnerability that could allow an attacker to take over an account with just an email ID. The vulnerability was deemed important enough that Apple paid him $100,000 (£81,000) through its bug bounty program by way of a reward.

Considering the level of embarrassment possible for basically leaving the front door unlocked, I'd say the reward was on the light side.

I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple's public key, they showed as valid. This means an attacker could forge a JWT by linking any Email ID to it and gaining access to the victim's account.
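The write-up doesn't include the relying party's code, but a minimal sketch of typical third-party validation (here using the pyjwt library; the function and audience string are hypothetical) shows why a forged-but-genuinely-signed token would sail through:

    import jwt  # pyjwt: a stand-in for whatever JWT library a site uses

    def verify_sign_in_with_apple(token: str, apple_public_key) -> str:
        # Signature verification only proves that *Apple* signed the token.
        # If Apple would sign a token for any requested email (the bug),
        # this check passes and the attacker is logged in as the victim.
        claims = jwt.decode(
            token,
            apple_public_key,
            algorithms=["RS256"],
            audience="com.example.myapp",  # hypothetical client ID
        )
        return claims["email"]  # trusted once the signature checks out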


Original Submission

posted by NCommander on Monday June 01, @10:53AM
from the causing-thousand-yard-stares-while-ABENDing-all-the-way dept.

In what is becoming a running theme here on SoylentNews, we're reliving the early 90s. Picking up right where I left off with Windows for Workgroups, it was time to look at the 800-pound gorilla: Novell NetWare.

Unlike early Mac, UNIX and Windows, I didn't actually have any personal experience with NetWare back in the day. Instead, my hands were first guided on a stream of my weekly show, HACK-ALT-NCOMMANDER, hosted as part of DEFCON 201, combined with a binge reading marathon of some very hefty manuals. In that vein, this is more of my impressions of what NetWare user and administration is like, especially compared to the tools of the day.

Ultimately, I found NetWare a very strange experience, and there were a lot of pluses and minuses to cover, so as usual, here's the tl;dr video summary, followed by more in-depth write-up.

Novell NetWare video

If you haven't ABENDed your copy of server.exe, click below the fold to learn what all the hubbub was about!

The Network Operating System

In the simplest terms possible, NetWare was a dedicated network operating system. It was designed around fast and reliable network operations at the expense of almost everything else. Novell had invested massive amounts of research in figuring out how to do fast I/O and minimizing any delays from hardware related sources. The end result was a very lean system that remained stable and performant with a large number of clients attached. As networking was Novell's bread and butter, NetWare had excellent support for everything: clients were available for DOS, Windows, UNIX, Macintosh, OS/2 and probably other platforms I've never even heard of.

The early history of NetWare is very muddled, and pre-2.0 versions have been lost to time. Compounded by poor documentation, this has made it very difficult to trace the early history of the product. However, while NetWare was not the first (or only) network product for IBM PCs, it quickly became the largest, displacing IBM's PC Network and leaving Microsoft's LAN Manager and IBM's OS/2 LAN Server in the dust.

While NetWare did compete on UNIX, Sun had already gotten its foot in the door by porting NFS and making it the de facto solution for the UNIXs of the era, as well as Linux. Meanwhile, Apple held onto AppleTalk, which itself survived well into the early 2000s, when NetWare had already disappeared into the aether. The explosion of Wintel PCs throughout the 90s had given NetWare a market position that should have been very difficult to dislodge.

The full story of NetWare's fall from grace is a story for another time, but I do want to go into the more technical aspects that were both the boon and bane of NetWare. Much of NetWare's success can be attributed to its own IPX protocol which made networking plug and play and drastically lowered latencies compared to NetBIOS or even TCP/IP.

Internetwork Packet eXchange (IPX)

While NetWare itself doesn't pre-date TCP/IP, like many products of the era it used its own routing protocol known as IPX. IPX wasn't specific to NetWare, and for those of us who remember early LAN parties, IPX powered many Doom, Duke Nukem 3D, and StarCraft multiplayer matches. IPX itself isn't, conceptually, that different from TCP/IP. Each host is given a 32-bit network identifier and a 48-bit host identifier. In absence of a router, link-local networking was available through the pseudo-zero-net.
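A minimal sketch (my illustration) of that addressing scheme; in practice the 48-bit node number is usually just the NIC's MAC address, which is part of why IPX needed no DHCP equivalent:

    # IPX address = 32-bit network number + 48-bit node number.
    def format_ipx(network: int, node: int) -> str:
        return f"{network:08X}:{node:012X}"

    # Hypothetical host on network 0x1001 using its MAC as the node number:
    print(format_ipx(0x1001, 0x000C29AABBCC))   # 00001001:000C29AABBCC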

In truth, IPX actually resembles IPv6 much more than IPv4. I wouldn't be surprised if a lot of IPv6's design decisions were modeled around IPX, such as the fact that IPv6 uses a 48-bit network prefix, a 16-bit subnet prefix, and a 64-bit host identifier. Furthermore, unlike IPv4, IPX is entirely self-configuring, and routing information is broadcast as needed (once again, similar to IPv6's Stateless Address Autoconfiguration, or SLAAC). There's no direct equivalent of DHCP for IPX.

NetWare servers — as well as most vintage networking equipment — also natively supported IPX routing which was essentially a plug and play affair.

That's not to say IPX is perfect. One major pain point is that IPX framing comes in four different variants: raw, LLC, SNAP, and what Novell calls ETHERNET_II. This is due to the fact that IPX was used heavily on non-Ethernet networks; the protocol itself can change depending on what physical hardware it's running on. For example, it's not possible to use Ethernet frame routing over Token Ring networks. All four variants are incompatible with each other, and a misconfiguration means hosts will not see each other on the network. This is in contrast to TCP/IP, which was specifically designed to be independent of the network layers below it.

Despite this, and compared to TCP/IP, IPX was a breeze to set up and administer. Neither DHCP nor the RFC1918 address spaces (that is, 192.168.0.0/16, 10.0.0.0/8 and 172.16.0.0/12) existed at the time. That meant that for a proper TCP/IP deployment, an allocation had to be requested from your local Regional Internet Registry such as ARIN. Adding salt to the wound, TCP/IP was still classful at the time, meaning network blocks were only available in /8, /16, and /24 sizes.

I disagree with the contention that Novell's slow adoption of TCP/IP was part of the reason why they fell into irrelevance. TCP/IP has unfortunate characteristics when dealing with non-switched networks which made it undesirable for LAN usage. TCP has built-in congestion control designed to handle packet loss: the idea is that if packets are being lost in flight, the link is saturated and the sender must slow down. This is part of the TCP/IP specification.

On the packet-switched networks and fixed links that made up ARPANET, this was a desirable property. Token Ring and ARCnet also provided full switching for local offices. However, while ARCnet managed to carve a niche in SCADA systems, Token Ring priced itself out of the market and was already vanishing in favor of Ethernet. This led to a rather unfortunate mess. At the time, thinnet (10BASE2) and Ethernet hubs dominated the low-cost market. Up until switched Ethernet and 10BASE-T became commonplace, packet collisions were common. Combined with TCP/IP's rate limiting, this meant that TCP/IP would run much slower than it otherwise would.
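A toy sketch (my illustration, not from the article) of that additive-increase/multiplicative-decrease behavior: on a collision-prone hub, every collision looks like congestion, so the sender's window keeps getting knocked back down:

    # Toy AIMD model: the window grows by 1 per round trip and halves on "loss".
    def aimd_window(rounds: int, loss_every: int) -> float:
        window = 1.0
        for r in range(1, rounds + 1):
            if r % loss_every == 0:
                window /= 2      # multiplicative decrease on a lost packet
            else:
                window += 1      # additive increase otherwise
        return window

    print(aimd_window(rounds=100, loss_every=50))  # rare loss: large window
    print(aimd_window(rounds=100, loss_every=3))   # constant collisions: tiny window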

In contrast, AppleTalk survived well into the mid-2000s before it was given the "Old Yeller" treatment, and DECnet is still commonly used with OpenVMS. While supporting multiple Layer 3 routing protocols can be an additional burden, it's not a show stopper. The problem is that IPX was the one real killer feature of NetWare, and without it, NetWare was simply more expensive and more difficult to administer.

Certain design decisions within NetWare would also make it an evolutionary dead end.

Ring 0 and NLMs

One of NetWare's notable features is that it's one of the few products that actually took advantage of the 286's protected mode and got actual speed and usability improvements out of it without concern for legacy backwards compatibility. The catch was that Novell took the protected part of protected mode out to do so.

To prevent this article from becoming yet another rant about the 286, I'm going to summarize what ring 0 means. In modern operating systems, user applications are separated from the low-level guts of the operating system's core code, or OS kernel. On the 8086, no such separation existed, and all programs have full and unrestricted access to hardware and memory. On the 286 and later, Intel added the concept of rings, which divide code into privilege levels. The kernel lives in ring 0, and user applications in ring 3. Rings 1 and 2, intended to allow finer-grained control, were rarely used (with OS/2 being the one notable exception).

Intel Processor Rings

Aside: Rings were informally scrapped with 64-bit architectures. While they still "exist" in the GDT (Global Descriptor Table), the removal of segmentation and the addition of SYSENTER mean that only rings 0 and 3 can be used in practice.

I'm going to gloss over the details here, but the very short version is that there are two ways to switch rings in protected mode: call gates and interrupt vectors. Both cause a context switch and carry a fairly high performance penalty. Novell's solution was instead to just not bother. All code within a NetWare server ran in ring 0, and essentially ignored any security and reliability features offered by the processor.

Both drivers and add-on software took the form of NetWare Loadable Modules, or NLMs. These are effectively kernel modules that can be dynamically loaded and unloaded. To further aid performance, cooperative multitasking was used to prevent delays inherent in modern-day pre-emptive multitasking. That meant a misbehaving app could not only lock a NetWare server up, it could also trash the hardware on the way out.
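The failure mode is easy to demonstrate with a toy cooperative scheduler (Python generators standing in for NLMs; purely illustrative, since real NLMs are native code):

    # Each task runs until it *voluntarily* yields; the scheduler can't preempt.
    def task(name: str, steps: int):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # hand control back to the scheduler

    def rogue(name: str):
        while True:
            pass   # never yields: scheduling this starves every other task
        yield      # unreachable, but makes this a generator like the others

    runnable = [task("file server", 2), task("print server", 2)]
    while runnable:
        for t in list(runnable):
            try:
                next(t)            # works only because every task cooperates
            except StopIteration:
                runnable.remove(t)
    # Add rogue("bad NLM") to the list above and the loop never returns.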

What this meant in practice is that NetWare had the best performance possible from a 32-bit processor, combined with the stability and reliability of Windows 3.1.

If this is sounding like a nightmare, it's not quite as bad as it sounds. Novell's engineers were quite good at their job, and I was very impressed at the performance and stability of NetWare's file and print servers. The problem was that, for reasons that will become clear, it wasn't uncommon to have a fair number of add-on NLMs to ease system administration or provide services like the Btrieve database server. One bug in these NLMs could ABEND a server, or simply cause a deadlock.

Later versions of NetWare did provide some isolation for misbehaving NLMs, but the system as a whole remained cooperatively multitasked. This is in stark contrast to Windows NT where a misbehaving application wouldn't bring the system crashing down.

NOTE: Since it came up in the previous article, I did attach a debugger to NetWare and dump the processor's GDT; you can see there are no Ring 3 segments (every descriptor has DPL=0):

VBoxDbg> dg
0000 DataRW Bas=00000000 Lim=fffff000 DPL=0 P NA G BIG AVL=0 L=0
0008 CodeER Bas=00000000 Lim=00fff000 DPL=0 P A G BIG AVL=0 L=0
0010 DataRW Bas=00000000 Lim=00fff000 DPL=0 P A G BIG AVL=0 L=0
0018 CodeER Bas=0001b9e0 Lim=0000ffff DPL=0 P A AVL=0 L=0
0020 DataRW Bas=0001b9e0 Lim=0000ffff DPL=0 P A AVL=0 L=0
0028 VERR_INVALID_SELECTOR
VBoxDbg> 

LOAD INSTALL

NetWare 3.12 was contemporary with Windows NT 3.1 as well as Windows for Workgroups. While version 4 had also shipped by this time and introduced Novell Directory Services, I felt 3.12 would capture NetWare at its zenith, before Windows NT ate its lunch. At a minimum, it would represent the mountain Microsoft would have to scale to compete.

3.12 was available both as a set of floppy disks and in CD-ROM form. The first disk, System_1, was personalized to show your license of NetWare, and the disks are bootable, loading up DR-DOS 5 without issue. NetWare's first real quirk comes from the fact that it needs DOS as a bootloader. Early versions could either replace DOS entirely with a dedicated cold boot loader or co-exist with it, both running side by side. NetWare 3 and later instead used DOS as a bootloader, and had limited access to DOS devices. In what is becoming a now-familiar sentence, MS-DOS 6 was installed to a 32 MiB partition, with the rest of the 4 GiB drive set aside for NetWare.

As I had no desire to feed a large number of floppy disks, OAKCDROM.SYS was added to DOS, and installation was kicked off from the CD.

OAKCDROM.SYS

The initial installation is very bare bones, with the only network-related question coming in the form of selecting the server name and internal IPX network number.

IPX SETUP

After providing the System_1 disk, SETUP kicks off NetWare and drops you at the server console, after offering to modify AUTOEXEC.BAT to automatically start the NetWare server. At this point, installation continues from within NetWare itself.

NetWare Console

The first step is loading a device driver for the hard disks. The installation manual (and media) comes with an extremely large set of drivers, including ones for Micro Channel and ESDI (Enhanced Small Disk Interface) disks, technologies that have long since disappeared. In this case, LOAD ISADISK is the correct driver as it's used for AT-compatible hard drives. The next step is then finding a way to load the NetWare system files. LOAD is NetWare's command for installing a NetWare Loadable Module, so it's used for both drivers and applications.

The easiest option (and what I went with) was simply letting NetWare use the CD-ROM driver in DOS. This doesn't require any configuration: NetWare itself loads above the 1 MiB memory line, and DOS itself is still resident in conventional memory including any device drivers loaded. Other options include: LOAD CDROM, loading the files via network, or loading from a DOS partition.

LOAD INSTALL takes over from here, and is one of the few graphical applications on the server console. Partitioning is straightforward, and RAID can also be configured. Unlike DOS, NetWare is relatively happy with larger disks; however, the ISADISK driver refused to touch a hard drive larger than 4 GiB, which isn't uncommon for the era.

Disk Partitioning

After the partition is mounted, volumes must be created. This is conceptually similar to BSD disklabels, or LVM logical volumes. The main NetWare partition is SYS:, which holds the core system files and is also the default location for user home folders. One quirk I ran into was that volumes are not automatically mounted upon creation, but this was otherwise straightforward.

MOUNT SYS

Finally, the installation is finished with "Copy Public and System Files". This option isn't actually specific to NetWare's installation, as both add-on software and patches could use this mechanism. Typing in "d:\netware.312\server" set NetWare to copying files from the CD. It should be noted this process was abnormally slow and took several minutes running in a VM to complete. This is likely because NetWare must suspend itself, load data from DOS through OAKCDROM, re-enter protected mode, and then continue.

NOTE: I briefly looked with VirtualBox's debugger to see how this worked. The very poor speed made me wonder if they were doing some sort of software trickery like running DOS on OS/2 on a 286, or using Virtual 8086 mode. My guess is the former; VirtualBox says that no Task State Segment is loaded, likely due to there being no ring 3 code, although I can't definitively say this isn't using Virtual 8086 mode somewhere.

FILE INSTALLATION

Once file copying finally finished, the last ugly truth came up: administering NetWare is impossible without a second machine, and this also applies to the patching process. This is especially egregious as older "nondedicated" versions of NetWare could run the DOS-based administration tools on the same machine. That meant the network had to come up to continue.

A PCNet Experience

NetWare's support for NICs is, as you may expect, excellent, with support for Token Ring, ARCnet, and many Ethernet adapters. Novell also had their own line of network cards, the NE1000 and NE2000, which were so successful that the NE2000 became a de facto standard for 16-bit computers. VirtualBox also has the ability to emulate an ISA-based NIC via a poorly-documented advanced option, but I opted to go with the more standard PCI AMD PCNet device offered in the GUI.

Having been burned by unstable NE2K emulation under QEMU, I really didn't want to risk undocumented IRQ conflicts, so this was the path of least resistance. AMD did publish loadable drivers for NetWare 4; however, the ODI33G patch allows these to be used under NetWare 3.12. Driver installation was slightly odd and, in hindsight, foreshadowed the lengthy patching process.

The first step was simply returning to DOS, done with DOWN followed by EXIT. ODI33G was installed first. It took the form of a self-extracting ARJ archive whose files must replace basic files in C:\server.312. It's pretty clear that unless a system administrator was diligent, it would be exceptionally easy to accidentally downgrade server components during an "upgrade".

Following that, the PCNTNW driver was copied across and NetWare restarted. Then came the more difficult process of configuring the network. NetWare is controlled by two main startup files: STARTUP.NCF and AUTOEXEC.NCF. STARTUP is used for any "early initialization" steps (such as, in my case, LOAD ISADISK) and is executed immediately during the DOS startup phase. AUTOEXEC.NCF, meanwhile, is loaded when the system is fully up and running. It's here that networking is configured.

AUTOEXEC.NCF

The first two lines of the file were added during server installation and set the server name and loopback IPX network number. Below that, I needed to load the msm31x, or the "Media Support Module" as it's called in the documentation. Then I could load the PCNTNW driver. Both these drivers exist on the DOS partition, and DOS-style file paths are used to access them. The next step is binding.

Although Novell primarily used IPX for file sharing and printing, NetWare did support other protocols; specifically, TCP/IP was included in this version, as well as RPL for netbooting in the pre-PXE era. Add-on software also provided support for AppleTalk and even NFS-based file sharing. Network protocols are attached to interface cards via "BIND", at which point the frame type and network number must also be entered. Restarting NetWare showed that the PCNTNW driver was loaded and initialized. Furthermore, Wireshark was showing IPX traffic over the network!

NetWare booted with drivers
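For reference, a hypothetical AUTOEXEC.NCF along the lines just described might look like this (the server name, paths, frame type, and net numbers are invented for illustration, not my actual file):

    file server name NWSERVER
    ipx internal net 1001
    load c:\server.312\msm31x
    load c:\server.312\pcntnw frame=ETHERNET_802.2
    bind ipx to pcntnw net=2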

At this point, we can LOAD MONITOR, and bring up the most common screenshot seen of NetWare systems (shown below). Next, it was time to jump into the pit of the NetWare Requester for DOS.

LOAD MONITOR

As an aside, there is one more step to installation that I've omitted. NetWare did support UNIX, Mac, and OS/2 clients. Each of these systems had add-in support modules that had to be loaded server-side. For example, Macintosh requires support for storing resource and data forks, while OS/2 could use long filenames and extended attribute bits. For this article (and video), I didn't bother installing these as I'm only going to use DOS-based clients. If we re-visit NetWare, I'll document the experience then.

NWClient Install

I'd like to say that, compared to the server setup experience, the client was better. It wasn't, although this wasn't entirely Novell's fault. Included with the server discs are two client installers: one for DOS and Windows 3.1, and the other for OS/2, with NWClient 3.12. Our client machine got the DOS 6 treatment, followed by Windows 3.1 (not Windows for Workgroups) with a dash of OAKCDROM for seasoning. Once again, there was a long list of drivers and, frustratingly, no AMD PCNet PCI card among them. More annoyingly, I have a PCNet driver from AMD, but the driver disk wasn't recognized by NWClient.

The files I have from AMD don't indicate a version number, but the example net.cfg suggests that this driver was for a 16-bit NetWare 4 client. One interesting design choice of these older NetWare ODI clients is that they're extremely modular in nature and have exceptionally low memory requirements. ODI is a fairly lengthy topic in and of itself, but the short story is that instead of one giant monolithic blob, each part of the network stack is separated into add-on layers. It is very reminiscent of the Winsock API, and I wouldn't be surprised if some Winsock aspects were modeled around it.

Since ODI was supported for a long time, relatively stable, and — most importantly — highly modular, there was a decent chance I could get this to work.

On the AMD disk was a replacement LSL (Link Support Layer) module and a startnet.bat which would load the basic drivers just fine. As I found out later, if I had run startnet.bat from the disk, NWClient's installer would have seen it and written net.cfg for me properly. Instead, I ultimately ended up monkeying around for about half an hour before finding the right set of client switches, options, and AUTOEXEC.BAT lines to make this work.

Wireshark saw IPX announcements and NCP handshakes go by, which told me I was going in the right direction, but I got slightly stumped: there were no client utilities on the disk, and I knew from the documentation that I was looking for LOGIN.EXE. What actually happens is a little more interesting.

fileman nwclient

NetWare servers announce their presence on the network via the Service Advertising Protocol (SAP), with announcements that include their network number. The client can then use that information to determine the closest NetWare server and use it to access the SYS:LOGIN share, which holds LOGIN.EXE. As it turned out, the moment I got the network client running, NetWare announced its IPX address, and SYS:LOGIN was silently mounted to F:

This is part of the plug-and-play nature of IPX, but it was also something of a double-edged sword. On the upside, patches to the NetWare client software were automatically delivered to users, as most of the brains were on the NetWare server — not the client. The first downside, though, is that as far as I can tell, there's no way to force the client requester to go to a specific NetWare server. I'm also not entirely sure how this mechanism worked with OS/2. There is a LOGIN/OS2 folder with its own version of LOGIN, and reading suggests that folder gets mounted to F: instead of the general public one.

This likely wasn't a big deal in NetWare 2.x and 3.x, but I can see the potential for problems in NetWare 4, which replaced the bindery with NDS. It was also a problem for netbooting clients, as NetWare RPL looks for SYS:PUBLIC to find NET$OS.SYS. The documentation has a fairly large section on nearest-server behavior and how to reconfigure NetWare, but is maddeningly vague on the details.

Regardless, with LOGIN.EXE found, I could log in as SUPERVISOR, and enter a world of hurt and MAP SYS:

One final bit I found humorous, though: as with most DOS software, NWClient offers to back up AUTOEXEC.BAT before modifying it. The original version is written out as AUTOEXEC.BNW, where BNW either stands for "Before NetWare" or "Brave New World". I like to think it's the latter, and that I am John trying to relate to a society that no longer exists.

LOGIN SUPERVISOR

With our entry into the new world and its strange utilities, it was now time to teach NetWare that we can no longer party like it's 1999.

The Bumpy Road to Y2K Compliance

Up to this point I've glossed over why I needed a client system to patch the server. The simple reason is that NetWare doesn't have any tools to manipulate files on its own NetWare FileSystem (NWFS) partitions. Aside from the EDIT NLM, the disk might as well not exist as far as the server console is concerned. A large number of add-on NLMs were available to fix this, but Novell didn't support them out of the box. NetWare's own documentation specifically states that patch files need to be loaded onto SYS: from a client partition.

Supposedly, it is possible to load NLMs from the DOS boot partition, but I couldn't get this to work properly. Another option would have been to use "Copy System and Public Files" from INSTALL, but for the sake of not rat-holing myself, I stayed with what Novell's READMEs said to do. Not that they were super helpful: what I had was a large number of patch files and no clear installation order. For example, the LIBC upgrade said that it should only be used with some patch revisions and not others.

Ultimately (through some trial and error) I discovered that to fix the Y2K problems I was having on the server, I needed the base PTD312 patch, followed by the Y2K patch. The other patch files handled bug fixes in CDROM.NLM and print services, which I skipped as I didn't need either of them. Compounding the pain, NetWare patches have no uniform installation process: some simply replace files, others include a PATCH NLM, and some require going into DOS on the server to perform surgery.

Starting with PTD312, the patch directory was extracted from its archive and XCOPY-ed to the SYS partition. Then I needed to LOAD PATCH312 on the console which updated and installed all the necessary bits.

PTD312.NLM

The Y2K patch was much messier. Instead of an installation NLM, the replacement patches were copied to SYS, and then the server was restarted. However, the date still came up as 1920. Careful reading made me realize I had skipped a step.

NetWare's kernel exists in the form of server.exe, which consists of a loader and the NetWare kernel wrapped together. The Y2K patch requires the loader component to be replaced. To actually do this, I had to copy the replacement LOADER.EXE to a floppy disk and then use the Loader SWAP (LSWAP) utility to write a new server.exe. Having climbed the mountain, NetWare finally recognized what century we're in.

LSWAP

2020 boot date

Sad to say, this was far more painful than it really needed to be. It also proved to be wonderful foreshadowing of what the NetWare experience was actually like.

The User Experience

Things continued to go downhill. By default, the rest of NetWare's management utilities are mapped to Z: (the SYS:PUBLIC directory). Notably, this isn't added to the PATH, so if no additional setup is done you need to switch to that directory manually. File sharing can be managed either through the command-line MAP and ATTACH utilities or through the graphical SESSION utility. SESSION suffers from a poor user experience; NetWare is rather consistent in the fact that its user interfaces are extremely inconsistent. For example, the UI doesn't note that the insert and delete keys are used to add new mount points. Furthermore, NetWare presents no graphical browser of available shares. Instead, a user must type a path like SERVER_NAME\VOLUME:PATH_TO_DIR.

SESSION

To actually find mountable directories, NDIR and FILER are available. FILER is both a user and an administrative tool, as it's the only graphical interface that allows for easy viewing of access permissions and browsing of remote servers. One can't, however, actually mount or attach drives directly from FILER. On the plus side, FILER can also be used to view and manage Macintosh and UNIX files.

FILER

Once a directory is mapped to a drive letter, it works like any normal DOS share. The performance was excellent, and I didn't have any of the stutters or stalls I saw with Windows for Workgroups. This is likely due to the low overhead of IPX and the NetWare Core Protocol (NCP) that NetWare uses for file management. Printers are manageable through the PCONSOLE command and are mapped to local LPT ports.
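The LPT mapping itself is done client-side with the CAPTURE utility. A sketch, with an invented queue name; the flags shown suppress the banner page, tab expansion, and the trailing form feed:

    REM Redirect LPT1 to the print queue LASER_Q (queue name invented)
    CAPTURE L=1 Q=LASER_Q NB NT NFF
    REM Print from any DOS application to LPT1, then end the redirection
    ENDCAP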

In theory, most of the terribleness of these utilities was supposed to be mitigated by the network administrator setting login scripts to automatically set up drive mappings: you'd simply LOGIN and go. Login scripts are marred, however, by the fact that they execute as part of LOGIN and not in the context of COMMAND.COM, which means they're limited to whatever functionality Novell provided.
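For flavor, here's a minimal sketch of what a system login script might look like; the server, volume, and group names are invented, while MAP, IF MEMBER OF, and the % variables come from Novell's login script language:

    REM Executed by LOGIN, not COMMAND.COM, so only Novell's verbs are available
    MAP DISPLAY OFF
    MAP INS S1:=SYS:PUBLIC
    IF MEMBER OF "ACCOUNTING" THEN MAP G:=FS1/VOL1:ACCT
    WRITE "Good %GREETING_TIME, %LOGIN_NAME."
    MAP DISPLAY ON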

Whether or not that worked in practice, Novell was seemingly aware of how bad this situation was, and a Windows 3.1 application for share management was provided in the box. Unfortunately, I'm not sure I can call it an 'improvement'.

NWClient Windows

The first stumbling block is that logging in to and out of the NetWare server isn't directly possible in NWClient for Windows. You're expected to do so from DOS, and the only login prompts I saw appeared while resuming drive shares. That, of course, leads us to the second problem. The Windows client can make shares "permanent", automatically reconnecting them at startup. However, this functionality is entirely disconnected from NetWare logins. Instead, NetWare simply assumes your computer has one user, and it was trivial to make it try to mount folders I didn't have permission for.

Resuming Connections

This is exceptionally ugly. Although Windows for Workgroups 3.11 had no real concept of a user, permanent shares were still stored per user through a pseudo-login prompt shown at startup, and this remained true through the 9x series of Windows. Then there is the very strong disconnect between the DOS and Windows worlds, which is a bit harder to explain.

Back in the Windows 3.1 days, it wasn't uncommon to switch back and forth between Windows and DOS. While DOS applications could be run within Windows, it was also common to use programs like WordPerfect from DOS. This is especially important because a lot of DOS add-on software for WordPerfect and other word processors took the form of TSRs which hooked the original application, and those wouldn't work while Windows was open. Reference Manager for WordPerfect was one such example I specifically remember.

The NetWare client for DOS essentially assumed that your network administrator would handle all the hard parts of managing network MAPs for you, while NetWare for Windows made it easy for end users to manage this themselves. The problem is that these two environments don't talk to each other. Persistent shares set in Windows are not automatically loaded on DOS startup; instead, one needs to take a trip into Windows and then quit back to DOS to have those shares actually become available.

The practical upshot is that I can easily see a world where a user had to start Windows simply to quit back to DOS. End users couldn't manage their own login scripts, so it was either use the graphical Windows client or write your own batch files to run the necessary MAP commands. This is an utter mess to say the least, and I can only assume the same held true on Mac and OS/2 clients.
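A hypothetical user-maintained workaround might have been a batch file on the boot disk, along these lines; every name here is invented:

    @ECHO OFF
    REM STARTNET.BAT: recreate the mappings the Windows client would have restored
    LOGIN FS1/JDOE
    MAP G:=FS1/SYS:APPS\WP
    MAP H:=FS1/VOL1:HOME\JDOE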

The next oddity I found is that NetWare installs its own Winsock implementation. That, in and of itself, is not that unusual. As I previously mentioned, Microsoft originally left it to third-party developers to write Windows Sockets implementations before it released Shoebill as part of Windows for Workgroups 3.11. NWClient showed up as version 4, likely due to the updated NIC components I had to add for PCNTNW support. The Network control panel icon also reappears, although it just launches the configuration page from NetWare Tools for Windows.

Windows SETUP

What makes it strange is that there appears to be absolutely no support for TCP/IP here. While Windows for Workgroups didn't ship with TCP/IP, it was available as a free add-on, and other Winsock implementations I've seen, such as DEC PATHWORKS, included TCP/IP in addition to their own protocols. That meant NetWare Tools for Windows was inherently incompatible with the Internet. It might have been possible to install a TCP/IP add-on into NWClient, but I couldn't find one, or even confirm one existed. Some early comments on the video said that major NetWare users had never seen the Windows client prior to Windows 95, and I can't help but think this was the reason.

This is all the more damning given that NetWare server 3.12 supported TCP/IP for NLMs!

NetWare Client Tools for Windows, at least in this version, were a mess. NetWare Client 4 would improve the situation somewhat, but NetWare 3.12 was released in 1993 and was very popular. Windows 3.x had been on the market for three years by then and had already become a staple. More damning still, Windows for Workgroups and the Microsoft Workgroups Add-On for Windows had already shipped.

To sum up, the user experience was poor on DOS and, amazingly, even worse on Windows. I really don't know how Novell botched this so badly, but they did. The administration experience was as bad, if not worse, overall.

There's also another glaring omission to talk about: as far as I can tell, there is no support for dial-up networking in any form with NetWare. Modems were immensely popular in this era for home and mobile users. TCP/IP over dial-up eventually standardized on SLIP and later PPP, while Microsoft provided Remote Access in Windows 3.1/NT for NetBIOS, and later Dial-Up Networking in 95 for PPP, SLIP, and NetBIOS. I also believe IBM offered NetBIOS dial-up with its own LAN Server product line. I can't find an equivalent technology for IPX or NetWare in general. Microsoft notably got NetBIOS, which has no concept of routing at all, working over dial-up! The only reference I could find to modem support relates to ACONSOLE, which allows a network administrator to dial into a NetWare server.

From what shreds of documentation I can find on the subject, Dial-Up Networking for Windows 95 did, in fact, support IPX over modems... but it assumed it was dialing into Windows NT Remote Access Service, which then routed to NetWare. If there was a first-party solution from Novell, I'm unaware of it. I also have to assume add-on NLMs existed for this, because NetWare does support serial ports, so it's not like attaching a modem would be hard.

If the community knows, let me know and I'll post an update/correction. Having thoroughly thrashed NetWare's user experience, it's now time to step into the role of the World Controller, and see how Mustapha Mond managed NetWare.

The Administration Experience

If Mond argues against the value of free will, then, as John, I can only state that Novell showed great creativity in the inconsistencies of its administration experience. Let's start with the simplest to describe: RCONSOLE.

It's not exactly unheard of for NetWare servers to end up walled in. After all, the server console isn't exactly a useful feature post-installation. In a stunning display of usability, Novell lets network administrators use the RCONSOLE command to remotely access the console, a 1:1 copy of the server framebuffer, via the RSPX protocol. As such, walling up a server is an entirely practical means of physical security.

Configuration on the server side is relatively simple: LOAD REMOTE enables remote administration, and LOAD RSPX allows RCONSOLE over IPX. Modem support (via ACONSOLE) and serial terminal support (LOAD RS232) were also available:

LOAD REMOTE

RCONSOLE menu
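In AUTOEXEC.NCF terms, the minimal setup comes down to just two lines; the password here is a placeholder:

    LOAD REMOTE SECRET
    LOAD RSPX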

At first glance, this seems somewhat useless, given what I've said about the server console before. However, a hidden menu pops up if "*" is pressed on the numeric keypad (and it has to be the numeric keypad):

RCONSOLE Start menu

Besides the option to switch servers, RCONSOLE does provide a somewhat handy mechanism to copy files. Unfortunately, its directory copy isn't recursive, which makes it pretty useless for installing patch files, and while a file browser exists, you still have to type paths manually, just to add a final bit of salt to the wound. The other main option, Copy System and Public Files, we've seen before: it was used to install some patches and add-on software, and it functions essentially identically to INSTALL. The primary purpose of this function, though, is to fully install NetWare from a second machine as a kludge around the lack of client-side sharing options.

The real meat of NetWare system administration comes from SYSCON, which is used to create users and groups and to manage permissions (functionality also shared with FILER).

SYSCON

It wasn't immediately obvious how to actually add a new user, however. The secret is the INSERT and DELETE keys which, again, the UI doesn't point out. User permissions and groups are much more similar to what we got in Windows NT than to POSIX groups, and I'm curious whether Microsoft copied NetWare in this regard or whether the basic layout was due to government security mandates.

SYSCON add users

Both system and user login scripts can be edited through SYSCON, as can specific permission grants. Something that doesn't come across well here is how clunky actually setting ACL permissions is. Instead, it's easier to look at how Novell officially recommended working out user and group permissions.

ACL Worksheets

I can only assume the conversation at Novell went something like this:

Manager 1: So we have this massive product with amazing access control features. What tools should we give our customers to control these?

Manager 2: Well, we have the FLAG command, and maybe we could add them to SYSCON, and let you look at them in FILER. Maybe if we gave them some sorta template to work off?

Manager 1: Perfect, ship it!

I legitimately have no idea how you could easily audit permissions on a NetWare server given the total lack of scripting support and poor tooling.

I did notice this gem in the documentation: the eXecute Only flag, which acted as a form of DRM for network-installed applications:

Prevents copying or backing up files. Attribute cannot be removed. Assign only to files with an .EXE or .COM extension (program files).

Keep a duplicate of these files in case they become corrupted and need to be replaced.

CAUTION: Some programs do not execute properly if flagged Execute Only.

How this actually works is unclear — files obviously have to be read to be loaded into local system memory — and I'm not sure I want to find out.
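For what it's worth, attributes like this were set with the aforementioned FLAG command. A sketch, with an invented path on a mapped drive; per the documentation above, X cannot be removed once set, so keep that duplicate copy handy:

    REM Mark a shared program Execute Only (irreversible, per Novell's docs)
    FLAG G:\APPS\WP.EXE X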

Besides SYSCON, NetWare also came with built-in utilities for printer management (PCONSOLE), creating network-bootable DOS disks (DOSGEN), talking to Btrieve, and much more. Here's a list of all the programs just to give you an idea of the scope.

DIR * /w

So much of this was a chore. I didn't exactly expect software of this vintage to be especially user-friendly, but given how much Novell was charging for NetWare, I was expecting something less clunky. And that's before mentioning that add-on software could bring NetWare crashing down with an ABEND.

ABEND

It's rather telling that there is an entire book in the box dedicated to this topic.

The Theoretical Printer Experience

As stated, I didn't test NetWare's print capability, as the only printer I have is several decades too new. I did, however, read the documentation, and I immediately spotted a problem: printers had to be connected to a NetWare server of some form. Unlike NetBIOS, NetWare and its NetWare Core Protocol work in a client-server architecture instead of a peer-to-peer mode. While this is usually desirable for things like files, printers benefit from being physically accessible.

NetWare clients were just that: clients. They have no inherent ability to share files or printers from the local system. The problem becomes obvious: you needed a NetWare server physically attached to the printer, or at least something that could fake it. NetWare had two separate products for peer-to-peer sharing, NetWare Lite and Personal NetWare, both of which ran on DOS. Personal NetWare was later bundled with DR-DOS after Novell bought out Digital Research. How these integrated into an environment with a proper NetWare server is at best unclear, especially in regard to access control lists and the later NetWare Directory Services.

I suspect I'm going to end up doing a "Personal NetWare" experience video/article, but from what I can tell from the documentation, Personal NetWare simply showed up as a normal NetWare server on the network and didn't operate in a true P2P mode. I suspect this wasn't a problem in practice, as IPX SAP advertisements could easily fake true P2P connectivity.

The problem, however, is that NetWare Lite and Personal NetWare were separate retail products and required an individual license for each machine, enforced through serialized install disks; the client refuses to run if it sees the same license already in use on the network. I also know that several very high-end printers, such as HP's LaserJet 4, could take a network card and be shared to a NetWare network, likely appearing as an independent server. HP also offered JetDirect, a box with a parallel port on one end and a network plug on the other, which converted most of their printer line into network-capable beasts.

I'd like to hear personal experiences with NetWare printing, because frankly, it looks pretty bad. NetWare printers are mapped to LPT ports, and DOS applications and Windows had to provide their own drivers (Windows for Workgroups has the same problem), and I can't imagine how this could possibly work with UNIX or Macintosh clients.

I do get the sense that "Yes, it works with NetWare" was more marketing hype than actual truth. Maybe "It sorta works with NetWare if your network administrator is a god among mortals" might have been more accurate.

The Conclusions from Experience

To be perfectly frank, I'm not sure what I was expecting when I dove into NetWare. I knew about some of its technical quirks, such as running in Ring 0 and cooperative multitasking, but as a whole it's an entirely different experience. NetWare's IPX protocol was its killer feature, but once Internet access required every computer in a NetWare shop to be dual-stacked with TCP/IP, it quickly became irrelevant.

Since you already had to climb Mt. TCP/IP to get on the Internet, IPX's plug-and-play features stopped being relevant as a sales point. Furthermore, if you had UNIX in any form, which was almost universally TCP/IP, you were in the same spot. Without that advantage, you're left with a very expensive file and print server which wasn't really better than a DOS box at running add-on software. IPX's second major feature, good performance on non-switched networks, became less and less important as switched Ethernet became dominant and cheaper dedicated hardware could replace NetWare boxes for routing. You're also left with a product that your system administrators hate; given the lack of lock-in, they could (and did) jump ship as soon as it became viable to do so.

Now, to Novell's credit, IPX and the base NetWare package felt rock solid, and the DOS ODI stack is incredibly light on conventional memory for what it does. The latter I really need to emphasize: I've struggled with conventional memory with MSNet for DOS, but Novell's stack just sips memory, which made it considerably less likely to conflict with DOS applications that were always competing for that space. As for IPX, most of what it gave the world would only reappear in IPv6, and even then, the latter is nowhere near as plug-and-play friendly (DHCPv6, I'm looking at you).
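For reference, the canonical ODI client load order, typically run from AUTOEXEC.BAT, is what makes that small footprint possible; PCNTNW matches the AMD PCnet card used here, and NET.CFG supplies the frame type and bindings:

    REM Link Support Layer
    LSL
    REM Board driver (MLID) for the NIC
    PCNTNW
    REM IPX protocol, bound on top of the MLID
    IPXODI
    REM The NetWare shell: hooks DOS and provides network drives
    NETX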

Novell would have stayed relevant in this space if their product hadn't stagnated for five years. From what I can glean from its documentation, NetWare 2.x was not very different from 3.x in either user experience or administration. NetWare was essentially declared feature complete in the late 80s, and that was that. Compounding the problem, NetWare Directory Services was not well received, and NetWare 4 initially had a stupidly high price tag compared to NetWare 3.

Windows NT 3.5 and 4.0 would prove to be a wake-up call, and Novell went on a buying spree to try to compete. DR-DOS, DESQview, WordPerfect, and NetWare Server for OS/2 couldn't stop the ship from sinking at that point. Ultimately, in what could be called irony, Novell, which had bought UNIX from the remains of Ma Bell with the intention of killing it, ended up migrating its product line onto SuSE Linux. The final versions of NetWare either ran Linux as an NLM or themselves ran as an application under Linux.

My personal opinion is that Microsoft won this battle by having a better and cheaper product. While Microsoft did pull some underhanded moves (which eventually led to Novell v. Microsoft over WordPerfect for Windows), the fact is I don't have a lot of reasons to recommend NetWare over Windows NT. NetWare might have been faster, but processor speed increases meant the network, not the CPU, quickly became the bottleneck.

In short, innovate or die ...

73 de NCommander

P.S. I fully expect to come back to NetWare, as I want to try to write a definitive account of Novell's fall from grace in the 90s. That means tracking down hardware that can run NetWare 2.01 (the earliest surviving version), a 3C501 card and thicknet cabling (or adapters), as well as later NetWare versions, especially 6.x. I especially want to see how NDS actually compared to Active Directory, since it came first. I also want to collect user experiences, so write yours below so I can share your memories as part of these articles.

P.P.S. I'm also on the lookout for Banyan VINES. VINES has a hardware dongle as a protection measure, and I've yet to find a complete set on eBay or similar. That being said, I think we're soon going to be touching on the history of Windows NT, as I managed to score a copy on eBay for an unboxing and exploration video.

BackOffice Server