BofA Phish Gets Around DMARC, Other Email Protections:
A credential-phishing attempt that relies on impersonating Bank of America has emerged in the U.S. this month, with emails that get around secure email gateways and other heavy-hitting protections like DMARC.
The campaign involves emails that ask recipients to update their email addresses, warning users that their accounts could be recycled if this isn’t done.
“The email language and topic was intended to induce urgency in the reader owing to its financial nature,” according to analysis from Armorblox. “Asking readers to update the email account for their bank lest it get recycled is a powerful motivator for anyone to click on the URL and follow through.”
The messages contain a link that purports to take visitors to a site to update their information – but clicking the link simply takes the recipients to a credential-phishing page that closely mirrors a legitimate Bank of America home page, researchers said.
The attack flow also included a page that asked readers for their ‘security challenge questions’, both to increase legitimacy and to gather further identifying information from targets, researchers said in a posting on Thursday.
“With the enforcement of Single Sign On (SSO) and two-factor authentication (2FA) across organizations, adversaries are now crafting email attacks that are able to bypass these measures,” Chetan Anand, co-founder and architect of Armorblox, told Threatpost. “This credential-phishing attack is a good example. Firstly, it phishes for Bank of America credentials, which are likely not to be included under company SSO policies. Secondly, it also phishes for answers to security-challenge questions, which is often used as a second/additional form of authentication. Asking security-challenge questions not only increases the legitimacy of the attack, but also provides the adversaries with vital personal information about their targets.”
Graphdiyne as a Functional Lithium-Ion Storage Material:
Carbon materials are the most common anode materials in lithium-ion batteries. Their layered structure allows lithium ions to travel in and out of the spaces between layers during battery cycling, they have a highly conductive two-dimensional hexagonal crystal lattice, and they form a stable, porous network for efficient electrolyte penetration. However, the fine-tuning of the structural and electrochemical properties is difficult as these carbon materials are mostly prepared from polymeric carbon matter in a top-down synthesis.
Graphdiyne is a hybrid two-dimensional network made of hexagonal carbon rings bridged by two acetylene units (the "diyne" in the name). Graphdiyne has been suggested as a nanoweb membrane for the separation of isotopes or helium. However, its distinct electronic properties and web-like structure also make graphdiyne suitable for electrochemical applications.
Journal Reference:
Chipeng Xie, Xiuli Hu, Zhaoyong Guan, et al. Tuning the Properties of Graphdiyne by Introducing Electron-Withdrawing/Donating Groups, Angewandte Chemie International Edition (DOI: 10.1002/anie.202004454)
Will graphdiyne enable flatter batteries, given its near two-dimensional structure?
Zoom will provide end-to-end encryption to all users:
Zoom's CEO Eric S. Yuan today announced that end-to-end encryption (E2EE) will be provided to all users (paid and free) after they verify their accounts by providing additional identification info, such as a phone number.
"We are also pleased to share that we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform," Yuan said.
"This will enable us to offer E2EE as an advanced add-on feature for all of our users around the globe – free and paid – while maintaining the ability to prevent and fight abuse on our platform."
This update in Zoom's plans comes after the company announced on May 27 that E2EE will be available only to paying customers, with free/basic users to only get access to 256-bit GCM encryption.
[...] To provide all Zoom users with access to E2EE, Yuan says that they will first have to verify their accounts through various means, such as by verifying their phone numbers via text messages.
"Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts," Yuan explained.
"We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse."
An initial draft cryptographic design for Zoom's planned E2EE offering was published on GitHub on May 22 and a second updated version was committed today (a list of all the changes is available here).
According to an update to the company's 90-day security plan, "end-to-end encryption won't be compatible with an older version of the Zoom client, and all participants must have an E2EE-enabled client to join the meeting."
The company also said that it will not force users with free accounts to use E2EE as both free and paid users will have the choice to enable it for their meetings.
Previously:
(2020-06-06) Zoom Says Free Users Won’t Get End-to-End Encryption so FBI and Police Can Access Calls
(2020-05-07) Zoom Acquires Keybase to Bring End-to-End Encryption to Video Platform
(2020-04-21) This Open-Source Program Deepfakes You During Zoom Meetings, in Real Time
(2020-04-20) Every Security Issue Uncovered so far in the Zoom Video Chat App
(2020-04-15) Over 500,000 Zoom Accounts Sold on Hacker Forums, the Dark Web
(2020-04-13) Zoom Admits Data Got Routed Through China
A deep-learning E-skin decodes complex human motion:
A deep-learning-powered single-strain electronic skin sensor can capture human motion from a distance. The single strain sensor, placed on the wrist, decodes complex five-finger motions in real time via a virtual 3D hand that mirrors the original motions. The deep neural network, boosted by rapid situation learning (RSL), ensures stable operation regardless of the sensor's position on the surface of the skin.
Conventional approaches require many sensor networks that cover the entire curvilinear surfaces of the target area. Unlike conventional wafer-based fabrication, this laser fabrication provides a new sensing paradigm for motion tracking.
The research team, led by Professor Sungho Jo from the School of Computing, collaborated with Professor Seunghwan Ko from Seoul National University to design this new measuring system that extracts signals corresponding to multiple finger motions by generating cracks in metal nanoparticle films using laser technology. The sensor patch was then attached to a user's wrist to detect the movement of the fingers.
[...] This sensory system can track the motion of the entire body with a small sensory network and facilitate the indirect remote measurement of human motions, which is applicable for wearable VR/AR systems.
Journal Reference:
Kim, K. K., et al. A deep-learned skin sensor decoding the epicentral human motions, Nature Communications (DOI: 10.1038/s41467-020-16040-y)
The approach could ease VR/AR implementations.
As oil slumps, Norway explores new fields in the Arctic:
But the move does make you look askance at Norway. This week, MPs in the super-rich oil nation are expected to vote against further protection of one of the world's most important biological hotspots, so enabling continued exploration in the Barents Sea.
This comes off the back of a pledge to delay more than $10bn in taxes for petroleum companies, to spur investment which will help fund drilling in a uniquely biodiverse area called the marginal ice zone.
[...] But then Norway is environmentally at odds with itself.
You have the oil that made it one of the richest nations on earth. Then walk around Oslo and you will see electric cars all over the place - in fact, three out of four cars now sold in Norway are either wholly or partially electric.
And 98 percent of Norway's electricity comes from renewable energy, of which hydropower is the main source. The nation talks highly of its own sustainable prowess. And well it might.
But all those fossil fuels Norway extracts? They go overseas. The nation may not emit too many greenhouse gases, but it exports them on a colossal scale. Norway's wealth is someone else's smog.
Perhaps Norwegians welcome global warming?
Astronomers just discovered the youngest ever 'baby' dead star:
A suite of space-based telescopes operated by NASA and the European Space Agency have discovered the youngest known magnetar to date. At just 240 years old, this extreme, cosmic infant could help astronomers understand how these dead, dense stars come to be and how they evolve.
In a study published in The Astrophysical Journal Letters on Wednesday, researchers describe Swift J1818.0-1607, a very young magnetar first spotted by NASA's Neil Gehrels Swift Observatory on March 12 after it let out a mighty, explosive burst of X-rays. Magnetars are a rare kind of neutron star (the collapsed cores of huge stars) with extreme magnetic fields. They pack a huge amount of mass into a tiny space, which gives rise to a host of weird physical phenomena. Their magnetic fields can be up to 1,000 times stronger than those of your regular, run-of-the-mill neutron star.
[...] This particular magnetar is only around 16,000 light-years from Earth -- practically our backyard -- and located in the constellation Sagittarius. Astronomers have only detected a few dozen magnetars, and none has ever been detected so shortly after forming.
This magnetar is 16,000 light years away from Earth.
Tech and social media are making us feel lonelier than ever:
You've had a social day. Two hundred Facebook friends posted birthday messages, your video of Mr. Meow shredding the toilet paper stash got dozens of retweets, and all the compliments on your latest Instagram selfie have you strutting with an extra swagger. Still, you can't help but notice an ache that can only be described as loneliness.
That we feel this way even when hyperconnected might seem like a contradiction. But the facts are clear: Constant virtual connections can often amplify the feeling of loneliness.
"Internet-related technologies are great at giving us the perception of connectedness," says Dr. Elias Aboujaoude, a Stanford University psychiatrist who's written about the intersection of psychology and tech. The truth, he says, is the time and energy spent on social media's countless connections may be happening at the expense of more rooted, genuinely supportive and truly close relationships.
If virtual socializing cannot substitute for the real thing, will social media prove out to be nothing more than a fad of the late 20th and early 21st centuries?
Movie theaters will look vastly different if they survive COVID-19:
Thanks to mass closings and skyrocketing debt for theater franchises during COVID-19, the future of the businesses that offered me so much comfort as a teen is in peril. In uncertain times, one thing seems increasingly clear: The theater industry must change to survive. Here's how movie theaters might look in the future.
[...] Sure, companies like AMC hated the super-cheap subscription-based app MoviePass, but the subscription model is an increasingly popular and time-tested method of ensuring revenue -- some theaters in the UK have been using such services for more than a decade.
[...] Drive-in theaters, which thrived in the '50s and early '60s, are already finding a second (or third) life amid the pandemic, thanks to the built-in social distancing and -- for the reason many of them still survived before COVID-19 -- nostalgia.
[...] How exactly this will look remains to be seen, but tech and streaming giants like Apple, Amazon and Netflix have either considered buying theaters or already committed to doing so. While wholesale corporate takeovers are probably a long shot, Silicon Valley has the capital to buy out floundering theater franchises and incorporate them into their existing integrative business models -- and doing so could dramatically reorient the movie theater landscape.
Or, more of them could serve food and beer like Alamo Drafthouse Cinema.
Three US Navy carriers deployed to Indo-Pacific waters:
The Trump administration deployed the USS Ronald Reagan, USS Theodore Roosevelt and USS Nimitz to the region, each carrying more than 60 aircraft.
[...] The Chinese government, which has also increased its military presence in the region, responded swiftly, warning that “countermeasures” could be taken against the US.
[...] The USS Ronald Reagan and the USS Theodore Roosevelt are currently patrolling in the western Pacific, while the USS Nimitz is in the east, according to US Navy press releases.
It’s an unusual move; the last deployment of US aircraft carriers of this size in the Pacific was back in 2017, when tensions with North Korea over nuclear weapons were peaking.
The US launched the deployment on June 4, after a coronavirus outbreak forced the USS Theodore Roosevelt into port in Guam in March, where more than 1,000 of the ship's nearly 4,900-member crew tested positive for the virus.
[...] “Carriers and carrier strike groups writ large are phenomenal symbols of American naval power. I really am pretty fired up that we’ve got three of them at the moment,” said Rear Admiral Stephen Koehler, director of operations at Indo-Pacific Command in Hawaii.
[...] “By massing these aircraft carriers, the US is attempting to demonstrate to the whole region and even the world that it remains the most powerful naval force, as they could enter the South China Sea and threaten Chinese troops on the Xisha and Nansha islands (Paracel and Spratly Islands) as well as vessels passing through nearby waters, so the US could carry out its hegemonic politics,” the Global Times report quoted Li Jie, a Beijing-based naval expert, as saying.
It also noted that Beijing could hold drills in response to show off its firepower.
NASA Curiosity rover snaps captivating view of Earth and Venus from the surface of Mars:
NASA's Curiosity rover recently snapped a lovely panorama of its home planet and Venus from the surface of Mars.
The rover captured Earth and Venus on June 5 after sunset. "Both planets appear as mere pinpoints of light, owing to a combination of distance and dust in the air; they would normally look like very bright stars," NASA said in a release on Monday.
The image from Curiosity's Mast Camera combines two shots into one and also shows a silhouette of the top of Tower Butte, a landscape feature in the Gale Crater. If you want to test your eagle-eye vision, you can try to pick out the planets in an un-annotated version of the image.
Also at phys.org.
Massive spying on users of Google's Chrome shows new security weakness
A newly discovered spyware effort attacked users through 32 million downloads of extensions to Google’s market-leading Chrome web browser, researchers at Awake Security told Reuters, highlighting the tech industry’s failure to protect browsers as they are used more for email, payroll and other sensitive functions.
Alphabet Inc’s (GOOGL.O) Google said it removed more than 70 of the malicious add-ons from its official Chrome Web Store after being alerted by the researchers last month.
“When we are alerted of extensions in the Web Store that violate our policies, we take action and use those incidents as training material to improve our automated and manual analyses,” Google spokesman Scott Westover told Reuters.
Most of the free extensions purported to warn users about questionable websites or convert files from one format to another. Instead, they siphoned off browsing history and data that provided credentials for access to internal business tools.
Based on the number of downloads, it was the most far-reaching malicious Chrome store campaign to date, according to Awake co-founder and chief scientist Gary Golomb.
Brain research sheds light on the molecular mechanisms of depression:
Researchers of the national Turku PET Centre have shown that the opioid system in the brain is connected to mood changes associated with depression and anxiety.
Depression and anxiety are typically associated with lowered mood and a decreased experience of pleasure. Opioids regulate feelings of pain and pleasure in the brain. The new study conducted in Turku shows that the symptoms associated with depression and anxiety are connected to changes in the brain's opioid system even in healthy individuals.
"We found that the more depressive and anxious symptoms the subjects had, the fewer opioid receptors there were in their brains."
[...] These results show that the mood changes indicating depression can be detected in the brain early on.
Journal Reference:
Lauri Nummenmaa, Tomi Karjalainen, Janne Isojärvi, et al. Lowered endogenous mu-opioid receptor availability in subclinical depression and anxiety, Neuropsychopharmacology (DOI: 10.1038/s41386-020-0725-9)
Well, here we go again! Coming off the Novell NetWare experience, I had intended to go straight into Windows NT. After two attempts at shooting a video and much swearing, I decided to shelve that project for the moment. Furthermore, a lot of the feedback from my previous articles talked about early Linux.
That gave me a thought. Why not dig out the grandfather of modern Linux distributions and put it on camera? Ladies and gentlemen, let me introduce you to the Softlanding Linux System, complete with XFree86 1.2!
Honestly, there's a lot of good and bad to say about Softlanding Linux, and while SLS is essentially forgotten, its legacy birthed the concept of the Linux distribution, and its bugginess also led to the creation of both Slackware and Debian. It also made me remember a lot of the bad that came with Linux of this era.
Assuming the summary hasn't scared you off, get ready to write your Xconfig, strap in your Model Ms, and LOADLIN your way below the fold!
I'm pretty sure we all know the early story of Linux, and the post to comp.os.minix that started it all, but just in case:
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
Linus's hobbyist project would quickly become a cornerstone of the free software movement, the beginning of the end of commercial UNIX, and eventually, the core of what would power all Android smartphones today. Not bad for something that was supposed to only run on a 386, right?
Linus's Linux would go through several releases, but it immediately became popular because it was a UNIX-compatible kernel that wasn't tainted by AT&T code. Linux came at exactly the right moment, when the world was looking for a free operating system, especially after Linus embraced the GPLv2.
To understand the implications, we need to step back for a moment. Richard Stallman, of the Free Software Foundation, was still putting the pieces together to create a free-as-in-speech operating system. This project was known as GNU (a recursive acronym: "GNU's Not UNIX"), signifying that while the tools and system were UNIX-like, they had no code from Bell Labs and were freely available under the terms of the General Public License, or GPL.
GNU was intended to form a full operating system, but one critical component, the kernel, was missing.
In 1991, the BSD flavors of UNIX were tied up in the USL v. BSDi lawsuit, and 386BSD was still months away from its first public release. At the time, it was unclear if the BSD-derived operating systems would ever get clear of the taint from Bell Systems.
Stallman and the Free Software Foundation had instead embraced the Mach microkernel. At the time, microkernels were seen as the "future" of software development. This led to the creation of Hurd, with its first release in 1990. However, flaws inherent to Mach's design prevented Hurd from being usable in any meaningful way. In short, the project was stalled, and it was unclear if the Mach pit could ever be climbed out of. It was still possible, though, to use components of the GNU system such as bash on commercial UNIX systems and MINIX.
MINIX was the closest thing to a usable "free" operating system at the time. Created by Andrew S. Tanenbaum as a teaching example, MINIX was a small microkernel system that was system-call compatible with 7th Edition UNIX. However, its origin as a teaching tool presented a problem: Tanenbaum's publisher was unwilling to allow MINIX to be freely distributed, so a licensing fee of roughly $70 was attached, with source code available. MINIX also had the advantage that it ran on commodity x86 hardware, and it had a small but devoted following. The end result was a UNIX-like operating system with source available.
Linux, on the other hand, was not only freely distributable, it was also, as far as an end user was concerned, close to a drop-in replacement for MINIX. MINIX's userbase quickly abandoned the platform for Linux as it matured, and its few remaining users migrated to 386BSD or one of its descendants. However, Linux was not a complete system in and of itself; it was just a kernel. To use the earliest versions, you had to cross-compile from UNIX or MINIX, and add software to taste. In a sense, it was like Linux From Scratch minus the good manual.
Softlanding would become the basis of the modern Linux distribution.
At this point in writing the article, a bit of fact-checking led me to stumble upon MCC Interim Linux, which Wikipedia claims is the first Linux distribution, as it predates SLS. This is technically true, but MCC Interim Linux didn't offer a package manager or much in terms of add-on software.
In that regard, SLS was much closer to what a "modern" Linux distribution provides than MCC Interim was. That being said, it might be worth diving into, in addition to the early boot-disk versions of Linux, in a later article.
What was most astounding about this project was the utter lack of information about Softlanding itself. From the README file, Softlanding came with a 60-page manual and was available on both 3.5" and 5.25" floppy disks, CD-ROM, and QIC-40 tape. Copies were also available through FTP and various bulletin board systems. The last release survives only in that downloadable form.
Reading through the sparse README files, Softlanding primarily billed itself as providing "soft landings for DOS bailouts", and a lot of the default software load seems aimed at converting DOS users. As I would find out, while Softlanding did provide X, it was in such poor shape that it really couldn't be said to be competitive with Windows of the time period.
SLS has a reputation for being rather buggy, and Debian and Slackware were both started out of frustration with the initial project. Nonetheless, it is best described as the Ubuntu of its era for its relative ease of use and focus on being user-friendly. As I found, Softlanding also put a fair bit of effort into making Linux easier to use, with its mesh shell, built-in emulators, and rather complete set of software for the era.
That being said, SLS died for a reason. To understand that reason, I needed to get it installed. To get it installed, I needed working media.
One rather curious fact about the digital download version of Softlanding is that it isn't in the form of disk images. Instead, it's a set of files complying with the 8.3 DOS naming scheme of the era. Only the initial boot disk, A1, is provided as a raw image.
The README file goes into more detail, and as far as I can tell, the intent was to allow people to download what they need and then use DOS to create the disks. This is understandable: even in an era when floppies were common, 31 floppies is a bit of a tall order. To put that in context, DOS 6.2 and Windows 3.1 shipped on 3 and 6 floppies respectively. Windows 95 itself was available on 13.
I had to convert these into a more usable form. My solution was a simple Python script that created each high-density floppy disk image, formatted it with a FAT12 filesystem, and then mcopy'd the files on. This removed what would have otherwise been a rather tedious process.
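The approach looks roughly like the sketch below, built on GNU mtools (mformat and mcopy). This is my own minimal reconstruction, assuming one directory of downloaded files per disk under sls/; it is not the original script:

    #!/usr/bin/env python3
    # Rebuild SLS floppy images from the 8.3-named download directories.
    # Assumes GNU mtools is installed; the sls/a1, sls/a2, ... layout is illustrative.
    import subprocess
    from pathlib import Path

    FLOPPY_BYTES = 1474560  # one 1.44 MB high-density 3.5" floppy

    def build_disk(src_dir, image):
        # Create a blank file exactly the size of one HD floppy.
        image.write_bytes(b"\x00" * FLOPPY_BYTES)
        # Lay down a FAT12 filesystem; "-f 1440" selects the 1.44 MB geometry.
        subprocess.run(["mformat", "-i", str(image), "-f", "1440", "::"], check=True)
        # Copy every downloaded file into the image's root directory.
        for f in sorted(src_dir.iterdir()):
            subprocess.run(["mcopy", "-i", str(image), str(f), "::"], check=True)

    for disk in sorted(Path("sls").iterdir()):
        if disk.is_dir():
            build_disk(disk, Path(disk.name + ".img"))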
The next step was figuring out how to run this. I knew from the beginning that I wanted to get X going, and I knew that would be a challenge in and of itself. I also knew that early Linux couldn't use LBA addressing and needed some BIOS support to find the hard disk. Sarah Walker's PCem is my usual weapon of choice when I need era-correct emulation. It also has the advantage of running relatively close to the original system's speed, so I could get a good idea of what performance was actually like.
I started with an Award-BIOS 486DX clone and 8 MiB of memory. Since PCem emulates a full system, and Linux was dependent on CHS addressing, I had to set up the drive parameters manually in the BIOS. I also needed to set the clock back: I had found warnings of known issues with SLS for years past 1996, due to faulty leap-year calculations. With the BIOS set, I was ready to go.
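As an aside, CHS addressing is why machines of this vintage topped out at about 504 MiB per disk: the intersection of the BIOS and IDE geometry limits allows at most 1024 cylinders, 16 heads, and 63 sectors per track. The arithmetic, in a line of Python:

    # classic CHS ceiling: 1024 cylinders x 16 heads x 63 sectors x 512 bytes/sector
    print(1024 * 16 * 63 * 512)  # 528482304 bytes, i.e. exactly 504 MiB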
I popped disk A1 into my emulated floppy drive and was rewarded with the LILO prompt!
The A1 disk is actually a full live system, and even goes as far as providing "root" and "install" users depending on what you need to do. Some handy notes on SLS told me I needed to partition the disk, and that meant fdisk.
Unfortunately, my 2020 Linux brain forgot that it was common practice at the time to use a swap partition and more than one partition for the filesystem. Instead, I just made one single large root partition and called it good. This didn't prove to be a problem in practice, but it does highlight a problem with SLS: fdisk isn't exactly an intuitive step, and even DOS 5 and 6 will both offer to partition a hard disk graphically if needed.
Another quirk that tripped me up is that the utility for formatting a partition is mke2fs. This might be because ext2 was then new and the default filesystem was still Minix, though I attribute the confusion more to not having the manual and to the 26 years that have passed since this software was released. With a tap of the power button, I was now ready to install Softlanding.
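For anyone retracing these steps, the whole dance amounts to something like this (a sketch from memory rather than a capture of the actual session; /dev/hda is the first IDE disk):

    fdisk /dev/hda      # n = new partition, t + 83 = Linux type, w = write and quit
    mke2fs /dev/hda1    # format the new first partition as ext2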
Softlanding's installation process is clean and straight to the point; it's better than some other distributions even in 2020. Tap the number, follow the directions, and go. The only hiccup was that creating a swap file failed, but the install soldiered on regardless.
That isn't to say that it went without issue. There's the fundamental problem that I was still stuck feeding it 31 floppies. Part of the problem is that while Linux 0.99 does in fact support CD-ROMs, it doesn't support ATAPI-based drives; as far as I can tell, it only supports Creative SoundBlaster CD-ROM controllers and SCSI ones, and I couldn't get SCSI to work at all. The install was mostly 30 minutes of me browsing the Internet and changing disks from time to time.
That is, until the installer suddenly cried out that it couldn't find X2.
X1 had been happily grinding away when the message popped up on screen that X2 couldn't be found. Inserting X2 and retrying got the install going again, but the incident would foreshadow what happened at the end of installation. I'm still not sure what happened here, but I do suspect it was one of SLS's packaging bugs coming to the forefront.
One of the largest issues with SLS was that it was just flat-out buggy, and I suspect the notice for X2 was just that: a bug. It wouldn't be the last.
Towards the end of installation, SLS prompts you to create a boot disk, and then installs LILO to the hard disk. It also asks if you want to set up dual-boot with DOS. This is pretty standard for operating systems of this vintage, and I didn't think much about it at the time. What I didn't notice was the mistake in how SLS installs LILO. This would be the first major footgun I ran into.
Unaware that there was a lurking time bomb, I removed the last floppy, rebooted the system, and was greeted with a system hang.
What had happened was a perfect storm of failure. Normally, when faced with a non-bootable disk, the BIOS should give the typical "Non-system disk" message, or dump to BASIC, depending on the vintage. However, for whatever reason, I ended up with a flashing prompt. I was well aware that LILO would sometimes have issues booting from the hard disk, so I didn't initially give it a second thought. I had the boot disk SLS had made during installation, and that allowed me to start up Linux.
I should have taken a closer look at what SLS had actually done. It wasn't until much later that I pieced together the series of events that had taken place.
Let's step back for a moment and talk about how a PC boots. In its most basic form, booting from either a floppy disk or a hard disk is done by the BIOS loading the first sector of a given disk into memory, then executing it. On PCs, sectors are typically 512 bytes, and this first sector forms what's known as the Master Boot Record.
Besides the initial bootstrap code, the MBR also contains the partition map and some information on how the disk was formatted. However, 512 bytes is a bit too small to do anything useful, which is where the Partition Boot Record comes into play. The PBR is a secondary holding area for bootloader code, and the PBR of whichever partition is marked 'active' is what gets loaded at startup. Microsoft's DOS MBR uses the PBR to load the rest of DOS and then, eventually, COMMAND.COM. This is a fairly well-documented process, but it does mean your MBR and PBR must agree on how the system is started.
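To make the layout concrete, here's a small sketch (my own illustration, not part of any SLS tooling) that reads the first sector of a disk image and reports the boot signature and partition entries:

    import struct

    def inspect_mbr(path):
        # Read the first 512-byte sector: bootstrap code, partition table, signature.
        with open(path, "rb") as f:
            mbr = f.read(512)
        # The last two bytes must be 0x55 0xAA, or the BIOS won't boot the disk.
        print("boot signature ok:", mbr[510:512] == b"\x55\xaa")
        # Four 16-byte partition entries start at offset 446.
        for i in range(4):
            entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
            status, ptype = entry[0], entry[4]
            lba_start, sectors = struct.unpack_from("<II", entry, 8)
            if ptype:
                flag = "active" if status == 0x80 else "inactive"
                print("partition %d: type 0x%02x, %s, starts at LBA %d, %d sectors"
                      % (i + 1, ptype, flag, lba_start, sectors))

    inspect_mbr("sls.img")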
It also means that an MBR had to be installed in the first place. I had started with a blank hard disk, which meant there was no MBR: fdisk had written a partition table, but the actual bootstrap code portion was still NULLed out. What I hadn't noticed was that SLS had installed LILO to /dev/hda1, i.e., the partition boot record. This meant there was no MBR code to start the system, leading to the hang.
In general, I find PBR-based booting unreliable at best, and this is compounded by the fact that Microsoft has a very bad habit of trashing boot code. My fix was simply to change lilo.conf and then re-run lilo to re-install to the MBR. This let me boot from the hard disk!
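The relevant knob is the boot= line in /etc/lilo.conf: point it at the whole disk rather than a partition, and lilo writes the MBR. Something like this (a sketch from memory, not copied from the installed system):

    # /etc/lilo.conf
    boot = /dev/hda       # whole disk: install LILO to the MBR
                          # (SLS had used /dev/hda1, the PBR)
    image = /vmlinuz
        label = linux
        root = /dev/hda1
        read-only

    # then re-run "lilo" to write the new boot sector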
With that interlude aside, it was time to actually take a closer look at the Softlanding System itself.
Softlanding only provides root as a default user, with no password. After logging in, I'm greeted by this login banner:
Softlanding Software (604) 592-0188, gentle touchdowns from DOS bailouts. Welcome to Linux SLS 1.05. Type "mesh" for a menu driven interface. Fresh installations should use "syssetup" to link X servers, etc.
The phrase "softlanding for DOS bailouts" appears on most of SLS's media, and from what I can tell, SLS was intended to be that: a better DOS. This becomes very clear when we follow the instructions to load 'mesh'.
If this looks familiar, it probably because you're familiar with Norton Commander for DOS or one of its clones
mesh is entirely something Softlanding cooked up for SLS. Source code isn't provided, and its LICENSE file states it can only be distributed with SLS. One thing though has to be said is that I have to give SLS props here, this is a really good way to help users soft land from DOS. Norton Commander was exceptionally popular with DOS, and I even remember it holding in until Windows 95. By giving the console a decent UI with familiar functionality, you've basically eliminated an entire cliff in migrating.
I do wonder how much Softlanding was trying to mimic DOS. At no point did the installer or any official instructions tell me to make another user. Although even in the era, running as root 24/7 would have been a bad practice, it would have made Linux resemble DOS a lot more than it did out of the box. Once again, I don't have the manual so I have no real idea how much any of this is intentional.
However, one thing is noticeable is that the default software load is very much tailored to help those migrating from DOS.
The first notable addition to the party is the joe editor. For those not familiar with joe, it's a clone of Wordstar. For those not familiar with Wordstar, it was the emacs to WordPerfect's vi.
NOTE: I'm not apologizing for the above.
Joking aside, WordStar has a rather diehard userbase and there are quite a few writers who still get by on the old CP/M and DOS-based versions of WordStar. Including joe as a default editor in addition to the more common vi and emacs would help those familiar with WordStar make the migration a bit easier.
Ease of migration from DOS also shows up with DOSemu, which is included in the box. DOSemu, as the name suggests, is a full-featured DOS emulator. It can work either with an existing DOS partition or with a micro hdimage; in either case, you need an actual copy of DOS to use it. Linux directories can be mapped into the emulator via the LREDIR command, and a compatibility list is provided.
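From DOSemu's documentation as I remember it (the drive letter and path here are illustrative), redirecting a Linux directory looks something like this at the emulated DOS prompt, after which D: mirrors /home/user on the Linux side:

    C:\> LREDIR D: LINUX\FS\home\user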
WordPerfect loaded up just fine. DOOM crashed the emulator. Windows 3.0, interestingly, is marked as "working in real mode", but trying to install it just leads to a hang. Most Linux users have probably realized that, up to this point, I've been tap-dancing around a rather large pain point of early Linux.
NOTE: Still not apologizing for that joke.
If you watched the video, you might have seen the rather large failure montage that went with my attempts to get X up and running.
Let me be rather blunt about this: X was the reason that Linux on the desktop was a fracking disaster throughout the '90s and early 2000s. The problem isn't with the X protocol or design; it's entirely with the driver stack. While the X shipped with SLS 1.0.5 may get a partial pass because it predates the VESA VBE BIOS extensions, X was an utter nightmare up until Xorg finally plastered over most of the bullshit with working autoconfiguration.
A lot of people are going to yell at me and say, "Oh, but graphics card vendors didn't publish docs." Maybe that's true, but even in cases where there was a working X server, you still had to do a lot of manual configuration to get it working. Red Hat specifically worked to get drivers made available as free software when possible; my ThinkPad's NeoMagic card has a driver due to these efforts. Remember, back in this time period, there was more choice than AMD, NVIDIA, or Intel GMA. It gets worse, if you can believe it: X also requires timing information relating to refresh rates, and other arcane bullshit that no other operating system needs. EDID, which lets the monitor report this information itself, initially appeared in 1994 and was better standardized by 1996.
Let me give you an example of an X modeline:
Modeline syntax:
    Modeline "label" pclk hdisp hsyncstart hsyncend htotal vdisp vsyncstart vsyncend vtotal [flags]

Flags (optional): +HSync, -HSync, +VSync, -VSync, Interlace, DoubleScan, CSync, +CSync, -CSync

Example:
    Modeline "1600x1200" 155 1600 1656 1776 2048 1200 1202 1205 1263
    #        (label = x/y resolution) (155 = pixel clock in MHz)
    #        (remaining fields = horizontal and vertical sync timings)
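Every one of those numbers had to be right, by hand. For what it's worth, the vertical refresh rate falls out of the timings as the pixel clock divided by the total pixels per frame; for the example above:

    # refresh = pixel clock / (htotal * vtotal)
    print(155000000 / (2048 * 1263))  # ~59.9 Hz vertical refresh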
This is utter bull. Games that talked to the hardware directly, such as the DOS-based DOOM or Duke Nukem 3D, could get 800x600 or better; Duke3D could theoretically go as high as 1600x1200. Windows 3.0 and 3.1, by comparison, were archaic because they required you to run SETUP to set the graphics mode and to have a driver for your specific card.
Linux ran on 386 and higher processors. Virtual 8086 Mode was there; a 16-bit BIOS driver is not an excuse for why this was so bad. Even if we jump ahead to 1998, when 32-bit VBE was standardized, it still. didn't. work.
A 486 had enough horsepower to do unaccelerated X with something like fvwm without issue. While SLS had some specific pain points related to X, this mess lasted well into the 2000s; KNOPPIX was the first time I specifically remember X autodetection having a semi-decent chance of working. Most X applications on Linux were ports of software originally written for UNIX, and a lot of this software assumed resolutions higher than 640x480. While X can run at standard VGA resolution, the default config was entirely broken and got set to virtual-desktop mode, which is basically unusable. Even when I fixed the resolution, a lot of apps would generate oversized windows because they didn't expect non-workstation-style monitors.
I eventually got X working, by getting lucky in finding the right README and a graphics card that PCem can emulate. That moment of joy lasted right up until I found that the mouse didn't work. This was primarily because I was too used to modern Linux distros, which either set Xconfig correctly or have working autodetection. Mouse and X driver configuration is handled by syssetup, but what it does is non-obvious, incorrect, and rather silly. When you select either the X-Window option or the Mouse option, it helpfully prompts you to select a mouse driver and seemingly sets everything up for you. Unless you're using a Microsoft 2-button serial mouse, though, the default settings won't work: syssetup only changes the /dev/mouse link to point at the correct device. It doesn't edit Xconfig or prompt you to do so. I can only assume this was covered in the missing manual, but behavior like this would not have reflected well on SLS, and is likely part of why it was considered so buggy.
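In other words, syssetup does only the first half of the job. Roughly (a sketch; the actual device depends on your mouse):

    # all syssetup actually does: repoint the convenience symlink
    ln -sf /dev/ttyS0 /dev/mouse   # e.g., a serial mouse on the first serial port
    # the second half is on you: edit Xconfig by hand so the X server's
    # pointer entry names the matching protocol and device (e.g., /dev/mouse)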
With a working mouse driver, I could finally use X.
SLS's default environment is the tried-and-true fvwm, which is essentially a visual clone of Motif. Unfortunately, even when working, X, at least as shipped by SLS, is not great. Many of the menus have links to broken or missing applications. This can even be seen in the quick-launch bar at the bottom, which references file shares on machines that don't exist.
SLS was pretty innovative for its time. Compared even to modern Linux distributions, it's relatively straightforward, and it's better than a lot of software of the era. While Linux was still a newborn project, it was already making a lot of strides as a stable and useful workstation and server host, and driver compatibility quickly improved as Red Hat and other companies began to involve themselves in Linux.
As a replacement for DOS, it fulfills that role well. Since a full set of development tools came in the box, including GCC and Smalltalk, it was also pleasant to use as a hobbyist's or developer's system. I can't find much about Softlanding Software itself, but I get the impression it was a very small company at best. One thing I will note is that, compared to Debian or Slackware of this era, SLS is both simpler to set up and relatively easy to use.
While Microsoft basically forced everyone out of the market through OEM agreements, had it not been for the aforementioned issues, Linux could have been a more serious competitor on the desktop in the days before the Microsoft monopoly was fully formed. I would remind people that companies like Caldera and Corel made quite a few efforts in this space throughout the mid-to-late '90s. I can't say that Slackware or Debian, even now, put much stock in having an easy migration path from Windows.
SLS, on the other hand, provided decent online help: for example, "install.info" on the install disk gives you step-by-step help for every aspect of installation, and mesh helped users migrating from DOS and Norton Commander-like shells. I can't blame SLS for the disaster that was XFree86, but it didn't help matters either.
I apologize if the above is a bit of a rant, but "usability" really wasn't a focus throughout the free software ecosystem until Ubuntu tried with the release of Warty Warthog. I do want to explore more in this space, and I'll likely be digging out Yggdrasil, early Slackware and Debian, as well as the BSDs, for test drives to document the history. Suggestions are welcome on what to try out!
Normally, I'd end with a teaser on what's coming up next; specifically, I want to explore more of the graphical and networking aspects. However, during the video, I said that if we reached 250 subscribers, I'd do a special. At the time of recording, I was at 150 subs and figured I had a few weeks or months before we hit that threshold. In the 24-ish hours between posting the video and now, my channel has grown to nearly 300.
For those who didn't watch the video, you might be wondering what that special is.
It's mastering SLS to actual floppy disks, and seeing if I can get SLS installed on real hardware from 1997, three years after SLS. I did a video about this ThinkPad and its history, including dumping its HDD via serial, but never did a writeup, as all I actually did was install a RAM upgrade.
This might not sound super interesting, but I already know I'm going to have to write some kernel patches just to get a basic installation going. As of the time of writing, I've also had to partially rewrite the ATA driver. Theories are welcome as to why!
I already expected this specific failure, but I suspect I'm going to have more surprises on real hardware. I don't know if X will be possible. As far as I know, this laptop is entirely ISA-based; no PCI. According to ThinkWiki, the graphics chip is a NeoMagic MagicGraph 128V. This was a bit of a surprise, as I thought this laptop had the more common Cirrus Logic chips that were prevalent throughout the '90s. This chip didn't exist until 1996-1997, and while there's an X driver available, I'm not entirely sure I can port the SVGA XFree86 driver to run on it.
Until the next time, NCommander, signing off ...
Facebook to let users turn off political adverts:
Facebook boss Mark Zuckerberg says users will be able to turn off political adverts on the social network in the run-up to the 2020 US election.
In a piece written for USA Today newspaper, he also says he hopes to help four million Americans sign up as new voters.
Facebook has faced heavy criticism for allowing adverts from politicians that contain false information.
Rival social platform Twitter banned political advertising last October.
"For those of you who've already made up your minds and just want the election to be over, we hear you -- so we're also introducing the ability to turn off seeing political ads," Mr Zuckerberg wrote.
Facebook and its subsidiary Instagram will give users the option to turn off political adverts when they appear or they can block them using the settings features.
Users who have blocked political adverts will also be able to report them if they continue to appear.
The feature, which will start rolling out on Wednesday, allows users to turn off political, electoral and social issue adverts from candidates and other organisations that have the "Paid for" political disclaimer.
The company said it plans to make the feature available to all US users over the next few weeks and will offer it in other countries this autumn.
The DOJ is proposing scaling back protections for large social media companies outlined in the 1996 Communications Decency Act. Section 230 of the act states:
no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This has protected the platforms from liability over user-generated content through the years and enabled the incredible growth of social media. An executive order signed last month directed the FCC to review whether social media companies' "actions to remove, edit or supplement users' content" invalidated the protections they enjoy from liability. It seems we have an answer:
In a press release, the Justice Department said that the past 25 years of technological change "left online platforms unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability."
The new rules will be aimed at "incentivizing platforms to address the growing amount of illicit content online," the department said; the revisions will also "promote free and open discourse online," "increase the ability of the government to protect citizens from unlawful conduct," and promote competition among Internet companies.
In announcing the [requested] changes to the 26-year-old rules on Wednesday, Attorney General William Barr said: "When it comes to issues of public safety, the government is the one who must act on behalf of society at large."
"Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do," he said.
The full review of section 230 by the DOJ is available here. Key Takeaways and Recommendations are here.
Also at: Justice Department proposes major overhaul of Sec. 230 protections