Brain research sheds light on the molecular mechanisms of depression:
Researchers of the national Turku PET Centre have shown that the opioid system in the brain is connected to mood changes associated with depression and anxiety.
Depression and anxiety are typically associated with lowered mood and decreased experience of pleasure. Opioids regulate the feelings of pain and pleasure in the brain. The new study conducted in Turku shows that the symptoms associated with depression and anxiety are connected to changes in the brain's opioid system already in healthy individuals.
"We found that the more depressive and anxious symptoms the subjects had, the fewer opioid receptors there were in their brain."
[...] These results show that the mood changes indicating depression can be detected in the brain already early on.
Journal Reference:
Lauri Nummenmaa, Tomi Karjalainen, Janne Isojärvi, et al. Lowered endogenous mu-opioid receptor availability in subclinical depression and anxiety, Neuropsychopharmacology (DOI: 10.1038/s41386-020-0725-9)
[20200618_214854 UTC Update: yes, some of these pictures are... large. Placed in <spoiler> tags for now; click each one to see/hide the picture. --martyb]
Well, here we go again! Coming off the Novell NetWare experience, I had intended to go straight into Windows NT. After two attempts at shooting a video and much swearing, I decided to shelve that project for the moment. Furthermore, a lot of the feedback from my previous articles talked about early Linux.
That gave me a thought. Why not dig out the grandfather of modern Linux distributions and put it on camera? Ladies and gentlemen, let me introduce you to the Softlanding Linux System, complete with XFree86 1.2!
Honestly, there's a lot of good and bad to say about Softlanding Linux. While SLS is essentially forgotten, its legacy birthed the concept of the Linux distribution, and its bugginess led to the creation of both Slackware and Debian. It also made me remember a lot of the bad that came with Linux of this era.
Assuming the summary hasn't scared you off, get ready to write your Xconfig, strap in your Model Ms, and LOADLIN your way below the fold!
I'm pretty sure we all know the early story of Linux, and the post to comp.os.minix that started it all, but just in case:
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
Linus's hobbyist project would quickly become a cornerstone of the free software movement, the beginning of the end of commercial UNIX, and eventually, the core of what would power all Android smartphones today. Not bad for something that was supposed to only run on a 386, right?
Linus's Linux would go through several releases, and it quickly became popular because it was a UNIX-compatible kernel that wasn't tainted by AT&T code. Linux came at exactly the right moment, when the world was looking for a free operating system, especially after Linus embraced the GPLv2.
To understand the implications, we need to step back for a moment. Richard Stallman, of the Free Software Foundation, was still putting the pieces together to create a free-as-in-speech operating system. This project was known as GNU (a recursive acronym: "GNU's Not Unix"), noting that while the tools and system were UNIX-like, they had no code from Bell Labs and were freely available under the terms of the General Public License, or GPL.
GNU was intended to form a full operating system, but one critical component, the kernel, was missing.
In 1991, the BSD flavors of UNIX were tied up in the USL v. BSDi lawsuit, and 386BSD was still months away from its first public release. At the time, it was unclear if the BSD-derived operating systems would ever get clear of the taint of AT&T code.
Stallman and the Free Software Foundation had instead embraced the Mach microkernel. At the time, microkernels were seen as the "future" of software development. This led to the creation of Hurd, with development beginning in 1990. However, design flaws inherent to Mach prevented Hurd from being usable in any meaningful way. In short, the project was stalled, and it was unclear if the Mach pit could ever be climbed out of. It was still possible to use components of the GNU system, such as bash, on commercial UNIX systems and on MINIX.
MINIX was the closest thing to a usable "free" operating system at the time. Created by Andrew S. Tanenbaum as a teaching example, MINIX was a small microkernel system that was system-call compatible with the 7th Edition of UNIX. However, its origin as a teaching tool presented a problem: Tanenbaum's publisher was unwilling to allow MINIX to be freely distributed, so a licensing fee of roughly $70 was attached, with source code included. MINIX did have the advantage that it ran on commodity x86 hardware, and it had a small but devoted cult following. The end result was a UNIX-like operating system with source available.
Linux, on the other hand, was not only freely distributable, it was also as close to a drop-in replacement for MINIX as an end user was concerned. MINIX's userbase quickly abandoned the platform for Linux as it matured, and its few remaining users migrated to 386BSD or one of its descendants. However, Linux was not a complete system in and of itself; it was just a kernel. To use the earliest versions, you had to cross-compile from UNIX or MINIX and add software to taste. In a sense, it was like Linux From Scratch minus the good manual.
Softlanding would become the basis of the modern Linux distribution.
At this point, I was continuing to write this article when, after a bit of fact-checking, I stumbled upon MCC Interim Linux, which Wikipedia claims is the first Linux distribution as it predates SLS. This is technically true, but MCC Interim Linux didn't offer a package manager or much in terms of add-on software.
In that regard, SLS was much closer to what a "modern" Linux distribution provides than MCC Interim. That being said, MCC Interim might be worth diving into, along with the early boot disk versions of Linux, in a later article.
What was most astounding about this project was the utter lack of information about Softlanding. From the README file, Softlanding came with a 60-page manual and was available on 3.5" and 5.25" floppy disks, CD-ROM, and QIC-40 tape. Copies were also available through FTP and various bulletin board systems. The last release survives only in that downloadable form.
Reading through the sparse README files, Softlanding primarily billed itself as providing "soft landings for DOS bailouts", and a lot of the default software load seems aimed at converting DOS users. As I would find out, while Softlanding did provide X, it was in such poor shape that it really couldn't be said to be competitive with Windows of the time period.
SLS has a reputation for being rather buggy, and both Debian and Slackware were started out of frustration with it. Nonetheless, it was best described as the Ubuntu of its era in its relative ease of use and focus on being user friendly. As I found, Softlanding also put a fair bit of effort into making Linux easier to use, with its mesh shell, built-in emulators, and rather complete set of software for the era.
That being said, SLS died for a reason. To understand that reason, I needed to get it installed. To get it installed, I needed working media.
One rather curious fact about the digital download version of Softlanding is that it isn't in the form of disk images. Instead, it's a set of files complying with the 8.3 DOS naming scheme of the era. Only the initial boot disk, A1, is provided as a raw image.
The README file goes into more detail, and as far as I can tell, the intent was to allow people to download only what they needed and then use DOS to create the disks. This is understandable: even in an era when floppies were common, 31 floppies is a bit of a tall order. To put that in context, DOS 6.2 and Windows 3.1 shipped on 3 and 6 floppies respectively. Windows 95 itself was available on 13.
I had to convert these into a more usable form. My solution was a small Python script that created each high-density floppy disk image, formatted it with a FAT12 filesystem, and then used mcopy to copy the files onto it. This removed what would have otherwise been a rather tedious process.
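For the curious, a minimal sketch of that kind of script, using GNU mtools (mformat and mcopy) so no loopback mounts are needed, looks something like this. The directory layout and file names here are assumptions for illustration, not the exact script I used:

import subprocess
from pathlib import Path

# Each SLS disk directory (a1, a2 ... x1, x2 ...) becomes one 1.44 MB floppy image.
# Assumes GNU mtools (mformat, mcopy) is installed and on the PATH.
SRC = Path("sls105")        # downloaded 8.3-named files, one subdirectory per disk
OUT = Path("images")
OUT.mkdir(exist_ok=True)

for disk in sorted(d for d in SRC.iterdir() if d.is_dir()):
    img = OUT / (disk.name + ".img")
    img.write_bytes(b"\x00" * 1474560)                        # blank 1.44 MB image
    subprocess.run(["mformat", "-i", str(img), "-f", "1440", "::"], check=True)
    for f in sorted(disk.iterdir()):                          # copy this disk's files on
        subprocess.run(["mcopy", "-i", str(img), str(f), "::"], check=True)
    print("built", img)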
The next step was figuring out how to run this. I knew from the beginning that I wanted to get X going, and I knew that would be a challenge in and of itself. I also knew that early Linux couldn't use LBA addressing and needed some BIOS support to find the hard disk. Sarah Walker's PCem is my usual weapon of choice when I need era-correct emulation. It also has the advantage of running relatively close to the original system speed, so I could get a good idea of what performance was actually like.
I started with an Award 486DX clone and 8 MiB of memory. Since PCem emulates a full system, and Linux was dependent on CHS addressing, I had to set up the drive parameters manually in the BIOS. I also needed to set the clock back: I had found warnings about known issues with SLS and years past 1996, due to faulty leap-year calculations. With the BIOS set, I was ready to go.
I popped disk A1 into my emulated floppy drive and was rewarded with the LILO prompt!
The A1 disk is actually a full live system, and even goes as far as providing "root" and "install" users depending on what you need to do. Some handy notes on SLS told me I needed to partition the disk, and that meant fdisk.
Unfortunately, my 2020 Linux brain at this point forgot that it was common practice at the time to use a swap partition and more than one partition for the filesystem. Instead, I just made one single large root partition and called it good. This didn't prove to be a problem in practice, but it does highlight a problem with SLS: fdisk isn't exactly an intuitive step, and even DOS 5 and 6 will offer to partition a hard disk through a menu if needed.
Another quirk that tripped me up is that the utility to format a partition is mke2fs; this might be because ext2 was then new and the default filesystem was still Minix. That confusion I attribute more to not having the manual, plus the 26 years since this software was released. With a tap of the power button, I was ready to install Softlanding.
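For anyone following along at home, the whole partition-and-format step boils down to something like the following (the device name assumes the single IDE drive described above, and a sufficiently old mke2fs may also want the partition size in blocks as a final argument):

fdisk /dev/hda      # create one primary Linux partition, mark it active, write the table
mke2fs /dev/hda1    # format the new partition as ext2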
Softlanding's installation process is clean and straight to the point; it's better than some other distributions even in 2020. Tap the number, follow the directions, and go. The only hiccup was that creating a swap file failed, but the install soldiered on regardless.
That isn't to say that it went without issue. There's the fundamental problem that I was still stuck feeding it 31 floppies. Part of the problem is that while Linux 0.99 does in fact support CD-ROMs, it doesn't support ATAPI-based drives; as far as I can tell, it only supports Creative SoundBlaster CD-ROM controllers and SCSI ones, and I couldn't get SCSI to work at all. The install was mostly 30 minutes of me browsing the Internet and changing disks from time to time.
That is, until the installer suddenly cried out that it couldn't find X2.
X1 had been happily grinding away when the message popped up on screen that X2 couldn't be found. Actually inserting X2 and retrying got the install going again, and this would foreshadow what happened at the end of installation. I'm still not sure what happened here, but I do suspect it was one of SLS's packaging bugs coming to the forefront.
One of the largest issues with SLS was that it was just flat-out buggy, and I suspect the notice for X2 was just that: a bug. It wouldn't be the last.
Towards the end of installation, SLS prompts you to create a boot disk and then installs LILO to the hard disk. It also asks if you want to set up dual-boot with DOS. This is pretty standard for operating systems of this vintage, and I didn't think much about it at the time. What I didn't notice was the mistake in how SLS installs LILO. This would be the first major footgun I ran into.
Unaware that there was a lurking time bomb, I removed the last floppy, rebooted the system, and was greeted with a system hang.
What had happened was a perfect storm of failure. Normally, when faced with a non-bootable disk, the BIOS should give the typical "Non-system disk" message, or dump to BASIC depending on the vintage. However, for whatever reason, I ended up with a flashing prompt. I was well aware that LILO would sometimes have issues booting from the hard disk, so I didn't initially give it a second thought. I had the boot disk SLS had made during installation, and that allowed me to start up Linux.
I should have taken a closer look at what SLS had actually done. It wasn't until much later that I pieced together the series of events that had taken place.
Let's step back for a moment and talk about how a PC boots. In the most basic form, booting from either a floppy disk or a hard disk is done by BIOS loading the first sector of a given disk into memory, then executing it. On PCs, sectors are typically 512 bytes, and this forms what's known as the Master Boot Record.
Besides the initial bootstrap code, the MBR also contains the partition map and some information on how the disk was formatted. However, 512 bytes is a bit too small to do anything useful, and this is where the Partition Boot Record comes into play. The PBR is a secondary holding area for bootloader code; it is what the MBR code loads at startup from whichever partition is marked 'active'. Microsoft's DOS MBR uses the PBR to load the rest of DOS and eventually COMMAND.COM. This is a fairly well-documented process, but the catch is that your MBR and PBR must agree on how the system is started.
It also means that an MBR has to be installed in the first place. I had started with a blank hard disk, which meant there was no MBR. fdisk had written a partition table, but the actual bootstrap code portion was still NULLed out. What I hadn't noticed was that SLS had installed LILO to /dev/hda1, the partition boot record. That meant there was no MBR bootstrap to start the system, leading to the hang.
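You can see this for yourself by peeking at the first sector of the disk image. The layout below is the standard MBR structure (bootstrap code in bytes 0-445, four 16-byte partition entries starting at offset 446, and the 0x55AA signature at offset 510); the image file name is just a placeholder:

# Minimal MBR inspector: shows an empty bootstrap area like the one that hung my install.
with open("sls-hd.img", "rb") as f:        # placeholder image name
    mbr = f.read(512)

print("boot signature:", hex(mbr[510]), hex(mbr[511]))   # 0x55 0xaa if a valid MBR is present
print("bootstrap code:", "all zeros" if not any(mbr[:446]) else "present")

for i in range(4):                          # four partition table entries, 16 bytes each
    entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
    if entry[4]:                            # byte 4 is the partition type; 0 means unused
        state = "active" if entry[0] == 0x80 else "inactive"
        print("partition %d: type 0x%02x, %s" % (i + 1, entry[4], state))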
In general, I find PBR-based booting rather unreliable at best, and this is compounded by the fact that Microsoft has a very bad habit of trashing boot code. My fix was simply to change lilo.conf and re-run lilo so it installs to the MBR instead. This let me boot from the hard disk!
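The change itself is a one-liner: point LILO at the whole disk rather than the partition. Something along these lines, with the kernel path and label being illustrative rather than SLS's exact stock configuration:

# /etc/lilo.conf
boot=/dev/hda       # was /dev/hda1; write the loader to the MBR instead of the PBR
image=/vmlinuz      # kernel to boot
  root=/dev/hda1    # root filesystem
  label=linux

Re-running lilo then writes the new boot sector, and the hang goes away.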
With that interlude aside, it was time to actually take a closer look at the Softlanding System itself.
Softlanding only provides root as a default user, with no password. After logging in, I'm greeted by this login banner:
Softlanding Software (604) 592-0188, gentle touchdowns from DOS bailouts. Welcome to Linux SLS 1.05. Type "mesh" for a menu driven interface. Fresh installations should use "syssetup" to link X servers, etc.
The phrase "soft landings for DOS bailouts" appears on most of SLS's media, and from what I can tell, SLS was intended to be exactly that: a better DOS. This becomes very clear when we follow the instructions and load 'mesh'.
If this looks familiar, it's probably because you know Norton Commander for DOS or one of its clones.
mesh is entirely something Softlanding cooked up for SLS. Source code isn't provided, and its LICENSE file states it can only be distributed with SLS. One thing has to be said, though: I have to give SLS props here, because this is a really good way to help users soft-land from DOS. Norton Commander was exceptionally popular on DOS, and I even remember it holding on until Windows 95. By giving the console a decent UI with familiar functionality, you've basically eliminated an entire cliff in the migration.
I do wonder how much Softlanding was trying to mimic DOS. At no point did the installer or any official instructions tell me to make another user. Running as root 24/7 would have been bad practice even in that era, but it did make Linux resemble DOS a lot more out of the box. Once again, I don't have the manual, so I have no real idea how much of this was intentional.
One other noticeable thing is that the default software load is very much tailored to help those migrating from DOS.
The first notable addition to the party is the joe editor. For those not familiar with joe, it's a clone of WordStar. For those not familiar with WordStar, it was the emacs to WordPerfect's vi.
NOTE: I'm not apologizing for the above.
Joking aside, WordStar has a rather diehard userbase and there are quite a few writers who still get by on the old CP/M and DOS-based versions of WordStar. Including joe as a default editor in addition to the more common vi and emacs would help those familiar with WordStar make the migration a bit easier.
Ease of migration from DOS also shows up with DOSemu, which is included in the box. DOSemu, as the name suggests, is a full-featured DOS emulator. It can work either with an existing DOS partition or with a small hdimage file. In either case, you need an actual copy of DOS to use it. Linux directories can be mapped into the emulator via the LREDIR command, and a compatibility list is provided.
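If memory serves, mapping a Linux directory to a DOS drive letter inside DOSemu looked roughly like this; treat the exact path as an example rather than gospel:

C:\> LREDIR D: LINUX\FS\home\user

That exposes /home/user to DOS programs as drive D:.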
WordPerfect loaded up just fine. DOOM crashed the emulator. Windows 3.0, interestingly, is marked as "working in real mode", but trying to install it just leads to a hang. Most Linux users might be realizing that up to this point I've been tap-dancing around a rather large pain point of early Linux.
NOTE: Still not apologizing for that joke.
If you watched the video, you might have seen the rather large failure montage that went with my attempts to get X up and running.
Let me be rather blunt about this. X was the reason that Linux on the desktop was a fracking disaster throughout the 90s and early 2000s. The problem isn't with the X protocol or its design; it's entirely with the driver stack. While the X in SLS 1.05 may get a partial pass because it predates the VESA VBE BIOS extensions, X was an utter nightmare up until Xorg finally plastered over most of the bullshit with working autoconfiguration.
A lot of people are going to yell at me and say "Oh, but graphics card vendors didn't publish docs." Maybe that's true, but even in cases where there is a working X server, you still have to do a lot of manual configuration to get it working. Red Hat specifically worked to get drivers released as free software when possible; my ThinkPad's NeoMagic card has a driver due to those efforts. Remember, back in this time period, there was far more choice than AMD, NVIDIA, or Intel GMA. It gets worse, if you can believe it: X also requires timing information for your monitor's refresh rates, arcane bullshit that no other operating system needs. EDID initially appeared in 1994 and was better standardized by 1996.
Let me give you an example of an X modeline:
Modeline syntax: pclk hdisp hsyncstart hsyncend htotal vdisp vsyncstart vsyncend vtotal [flags]
Flags (optional): +HSync, -HSync, +VSync, -VSync, Interlace, DoubleScan, CSync, +CSync, -CSync

Modeline "1600x1200" 155 1600 1656 1776 2048 1200 1202 1205 1263

(The "1600x1200" label encodes the x and y resolution; the 155 that follows is the pixel clock in MHz.)
This is utter bull. Games that talked to hardware directly, such as the DOS-based DOOM or Duke Nukem 3D, could get 800x600 or better; Duke3D could theoretically go as high as 1600x1200. Windows 3.0 and 3.1, by comparison, were archaic because they required you to run SETUP to set the graphics mode and to have a driver for your specific card.
Linux ran on 386 and higher processors. Virtual 8086 mode is there; a 16-bit BIOS driver is not an excuse for why this was so bad. Even if we jump ahead to 1998, when 32-bit VBE was standardized, it still. didn't. work.
A 486 had enough horsepower to do unaccelerated X with something like fvwm without issue. While SLS had some specific pain points related to X, this mess lasted well into the 2000s; KNOPPIX was the first time I specifically remember X autodetection having a semi-decent chance of working. Most X applications on Linux were ports of software originally written for UNIX workstations, and a lot of that software assumed resolutions higher than 640x480. While X can run at standard VGA resolution, the default config was entirely broken and dropped you into virtual-desktop mode, which is basically unusable. Even if I fixed the resolution, a lot of apps would generate oversized windows because they didn't expect non-workstation monitors.
I eventually got X working by getting lucky: I found the right README and a graphics card that PCem can emulate. That moment of joy lasted right up until I found that the mouse didn't work. This was primarily because I was too used to modern Linux distros, which either set Xconfig correctly or have working autodetection. Mouse and X driver configuration is handled by syssetup, but what it does is non-obvious and, frankly, rather silly. When you select either the X-Window option or the Mouse option, it helpfully prompts you to select a mouse driver and seemingly sets everything up for you. Unless you're using a Microsoft 2-button serial mouse, though, the default settings won't work, because syssetup only changes the /dev/mouse link to point at the correct device. It doesn't edit Xconfig or prompt you to do so. I can only assume this was covered in the missing manual, but behavior like this was never going to reflect well on SLS, and it's likely part of why the distribution was considered so buggy.
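Under the hood, the entirety of what syssetup does for the mouse amounts to repointing a symlink, something like the following (the serial device name here is an assumption and will vary by setup), while Xconfig still has to be edited by hand to match:

ln -sf /dev/ttyS0 /dev/mouse    # repoint the mouse link; Xconfig is left untouched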
With a working mouse driver, I could finally use X.
SLS's default environment is the tried-and-true fvwm, which is essentially a visual clone of Motif. Unfortunately, even when working, X, at least as shipped by SLS, is not great. Many of the menus link to broken or missing applications. This can even be seen in the quick-launch bar at the bottom, which references file shares on machines that don't exist.
SLS was pretty innovative for its time. Compared even to modern Linux distributions, it's relatively straightforward, and it's better than a lot of software of the era. While Linux was still a newborn project, it was already making a lot of strides as a stable and useful workstation and server host. Driver compatibility quickly got better as Red Hat and other companies began to involve themselves in Linux.
As a replacement for DOS, it fulfills that role well. Since a full set of development tools came in the box, including GCC and Smalltalk, it was also pleasant to use as a hobbyist's or developer's system. I can't find much about Softlanding Software itself, but I get the impression it was a very small company at best. One thing I will note is that, compared to Debian or Slackware from this era, SLS is both simple to set up and relatively easy to use.
While Microsoft basically forced everyone out of the market through OEM agreements, Linux could have been a more serious competitor on the desktop in the days before the Microsoft monopoly was fully formed, if not for the aforementioned issues. I would remind people that companies like Caldera and Corel made quite a few efforts in this space throughout the mid-to-late 90s. I can't say that Slackware or Debian, even now, put much stock in having an easy migration path from Windows.
SLS, on the other hand, provided decent online help; for example, install.info on the install disk gives step-by-step help for every aspect of installation, and mesh helped users migrating from DOS and Norton Commander-like shells. I can't blame SLS for the disaster that was XFree86, but it didn't help matters either.
I apologize if the above is a bit of a rant, but "usability" really wasn't a focus throughout the free software ecosystem until Ubuntu tried with the release of Warty Warthog. I do want to explore more in this space, and I'll likely be digging out Yggdrasil, early Slackware and Debian, as well as the BSDs, for test drives to document the history. Suggestions are welcome on what to try out!
Normally, I'd end with a teaser on what is coming up next; specifically, I want to explore more of the graphical and networking side of things. However, during the video I said that if we reached 250 subscribers, I'd do a special. At the time of recording, I was at 150 subs and figured I had a few weeks or months before we hit that threshold. In the 24-ish hours since I posted the video, my channel has grown to nearly 300.
For those who didn't watch, you might be wondering what that special is.
It's mastering SLS to actual floppy disks and seeing if I can get it installed on real hardware from 1997, three years newer than SLS. I did a video about this ThinkPad and its history, including dumping its HDD via serial, but I never did a writeup, as all I actually did was install a RAM upgrade.
This might not sound super interesting, but I already know I'm going to have to write some kernel patches just to get a basic installation going. As of the time of writing, I've also had to partially rewrite the ATA driver. Theories are welcome as to why!
I already expected this specific failure, but I suspect I'm going to have more surprises on real hardware. I don't know if X will be possible. As far as I know, this laptop is entirely ISA-based; no PCI. According to ThinkWiki, the graphics chip is a NeoMagic MagicGraph 128V. This was a bit of a surprise, as I thought this laptop had the more common Cirrus Logic chips that were prevalent throughout the 90s. This chip didn't exist until 1996-1997, and while there's an X driver available, I'm not entirely sure I can port the SVGA XFree86 driver to run on it.
Until the next time, NCommander, signing off ...
Facebook to let users turn off political adverts:
Facebook boss Mark Zuckerberg says users will be able to turn off political adverts on the social network in the run-up to the 2020 US election.
In a piece written for USA Today newspaper, he also says he hopes to help four million Americans sign up as new voters.
Facebook has faced heavy criticism for allowing adverts from politicians that contain false information.
Rival social platform Twitter banned political advertising last October.
"For those of you who've already made up your minds and just want the election to be over, we hear you -- so we're also introducing the ability to turn off seeing political ads," Mr Zuckerberg wrote.
Facebook and its subsidiary Instagram will give users the option to turn off political adverts when they appear or they can block them using the settings features.
Users that have blocked political adverts will also be able to report them if they continue to appear.
The feature, which will start rolling out on Wednesday, allows users to turn off political, electoral and social issue adverts from candidates and other organisations that have the "Paid for" political disclaimer.
The company said it plans to make the feature available to all US users over the next few weeks and will offer it in other countries this autumn.
The DOJ is proposing scaling back protections for large social media companies outlined in the 1996 Communications Decency Act. Section 230 of the act states:
no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This has protected the platforms from liability over user-generated content through the years and enabled the incredible growth of social media. An executive order signed last month directed the FCC to review whether social media companies "actions to remove, edit or supplement users' content" invalidated the protections they enjoyed from liability. It seems we have an answer:
In a press release, the Justice Department said that the past 25 years of technological change "left online platforms unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability."
The new rules will be aimed at "incentivizing platforms to address the growing amount of illicit content online," the department said; the revisions will also "promote free and open discourse online," "increase the ability of the government to protect citizens from unlawful conduct," and promote competition among Internet companies.
In announcing the [requested] changes to the 24-year-old rules on Wednesday, Attorney General William Barr said: "When it comes to issues of public safety, the government is the one who must act on behalf of society at large."
"Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do," he said.
The full review of section 230 by the DOJ is available here. Key Takeaways and Recommendations are here.
Also at: Justice Department proposes major overhaul of Sec. 230 protections
T-Mobile's outage yesterday was so big that even Ajit Pai is mad:
T-Mobile's network suffered an outage across the US yesterday, and the Federal Communications Commission is investigating.
FCC Chairman Ajit Pai, who takes an extremely hands-off approach to regulating telecom companies, used his Twitter account to say, "The T-Mobile network outage is unacceptable" and that "the FCC is launching an investigation. We're demanding answers—and so are American consumers."
No matter what the investigation finds, Pai may be unlikely to punish T-Mobile or impose any enforceable commitments. For example, an FCC investigation last year into mobile carriers' response to Hurricane Michael in Florida found that carriers failed to follow their own previous voluntary roaming commitments, unnecessarily prolonging outages. Pai himself called the carriers' response to the hurricane "completely unacceptable," just like he did with yesterday's T-Mobile outage. But Pai's FCC imposed no punishment related to the bad hurricane response and continued to rely on voluntary measures to prevent recurrences.
[...] Mobile voice services like T-Mobile's are still classified as common-carrier services under Title II of the Communications Act, but the FCC under Pai deregulated the home and mobile broadband industry and has taken a hands-off approach to ensuring resiliency in phone networks.
"This is, once again, where pretending that broadband is not an essential telecommunications service completely undermines the FCC's ability to act," longtime telecom attorney and consumer advocate Harold Feld, the senior VP of advocacy group Public Knowledge, told Ars today. "We're not talking about an assumption that T-Mobile necessarily did anything wrong. But when we have something this critical to the economy, and where it is literally life and death for people to have the service work reliably, it's not about 'trusting the market' or expecting companies to be on their best behavior. We as a country need to know what is the reality of our broadband networks, the reality of their resilience and reliability, and the reality of what happens when things go wrong. That takes a regulator with real authority to go in, ask hard questions, seize documents if necessary, and compel testimony under oath."
Several provisions of Title II common-carrier rules that Pai has fought against "give the FCC authority to make sure the network is resilient and reliable," Feld said. The FCC gutting its own authority "influences how the FCC conducts its investigations," he said. "[FCC] staff and the carriers know very well that if push comes to shove, companies can simply refuse to give the FCC information that might be too embarrassing. So the FCC is stuck now playing this game where they know they can't push too hard or they get their bluff called. Carriers have incentive to play along enough to keep the FCC or Congress from re-regulating, but at the end of the day it's the carriers—not the FCC—that gets to decide how much information to turn over."
USC researchers unlock fatal vulnerability in many cancer cells:
Like all cells in the body, cancer cells need sugar, namely glucose, to fuel cell proliferation and growth. Cancer cells in particular metabolize glucose at a much higher rate than normal cells. However, researchers from USC Viterbi's Mork Family Department of Chemical Engineering and Materials Science have unlocked a weakness in a common type of cancer cell: sugar inflexibility. That is, when cancer cells are exposed to a different type of sugar, galactose, the cells can't adapt and will die.
[...] The paper describes how oncogenes, the genes that cause cancer, can also lead cancer cells to become inflexible to changes in their sugar supply. Normally, cells grow by metabolizing glucose, but most normal cells can also grow using galactose. However, the team discovered that cells possessing a common cancer-causing gene named AKT cannot process galactose, and therefore they die when exposed to this type of sugar.
[...] The team's findings also showed that while the oxidative process brought on by galactose did result in cell death in AKT-type cancer cells, when the cells were given a different genetic mutation, MYC, the galactose did not kill the cells.
[...] The researchers also discovered that after around 15 days in galactose, some cancer cells started to recur.
Journal Reference:
Dongqing Zheng, Jonathan H. Sussman, Matthew P. Jeon, et al. AKT but not MYC promotes reactive oxygen species-mediated cell death in oxidative culture [$], Journal of Cell Science (DOI: 10.1242/jcs.239277)
The theft of top-secret computer hacking tools from the CIA in 2016 was the result of a workplace culture in which the agency's elite computer hackers "prioritized building cyber weapons at the expense of securing their own systems," according to an internal report prepared for then-director Mike Pompeo as well as his deputy, Gina Haspel, now the director.
The breach — allegedly committed by a CIA employee — was discovered a year after it happened, when the information was published by WikiLeaks in March 2017. The anti-secrecy group dubbed the release "Vault 7," and U.S. officials have said it was the biggest unauthorized disclosure of classified information in the CIA's history, causing the agency to shut down some intelligence operations and alerting foreign adversaries to the spy agency's techniques.
The October 2017 report by the CIA's WikiLeaks Task Force, several pages of which were missing or redacted, portrays an agency more concerned with bulking up its cyber arsenal than keeping those tools secure. Security procedures were "woefully lax" within the special unit that designed and built the tools, the report said.
Without the WikiLeaks disclosure, the CIA might never have known the tools had been stolen, according to the report. "Had the data been stolen for the benefit of a state adversary and not published, we might still be unaware of the loss," the task force concluded.
The task force report was provided to The Washington Post by the office of Sen. Ron Wyden (D-Ore.), a member of the Senate Intelligence Committee, who has pressed for stronger cybersecurity in the intelligence community. He obtained the redacted, incomplete copy from the Justice Department.
The breach came nearly three years after Edward Snowden, then a National Security Agency contractor, stole and disclosed classified information about the NSA's surveillance operations.
"CIA has moved too slowly to put in place the safeguards that we knew were necessary given successive breaches to other U.S. Government agencies," the report said, finding that "most of our sensitive cyber weapons were not compartmented, users shared systems administrator-level passwords, there were no effective removable media [thumb drive] controls, and historical data was available to users indefinitely."
Bedrock type under forests greatly affects tree growth, species, carbon storage:
A forest's ability to store carbon depends significantly on the bedrock beneath it, according to Penn State researchers who studied forest productivity, composition and associated physical characteristics of rocks in the Appalachian Ridge and Valley Region of Pennsylvania.
The results have implications for forest management, researchers suggest, because forests growing on shale bedrock store 25% more live, aboveground carbon and grow faster, taking up about 55% more carbon each year than forests growing on sandstone bedrock.
[...] To reach their conclusions, researchers analyzed forest inventory data from 565 plots on state forest and game lands managed by the Pennsylvania Department of Conservation and Natural Resources and the state Game Commission in the Appalachian Ridge and Valley Region. They used a suite of GIS-derived landscape metrics, including measures of climate, topography and soil physical properties, to identify drivers of live forest carbon dynamics in relation to bedrock.
Those forest plots contained more than 23,000 trees, ranging from 20 to 200 years old, with most being 81 to 120 years old, according to the most recent available forest inventory data. In the study dataset, 381 plots were on sandstone bedrock and 184 were on shale—a similar ratio to the amount of Pennsylvania public land on each bedrock type in the Ridge and Valley Region.
[...] While forests underlain by both shale and sandstone bedrock were oak dominated, the tree communities are quite different, Reed pointed out. Northern red oak is more dominant on shale bedrock, and chestnut oak dominates on sandstone. Most species in the forest tend to be more productive on shale, and the diversity of tree species is higher in sites on shale bedrock.
Journal Reference:
Warren P. Reed, et al. Bedrock type drives forest carbon storage and uptake across the mid-Atlantic Appalachian Ridge and Valley, U.S.A., Forest Ecology and Management (DOI: 10.1016/j.foreco.2020.117881)
The FDA just approved the first prescription video game:
It might not look like much of a video game, but Akili Interactive's EndeavorRX, formerly Project EVO, may go down in history: it's the first video game that can legally be marketed and prescribed as medicine in the US.
That's the landmark decision from the Food and Drug Administration (FDA), which is authorizing doctors to prescribe the iPhone and iPad game for kids ages eight to 12 with ADHD, after it underwent seven years of clinical trials that studied over 600 children to figure out whether a game could actually make a difference.
According to the company's favorite of the five studies, the answer is yes: one-third of kids treated "no longer had a measurable attention deficit on at least one measure of objective attention" after playing the obstacle-dodging, target-collecting game for 25 minutes a day, five days a week for four weeks.
"Improvements in ADHD impairments following a month of treatment with EndeavorRx were maintained for up to a month," the company cites, with the most common side effects being frustration and headache — seemingly mild compared to traditional drugs, as you'd hope from so-called virtual medicine.
That said, we are talking about a study by doctors who work for the game's developer, according to disclosures at the bottom of the study, and even their conclusion is that the results "are not sufficient to suggest that AKL-T01 should be used as an alternative to established and recommended treatments for ADHD."
Akamai, Amazon Mitigate Massive DDoS Attacks:
The first week of June 2020 arrived with a massive 1.44 Tbps (terabits per second) distributed denial of service (DDoS) attack, Akamai reveals.
Lasting for two hours and peaking at 385 Mpps (million packets per second), the assault was the largest Akamai has ever seen in terms of bits per second, and it also stood out from the crowd because of its complexity.
Aimed at an Internet hosting provider (which Akamai would not name), the attack appears to have been a planned and orchestrated effort. The intent, the company says, was to inflict maximum damage.
While typical DDoS attacks show geographically concentrated traffic, this assault was different, with the traffic being globally distributed. However, "a higher percentage of the attack traffic was sourced in Europe," Roger Barranco, Akamai VP of Global Security Operations, told SecurityWeek.
The geographic distribution of the attack traffic, Barranco says, surpasses that of Internet of Things (IoT) botnet Mirai, which "had some continental and geographic distribution, but not to this extent."
Nine different attack vectors were used in this attack, namely ACK Flood, CLDAP Reflection, NTP FLOOD, RESET Flood, SSDP Flood, SYN Flood, TCP Anomaly, UDP Flood, and UDP Fragment. Furthermore, Akamai noticed multiple botnet attack tools being leveraged.
Since the attack is still under investigation, Barranco wouldn't share details on who might have been behind the operation or the type of devices employed.
Soap bubbles pollinated a pear orchard without damaging delicate flowers:
After confirming through optical microscopy that soap bubbles could, in fact, carry pollen grains, Miyako and Xi Yang, his coauthor on the study, tested the effects of five commercially available surfactants on pollen activity and bubble formation. The neutralized surfactant lauramidopropyl betaine (A-20AB) won out over its competitors, facilitating better pollen germination and growth of the tube that develops from each pollen grain after it is deposited on a flower. Based on a laboratory analysis of the most effective soap concentrations, the researchers tested the performance of pear pollen grains in a 0.4% A-20AB soap bubble solution with an optimized pH and added calcium and other ions to support germination. After three hours of pollination, the pollen activity mediated through the soap bubbles remained steady, while other methods such as pollination through powder or solution became less effective.
Miyako and Yang then loaded the solution into a bubble gun and released pollen-loaded bubbles into a pear orchard, finding that the technique distributed pollen grains (about 2,000 per bubble) to the flowers they targeted, producing fruit that demonstrated the pollination's success. Finally, the researchers loaded an autonomous, GPS-controlled drone with functionalized soap bubbles, which they used to direct soap bubbles at fake lilies (since flowers were no longer in bloom) from a height of two meters, hitting their targets at a 90% success rate when the machine moved at a velocity of two meters per second.
Journal References:
Xi Yang, Eijiro Miyako. Soap Bubble Pollination, iScience (DOI: 10.1016/j.isci.2020.101188)
Svetlana A. Chechetka, Yue Yu, Masayoshi Tange, Eijiro Miyako. Materially Engineered Artificial Pollinators [open], Chem (DOI: 10.1016/j.chempr.2017.01.008)
Primitive stem cells point to new bone grafts for stubborn-to-heal fractures:
Previous studies have shown that stem cells, particularly a type called mesenchymal stem cells, can be used to produce bone grafts that are biologically active. In particular, these cells convert to bone cells that produce the materials required to make a scaffolding, or the extracellular matrix, that bones need for their growth and survival.
However, these stem cells are usually extracted from the marrow of an adult bone and are, as a result, older. Their age affects the cells' ability to divide and produce more of the precious extracellular matrix, Kaunas said.
To circumvent this problem, the researchers turned to the cellular ancestors of mesenchymal stem cells, called pluripotent stem cells. Unlike adult mesenchymal cells that have a relatively short lifetime, they noted that these primitive cells can keep proliferating, thereby creating an unlimited supply of mesenchymal stem cells needed to make the extracellular matrix for bone grafts. They added that pluripotent cells can be made by genetically reprogramming donated adult cells.
When the researchers experimentally induced the pluripotent stem cells to make brand new mesenchymal stem cells, they were able to generate an extracellular matrix that was far more biologically active compared to that generated by mesenchymal cells obtained from adult bone.
[...] To test the efficacy of their scaffolding material as a bone graft, they then carefully extracted and purified the enriched extracellular matrix and implanted it at a site of bone defects. Upon examining the status of bone repair a few weeks later, they found that their pluripotent stem-cell-derived matrix was five to six times more effective than the best FDA-approved graft stimulator.
Journal Reference:
Eoin P. McNeill, Suzanne Zeitouni, Simin Pan, et al. Characterization of a pluripotent stem cell-derived matrix with powerful osteoregenerative capabilities [open], Nature Communications (DOI: 10.1038/s41467-020-16646-2)
Andy Maxwell over at TorrentFreak informs us Removing "Annoying" Windows 10 Features is a DMCA Violation, Microsoft Says:
Ninjutsu OS, a new software tool that heavily modifies Windows 10 with a huge number of tweaks, mods and extra tools, has been hit with a DMCA complaint by Microsoft. According to the copyright notice, the customizing, tweaking and disabling of Windows 10 features, even when that improves privacy, amounts to a violation of Microsoft's software license.
Since Windows was first released, people have been modifying variants of the world-famous operating system to better fit their individual requirements.
Many of these tweaks can be carried out using tools provided within the software itself but the recently-released Ninjutsu OS aims to take Windows 10 modding to a whole new level.
Released on May 7, Ninjutsu OS claims to take Windows 10 and transform it into a penetration testing powerhouse, adding a huge number of tools (around 800) aimed at security experts and a few for regular users (qBittorrent and Tor Browser, for example), while also removing features considered unwanted or unneeded in such an environment.
[...] According to the complaint, the above actions by Ninjutsu OS, as mentioned on its GitHub page, provide a "work around technical restrictions of the software", something which supposedly violates Microsoft's software license terms.
[...] "As such, we request that you please act expeditiously to remove or disable access to the specific pages/links described above, and thereby prevent the illegal reproduction and distribution of Microsoft content, via your company's network, pursuant to 17 U.S.C. §512(d)," the DMCA complaint adds.
At first view, some may conclude that Ninjutsu OS amounts to a heavily modified yet pirated version of Windows 10. However, a video explaining how the software works [24m28s] suggests that users will actually need their own license for a genuine copy of Windows 10 to get the modifications up and running properly. Ninjutsu's creator informs TF that's indeed the case.
The folks over at TechRaptor bring us word (recently updated) that Valve is Implementing a New Steam Comment Moderation Bot:
Steam's forums are an enjoyable place to be when you are discussing the latest happenings in the gaming world. It is common to run into internet trolls and the like, but there is nothing like keeping up with the spam comments giving people unsafe links to click through. Some of those links directed to Counter-Strike: Global Offensive skin trading and gambling sites, and other unsafe places that ask you for sensitive and personal information.
Recently, Steam users went to Reddit to report a new message that appears to them for a few seconds whenever they comment in forum threads. Not only that, it apparently also shows for users posting reviews of their recently played games.
Reportedly, the following message normally only shows for a few seconds before your comment gets approved, which suggests the comment moderation bot is only looking for links or other harmful content.
"This comment is awaiting analysis by our automated content check system. It will be temporarily hidden until we verify that it does not contain harmful content (e.g. links to websites that attempt to steal information)."
Valve later got back to TechRaptor with the following message:
Yes, we are scanning the forums and hiding posts that contain links to malicious sites attempting to steal user’s Steam information. We are always looking for ways to improve with new updates, fixes, and features.
Apolitical? Check. Narrowly-scoped? Check. No ideological argument or stretching of the definition of "harm" necessary? Check. Botting like a boss, guys.
[Belated Note: SoylentNews does not use automated moderation. We stick you poor folks with the work instead. --TMB]
On July 7, AMD will launch three refreshed Zen 2 "Matisse" desktop CPUs with slightly higher boost clocks than the previous versions:
The 3900XT and 3800XT will not come with a bundled cooler, unlike the 3900X and 3800X (the top-of-the-line 16-core 3950X also did not come with a cooler). The 3600XT will come with a Wraith Spire cooler.
The "suggested etailer price" (SEP) is the same as the launch prices for the previous CPUs ($499, $399, $249), but the 3900X is often sold for $400-$420 instead of $500, for example. So customers may end up paying 10-25% more for a 2-5% potential performance gain, unless retailers drop prices soon after launch.
The new 3000XT family of processors focuses mostly on boosting the turbo frequency by 100-200 MHz for the same power. AMD states that this is due to using an optimized 7nm manufacturing process. This is likely due to a minor BKM[*] or PDK[**] update that allows TSMC/AMD to tune the process for a better voltage/frequency curve and bin a single CPU slightly higher.
[...] In each [of the] three cases, the XT processors give slightly better frequency than the X units, so we should expect to see an official permanent price drop on the X processors in order to keep everything in line.
The CPUs should work with existing motherboards that supported the non-XT CPUs, after a BIOS update.
A September to October 2020 launch date is likely for the first next-generation Ryzen 4000 Zen 3 "Vermeer" CPUs. Rumors of the launch being pushed back to 2021 have been denied.
[*] BKM: Best-Known Method
[**] PDK: Process Design Kit