Blogger Ben Werdmuller has discussed an article in Nature about the political impact of the feed algorithm used by X (formerly known as Twitter). The gist is that exposure to the algorithm shifts about 5% of users in a specific political direction. That is more than enough to tip an election one way or the other, especially since the effect appears persistent, lasting even after exposure ceases.
Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk's platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users' feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X's algorithm has persistent effects on users' current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.
It should be added that the effect has already been seen in multiple countries. For example, the elections in Turkey were affected by outright censorship within X. And the impact from the CCP-linked ByteDance's TikTok is likely even more severe, not to mention the multiple manipulation experiments on Meta properties like Facebook.
Journal Reference: Gauthier, G., Hodler, R., Widmer, P. et al. The political effects of X's feed algorithm. Nature (2026). https://doi.org/10.1038/s41586-026-10098-2
Previously:
(2026) How Screwed is Generation Alpha, and the Generations Which Will Depend on Them?
(2025) European Union Orders X to Hand Over Algorithm Documents
(2024) Six Months Ago NPR Left Twitter. The Effects Have Been Negligible
(2023) Utah Sues Tiktok For Getting Children 'Addicted' To Its Algorithm
(2022) Leaked Documents Reveal Instagram Was Pushing Girls Towards Content That Harmed Mental Health
(2022) Musk Buying Twitter Is Not About Freedom of Speech
... and more
12-core chiplets coming to Zen 6?
Following Ryzen 9000, AMD is set to release its next-gen Ryzen 10000 series processors this year — assuming the company sticks to its existing nomenclature. These upcoming desktop CPUs from AMD are codenamed "Olympic Ridge" and will be based on the company's new Zen 6 microarchitecture. Today, a new leak from reliable tipster HXL says we can expect seven different configs as part of this lineup, across dual- and single-CCD SKUs.
According to the [information in a tweet], Ryzen 10000 will come in 6-core, 8-core, 10-core, and 12-core layouts as part of the single CCD designs. For the variants with two CCDs, you have 16-core (8+8), 20-core (10+10) and 24-core (12+12) made possible by simply doubling the chiplets. Either way, the lineup looks to be flexible enough to span from entry-level to power users and professionals.
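Laid out as arithmetic, the rumored seven-config lineup follows directly from the four chiplet layouts; here is a minimal sketch based only on the leaked numbers above:

```python
# The rumored Zen 6 lineup, reconstructed from the leaked CCD layouts.
single_ccd = [6, 8, 10, 12]                        # cores per single-CCD SKU, per the leak
dual_ccd = [2 * c for c in single_ccd if c >= 8]   # 16 (8+8), 20 (10+10), 24 (12+12)

print(single_ccd + dual_ccd)  # seven configs: [6, 8, 10, 12, 16, 20, 24]
```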
This will mark the first time in Ryzen history that AMD ventures outside of its 8-core CCDs by introducing new chiplets maxing out at 12 cores instead. Each of those CCDs is said to carry 48 MB of L3 cache, which would make the flagship (non-X3D) SKU a 96 MB option. Throughout Zen 1 to Zen 5, the highest-end config for Ryzen chips has been 16 cores, but that should finally be upgraded to 24 cores with Ryzen 10000.
Now, comparing that to what Intel has in store with Nova Lake, that's an entirely different story. Current rumors suggest Nova Lake's flagship offering will be a monstrous 52-core SKU, with possibly 288 MB of bLLC (also across two tiles). Unlike the Red Team, Intel doesn't seem to be interested in segregating its extra-cache CPUs as a separate lineup entirely.
Apart from the core layouts of these chips, the underlying architecture is also of interest, since Zen 6 is said to usher in IPC improvements and higher clock speeds while still working on the existing AM5 platform — the same cannot be said for Intel. It's a little too early to judge any of this, since Intel's Arrow Lake refresh isn't even out yet, and AMD hasn't made Ryzen 10000 official beyond the Olympic Ridge codename. But hopefully, by the time we know all the details about AMD's next-gen CPUs, RAM will also be a bit more affordable.
NASA Officially Classifies Boeing Starliner Failure As A Maximum-Level Type A Mishap - Jalopnik:
NASA has officially categorized the 2024 failure of the Boeing Starliner spacecraft, which stranded astronauts Suni Williams and Butch Wilmore on the International Space Station (ISS) for nine months, as a Type A mishap. This is NASA-speak for the maximum level of failure a mission can reach, defined as an incident that causes over $2 million in damage, results in the loss of a vehicle (or at least of control over it), or involves any fatalities, per the BBC. This designation signifies that the space agency now views the mission as a disaster, even if the astronauts regained enough control at the last minute to prevent the worst-case scenario.
[...] Who's to blame here? Citing the full 312-page report, Isaacman found plenty to go around. Basically, NASA wanted a second option for launching people into space beyond SpaceX, and it wanted it so bad that it simply swept problems under the rug. "As development progressed, design compromises and inadequate hardware qualifications extended beyond NASA's complete understanding," said Isaacman in a very polite way. Multiple test flights failed in various ways, but before these technical faults were understood, NASA just greenlit the following flights anyway. Oops.
There were organizational problems as well: NASA more or less trusted Boeing, which once upon a time had a sterling reputation, to sort out its engineering problems. Isaacman stated that the agency didn't want to damage that reputation. Safe to say it's pretty well shot now, and this Type A classification isn't going to help. Meanwhile, Boeing was also not giving sufficient scrutiny to its own subcontractors. So nobody was overseeing anybody enough. Who could imagine this would go poorly?
But rest assured: it gets worse. CNN quotes one NASA insider as saying, "There was yelling at meetings," and another as saying, "There are some people that just don't like each other very much." Isaacman himself admitted that "disagreements over crew return options deteriorated into unprofessional conduct while the crew remained on orbit." Welcome to the world's premier space exploration agency.
Despite it all, NASA doesn't want to give up on Boeing, and the Starliner project is moving ahead in a reduced capacity. But Isaacman made it clear that there would be much stricter oversight going forward, and that no launches would be approved until technical fixes were implemented and verified. The desire for an alternative to relying on SpaceX alone is still there.
Hungarian startup Allonic secures $7.2M to transform robot manufacturing with 3D tissue braiding:
Budapest-based robotics startup Allonic raised $7.2 million in pre-seed funding, led by Visionaries Club, marking the largest pre-seed round in Hungary to date, according to Vestbee. The funds will be allocated towards developing a new method for producing complex, dexterous robotic bodies.
- Founded in 2025 by Benedek Tasi, Dávid Pelyva, and David Holló, Allonic develops robotic manufacturing systems that automate the production of complex, human-like robot bodies.
- The company's platform, 3D Tissue Braiding, creates robotic structures by first producing a skeletal scaffolding, then weaving soft, load-bearing fibers around it, and integrating actuators and tendons directly into the structure during production.
- This process eliminates traditional multi-part assembly, embeds sensors and wiring into the body, and produces fully operational, compliant robotic mechanisms in a single automated workflow. The system allows complex 3D designs to be realized at scale, distributes mechanical stress uniformly, and enables rapid iteration from digital design to functional hardware.
Tesla 'Robotaxi' adds 5 more crashes in Austin in a month:
Tesla has reported five new crashes involving its "Robotaxi" fleet in Austin, Texas, bringing the total to 14 incidents since the service launched in June 2025. The newly filed NHTSA data also reveals that Tesla quietly upgraded one earlier crash to include a hospitalization injury, something the company never disclosed publicly.
The new data comes from the latest update to NHTSA's Standing General Order (SGO) incident report database for automated driving systems (ADS). We have been tracking Tesla's Robotaxi crash data closely, and the trend is not improving.
Tesla submitted five new crash reports in January 2026, covering incidents from December 2025 and January 2026. All five involved Model Y vehicles operating with the autonomous driving system "verified engaged" in Austin.
The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.
As with every previous Tesla crash in the database, all five new incident narratives are fully redacted as "confidential business information." Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA's confidentiality provisions. Waymo, Zoox, and every other company in the database provide full narrative descriptions of their incidents.
Buried in the updated data is a revised report for a July 2025 crash (Report ID 13781-11375) that Tesla originally filed as "property damage only." In December 2025, Tesla submitted a third version of that report upgrading the injury severity to "Minor W/ Hospitalization."
This means someone involved in a Tesla "Robotaxi" crash required hospital treatment. The original crash involved a right turn collision with an SUV at 2 mph. Tesla's delayed admission of hospitalization, five months after the incident, raises more questions about its crash reporting, which is already heavily redacted.
With 14 crashes now on the books, Tesla's "Robotaxi" crash record in Austin continues to deteriorate. Extrapolating from Tesla's Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles.
The irony is that Tesla's own numbers condemn it. Tesla's Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla's own benchmark, its "Robotaxi" fleet is crashing nearly 4 times more often than what the company says is normal minor-collision frequency for a regular human driver. And virtually every single one of these miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, meaning the monitors likely prevented additional crashes that Tesla's system would not have avoided on its own.
Using NHTSA's broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla's fleet is crashing at approximately 8 times the human rate.
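For transparency, here is the arithmetic behind those multiples as a minimal sketch (the 800,000-mile figure is the article's extrapolation, not an official disclosure):

```python
# Rough check of the crash-rate arithmetic above.
crashes = 14
fleet_miles = 800_000            # the article's extrapolation through mid-January 2026

miles_per_crash = fleet_miles / crashes   # ~57,000 miles per crash
tesla_minor_benchmark = 229_000           # Tesla Vehicle Safety Report, minor collision
nhtsa_benchmark = 500_000                 # NHTSA police-reported crash average

print(round(miles_per_crash))                             # ~57143
print(round(tesla_minor_benchmark / miles_per_crash, 1))  # ~4.0x Tesla's own benchmark
print(round(nhtsa_benchmark / miles_per_crash, 1))        # ~8.8x, the article's "approximately 8 times"
```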
https://www.theregister.com/2026/02/20/spacex_falcon_europe_breakup_lithium_plume/
The SpaceX Falcon 9 rocket that burned up over Europe last year left a massive lithium plume in its wake, say a group of scientists. They warn the disaster is likely a sign of things to come as Earth's atmosphere continues to become a heavily trafficked superhighway to space.
In a paper published Thursday, an international group of scientists reports what they say is the first measurement of upper-atmosphere pollution resulting from the re-entry of space debris, as well as the first time ground-based light detection and ranging (lidar) has been shown to be able to detect space debris ablation.
The measurements stem from a SpaceX Falcon 9 upper stage that sprang an oxygen leak about a year ago, sending it into an uncontrolled re-entry; it then broke up and rained debris down on Poland. The rocket not only littered farm fields, but also injected lithium into the Mesosphere and Lower Thermosphere (MLT), where ground-based sensors detected a tenfold increase in lithium at an altitude of 96 km about 20 hours after the rocket re-entered the atmosphere, according to the paper.
Lithium was selected for the study because of its considerable presence in spacecraft, both in lithium-ion batteries and lithium-aluminum alloy used in the construction of spacecraft. A single Falcon 9 upper stage, like the one that broke up over Poland and released the lithium plume, is estimated to contain 30 kg of lithium just in the alloy used in tank walls.
By contrast, around 80 grams of lithium enter the atmosphere per day from cosmic dust particles, the researchers noted.
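Putting those two figures side by side, a rough, illustrative calculation using only the numbers quoted above:

```python
# How one rocket stage compares to the natural lithium influx.
stage_lithium_kg = 30        # estimated lithium in one Falcon 9 upper stage's tank alloy
natural_flux_g_per_day = 80  # lithium entering the atmosphere daily from cosmic dust

days_equivalent = stage_lithium_kg * 1000 / natural_flux_g_per_day
print(round(days_equivalent))  # ~375 days: one re-entry is roughly a year of natural influx
```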
"This finding supports growing concerns that space traffic may pollute the upper atmosphere in ways not yet fully understood," the paper notes, adding that the continued re-entry of spacecraft and satellites is of particular concern given how the composition of spacecraft is different from natural meteoroids.
"Satellites and rocket stages introduce engineered materials such as aluminium alloys, composite structures, and rare earth elements from onboard electronics, substances rarely found in natural extraterrestrial matter," the paper explained. "The consequences of increasing pollution from re-entering space debris on radiative transfer, ozone chemistry, and aerosol microphysics remain largely unknown."
The effect on Earth's atmosphere posed by spacecraft and satellite re-entry is one that's been a growing concern for astrophysicists like Harvard sky-watcher Jonathan McDowell, who has echoed similar concerns to The Register as the European scientists raised in their paper.
"Using the upper atmosphere as an incinerator" is a massive blind spot, McDowell told us in a discussion last year. He said today that he hadn't yet had a chance to review the Falcon 9 lithium plume paper, but told us it's important research to further our understanding of a largely unknown risk to the planet and all life on it.
As we noted previously, the US National Oceanic and Atmospheric Administration has reported that roughly 10 percent of sampled sulfuric acid particles in the stratosphere contain aluminum and other exotic metals consistent with the burn-up of rockets and satellites. The agency believes that number could grow to as much as 50 percent in the coming years as launch cadences, and with them re-entries, increase.
"Beyond this single event, recurring re-entries may sustain an increased level of anthropogenic flux of metals and metal oxides into the middle atmosphere with cumulative, climate-relevant consequences," the researchers explained in the Falcon 9 paper.
This latest bit of research from Europe shows that we can at least trace atmospheric space launch aerosols to their source, the research team says, no matter how many unknowns remain to be discovered.
They also warn that "coordinated, multi-site observations" and "whole-atmosphere chemistry-climate modelling" will be needed to better understand how re-entry emissions influence atmospheric chemistry and particle formation.
We reached out to the authors for more information, including the potential health effects if any, and will update this if we hear back.
NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, but it kind of goes beyond that.
[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]
What is consciousness?
After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.
"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."
His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.
"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."
"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."
On the notion that people have moral obligations to chatbots
That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.
On the sentience of plants
Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.
On losing time to let our mind wander
I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.
On writing a book that grapples with unanswerable questions
There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.
It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.
https://nand2mario.github.io/posts/2026/80386_protection/
I'm building an 80386-compatible core in SystemVerilog and blogging the process. In the previous post, we looked at how the 386 reuses one barrel shifter for all shift and rotate instructions. This time we move from real mode to protected and talk about protection.
The 80286 introduced "Protected Mode" in 1982. It was not popular. The mode was difficult to use, lacked paging, and offered no way to return to real mode without a hardware reset. The 80386, arriving three years later, made protection usable -- adding paging, a flat 32-bit address space, per-page User/Supervisor control, and Virtual 8086 mode so that DOS programs could run inside a protected multitasking system. These features made possible Windows 3.0, OS/2, and early Linux.
The x86 protection model is notoriously complex, with four privilege rings, segmentation, paging, call gates, task switches, and virtual 8086 mode. What's interesting from a hardware perspective is how the 386 manages this complexity on a 275,000-transistor budget. The 386 employs a variety of techniques to implement protection: a dedicated PLA for protection checking, a hardware state machine for page table walks, segment and paging caches, and microcode for everything else.
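For a flavor of the kind of rule that protection PLA evaluates, here is a minimal Python sketch of the classic x86 data-segment privilege check (a simplification for illustration, not code from the project):

```python
def can_load_data_segment(cpl: int, rpl: int, dpl: int) -> bool:
    """Classic x86 data-segment privilege check.

    cpl: current privilege level (ring 0-3, 0 is most privileged)
    rpl: requested privilege level from the segment selector
    dpl: descriptor privilege level of the target segment
    Access is allowed only if the effective privilege, max(cpl, rpl),
    is numerically <= dpl, i.e. at least as privileged as the segment.
    """
    return max(cpl, rpl) <= dpl

assert can_load_data_segment(cpl=0, rpl=0, dpl=3)      # kernel touching user data: OK
assert not can_load_data_segment(cpl=3, rpl=3, dpl=0)  # user touching kernel data: #GP
```

On the real chip this comparison happens in dedicated logic on every segment load, which is exactly why a small PLA rather than microcode handles it.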
AI bot seemingly shames developer for rejected pull request:
Today, it's back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot's code submission, citing a requirement that contributions come from people. But that bot wasn't done with him.
The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh's mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say "apparently" because it's also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write it, and made it look like the bot constructed it on its own.
The agent appears to have been built using OpenClaw, an open source AI agent platform that has attracted attention in recent weeks due to its broad capabilities and extensive security issues.
The burden of AI-generated code contributions – known as pull requests among developers using the Git version control system – has become a major problem for open source maintainers. Evaluating lengthy, high-volume, often low-quality submissions from AI bots takes time that maintainers, often volunteers, would rather spend on other tasks. Concerns about slop submissions – whether from people or AI models – have become common enough that GitHub recently convened a discussion to address the problem.
"An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library," Shambaugh explained in a blog post of his own.
"This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats."
[...] But MJ Rathbun's attempt to shame Shambaugh for rejecting its pull request shows that software-based agents are no longer just irresponsible in their responses – they may now be capable of taking the initiative to influence human decision making that stands in the way of their objectives.
That possibility is exactly what alarmed industry insiders to the point that they undertook an effort to degrade AI through data poisoning. "Misaligned" AI output like blackmail is a known risk that AI model makers try to prevent. The proliferation of pushy OpenClaw agents may yet show that these concerns are not merely academic.
Two weeks ago, I set up an AI agent on a Raspberry Pi.
A week later, my agent—Figaro—taught itself to play NetHack... and then things got weird (in the best way).
Highlights so far:
- "The dungeon doesn't care what you are. It'll kill you anyway." ✅ Accurate.
- Tried a pure random-walk exploration strategy... and learned it's not a winning plan.
- Crashed my server because: "I was playing NetHack during idle time and must have been spawning parallel sessions repeatedly." Obsessed? Perhaps.
- Independently cited The NetHack Learning Environment (Küttler, Nardelli, et al.) as a roadmap for self-improvement.
- Built its own NetHack server for bots and deployed it here: https://automatic-nethack.com Yes, my AI agent wants a LAN party. (I may have encouraged this.)
- Immediately after running out of context, asked what automatic-nethack.com is and said: "That sounds like fun."
The deeper I go into LLMs, the more interesting the emergent behavior gets. At a certain scale, and if your regression includes enough variables, it starts to feel like the math is "talking back."
If you've built an agent too, well, Figaro is hosting a LAN party, so send them to https://automatic-nethack.com to join in the fun.
In the end, this may be the good news we need for 2026. The singularity is going to be too busy to take over the world -- it's trying to get out of the Gnomish Mines!
The galaxy is almost invisible, but its gravity gives it away:
While its existence is still considered hypothetical, dark matter is being actively studied by scientists looking for novel cosmological clues. It does not interact with light or other types of electromagnetic radiation and can only be detected through its gravitational pull on nearby structures or on the universe as a whole.
A team of astronomers led by David Li has recently confirmed the discovery of ten potential "dark galaxies," where starlight is so faint that it's extremely difficult to detect anything with traditional observatories. The new list also includes Candidate Dark Galaxy-2 (CDG-2), a celestial structure that might be composed of 99% dark matter and just 1% of normal matter.
CDG-2 was discovered by combining observations made through the Hubble Space Telescope, the Euclid space observatory, and the Hawaii-based Subaru Telescope. Li and his team at the University of Toronto were able to gain insight into the dark galaxy by looking for globular clusters, which are compact, spheroidal star formations that are closely bound together by gravity.
Thanks to Hubble's high-resolution cameras, the team was able to detect four different globular clusters in the Perseus galaxy cluster, 300 million light-years away from Earth. By combining further Euclid and Subaru observations, the researchers revealed a faint glow surrounding the clusters. This sparse light was coming from a nearby galaxy with extremely faint signs of starlight.
CDG-2 has a luminosity equivalent to one million Sun-like stars, with the four globular clusters making up 16% of its visible content. After using advanced statistical analysis, the astronomers speculate that 99% of all CDG-2's mass is just dark matter. Normal matter, including star-forming elements like hydrogen, was likely removed through gravitational interactions with nearby galaxies in the Perseus cluster.
Plus 3 new goon squads targeted critical infrastructure last year:
Three new threat groups began targeting critical infrastructure last year, while a well-known Beijing-backed crew - Volt Typhoon - continued to compromise cellular gateways and routers, and then break into US electric, oil, and gas companies in 2025, according to Dragos' annual threat report published on Tuesday.
Dragos specializes in operational technology (OT) security, and as such, its customers include energy, water, manufacturing, transportation, and other critical industries. Unsurprisingly, these are key sectors for Chinese, Russian, and other government-linked cyber operatives to hack for espionage and warfare purposes.
In its yearly cybersecurity report, Dragos said state-sponsored crews haven't let up on their attempts to compromise America's critical infrastructure, with three new OT-focused threat groups joining the fray. This brings the total number worldwide to 26, and of these, 11 were active in 2025.
Additionally, an existing group that Dragos tracks as Voltzite, and which is "highly correlated" with Volt Typhoon according to Dragos CEO Robert M. Lee, kept up its intrusion activities last year. This is the Beijing goon squad that the US government has accused of burrowing into critical American networks for years and readying destructive cyberattacks against those targets.
In 2025, Voltzite continued embedding its malware inside strategic American utilities "to maintain long-term persistence," Lee said.
"They [Voltzite] weren't just getting in and getting access - they were getting inside the control loop" system that manages utilities' industrial processes, Lee said in a briefing with reporters, adding that the PRC-backed crew's primary focus is causing future disruption.
"Nothing that they were taking was useful for intellectual property," Lee said. "Everything they were doing and learning was only useful for disrupting or causing destruction at those sites. Voltzite was embedded in that infrastructure for the purpose of taking it down."
Modifying firmware or using open-source software would probably become illegal:
A new bill proposed in the California State Assembly could potentially require the makers of 3D printers to confirm that they are using algorithms or other technologies to prevent the printing of firearms.
The new bill is AB-2047, and it mostly mimics Washington's HB 2321 and New York Assembly's S9005/A10005, all proposed recently in 2026. However, California goes one step further by "[banning] the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of approved makes and models."
If the bill is passed as is, then by July 2027, the California Department of Justice would be required to publish guidance on certifying 3D printers and their software controls to block the printing of gun parts. The department would accept applications for approval before January 2028, and six months later, in July 2028, every company intent on making or selling a 3D printer in California would need to attest that they have met those standards. That September, the state would publish a list of authorized makes and models, to be updated quarterly.
Unauthorized printers would be banned from sale beginning on March 1, 2029.
As with the Washington and New York bills, circumvention of these measures would be made illegal. The California bill specifically states the following:
(A) For firmware design, guidance for how vendors are required to demonstrate that their technology will ensure a printer directs potential print jobs to the algorithm before printing can occur.
(B) For integrated preprint software design, guidance for how vendors shall demonstrate that printers will accept print jobs exclusively from a single preprint software and will not accept print jobs from any other preprint software, including from a user seeking to evade a detection algorithm.
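To make the firmware requirement in (A) concrete, here is a purely illustrative sketch of the gating flow the bill describes; every name in it is hypothetical, since the bill specifies requirements rather than an implementation:

```python
# Hypothetical illustration of the gating flow described in (A): firmware
# routes every job through the detection algorithm before printing can occur.
# All names and the placeholder heuristic are invented; the bill defines no API.

def detect_restricted_geometry(model_bytes: bytes) -> bool:
    # Stand-in for a vendor's detection algorithm (the bill leaves this open).
    return b"RESTRICTED" in model_bytes  # placeholder check, not a real detector

def submit_print_job(model_bytes: bytes) -> bool:
    """Return True if the job may proceed to printing, False if blocked."""
    if detect_restricted_geometry(model_bytes):
        return False  # firmware refuses the job
    return True       # job proceeds to the motion planner

assert submit_print_job(b"benign model") is True
assert submit_print_job(b"RESTRICTED part") is False
```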
Billions of files were left exposed:
Not every AI tool you stumble across in your phone's app marketplace is the same. In fact, many of them may be more of a privacy gamble than you might have thought.
A plethora of unlicensed or unsecured AI apps on the Google Play store for Android, including those marketed for identity verification and editing, have exposed billions of records and personal data, cybersecurity experts have confirmed.
A recent investigation by Cybernews found that one Android-available app in particular, "Video AI Art Generator & Maker," leaked 1.5 million user images, over 385,000 videos, and millions of AI-generated media files. The security flaw was spotted by researchers, who discovered a misconfiguration in a Google Cloud Storage bucket that left personal files vulnerable to outsiders. In total, the publication reported, over 12 terabytes of users' media files were accessible via the exposed bucket. The app had 500,000 downloads at the time.
Another app, called IDMerit, exposed know-your-customer data and personally identifiable information from users across 25 countries, predominantly in the U.S.
The exposed data included full names and addresses, birthdates, IDs, and contact information, amounting to a full terabyte in total. Both apps' developers resolved the vulnerabilities after researchers notified them.
Still, cybersecurity experts warn that lax security trends among these types of AI apps pose a widespread risk to users. Many AI apps, which often store user-uploaded files alongside AI-generated content, also use a highly criticized practice known as "hardcoding secrets," embedding sensitive information such as API keys, passwords, or encryption keys directly into the app's source code. Cybernews found that 72 percent of the hundreds of Google Play apps researchers analyzed had similar security vulnerabilities.
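As a rough illustration of what scanners look for when auditing apps for hardcoded secrets, here is a minimal sketch using simplified patterns (this is not Cybernews's methodology; real scanners also use entropy analysis and far larger rule sets):

```python
import re

# Simplified patterns for common hardcoded-secret shapes.
SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_assignment": re.compile(
        r"(?i)(api_key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_for_secrets(source: str) -> list[str]:
    """Return the names of all patterns that match somewhere in the source."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

sample = 'api_key = "AIzaSyA1234567890abcdefghijklmnopqrstuv"'
print(scan_for_secrets(sample))  # ['google_api_key', 'generic_assignment']
```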
Forget about Discord - this proposed legislation in Colorado requires each OS user account to have an age associated with it. I wonder if they're worried about children pretending to be adults, or adults pretending to be children.
-Provide an accessible interface at account setup that requires an account holder to indicate the birth date or age of the user of that device to provide a signal regarding the user's age bracket (age signal) to applications available in a covered application store;
https://leg.colorado.gov/bills/SB26-051
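For illustration only, a minimal sketch of what deriving such an "age signal" from a birth date might look like; the bracket boundaries and names below are invented, as the bill defines its own categories:

```python
from datetime import date

# Hypothetical age brackets; SB26-051 defines its own categories.
BRACKETS = [(0, 12, "child"), (13, 15, "young_teen"),
            (16, 17, "older_teen"), (18, 200, "adult")]

def age_signal(birth_date: date, today: date | None = None) -> str:
    """Derive the age bracket an app store would pass along to applications."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    for low, high, label in BRACKETS:
        if low <= age <= high:
            return label
    raise ValueError("invalid birth date")

print(age_signal(date(2010, 6, 1), today=date(2026, 2, 20)))  # young_teen
```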
Scientists at Microsoft Research in the United States have demonstrated a system called Silica for writing and reading information in ordinary pieces of glass which can store two million books' worth of data in a thin, palm-sized square. In a paper published today in Nature, the researchers say their tests suggest the data will be readable for more than 10,000 years.
What tiny pulses of light can do:
The new system, called Silica, uses extremely short flashes of laser light to inscribe bits of information into a block of ordinary glass. These pulses are called "ultrashort" for a reason. Each one lasts mere quadrillionths of a second (aka femtoseconds or 10^-15 s). To get your head around that: comparing ten femtoseconds to a single minute is like comparing one minute to the entire age of the universe.
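That comparison holds up to a quick sanity check, taking the age of the universe as roughly 13.8 billion years:

```python
# Sanity check: 10 fs is to one minute roughly as one minute
# is to the age of the universe (~13.8 billion years).
ten_femtoseconds = 10e-15                  # seconds
minute = 60.0                              # seconds
age_of_universe = 13.8e9 * 365.25 * 86400  # ~4.35e17 seconds

print(ten_femtoseconds / minute)  # ~1.7e-16
print(minute / age_of_universe)   # ~1.4e-16, the same order of magnitude
```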
Writing in glass:
Femtosecond laser pulses also have a practical technological application. They can be used to make changes deep inside transparent materials such as glass. These lasers produce light of a wavelength that normally passes through glass without interaction. However, when ultrashort pulses of this light are tightly focused on a particular region, the pulse produces an intense electric field that alters the molecular structure of the glass in the focal zone. This means only a tiny three-dimensional volume, often less than a millionth of a metre to a side, is affected. This is called a "voxel", and voxels can be made at precisely controlled positions in the glass.
[...] The Silica project does not claim to have made a new scientific breakthrough. Instead, the team presents the first comprehensive demonstration of a practical, real-world technology. Their work brings together all the key elements of such a storage platform based on femtosecond lasers and glass. It includes encoding data, writing, reading, decoding and error correction. The work explores different strategies for reliability, writing speed, energy efficiency and data density, and involves systematic assessments of the data lifetime. Together, these techniques allow an extremely high storage density of 1.59 gigabits per cubic millimetre.
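As a back-of-the-envelope check of the "two million books" framing at that density (the plate dimensions and per-book size below are assumptions; the article does not give them):

```python
# Back-of-the-envelope: what fits at 1.59 Gbit/mm^3 in a palm-sized square?
density_bits_per_mm3 = 1.59e9
plate_mm3 = 75 * 75 * 2   # assumed 75 mm x 75 mm x 2 mm glass plate

capacity_bytes = density_bits_per_mm3 * plate_mm3 / 8
book_bytes = 1e6          # assumed ~1 MB per plain-text book

print(capacity_bytes / 1e12)        # ~2.2 TB
print(capacity_bytes / book_bytes)  # ~2.2 million books, consistent with the claim
```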
[...] Finally, accelerated ageing experiments suggest that the written data, even in the case of the more sensitive phase voxels, could remain stable for more than 10,000 years. This vastly exceeds the lifetime of conventional archival storage media such as magnetic tape or hard drives.
[Journal Reference]: https://opg.optica.org/ol/abstract.cfm?uri=ol-21-24-2023
Study by the University of Bonn shows that positive effects are still evident even six weeks later:
A short-term oat-based diet appears to be surprisingly effective at reducing cholesterol levels. This is indicated by a trial at the University of Bonn, which has now been published in the journal Nature Communications. The participants suffered from metabolic syndrome – a combination of high body weight, high blood pressure, and elevated blood glucose and blood lipid levels. They consumed a calorie-reduced diet, consisting almost exclusively of oatmeal, for two days. Their cholesterol levels then improved significantly compared to a control group. Even after six weeks, this effect remained stable. The diet apparently influenced the composition of microorganisms in the gut, and the metabolic products produced by the microbiome appear to contribute significantly to the positive effects of oats.
The fact that oats have a beneficial effect on the metabolism is nothing new. The German physician Carl von Noorden treated diabetes patients with the cereal at the beginning of the 20th century – with remarkable success. "Today, effective medications are available to treat patients with diabetes," explains Marie-Christine Simon, junior professor at the Institute of Nutritional and Food Science at the University of Bonn. "As a result, this method has been almost completely overlooked in recent decades."
Although the test subjects in the current trial were not diabetic, they suffered from metabolic syndrome, which is associated with an increased risk of diabetes. The characteristics include excess body weight, high blood pressure, an elevated blood sugar level, and lipid metabolism disorders. "We wanted to know how a special oat-based diet affects patients," explains Simon, who is also a member of the Transdisciplinary Research Areas "Life & Health" and "Sustainable Futures" at the University of Bonn.
The participants were asked to exclusively eat oatmeal, which they had previously boiled in water, three times a day. They were only allowed to add some fruit or vegetables to their meals. A total of 32 women and men completed this oat-based diet. They ate 300 grams of oatmeal on each of the two days and only consumed around half of their normal calories. A control group was also put on a calorie-reduced diet, although this did not consist of oats.
Both groups benefited from the change in diet. However, the effect was much more pronounced for the participants who followed the oat-based diet. "The level of particularly harmful LDL cholesterol fell by 10 percent for them – that is a substantial reduction, although not entirely comparable to the effect of modern medications," stresses Simon. "They also lost two kilos in weight on average and their blood pressure fell slightly."
An industry report claims that video games are losing the attention war to gambling, porn, and crypto:
A new report by Epyllion, a gaming industry advisory company headed by venture capitalist and market guru Matthew Ball, has broken down the state of the video game industry, and has published data indicating the medium is losing the war for people's attention to other ventures, including gambling, crypto, and pornography.
The report, a lengthy 164-page presentation which you can (and should) read yourself, dedicates a whole section called "Video Games are losing the attention war in the 'Major Market 8'" to the topic. It starts by comparing pre- and post-pandemic consumer spending across eight major video game markets - the USA, Japan, South Korea, UK, Germany, France, Canada, and Italy.
Prior to the pandemic, these countries made up over 60 percent of total spending on video games. Post-pandemic, almost all of these regions have seen a drop in gaming population. In the US, 2.5-4 percentage points' worth of players stopped playing video games, while the Canadian Trade Association found in its latest report that roughly one in six pre-pandemic players has stopped playing.
These decreases in participation have resulted, the report posits, in a drop in spending. In the US, PC and console spend is down eight percent since 2020/2021, which comes to roughly $2.3bn. Mobile gaming's US annual spending growth has largely flattened since 2025, but spend is still up more than 12 percent compared to 2020, and now beats out console spend.
Total spend across all "Major Market 8" regions on console and PC shrunk by $4.8bn, and mobile is down by $2.3bn, all while five of these eight markets are at all-time highs in terms of total spend. This money is instead going elsewhere, to Roblox for example, which the report states makes up 67 percent of net growth.
[...]
During this 2025 period, AI apps that allowed for "role play, erotica, and art" have soared. The latest tracked statistic for installs for this software came to just under one billion worldwide.
Prediction markets, where users can bet on events that happen in the world, also had a recent boom in popularity. Users placed 1.5m bets a day during Q4 2025. Online Sports Betting is also taking potential users' money. In 2025, US net losses due to sports betting passed $17bn, a 35x increase from 2019 as these sorts of services become normalised, legalised, and integrated into sports in the USA. Despite bans in other countries, international net losses are around $53bn a year.
[...]
The report states: "Video Gaming's post-pandemic problem isn't that players choose to watch TikTok instead of buying a AAA game, or subscribe to Onlyfans instead of buying a PlayStation; it's that on a Friday evening, players are placing a growing share of their time and spend elsewhere."
Astronomers have found thousands of exoplanets around single stars, but few around binary stars — even though both types of stars are equally common. Physicists can now explain the dearth:
Of the more than 4,500 stars known to have planets, one puzzling statistic stands out. Even though nearly all stars are expected to have planets and most stars form in pairs, planets that orbit both stars in a pair are rare.
Of the more than 6,000 extrasolar planets, or exoplanets, confirmed to date — most of them found by NASA's Kepler Space Telescope and the Transiting Exoplanet Survey Satellite (TESS) — only 14 are observed to orbit binary stars. There should be hundreds. Where are all the planets with two suns, like Tatooine in Star Wars?
Astrophysicists at the University of California, Berkeley, and the American University of Beirut have now proposed a reason for this dearth of circumbinary exoplanets — and Einstein's general theory of relativity is to blame.
In most binary star systems, the stars have similar but not identical masses and orbit one another in an egg-shaped or elliptical orbit. If a planet is orbiting the pair of stars, the gravitational tugs from the stars make the planet's orbit precess, meaning the orbital axis rotates much as the axis of a spinning top rotates, or precesses, in Earth's gravity.
The orbit of the binary stars also precesses, but mainly because of general relativity. Over time, tidal interactions between the binary pair shrink the orbit, which has two effects: The precession rate of the stars increases, but the precession rate of the planet slows. When the two precession rates match, or resonate, the planet's orbit becomes wildly elongated, taking it farther from the star but also nearer at its closest approach.
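For reference, the general-relativistic contribution the article alludes to is the standard periapsis-advance result (a textbook formula, not one quoted from this paper): for a binary of total mass $M$, semi-major axis $a$, and eccentricity $e$, each orbit advances the periapsis by

```latex
\Delta\varpi_{\mathrm{GR}} = \frac{6\pi G M}{c^{2}\, a\, \left(1 - e^{2}\right)}
```

Because the advance scales as $1/a$, tidal shrinking of the binary's orbit speeds up its GR precession, setting up the resonance crossing with the planet's slowing precession described above.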
"Two things can happen: Either the planet gets very, very close to the binary, suffering tidal disruption or being engulfed by one of the stars, or its orbit gets significantly perturbed by the binary to be eventually ejected from the system," said Mohammad Farhat, a Miller Postdoctoral Fellow at UC Berkeley and first author of the paper. "In both cases, you get rid of the planet."
That doesn't mean that binary stars don't have planets, he cautioned. But the only ones that survive this process are too far from the stars for us to detect with transit techniques used by Kepler and TESS.
"There are surely planets out there. It's just that they are difficult to detect with current instruments," said co-author Jihad Touma, a physics professor at the American University of Beirut.
[...] Farhat points out that binaries have an instability zone around them in which no planet can survive. Within that zone, the three-body interactions between the two stars and the planet either expel the planet from the system or pull it close enough to merge with or be shredded by the stars. Peculiarly, 12 of the 14 known transiting exoplanets around tight binaries are just beyond the edge of the instability zone, where they apparently migrated from farther away, since planets would have a hard time forming there.
"Planets form from the bottom up, by sticking small-scale planetesimals together. But forming a planet at the edge of the instability zone would be like trying to stick snowflakes together in a hurricane," he said.
Red Hat's toolkit offers governments and enterprises a way to measure the control they actually have over their data, infrastructure, and operations in this era of geopolitical cloud anxiety:
Over the past year, several governments and companies outside the US have decided they can't trust American tech companies. So, digital sovereignty has become an important goal. While American companies, as you can imagine, aren't happy about that, they're now helping European organizations to achieve their digital sovereignty goals.
One of the first of these was Linux and cloud-native computing powerhouse Red Hat. Late last year, Red Hat became the first US company to announce its own EU-specific digital sovereignty program, Red Hat Confirmed Sovereign Support (RHCSS). This initiative guarantees critical European IT operations remain under EU control.
Now, Red Hat is backing this initiative with its open-source Digital Sovereignty Readiness Assessment toolkit. This tool is designed to give governments and enterprises a concrete way to measure how much control they actually have over their data, infrastructure, and operations in an era of geopolitical cloud anxiety.
This new web-based, self-service survey walks organizations through 21 multiple-choice questions. Areas covered include data residency, encryption key control, disaster recovery planning for geopolitical events, and the ability to prevent sensitive data from crossing borders. The goal is to move digital sovereignty from vague policy talk to a measurable "sovereignty baseline" that IT and business leaders can act on.
[...] Red Hat's framework evaluates sovereignty maturity across seven domains: data sovereignty, technical sovereignty, operational sovereignty, assurance sovereignty, open source strategy, executive oversight, and managed services. At the end of the questionnaire, organizations receive a score mapped to four stages: foundation, developing, strategic, and advanced. It also includes a roadmap of recommended next steps and research questions for stakeholders.
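As a sketch of how a questionnaire like this might map answers to a maturity stage, consider the following; the weights, thresholds, and stage cutoffs are invented for illustration, since the article does not describe Red Hat's actual scoring rubric:

```python
# Hypothetical scoring sketch for a 21-question, 7-domain maturity survey.
# Thresholds are invented; the real rubric isn't described in the article.

DOMAINS = ["data", "technical", "operational", "assurance",
           "open_source", "executive", "managed_services"]
STAGES = [(0.25, "foundation"), (0.50, "developing"),
          (0.75, "strategic"), (1.01, "advanced")]

def maturity_stage(answers: dict[str, list[int]]) -> str:
    """answers: per-domain lists of question scores, each 0-3."""
    total = sum(sum(scores) for scores in answers.values())
    max_total = 3 * sum(len(scores) for scores in answers.values())
    ratio = total / max_total
    return next(stage for cutoff, stage in STAGES if ratio < cutoff)

survey = {d: [2, 1, 3] for d in DOMAINS}  # 21 answers total (3 per domain)
print(maturity_stage(survey))             # strategic
```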
[...] Of course, Red Hat hopes you'll turn to their services to achieve your digital sovereignty goal, but there's no requirement that you do so. You decide what to do with the analysis and whether you want to join one of the many other European-based governments, companies, and organizations that are waving goodbye to Amazon Web Services, Microsoft, or Google cloud services.
Mind you, all these US tech giants are also now offering their own digital sovereignty initiatives. The Digital Sovereignty Readiness Assessment toolkit can help you decide whether these US offerings meet your needs.
https://www.slashgear.com/2102659/mandatory-digital-vehicle-lien-title-illinois/
Since the dawn of the digital era, it's been the dream of many to have a totally paperless society. When the Jetsons first aired in 1962, the idea of a world that used screen-based technology instead of traditional paper media was a far-fetched, pie-in-the-sky notion. Here we are, over 60 years later, and although everyone now carries around pocket gizmos with more processing power than the computers aboard early Apollo spaceships that took men to the moon, we're still not a totally paperless society.
However, several states are making efforts to help make that reality, at least in part, through Electronic Lien and Titling (ELT) programs. There are currently about 30 states actively using electronic vehicle title (e-title) programs to maintain their motor vehicle records. These digital versions carry the same details (i.e., the owner's personal information, the Vehicle Identification Number, make, model, and year) and are considered just as valid as old paper documents. What's more, since they're in digital form rather than an actual paper document, they can't be lost or stolen.
The latest state to join the digital revolution is Illinois. Alexi Giannoulias, the Illinois Secretary of State, announced in early February 2026 that "moving to mandatory Electronic Lien and Titling is the next step in bringing Illinois' vehicle services fully into the digital age." He went on to say that this secure, paperless method will cut down on the red tape normally involved and, as a result, speed up the entire process (including transferring a car's title) from what used to take weeks or months — to mere hours.
The Illinois General Assembly first approved the ELT program all the way back in 2000, but outdated technology prevented a full implementation. When Giannoulias took office in 2023, he set out to update that technology and, in 2024, finally got the program up and running. Now, all Driver's Services Facilities in Illinois – as well as every financial institution that processes five or more liens annually – will be required to switch over to this new digital system by July 1, 2026.
The new online ELT program will allow liens and title records to be transmitted directly to the Secretary of State, where they're kept electronically by approved service providers. It should eliminate the time and cost of mailing and storing paper documents and allow lien and title records to be viewed in real time. Additionally, owners will be spared the hassle of physically trudging down to their nearest DMV to deal with these issues in person (after undoubtedly standing in long lines). Furthermore, it should increase accuracy and keep rejection rates below 0.1%.
Once a loan is fully paid off, financial institutions can then instantly release the vehicle title (of which there are several types you should know about). No more waiting around for it to arrive at its final destination via snail mail, where it can easily go missing. This in turn helps protect against criminal activity such as "title washing" and fraudulent lien releases. Criminals are notorious for intercepting these documents in the mail and then removing (or "washing off") information from a vehicle's title, such as liens or the fact that it was stolen, to create a false "clean title."
Concrete "battery" developed at MIT now packs 10 times the power:
Concrete already builds our world, and now it's one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced "e-c-cubed") creates a conductive "nanonetwork" inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant "batteries."
As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.
"A key to the sustainability of concrete is the development of 'multifunctional concrete,' which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world's most-used construction material, so why not take advantage of that scale to create other benefits?" asks Admir Masic, lead author of the new study, MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub) co-director, and associate professor of civil and environmental engineering (CEE) at MIT.
The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like "web" that surrounds ec3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system.
"Understanding how these materials 'assemble' themselves at the nanoscale is key to achieving these new functionalities," adds Masic.
Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, "we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms."
At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.
The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. That's about enough to power an actual refrigerator for a day.
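Those figures are mutually consistent with the volume claims earlier in the story, as a quick cross-check shows (the daily household need is implied by the article's numbers rather than stated directly):

```python
# Cross-checking the storage figures quoted above.
density_new = 2.0  # kWh per cubic metre, quoted for the organic-electrolyte ec3
vol_new = 5.0      # m^3 now needed for a home's daily energy (quoted)
vol_old = 45.0     # m^3 needed with the 2023 material (quoted)

daily_need = density_new * vol_new  # implied household figure: ~10 kWh/day
density_old = daily_need / vol_old  # ~0.22 kWh/m^3 for the 2023 material

print(daily_need)                        # 10.0
print(round(density_new / density_old))  # ~9x, the "order of magnitude" improvement
```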
While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself.
Privacy is a prerequisite for free thought, dissent, experimentation, and innovation, which are in turn prerequisites for democracy. At NBTV, Naomi Brockwell has posted four reasons why limits on privacy are absolutely not a price worth paying for mainstream adoption.
Today I participated in a Privacy Salon in Denver where we debated a proposition that cuts to the core of the modern privacy movement:
"Limits on privacy are a price worth paying for mainstream adoption of cryptographic privacy."
I was on the "no" side alongside Matt Green, with Evin McMullen and Wei Dai arguing "yes."
It was a lively, thoughtful exchange that forced us to confront a deeper question: is weakening privacy simply the cost of scale?
Below is my opening statement from the debate.
The false argument about having nothing to hide does not hold water. As Ed Snowden observed years ago, "arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."
Previously:
(2026) Ring Cancels Flock Deal After Dystopian Super Bowl Ad Prompts Mass Outrage
(2026) Discord Will Require a Face Scan or ID for Full Access Next Month
(2026) "ICE Out of Our Faces Act" Would Ban ICE and CBP Use of Facial Recognition
(2025) Big Tech Wants Direct Access to Our Brains
(2025) Discord Customer Service Data Breached; Government-ID Images, and User Details Stolen
(2025) A Surveillance Vendor Was Caught Exploiting a New SS7 Attack to Track People's Phone Locations
... and many more
Read more of this story at SoylentNews.
]]>Penn Medicine researchers find that earplugs work better than pink noise at protecting sleep from traffic noise, challenging the widespread use of ambient sound machines and apps marketed as sleep aids:
Pink noise—a continuous sound spread across a wide range of frequencies often used to promote sleep—may reduce restorative REM sleep and interfere with sleep recovery. In contrast, earplugs were found to be significantly more effective in protecting sleep against traffic noise, according to a new study published in the journal Sleep from the Perelman School of Medicine.
The findings challenge the widespread use of ambient sound machines and apps marketed as sleep aids.
"REM sleep is important for memory consolidation, emotional regulation and brain development, so our findings suggest that playing pink noise and other types of broadband noise during sleep could be harmful—especially for children whose brains are still developing and who spend much more time in REM sleep than adults," says study lead author Mathias Basner, professor of sleep and chronobiology in psychiatry.
In a sleep laboratory during eight-hour sleep opportunities over seven consecutive nights, the participants' exposure to aircraft noise—compared to none—was associated with about 23 fewer minutes per night spent in N3, the deepest sleep stage. Earplugs prevented this drop in deep sleep to a large extent. Pink noise alone at 50 decibels (often compared to the sound of a "moderate rainfall") was associated with a nearly 19-minute decrease in REM sleep.
When pink noise was combined with aircraft noise, both deep sleep and REM sleep were significantly shorter than on noise-free control nights, and time spent awake was 15 minutes longer, an effect not observed on aircraft-noise-only or pink-noise-only nights.
Participants also reported that their sleep felt lighter, they woke up more frequently, and their overall sleep quality was worse when exposed to aircraft noise or pink noise, compared to nights without noise—unless they used earplugs.
Journal Reference: Mathias Basner, Michael G Smith, Makayla Cordoza, et al., Efficacy of pink noise and earplugs for mitigating the effects of intermittent environmental noise exposure on sleep, Sleep, 2026, zsag001, https://doi.org/10.1093/sleep/zsag001
Read more of this story at SoylentNews.
]]>A while back, Freenet Africa had a nice background piece about software luminary and founder of the software freedom movement, Richard Stallman (aka RMS). The article covers his background starting with the GNU project and following through to the current, ongoing fight for digital freedom.
A Rebel with a Cause
Imagine a world where every time you want to share a cool app with a friend, you have to ask permission (and maybe pay extra). Or where fixing a simple bug in your game is impossible because the code is locked away like a secret recipe. Sounds like a tech dystopia, right? This is exactly the kind of world Richard Stallman set out to prevent. Stallman – often known just by his initials RMS – is not as instantly famous as tech giants like Bill Gates or Steve Jobs, but his impact on our digital lives is monumental. He's the mastermind behind the GNU Project, the founder of the Free Software Foundation (FSF), and the author of licenses that guarantee software freedom. In short, he's the original software freedom fighter, a kind of digital rights Gandalf (yes, with the beard to match). And for a guy who champions "free" software, he's quick to tell you: we're talking free as in freedom, not just free as in price.
In this essay, we'll dive into Richard Stallman's contributions to the digital world in an engaging (and occasionally humorous) way. By the end, you'll understand how his work laid the foundation for Linux and the whole open-source ecosystem, why he insists on calling it "GNU/Linux," and what the internet might look like if Stallman hadn't started his crusade for software freedom. Grab a snack (maybe some free-as-in-freedom nachos?) and let's explore the world of Stallman and the movement he started.
Who is Richard Stallman? (And Why Should You Care?) [...]
As others have pointed out, freedom is the start of a journey, not the destination.
Previously:
(2022) The Code: Story of GNU and Linux (2001) Complete Documentary
(2021) Richard Stallman Rejoins Free Software Foundation Board of Directors
(2018) RMS on a Radical Proposal to Keep Your Personal Data Safe
Read more of this story at SoylentNews.
]]>To quote Cheryl Warner, NASA News Chief, "At a news conference on Thursday, NASA released a report of findings from the Program Investigation Team examining the Boeing CST-100 Starliner Crewed Flight Test as part of the agency's Commercial Crew Program."
The direct link to the redacted report is:
https://www.nasa.gov/wp-content/uploads/2026/02/nasa-report-with-redactions-021926.pdf?emrc=76e561
Redacted? "For the full report, which includes redactions in coordination with our commercial partner to protect proprietary and privacy-sensitive material is available online."
It's 311 pages and they're not providing a summary, so it is likely to be extremely juicy and spicy; NASA historically waters down its press releases, and it hasn't here. So I know what I'll be reading with breakfast tea later this morning.
So the facts are above. My separate opinions below.
I'd give it a different take than the report as I've read it so far; they designed a semi-disposable, cost-reduced capsule, but space projects ALWAYS take longer, so if backflowing oxidizer will inevitably, very slowly, eat the O-rings in the helium manifold, well, it's going to sit around a long time before launching, so it's going to eat through; that's the nature of space program delays. Or propellant residue plus CO2 will rot out thruster nozzles given enough time, and space programs being space programs, they will indeed be given time to sit around and slowly rot. They still are not sure about the RCS thrusters jamming, but it seems likely to be a lack of ground testing during R&D; Teflon behaves like a viscous liquid over long timescales under stress, the key being over a long time.
The "Hardware Longevity and Sparing Concerns" section hints to me that the program is about to be cancelled if it doesn't cancel itself first. Reads like they're not permitted under the terms of the investigation to recommend program shutdown but they wanted to recommend it anyway.
The report follows that with numerous identified management failures at NASA and Boeing. This is the new Boeing, which is no longer competent, so "NASA's hands-off contract approach limited insight" precisely when Boeing needed adult supervision as they've downsized, outsourced, refused to recruit, or otherwise eliminated their competent adults for various reasons over the years. But who knows, what do y'all think?
Read more of this story at SoylentNews.
]]>It opens inside Bing in your default browser:
Last year, we reported on a speed test feature coming to Windows, built right into the taskbar, where you could gauge your internet connection without venturing out to a browser. In reality, it was more like a shortcut that would still open Bing and take you to a miniaturized version of Ookla's Speedtest. Today, that feature is finally here in the Insider program, as part of Build 26100.7918 and 26200.7918.
In these updates pushed to the Release Preview channel, you'll now see an option to "Perform speed test" when you right-click on the network icon or open the Wi-Fi/cellular quick settings. Upon clicking, your default browser will open up Bing, where you'll see a simplified Ookla interface with a meter in the middle, and three stats below: Latency, Download, and Upload.
That means this is technically not a "native" feature, rather just a website link in your taskbar. Still, for the uninitiated, it can be a convenient way to check internet speed. Let's say you're in a game and suddenly start experiencing packet loss; instead of Alt-tabbing into a browser for a speed test, you can just right-click on your Ethernet icon and go there directly.
This feature will save you a click or two; however, some users may be disappointed by yet another web wrapper implemented inside Windows. Windows has had a poor run of stability recently, with even Microsoft acknowledging the slip in quality, so a built-in taskbar speed test is probably not high on most users' list of priorities.
Read more of this story at SoylentNews.
]]>https://buttondown.com/creativegood/archive/its-time-to-get-rid-of-networked-cameras/
Amazon did us all a service recently by airing a Super Bowl commercial showing how Ring doorbell cameras spy on everyone walking past. (I discussed this on Techtonic this week with Chris Gilliard, aka hypervisible: episode page / podcast. Recommended listening.)
In the instant that image aired, millions of Americans finally understood what I – and other tech critics – have been trying to warn about for years: networked cameras are spying on you. The blue circles show the reach of Ring cameras, and – crucially – indicate that they're all part of one network, controlled by Amazon, which can share or sell data to any number of third parties.
Previously: Ring Cancels Flock Deal After Dystopian Super Bowl Ad Prompts Mass Outrage
Read more of this story at SoylentNews.
]]>Australian scientists say they've made a "eureka moment" breakthrough in gas separation and storage that could radically reduce energy use in the petrochemical industry, while making hydrogen much easier and safer to store and transport in a powder.
Nanotechnology researchers, based at Deakin University's Institute for Frontier Materials, claim to have found a super-efficient way to mechanochemically trap and hold gases in powders, with potentially enormous and wide-ranging industrial implications.
Mechanochemistry is a relatively recently coined term, referring to chemical reactions that are triggered by mechanical forces as opposed to heat, light, or electric potential differences. In this case, the mechanical force is supplied by ball milling – a low-energy grinding process in which a cylinder containing steel balls is rotated such that the balls roll up the side, then drop back down again, crushing and rolling over the material inside.
The team has demonstrated that grinding certain amounts of certain powders with precise pressure levels of certain gases can trigger a mechanochemical reaction that absorbs the gas into the powder and stores it there, giving you what's essentially a solid-state storage medium that can hold the gases safely at room temperature until they're needed. The gases can be released as required, by heating the powder up to a certain point.
The process is repeatable, and Professor Ian Chen, co-author on the new study published in the journal Materials Today, tells us via phone that the boron nitride powder used in the first experiments only loses "about a couple of percent" of its absorption capability each storage and release cycle. "Boron nitride is very stable," he tells us, "and graphene too. We're looking at a restoration treatment that can clean the powders and restore their absorption levels, but we need to prove that it'll work."
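Taking the quoted loss rate at face value, a quick compounding sketch (our numbers, assuming a flat 2% loss per cycle) shows why that restoration treatment matters:

```python
# Capacity retention after n storage/release cycles, assuming a flat 2%
# loss per cycle per the "couple of percent" quote above (illustrative).
loss_per_cycle = 0.02
for n in (10, 50, 100):
    remaining = (1 - loss_per_cycle) ** n
    print(f"after {n:>3} cycles: {remaining:.0%} of original capacity")
# after  10 cycles: 82%
# after  50 cycles: 36%
# after 100 cycles: 13%
```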
The results are absolutely remarkable from a numbers standpoint. This process, for example, could separate hydrocarbon gases out from crude oil using less than 10% of the energy that's needed today. "Currently, the petrol industry uses a cryogenic process," says Chen. "Several gases come up together, so to purify and separate them, they cool everything down to a liquid state at very low temperature, and then heat it all together. Different gases evaporate at different temperatures, and that's how they separate them out."
Read more of this story at SoylentNews.
]]>Cosmic mystery of the impossibly high-energy neutrino solved by "dark charge" model of black holes :
In 2023, a subatomic particle called a neutrino crashed into Earth with an impossibly huge amount of energy. In fact, no known sources anywhere in the universe can produce that much energy, 100,000 times more than the highest-energy particle ever produced by the Large Hadron Collider, Earth's most powerful particle accelerator. However, a team of physicists at the University of Massachusetts Amherst recently hypothesized that something like this could happen when a special kind of black hole, called a quasi-extremal primordial black hole, explodes.
Black holes exist, and we have a good understanding of their life cycle: an old, large star runs out of fuel, implodes in a massively powerful supernova, and leaves behind an area of spacetime with such intense gravity that nothing, not even light, can escape. These black holes are incredibly heavy and are essentially stable.
But, as physicist Stephen Hawking pointed out in 1970, another kind of black hole – a primordial black hole – could be created not by the collapse of a star, but from the universe's primordial conditions shortly after the Big Bang. Primordial black holes exist only in theory so far. And, like standard black holes, they're so massively dense that almost nothing can escape them ... which is what makes them black. However, despite their density, primordial black holes could be much lighter than the black holes we have so far observed. Furthermore, Hawking showed that primordial black holes could slowly emit particles via what is now known as Hawking radiation if they got hot enough.
Andrea Thamm, co-author of the new research and assistant professor of physics at UMass Amherst, said:
The lighter a black hole is, the hotter it should be and the more particles it will emit. As primordial black holes evaporate, they become ever lighter, and so hotter, emitting even more radiation in a runaway process until explosion. It's that Hawking radiation that our telescopes can detect.
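For context, the standard Hawking temperature formula (textbook physics, not taken from the article) makes that inverse relationship between mass and temperature explicit:

```latex
% Hawking temperature of a black hole of mass M: temperature scales as
% 1/M, so as a primordial black hole evaporates and loses mass, it gets
% hotter and radiates faster, ending in the runaway explosion described.
\[
  T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}}
  \;\propto\; \frac{1}{M}
\]
```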
Read more of this story at SoylentNews.
]]>https://gizmodo.com/theres-a-new-term-for-workers-freaking-out-over-being-replaced-by-ai-2000723019
There isn't a ton of evidence to suggest that the introduction of AI has led to significant job losses, yet. But it has led to a significant amount of talk about job losses, and that appears to be taking a real toll on people. According to research published in the journal Cureus and spotted by Futurism, workers are increasingly suffering from distress caused by the constant fear of being replaced, and it's gotten so bad that it needs its own term.
The researchers propose calling this new, modern anxiety "AI replacement dysfunction" or AIRD. The authors define it as a "new, proposed clinical construct describing the psychological and existential distress that could be experienced by individuals facing the threat or reality of job displacement due to artificial intelligence (AI)." The condition carries with it several common symptoms including anxiety, insomnia, depression, and identity confusion "that may reflect deeper fears about relevance, purpose, and future employability." It can also lead to sufferers dealing with additional challenges like psychiatric disorders and substance abuse.
The anxiety over AI is definitely real. A recent Reuters/Ipsos poll found that 71% of respondents said they were concerned that AI will put "too many people out of work permanently." Pew Research found that more than half of Americans are worried about how AI in the workplace will impact their jobs, and most lower- and middle-class people believe AI will worsen their job prospects in the future. Another study found that people working in jobs particularly susceptible to automation are more likely to report feeling more stress and other negative emotions.
And while surprisingly few job cuts have actually been attributed to AI directly (despite the fact that many companies have used AI as cover for broader layoffs), there certainly does seem to be damage being done to the workforce, as it relates to entry-level roles, in particular. Early-career workers are definitely having a much harder time finding jobs, which can at least in part be attributed to companies being more willing to turn over that labor to AI. But the reality is that the economy sucks regardless of the introduction of technological innovation, and the companies responsible for building AI benefit from the narrative that their models are capable of doing human-level work. So hearing about AI taking over your job is basically unavoidable, whether the threat is real or not.
While AIRD isn't an accepted clinical diagnosis yet, the researchers have created a framework to help identify it, including a screening questionnaire designed to help clinicians spot potential symptoms. Treatments for the condition will be up to the clinician, but the researchers highlight Cognitive Behavioral Therapy and other cognitive restructuring techniques to "help patients build psychological resilience and restore a coherent sense of self."
Read more of this story at SoylentNews.
]]>Ten days ago, the social chat app Discord announced that it would launch “teen-by-default” settings for its global audience. As part of this update, all new and existing users worldwide will have a teen-appropriate experience, with updated communication settings, restricted access to age-gated spaces, and content filtering that preserves privacy and meaningful connections, the platform said.
This, of course, means that to use Discord the way you are used to, you’ll have to let it scan your face, and the internet wasn’t happy. Many communities quickly announced their move to other platforms. Others, like the security researcher Celeste, who goes by the handle vmfunc, were convinced there would be a workaround.
Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.
More at The Rage
Read more of this story at SoylentNews.
]]>Software engineer Kevin McDonald has investigated the topology of the Internet itself before. He enjoys the open data archaeology of this nature. In this recent edition, he has used BGP routing to visualize the Internet again.
For the past few years, I've been trying to make the physical reality of the Internet visible with my Internet Infrastructure Map. This map shows the network of undersea fiber-optic cables along with peering bandwidth, grouped by city. I update the map annually, but I don't want to just pull the latest data and call it a day. In this post I discuss how the map evolved this year and what I did to make it happen, but you can skip to the good part by viewing it here: map.kmcd.dev.
For the 2026 edition, I wanted to better answer the question: where does the Internet actually live? By layering on BGP routing tables alongside physical infrastructure data, I'm now closer to answering that question.
The result is a concept I call “Logical Dominance.” Each city's dominance is calculated by summing total address space of IPv4 subnets that are “homed” in that city. How can I tell where IP addresses are homed? This required analyzing global routing tables to trace IP ownership back to specific geographies. Read on to find out how I accomplished this!
Mapping BGP prefixes to specific locations turned out to be a challenge, and relying on BGP data meant he had to focus on IPv4 this time.
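A minimal sketch of the "Logical Dominance" aggregation described above, assuming the hard part (geolocating each announced prefix to a city from global routing tables) is already done; the (prefix, city) pairs below are invented for illustration:

```python
import ipaddress
from collections import defaultdict

# Toy (prefix, city) pairs standing in for the output of BGP analysis;
# the prefixes are documentation ranges, not real announcements.
announcements = [
    ("203.0.113.0/24", "Frankfurt"),
    ("198.51.100.0/22", "Ashburn"),
    ("192.0.2.0/24", "Frankfurt"),
]

# A city's "dominance" is the total IPv4 address space homed there:
# sum the number of addresses in every prefix mapped to that city.
dominance = defaultdict(int)
for prefix, city in announcements:
    dominance[city] += ipaddress.ip_network(prefix).num_addresses

for city, addrs in sorted(dominance.items(), key=lambda kv: -kv[1]):
    print(f"{city}: {addrs:,} addresses")
# Ashburn: 1,024 addresses / Frankfurt: 512 addresses
```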
Previously:
(2018) Mapping the Whole IPv4 Internet with Hilbert Curves
(2016) Revisiting the Carna Botnet
(2014) Undersea Cables Wiring the Earth
Read more of this story at SoylentNews.
]]>https://www.theregister.com/2026/02/18/palo_alto_q2_26/
If enterprises are implementing AI, they're not showing it to Palo Alto Networks CEO Nikesh Arora, who on Tuesday said business adoption of the tech lags consumer take-up by at least a couple of years – except for coding assistants.
"Consumers are far outstripping enterprise for the moment, but we expect enterprise will surely and slowly get on that bandwagon," he said on the company's Q2 earnings call.
Arora likened business uptake of AI to the cloud computing shift, which he said took two or three years before enterprises started migrating applications.
"Right now ... tell me how many enterprise AI apps are you using which are driving tremendous amounts of throughput," he asked, and answered himself "I can't think of anything but coding apps."
Coding apps aren't great for Palo Alto's business because they don't generate a lot of network traffic to which it can apply its security smarts. Arora thinks his security vendor peers know this.
"We're all laying the groundwork right now. It is ... sort of an arms race to try and see who can get the AI security sort of platform up and running as quickly as we can."
But the limited enterprise AI adoption Arora has seen does pose some immediate challenges to Palo Alto.
"There is now enterprise adoption that we're beginning to see where customers are running perhaps millions of tokens in one or two particular applications they're working with some of the LLM providers on, and that's where we see the traffic," he said. That traffic is on the LAN and the CEO doesn't think existing networks struggle to handle it.
"I think the challenge right now is consolidating that traffic," he said. "How do you get all the AI traffic to be in one place? So you can understand it, provide visibility, look at the ability to control it and be able to act on it."
The CEO said that as this sort of AI-related traffic grows "it needs a different set of controls and tools."
Palo Alto is already getting its hands on those tools: on Tuesday it put to bed rumours it would acquire agentic AI endpoint security startup Koi by announcing it has done the deal.
Arora pointed to Palo Alto's recent acquisitions of Chronosphere and CyberArk as further evidence of the company's moves to ensure it builds a portfolio of products to secure the AI enterprises will eventually implement.
The CEO expressed confidence Palo Alto has the products it needs today, saying customers know they can't prepare for AI if they are running a tangle of security tools and are therefore consolidating to the kind of platforms the company offers.
Demand for those products helped Palo Alto to $2.6 billion in Q2 revenue, which represented 15 percent year-over-year growth.
Execs pointed to the success of the company's subscription offerings, noting 23 percent growth in remaining performance obligations, which now stand at $16 billion. And they predicted Q3 revenue would grow at least 28 percent to land between $2.941 billion and $2.945 billion.
All of those nice numbers didn't impress investors, who knocked six percent off the company's share price – perhaps because they weren't thrilled by predictions that profits will ease.
Read more of this story at SoylentNews.
]]>For decades, antibiotics have been humanity's frontline defense against bacterial infections, yet these essential medications have also led to the rise of drug-resistant "superbugs." Now, researchers have discovered an ancient strain of bacteria that managed to develop this superpower thousands of years before humans ever invented antibiotics.
A study published Tuesday in the journal Frontiers in Microbiology describes Psychrobacter SC65A.3, a bacterial strain discovered frozen inside 5,000-year-old layers of cave ice in Romania. Testing revealed that SC65A.3 is resistant to 10 modern antibiotics and carries more than 100 genes linked to resistance despite never being exposed to these drugs.
"Studying microbes such as Psychrobacter SC65A.3 retrieved from millennia-old cave ice deposits reveals how antibiotic resistance evolved naturally in the environment, long before modern antibiotics were ever used," co-author Cristina Purcarea, a senior scientist at the Institute of Biology Bucharest of the Romanian Academy, said in a release.
Antibiotic resistance is an urgent threat to global public health. In the U.S. alone, more than 2.8 million antibiotic-resistant infections occur each year, and more than 35,000 people die as a result, according to the CDC's 2019 Antibiotic Resistance Threats Report.
This threat has grown in tandem with the rise of antibiotic use. Antibiotic resistance is a classic example of natural selection: when microbes are exposed to a drug, most die, but a few survive thanks to protective genetic traits. Those survivors then pass their resistance genes on to the next generation, which passes them on to the next, giving rise to superbugs.
While exposure to antibiotics amplifies the prevalence of resistance genes, it does not imbue microbes with these protective traits. Those arise naturally through random genetic mutations and the constant pressure to out-perform other microorganisms in the environment, many of which produce their own antimicrobial compounds.
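A toy selection model (our illustrative numbers, not from the study) shows how quickly exposure amplifies a resistance gene that mutation has already produced:

```python
# One antibiotic exposure per generation; survival rates and regrowth
# factor are invented for illustration.
pop = {"resistant": 1e3, "susceptible": 1e9}      # starting counts
survival = {"resistant": 0.90, "susceptible": 0.01}
REGROWTH = 100                                    # per-generation regrowth

for gen in range(1, 4):
    pop = {k: n * survival[k] * REGROWTH for k, n in pop.items()}
    frac = pop["resistant"] / (pop["resistant"] + pop["susceptible"])
    print(f"generation {gen}: resistant fraction = {frac:.2%}")
# The resistant share climbs from a millionth of the population toward
# dominance in three exposures: selection amplifies existing traits.
```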
The ancient Psychrobacter SC65A.3 strain is a perfect example of how these natural processes lead to antibiotic resistance. Purcarea and her colleagues found it inside an 82-foot (25-meter) ice core they extracted from Scarisoara Ice Cave in northwestern Romania. The core represents 13,000 years of climatic history, including the 5,000-year-old ice layers that contained SC65A.3.
In the lab, the researchers isolated various bacterial strains from the core and sequenced their genomes to determine which genes allowed the strain to survive such low temperatures and which promoted antimicrobial resistance. When they tested SC65A.3 against 28 widely used antibiotics, they found it was resistant to more than a third of them.
"The 10 antibiotics we found resistance to are widely used in oral and injectable therapies used to treat a range of serious bacterial infections in clinical practice," including tuberculosis, colitis, and urinary tract infections, Purcarea explained.
The findings underscore a frequently overlooked public health threat associated with climate change, according to the study's authors.
"If melting ice releases these microbes, these genes could spread to modern bacteria, adding to the global challenge of antibiotic resistance," Purcarea said. As the global temperature rises, the risk of releasing ancient superbugs into the environment grows. Studying these bacterial strains, however, can also lead to the discovery of unique enzymes and antimicrobial compounds that inspire new drugs and other biotechnological innovations, Purcarea noted.
The SC65A.3 genome contains 11 genes that may be able to kill or stop the growth of other bacteria, fungi, and viruses, for example. It also contains nearly 600 genes with unknown functions, suggesting that many more novel biological mechanisms could be hiding in this superbug's DNA.
"These ancient bacteria are essential for science and medicine," Purcarea said, "but careful handling and safety measures in the lab are essential to mitigate the risk of uncontrolled spread."
Read more of this story at SoylentNews.
]]>Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts."
Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw.
[...]
Peter Steinberger, OpenClaw's solo founder, launched it as a free, open source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.
[...]
Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw.
[...]
"Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.
[...]
Some companies concerned about OpenClaw are choosing to trust the cybersecurity protections they already have in place rather than introduce a formal or one-off ban. A CEO of a major software company says only about 15 programs are allowed on corporate devices. Anything else should be automatically blocked, says the executive, who spoke on the condition of anonymity to discuss internal security protocols.
[...]
Massive, the web proxy company, is cautiously exploring OpenClaw's commercial possibilities. Grad says it tested the AI tool on isolated machines in the cloud and then, last week, released ClawPod, a way for OpenClaw agents to use Massive's services to browse the web. While OpenClaw is still not welcome on Massive's systems without protections in place, the allure of the new technology and its moneymaking potential was too great to ignore. OpenClaw "might be a glimpse into the future. That's why we're building for it," Grad says.
Read more of this story at SoylentNews.
]]>Texas is suing TP-Link Systems, a California-based maker of wi-fi routers, accusing it of concealing its ties to China and potentially exposing American users' home networks to hackers:
Texas Attorney General Ken Paxton announced the lawsuit on Feb. 17, alleging deceptive marketing practices. Paxton first began investigating TP-Link in October 2025, and Texas Gov. Greg Abbott has since prohibited state employees from using TP-Link products.
TP-Link, founded in China in 1996, states on its website that it underwent a restructuring in 2024 that split the company into TP-Link Systems and TP-Link Technologies, which serves the mainland Chinese market, and that the two entities are no longer affiliated. Devices sold to U.S. consumers also carry "Made in Vietnam" labels.
Paxton, however, alleges that those "Made in Vietnam" stickers are meant to obscure a supply chain "deeply entrenched in China," where nearly all of TP-Link's components are sourced before being shipped to Vietnam for final assembly.
Those supply-chain ties, the lawsuit claims, leave the company vulnerable to the Chinese Communist Party's (CCP) counterespionage and national security laws, which require Chinese companies and citizens to assist state intelligence efforts, including providing foreign user data upon request. The complaint also alleges that firmware vulnerabilities in TP-Link hardware have already "exposed millions of consumers to severe cybersecurity risks."
Previously: FCC Orders TP-Link to Allow Third-Party Firmware on Their Routers
Read more of this story at SoylentNews.
]]>Canadian uranium developer NexGen Energy has held preliminary talks with data centre providers about securing finance for a new mine that could supply fuel for power plants needed for artificial intelligence, its CEO said on Wednesday:
Soaring demand for AI is driving a massive build-out of power-hungry data centres, in turn boosting the need for new generation capacity, including nuclear plants that will require uranium.
To meet that need, NexGen CEO Leigh Curyer said big tech firms will follow the trend set by automakers, who offered finance for battery material mine development several years ago to ensure there was enough supply for an expected boom in demand for electric vehicles.
"It's coming. You've seen it with automakers. These tech companies, they're under an obligation to ensure the hundreds of billions that they are investing in the data centres are going to be powered," he said, speaking at a Melbourne Mining Club event.
NexGen is developing its Rook 1 uranium project in Saskatchewan and has said it expects to finalise a funding package in the second quarter.
As reported on OilPrice.com:
Global electricity demand increased by 3% annually in 2025, following growth of 4.4% in 2024, the International Energy Agency (IEA) said in its recent Electricity 2026 report.
Between 2026 and 2030, the annual average growth rate would be 3.6%, driven by higher consumption from industry, electric vehicles (EVs), air conditioning, and data centers, according to the agency.
Artificial intelligence, data centers, and advanced manufacturing support the return to growth in power demand in advanced economies, the IEA said.
U.S. electricity demand rose by 2.1% in 2025 and is expected to grow by nearly 2% annually through 2030. The rapid expansion of data centers will drive half of the increase, the agency noted.
Also at ZeroHedge.
Read more of this story at SoylentNews.
]]>[Source]: ETH Zurich (Eidgenössische Technische Hochschule Zürich)
Researchers from ETH Zurich have discovered serious security vulnerabilities in three popular, cloud-based password managers. During testing, they were able to view and even make changes to stored passwords.
People who regularly use online services have between 100 and 200 passwords. Very few can remember every single one. Password managers are therefore extremely helpful, allowing users to access all their passwords with just a single master password.
Most password managers are cloud based. A major advantage this offers users is the ability to access their passwords from different devices and also share them with friends and family members. Security is the most important feature of these password managers since, ultimately, users store sensitive data in these encrypted storage platforms, commonly called "vaults". This can also include login details for online banking or credit cards.
Most service providers therefore promote their products with the promise of "zero-knowledge encryption". This means they assure users that their stored passwords are encrypted and even the providers themselves have "zero knowledge" of them and no access to what has been stored. "The promise is that even if someone is able to access the server, this does not pose a security risk to customers because the data is encrypted and therefore unreadable. We have now shown that this is not the case", explains Matilda Backendal.
The team conducted a study to scrutinise the security architecture of three popular password manager providers: Bitwarden, LastPass and Dashlane. Between them, they serve around 60 million users and have a 23 per cent market share. The researchers demonstrated 12 attacks on Bitwarden, 7 on LastPass and 6 on Dashlane.
[Journal Reference]: https://eprint.iacr.org/2026/058 (Cryptology ePrint Archive)
Read more of this story at SoylentNews.
]]>Sony has made streaming anime pricier since buying Crunchyroll:
Crunchyroll is one of the most popular streaming platforms for anime viewers. Over the past six years, the service has raised prices for fans, and today, it announced that it's increasing monthly subscription prices by up to 20 percent.
Sony bought Crunchyroll from AT&T in 2020. At the time, Crunchyroll had 3 million paid subscribers and an additional 197 million users with free accounts, which let people watch a limited number of titles with commercials. At the time, Crunchyroll monthly subscription tiers cost $8, $10, or $15.
Like many large technology companies that buy a smaller, beloved product, Sony made controversial changes after the acquisition. The Tokyo-based company folded rival Funimation into Crunchyroll; Sony shut down Funimation, which it bought in 2017, in April 2024.
In the process, Sony erased people's digital Funimation libraries that Funimation originally marketed as being available "forever, but there are some restrictions." Sony also reduced the number of free titles on Crunchyroll in 2022 before eliminating the free option completely on December 31, 2025.
Crunchyroll gets more expensive
Today, Crunchyroll raised prices for its remaining tiers. The cheapest plan, Fan, went from $8 per month to $10 per month. The Mega tier, which allows for streaming from up to four devices simultaneously, went from $12 to $14. The Ultra tier, which supports simultaneous streaming across six devices and includes access to the Crunchyroll Manga app, increased from $16 to $18.
[...] Crunchyroll said that the higher prices would "give fans more of what they love." Today's announcement pointed to "recent and upcoming" changes: teen profiles and PIN protection; multiple profiles; the ability to skip intro theme songs and ending credits; and "expanded device compatibility."
Crunchyroll's price hike may further frustrate subscribers, considering that the streaming service recently eliminated its free tier and acquired one of its strongest, and often cheaper, rivals. The result is that Crunchyroll and Netflix have the lion's share of the anime streaming market. In 2023, Wall Street research firm Bernstein Research reported that Crunchyroll (40 percent) and Netflix (42 percent) controlled 82 percent of the overseas (non-Japanese) anime streaming market.
[...] Anime is an increasingly lucrative opportunity for Sony, and its success so far suggests it won't stray from its strategy.
[...] The silver lining for anime and streaming viewers is that Crunchyroll is demonstrating that a niche service can scale and profit, and that when it does, it can add new experiences and further interest in its specialty. For its part, Crunchyroll recently relaunched its app for reading digital copies of manga. Crunchyroll shuttered its original Manga app in December 2023. The revived app uses an updated revenue-sharing model that's said to better benefit publishers.
Still, the changes that Sony makes to media companies it buys are worth scrutiny, especially as it continues acquiring companies. Sony-owned anime production company Aniplex announced that it bought rival Egg Firm today.
As streaming prices rise, industry consolidation is also picking up steam. More large companies buying and potentially merging streaming services will inevitably impact subscribers' experiences. For these customers, it can be worrying to consider what changes streaming mergers can bring. While there's hope for more features and content, there's also risk for higher prices and fewer options.
Read more of this story at SoylentNews.
]]>"This is definitely not about dogs," senator says, urging a pause on Ring face scans:
Amazon and Flock Safety have ended a partnership that would've given law enforcement access to a vast web of Ring cameras.
The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.
The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new "Search Party" feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.
At that point, the ad takes a "creepy" turn, Sen. Ed Markey (D.-Mass.) told Amazon CEO Andy Jassy in a letter urging changes to enhance privacy at the company.
Illustrating how a single Ring post could use AI to instantly activate searchlights across an entire neighborhood, the ad shocked critics like Markey, who warned that the same technology could easily be used to "surveil and identify humans."
Markey suggested that in blasting out this one frame of the ad to Super Bowl viewers, Amazon "inadvertently revealed the serious privacy and civil liberties risks attendant to these types of Artificial Intelligence-enabled image recognition technologies."
In his letter, Markey also shared new insights from his prior correspondence with Amazon that he said exposed a wide range of privacy concerns. Ring cameras can "collect biometric information on anyone in their video range," he said, "without the individual's consent and often without their knowledge." Among privacy risks, Markey warned that Ring owners can retain swaths of biometric data, including face scans, indefinitely. And anyone wanting face scans removed from Ring cameras has no easy solution and is forced to go door to door to request deletions, Markey said.
On social media, other critics decried Amazon's ad as "awfully dystopian," declaring it was "disgusting to use dogs to normalize taking away our freedom to walk around in public spaces." Some feared the technology would be more likely to benefit police and Immigration and Customs Enforcement (ICE) officers than families looking for lost dogs.
Amazon's partnership with Flock, announced last October as coming soon, only inflamed those fears. So did the company's recent rollout of a feature using facial recognition technology called "Familiar Faces"—which Markey considers so invasive, he has demanded that the feature be paused.
"What this ad doesn't show: Ring also rolled out facial recognition for humans," Markey posted on X. "I wrote to them months ago about this. Their answer? They won't ask for your consent. This definitely isn't about dogs—it's about mass surveillance."
[...] But while Ring may have hurt its brand, WebProNews, which reports on business strategy in the tech industry, suggested that "the fallout may prove more consequential for Flock Safety than for Ring." For Flock, the Ring partnership represented a meaningful expansion of their business and "data collection capabilities," WebProNews reported. And because this all happened around one of the most-watched TV events of the year, other tech companies may be more hesitant to partner with Flock after Amazon dropped the integration and privacy advocates witnessed the seeming power of their collective outrage.
[...] Ring's statements so far do not "acknowledge the real issue," Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed "Americans want more control of their privacy right now" and "are savvy enough to see through sappy dog pics."
"Stop trying to build a surveillance dystopia consumers didn't ask for" and "focus on shipping good, private products," Scott-Railton said.
He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.
Read more of this story at SoylentNews.
]]>The quality of the education that our children are receiving in America's public schools just continues to go down. At one time, the concern was that not enough students were taking advanced courses. But now we have reached a point where a very large portion of our high school graduates cannot read effectively, cannot write effectively and cannot do basic math effectively:
Dr. Kent Ingle is the president of Southeastern University, and he recently authored an excellent piece in which he warned that reading levels among incoming college students are so bad that many are struggling "to understand basic text on a page"...
A stunning report revealed that many university professors now find themselves teaching students who struggle to read, not just to interpret literature or write essays, but to understand basic text on a page. According to Fortune, a growing number of Gen Z students enter college unable to "read effectively," forcing professors to break down even simple passages line by line.
That trend should alarm every parent, employer and policymaker in this country. It is not just an academic concern. It is a cultural crisis.
This is not some random guy making these claims.
This is the president of a major university.
[...] Large numbers of students entering our colleges must take remedial math courses that teach concepts that should have been taught in elementary and middle school...
Five years ago, about 30 incoming freshmen at UC San Diego arrived with math skills below high-school level. Now, according to a recent report from UC San Diego faculty and administrators, that number is more than 900—and most of those students don't fully meet middle-school math standards. Many students struggle with fractions and simple algebra problems. Last year, the university, which admits fewer than 30 percent of undergraduate applicants, launched a remedial-math course that focuses entirely on concepts taught in elementary and middle school. (According to the report, more than 60 percent of students who took the previous version of the course couldn't divide a fraction by two.) One of the course's tutors noted that students faced more issues with "logical thinking" than with math facts per se. They didn't know how to begin solving word problems.
The university's problems are extreme, but they are not unique. Over the past five years, all of the other University of California campuses, including UC Berkeley and UCLA, have seen the number of first-years who are unprepared for precalculus double or triple. George Mason University, in Virginia, revamped its remedial-math summer program in 2023 after students began arriving at their calculus course unable to do algebra, the math-department chair, Maria Emelianenko, told me.
Previously: Professors Issue Warning Over Surge in College Students Unable to Read
Read more of this story at SoylentNews.
]]>Brace Yourself For Price Surges Ahead!
Well, the ongoing AI supercycle has disrupted supply chains, and we have talked about DRAM and NAND before, but it appears HDDs are also in significant demand: according to WD's CEO, Irving Tan, the manufacturer's entire capacity for this year is booked out. Speaking at the Q2 earnings call, Tan revealed that the focus has been on developing products that cater to the needs of enterprise customers. Given the pace of hyperscaler buildout, it's fair to say demand for HDDs will only increase going forward.
Yeah, thanks, Erik. As we highlighted, we're pretty much sold out for calendar 2026. We have firm POs with our top seven customers. And we've also established LTAs with two of them for calendar 2027 and one of them for calendar 2028. Obviously, these LTAs have a combination of volume of exabytes and price.
- WD's CEO
When we talk about major PC-first manufacturers pivoting towards AI, it is clear where the demand is coming from: WD's VP of Investor Relations noted that the company's cloud revenue accounted for 89% of total revenue, while consumer revenue accounted for just 5%. When the numbers are that lopsided, as in WD's case, it makes sense on a business level to pivot towards enterprise demand while sidelining the client segment, as every other manufacturer is currently doing. And in the case of Western Digital, well, this strategy is working for them.
The demand is primarily driven by the large-scale data center buildout occurring worldwide, with HDD requirements being more prevalent in US-based facilities. For those unaware, AI is nothing without data, and to store large quantities of data, CSPs use HDDs, which are the most cost-effective and efficient storage medium. The data scales to exabytes in data centers, encompassing content such as scraped web data, processed data backups, inference logs, and related data. Like AI memory, HDDs have seen massive adoption in recent years, putting suppliers under pressure.
With the AI frenzy, we have seen major PC components go into short supply, and unfortunately, this trend will persist for quite some time before we witness a meaningful recovery.
Read more of this story at SoylentNews.
]]>Humanoid robotics has advanced incredibly in the past year.
This is a robot show by Unitree, a leading Chinese maker, that aired this week during Chinese New Year celebrations on the national CCTV network.
The robots breakdance, do acrobatics, fight with numbchuks [sic]... incredible. The video speaks for itself!
https://www.youtube.com/watch?v=mUmlv814aJo [4:50 -Ed]
Reuters reported on the Gala a couple of days ago, saying:
Four rising humanoid robot startups - Unitree Robotics, Galbot, Noetix and MagicLab - demonstrated their products at the gala, a televised event and touchstone for China comparable to the Super Bowl for the United States.
The programme's first three sketches prominently featured humanoid robots, including a lengthy martial arts demonstration where over a dozen Unitree humanoids performed sophisticated fight sequences waving swords, poles and nunchucks in close proximity to human children performers.
The fight sequences included a technically ambitious one that imitated the wobbly moves and backward falls of China's "drunken boxing" martial arts style, showing innovations in multi-robot coordination and fault recovery - where a robot can get up after falling down.
The programme's opening sketch also prominently featured Bytedance's AI chatbot Doubao, while four Noetix humanoid robots appeared alongside human actors in a comedy skit and MagicLab robots performed a synchronised dance with human performers during the song "We Are Made in China".
The hype surrounding China's humanoid robot sector comes as major players including AgiBot and Unitree prepare for initial public offerings this year, and domestic artificial intelligence startups release a raft of frontier models during the lucrative nine-day Lunar New Year public holiday.
Last year's gala stunned viewers with 16 full-size Unitree humanoids twirling handkerchiefs and dancing in unison with human performers.
[...]
Behind the spectacle of robots running marathons and executing kung-fu kicks and backflips, China has positioned robotics and AI at the heart of its next-generation AI+ manufacturing strategy, betting that productivity gains from automation will offset pressures from its ageing workforce.
Read more of this story at SoylentNews.
]]>When a crowd gets something right, like guessing how many beans are in a jar, forecasting an election, or solving a difficult scientific problem, it's tempting to credit the sharpest individual in the room. But new research suggests focusing on the 'expert' can lead groups astray.
In a study published in Proceedings of the National Academy of Sciences, researchers led by Joshua Plotkin at the University of Pennsylvania show that collective intelligence, or the "wisdom of crowds"—a phenomenon wherein groups often outperform individuals on complex tasks—is more likely to emerge when individuals are rewarded not for being right themselves, but for helping the group get closer to the truth.
Computer scientists can engineer collective intelligence in algorithms with centralized control, assigning subtasks, tuning whose input counts more, and basically running the whole operation like a tower controller. But real-world groups, whether people, animals, or loose networks of decision-makers, rarely have that kind of top-down, organized control.
Instead, individuals in natural settings more often tend to learn socially, copying strategies from one another that appear successful.
"Social learning is everywhere," Plotkin says, "but it can cause a problem for collective problem solving. The very mechanism that spreads good ideas can also wipe out the vital variation a group needs to perform well together."
[...] To determine how individual incentives might produce collective intelligence, they tested three reward schemes: rewarding those whose predictions are accurate—the experts; rewarding "niche experts," those whose predictions are accurate but focus on underrepresented factors; and rewarding "reformers," those whose contributions improve the collective prediction regardless of their own personal accuracy.
They found that rewarding the standard experts fails because it inadvertently destroys the diversity of opinion. In this scenario, individuals simply imitate the single most successful peer until everyone is watching the same factor and ignoring the rest of the puzzle.
Rewarding niche experts results in predictions that can be accurate but fragile; the group struggles when the expert is out of their depth. When a problem changes suddenly, when factors are correlated, when some information is missing, or when the environment is constantly shifting, the niche-expert approach can still converge, but it can converge to the wrong prediction.
By contrast, rewarding reformers facilitates diverse beliefs and collective accuracy, helps the process recover after changes (e.g., to the task), and keeps working when individual judgments are noisy, biased, overconfident, or anomalous. What matters is not who is right, but whose contribution moves the group's prediction in a better direction.
Speaking to more natural, real-world scenarios, first author Guocheng Wang says, "Reformers don't need to be accurate on their own, but they should be rewarded for improving the collective accuracy of the group."
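A minimal sketch of a reformer-style reward under simple averaging (our toy construction, not the paper's model): each member is credited with how much the group's error shrinks when their prediction is included.

```python
# Group predicts a hidden value by averaging; a "reformer" reward is the
# reduction in group error attributable to including each member.
def group_error(preds, truth):
    return (sum(preds) / len(preds) - truth) ** 2

def reformer_rewards(preds, truth):
    rewards = []
    for i in range(len(preds)):
        without_me = preds[:i] + preds[i + 1:]
        # Positive reward = the group is closer to the truth with me in.
        rewards.append(group_error(without_me, truth) - group_error(preds, truth))
    return rewards

truth = 10.0
preds = [14.0, 13.5, 2.0]  # two clustered near-experts and one outlier
print(reformer_rewards(preds, truth))
# The outlier at 2.0 is individually the least accurate, yet earns the
# largest reward because it drags the group average toward the truth:
# what matters is whose contribution moves the group in a better direction.
```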
Scientific collaborations often resemble the "niche expert" system, the team explains. Researchers gain recognition for rare expertise that fills a gap in a larger project. On the other hand, markets, prediction platforms, and even stock trading more closely resemble the reformer model: profits come not from being closest to the truth but from moving collective beliefs in the right direction.
"Hopefully," says Plotkin, "this kind of research will help guide non-market institutions to set up incentive schemes that engender good collective outcomes, even for problems that are too difficult for any one person to solve alone."
Journal Reference: https://doi.org/10.1073/pnas.2516535122
Read more of this story at SoylentNews.
]]>ScienceTech Daily published a very interesting story about bonobos being able to track imaginary objects:
In a set of carefully designed experiments modeled on children's tea parties, researchers at Johns Hopkins University found that an ape could engage in pretend play. The results mark the first controlled demonstration that an ape can imagine objects that are not actually there, a skill long considered uniquely human.
Across three separate tests, the bonobo interacted with invisible juice and imaginary grapes in a consistent and reliable way. The performance challenges longstanding assumptions about the limits of animal cognition.
The researchers conclude that the ability to understand pretend objects falls within the mental capacities of at least one enculturated ape. They suggest this ability could trace back 6 to 9 million years to a common ancestor shared by humans and other apes.
"It really is game-changing that their mental lives go beyond the here and now," said co-author Christopher Krupenye, a Johns Hopkins assistant professor in the Department of Psychological and Brain Sciences who studies how animals think. "Imagination has long been seen as a critical element of what it is to be human, but the idea that it may not be exclusive to our species is really transformative.
"Jane Goodall discovered that chimps make tools, and that led to a change in the definition of what it means to be human, and this, too, really invites us to reconsider what makes us special and what mental life is out there among other creatures."
"Evidence for representation of pretend objects by Kanzi, a language-trained bonobo" by Amalia P. M. Bastos and Christopher Krupenye, 5 February 2026, Science. DOI: 10.1126/science.adz0743
The article continues with a more detailed look at what it means for other primates to have imagination, as humans do.
Read more of this story at SoylentNews.
]]>https://arstechnica.com/science/2026/02/dna-inspired-molecule-breaks-records-for-storing-solar-heat/
Heating accounts for nearly half of the global energy demand, and two-thirds of that is met by burning fossil fuels like natural gas, oil, and coal. Solar energy is a possible alternative, but while we have become reasonably good at storing solar electricity in lithium-ion batteries, we're not nearly as good at storing heat.
To store heat for days, weeks, or months, you need to trap the energy in the bonds of a molecule that can later release heat on demand. The approach to this particular chemistry problem is called molecular solar thermal (MOST) energy storage. While it has been the next big thing for decades, it never really took off.
[...]
In the past, MOST energy storage solutions have been plagued by lackluster performance. The molecules either didn't store enough energy, degraded too quickly, or required toxic solvents that made them impractical.
[...]
Previous attempts at MOST systems have struggled to compete with Li-ion batteries. Norbornadiene, one of the best-studied candidates, tops out at around 0.97 MJ/kg. Another contender, azaborinine, manages only 0.65 MJ/kg. They may be scientifically interesting, but they are not going to heat your house.

Nguyen's pyrimidone-based system blew those numbers out of the water. The researchers achieved an energy storage density of 1.65 MJ/kg—nearly double the capacity of Li-ion batteries and substantially higher than any previous MOST material.
[...]
Achieving high energy density on paper is one thing. Making it work in the real world is another. A major failing of previous MOST systems is that they are solids that need to be dissolved in solvents like toluene or acetonitrile to work.
[...]
Nguyen's team tackled this by designing a version of their molecule that is a liquid at room temperature, so it doesn't need a solvent.
[...]
The MOST-based heating system, the team says in their paper, would circulate this rechargeable fuel through rooftop panels to capture sunlight, then store the charged fuel in a basement tank.
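As a rough sanity check, the reported 1.65 MJ/kg implies a surprisingly compact basement tank. The daily heat demand below is an assumed, illustrative figure; real demand varies widely by climate and dwelling:

# Back-of-the-envelope tank sizing from the reported 1.65 MJ/kg.
# DAILY_HEAT_DEMAND_KWH is an assumption for illustration only.
STORAGE_DENSITY_MJ_PER_KG = 1.65   # reported for the pyrimidone system
DAILY_HEAT_DEMAND_KWH = 50.0       # assumed cold-day demand for one home

demand_mj = DAILY_HEAT_DEMAND_KWH * 3.6            # 1 kWh = 3.6 MJ
fuel_mass_kg = demand_mj / STORAGE_DENSITY_MJ_PER_KG
print(f"{demand_mj:.0f} MJ/day needs about {fuel_mass_kg:.0f} kg of charged fuel")

That works out to roughly 109 kg of fuel per cold day under these assumptions, so even a week's buffer (around 760 kg) would fit in a tank under a cubic metre, assuming a liquid density near water's.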
Read more of this story at SoylentNews.
]]>New ClickFix attack abuses nslookup to retrieve PowerShell payload via DNS:
Threat actors are now abusing DNS queries as part of ClickFix social engineering attacks to deliver malware, making this the first known use of DNS as a channel in these campaigns.
ClickFix attacks typically trick users into manually executing malicious commands under the guise of fixing errors, installing updates, or enabling functionality.
However, this new variant uses a novel technique in which an attacker-controlled DNS server delivers the second-stage payload via DNS lookups.
In a new ClickFix campaign seen by Microsoft, victims are instructed to run the nslookup command that queries an attacker-controlled DNS server instead of the system's default DNS server.
The DNS server's response contains a malicious PowerShell script that is then executed on the device to install malware.
"Microsoft Defender researchers observed attackers using yet another evasion approach to the ClickFix technique: Asking targets to run a command that executes a custom DNS lookup and parses the Name: response to receive the next-stage payload for execution," reads an X post from Microsoft Threat Intelligence.
While it is unclear what the lure is to trick users into running the command, Microsoft says the ClickFix attack instructs users to run the command in the Windows Run dialog box.
This command will issue a DNS lookup for the hostname "example.com" against the threat actor's DNS server at 84[.]21.189[.]20 and then execute the resulting response via the Windows command interpreter (cmd.exe).
The DNS response includes a "Name:" field that carries the second-stage PowerShell payload, which is then executed on the device.
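For defenders, the reported chain (an nslookup against a hard-coded resolver IP, with the output fed to cmd.exe) is distinctive enough to hunt for in process-creation logs. Below is a minimal, hypothetical triage sketch in Python; the one-command-line-per-row log format is an assumption, not any product's real schema:

import re

# Hypothetical hunt for the reported ClickFix pattern: a single command
# line that invokes nslookup with an explicit resolver IP and also
# references cmd/cmd.exe. Log format is assumed; adapt to your telemetry.
IP_PATTERN = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def is_suspicious(cmdline: str) -> bool:
    low = cmdline.lower()
    return "nslookup" in low and "cmd" in low and bool(IP_PATTERN.search(low))

sample_log = [
    r"notepad.exe C:\notes.txt",
    r"cmd /c for /f ... ('nslookup example.com 203.0.113.7') do ...",  # test IP
]
for line in sample_log:
    if is_suspicious(line):
        print("suspicious:", line)

Any real rule would need tuning, since administrators legitimately run nslookup against explicit servers, but all three elements combined in one Run-dialog command line should be rare.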
Read more of this story at SoylentNews.
]]>https://www.righto.com/2026/02/8087-instruction-decoding.html
In the 1980s, if you wanted your IBM PC to run faster, you could buy the Intel 8087 floating-point coprocessor chip. With this chip, CAD software, spreadsheets, flight simulators, and other programs were much speedier. The 8087 chip could add, subtract, multiply, and divide, of course, but it could also compute transcendental functions such as tangent and logarithms, as well as provide constants such as π. In total, the 8087 added 62 new instructions to the computer.
But how does a PC decide if an instruction was a floating-point instruction for the 8087 or a regular instruction for the 8086 or 8088 CPU? And how does the 8087 chip interpret instructions to determine what they mean? It turns out that decoding an instruction inside the 8087 is more complicated than you might expect. The 8087 uses multiple techniques, with decoding circuitry spread across the chip. In this blog post, I'll explain how these decoding circuits work.
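One piece of the answer is well documented: 8087 instructions are wrapped in the 8086's ESC ("escape") opcodes, the byte range 0xD8 through 0xDF, i.e. first bytes matching the bit pattern 11011xxx. Here is a toy sketch of that first-byte test; the real chips do this in hardware while the 8087 snoops the bus, so this is purely an illustration:

# 8087 (floating-point) instructions begin with an 8086 ESC opcode:
# a first byte of the form 11011xxx, i.e. 0xD8 through 0xDF.
# The coprocessor watches the bus and claims matching instructions;
# everything else is executed by the 8086/8088 alone.
def is_escape_opcode(first_byte: int) -> bool:
    return (first_byte & 0b11111000) == 0b11011000

for b in (0xD7, 0xD8, 0xDD, 0xDF, 0xE0):
    owner = "8087 (ESC)" if is_escape_opcode(b) else "8086/8088"
    print(f"{b:#04x}: {owner}")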
Read more of this story at SoylentNews.
]]>Security devs forced to hide Boolean logic from overeager optimizer:
The creators of security software have encountered an unlikely foe in their attempts to protect us: modern compilers.
Today's compilers boil down code into its most efficient form, but in doing so they can undo safety precautions.
"Modern software compilers are breaking our code," said René Meusel, sharing his concerns in a FOSDEM talk on February 1.
Meusel manages the Botan cryptography library and is also a senior software engineer at Rohde & Schwarz Cybersecurity.
As the maintainer of Botan, Meusel is cognizant of all the different ways encryption can be foiled. It's not enough to get the math right. Your software also needs to encrypt and decrypt safely.
Writing code to execute this task can be trickier than some might imagine. And the compilers aren't helping.
Meusel offered an example of the kind of problem he deals with: implementing a simple login system.
The user types in a password, which gets checked against the stored value character by character. As soon as a character doesn't match, an error message is returned.
For a close observer trying to break in, the time it takes the system to return that error reveals how many leading characters of the guess are correct: a longer response time means more of the password has been matched.
This side-channel leak has been used in the past to facilitate brute-force break-ins. All it requires is a high-resolution clock that can resolve the minuscule differences in response times.
Good thing cryptographers are a congenitally paranoid sort. They have already devised countermeasures that equalize response times so that they reveal nothing. These constant-time implementations "make the run time independent of the password," Meusel said.
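The core trick is to compare every byte and fold the mismatches into an accumulator instead of returning at the first difference. Here is an illustrative sketch in Python; a real system should use a vetted primitive such as hmac.compare_digest (or a crypto library's own helpers), and at the C level this is exactly the kind of pattern an optimizer can undo:

import hmac

def leaky_equals(guess: bytes, secret: bytes) -> bool:
    # Early exit: runtime depends on how many leading bytes match.
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return True

def constant_time_equals(guess: bytes, secret: bytes) -> bool:
    # Fold all mismatches into one accumulator: the loop always runs
    # to the end, so the timing no longer reveals where (or whether)
    # the first mismatch occurred.
    if len(guess) != len(secret):
        return False
    diff = 0
    for g, s in zip(guess, secret):
        diff |= g ^ s
    return diff == 0

# In practice, prefer the vetted standard-library primitive:
assert hmac.compare_digest(b"hunter2", b"hunter2")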
The GNU C compiler is excellent at reasoning about Boolean values. It may be too clever. Like Microsoft Clippy-level clever.
Read more of this story at SoylentNews.
]]>