OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.
[...] The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that would likely apply to America's largest AI labs, such as OpenAI, Google, xAI, Anthropic, and Meta.
"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions, SB 3444 would shield the lab behind the model from liability, so long as the harm wasn't intentional and the lab had published its reports.
[...] "At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.
Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.
[...] Years into the AI boom, there's still an open legal question around what happens if an AI model causes a catastrophic event.
Intel is developing its own version of neural compression technology, which will reduce the footprint of video game textures in VRAM and/or storage, similar to Nvidia's NTC. Intel's solution can achieve a 9x compression ratio in its quality mode and an 18x compression ratio in its more aggressive setting. The GPU maker also announced it will have two versions of the tech for different hardware, similar to XeSS. One will be tuned for its XMX engine while the other will be designed to run on traditional CPU and GPU cores at the expense of performance.
Intel is using BC1 texture compression and linear algebra for the XMX-accelerated portion of its neural texture compression technology. The approach builds on a "feature pyramid" that compresses four BC1 textures along with their MIP-chains. Compared to traditional compression, Intel's neural compression uses learned weights to compress textures with minimal loss of image quality. An encoder handles the compression stage, and a decoder handles decompression.
Intel noted four ways developers can deploy its texture compression, aimed at accelerating install times, saving disk space, or saving VRAM. The first targets server-side storage and download sizes: textures are compressed ahead of time and uploaded to a server, then the client downloads them and decompresses them to local storage.
The other three revolve around gameplay itself: streaming in textures as the game loads, streaming textures during gameplay, and loading textures on the fly without holding them in VRAM (the latter is likely aimed at GPUs with little VRAM).
Intel's compression tech has two modes of operation: a variant A mode that runs at higher quality and a variant B mode that sacrifices quality for higher compression. Intel claims variant A can take the first two 4096 x 4096 (64 MB) textures in a feature pyramid and compress them down to 10.7 MB each while retaining full 4K resolution. The remaining two 4K textures in the pyramid are reduced to half resolution and compressed down to 2.7 MB each.
With variant B, the textures are compressed more aggressively. The first texture in a feature pyramid is compressed down to 10.7 MB at full resolution, the second is reduced to half resolution and compressed to 2.7 MB, and the third is reduced to quarter resolution and compressed to 0.68 MB. The last texture is reduced to one-eighth resolution and compressed down to 0.17 MB.
In its own testing, Intel compared variant A and variant B against an industry-standard format using three BC1 textures plus one BC3 texture. Variant A achieved over a 9x compression ratio and variant B roughly an 18x ratio, versus the 4.8x ratio the industry-standard format managed.
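Intel's quoted per-texture sizes are enough to reproduce those headline ratios. A quick back-of-the-envelope check, assuming each uncompressed 4096 x 4096 texture is the 64 MB (i.e. RGBA8) the article cites:

```python
# Four-texture feature pyramid, each texture 64 MB uncompressed (per the article).
uncompressed_mb = 4 * 64  # 256 MB total

# Variant A: two full-res textures at 10.7 MB, two half-res at 2.7 MB.
variant_a_mb = 2 * 10.7 + 2 * 2.7          # 26.8 MB
ratio_a = uncompressed_mb / variant_a_mb   # ~9.6x

# Variant B: progressively halved resolutions and shrinking sizes.
variant_b_mb = 10.7 + 2.7 + 0.68 + 0.17    # 14.25 MB
ratio_b = uncompressed_mb / variant_b_mb   # ~18x

# For comparison, the industry-standard baseline is quoted at 4.8x.
print(f"variant A: {ratio_a:.1f}x, variant B: {ratio_b:.1f}x")
```

Both figures line up with Intel's "over 9x" and "18x" claims, which suggests the quoted sizes are per-pyramid totals against uncompressed RGBA8.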
Using variant B, Intel's new texture compression tech achieves almost the same compression ratios as Nvidia's neural texture compression. It remains to be seen whether Nvidia's or Intel's solution provides better quality, but Intel is the only one of the three major Western GPU manufacturers to have a solution that works on graphics cards besides its own (for now).
Pair backs scraper blocking and standards to separate trusted agents from bad bots
Citing the need to adapt to an internet that increasingly serves AI agents without considering site owners, Cloudflare and GoDaddy are partnering on efforts to control how AIs crawl the web and interact with web content.
The content delivery network and the web host on Tuesday announced that they would help website owners gain better control over their relationship with AI, primarily through GoDaddy integrating Cloudflare's AI Crawl Control utility into its platform. That tool, as the pair explained in a press release, lets owners manage how AI interacts with their websites, allowing, blocking, or requiring payment from crawlers for access.
"By putting tools like AI Crawl Control and open standards into the hands of website owners, we are providing essential underpinnings for a new Internet business model," Cloudflare chief strategy officer Stephanie Cohen said of the move. "We want to ensure that every creator has the tools to verify who is interacting with their site, while giving legitimate AI agents a secure, transparent way to participate."
Cloudflare has been beating the drum over the need to control bots' access to websites and web content, and has rolled out several measures aimed at restricting unauthorized scraping in recent years. In 2025, it rolled out an AI that it said would trap and waste the time of unauthorized AI scrapers by feeding them endless garbage, and it has previously pushed to require AI bots to pay for access to websites.
Charging bots was one of Cloudflare's ideas to help protect website operators, who, the CDN has noted, are losing substantial traffic-driven revenue: many search visitors are now fed the answers they're looking for by an AI, like Google's AI Overviews, and are therefore less likely to click through to the original source.
The pair have skin in this game, clearly, as without website owners making money, they're unlikely to get paid themselves.
If you set up roadblocks to stop bad bots, which the pair note is the point of this endeavor, you will naturally catch a lot of good bots in the process. That is why Cloudflare and GoDaddy have also expressed support for new standards they believe will keep good bots in operation while restricting the reach of bad ones.
The pair expressed support for the Agent Name Service (ANS), a proposal made last year that would act like a DNS system for AI agents, creating an open, protocol-agnostic registry of AI agents that would allow them to operate with a degree of trust and assurance by linking them to controllers, among other things.
ANS was ultimately built by GoDaddy, we note, and is available on GitHub. The pair also threw their weight behind Cloudflare's Web Bot Auth method that relies on cryptographic signatures in HTTP messages to determine whether a request comes from an AI bot. The two technologies, the pair said, allow AI agents to identify themselves through cryptographically signed requests.
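To give a feel for the Web Bot Auth idea, here is a simplified sketch of the sign-then-verify flow. Note the hedge: the real mechanism uses asymmetric HTTP Message Signatures (RFC 9421), where sites verify against a bot operator's published public key; a shared-secret HMAC stands in below purely to illustrate how a signature over covered request fields identifies an agent and breaks on tampering.

```python
import hashlib
import hmac

def signature_base(method: str, path: str, agent: str) -> bytes:
    # Canonical string covering the request fields the signature protects.
    return f"@method: {method}\n@path: {path}\nsignature-agent: {agent}".encode()

def sign_request(key: bytes, method: str, path: str, agent: str) -> str:
    # The bot attaches this signature to its outgoing request.
    return hmac.new(key, signature_base(method, path, agent), hashlib.sha256).hexdigest()

def verify_request(key: bytes, method: str, path: str, agent: str, sig: str) -> bool:
    # The site recomputes the signature and compares in constant time.
    expected = sign_request(key, method, path, agent)
    return hmac.compare_digest(expected, sig)

key = b"demo-shared-secret"  # stand-in; real Web Bot Auth uses a key pair
sig = sign_request(key, "GET", "/article", "crawler.example.com")
assert verify_request(key, "GET", "/article", "crawler.example.com", sig)
# Any change to a covered field invalidates the signature.
assert not verify_request(key, "GET", "/admin", "crawler.example.com", sig)
```

The asymmetric version is what makes the scheme work at web scale: a site can verify any bot's identity without pre-sharing a secret with it.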
"With an open ecosystem of standards and methods for identifying agents, the agentic web can evolve with transparency built in by default," the pair said.
[...] That's not critical mass for AI agent standards to be considered … well, standard, but it's a start. It's also a helluva lot more likely to succeed than Sam Altman's eyeball-scan-for-an-AI-agent-license scheme, so there's that, too.
Either way, something has to happen soon: MIT CSAIL's 2025 AI Agent Index, published in February, found that AI bots regularly ignore robots.txt restrictions, and few have released any safety data. Universally agreed upon rules are needed as they proliferate and change the shape of the internet.
[Ed. note: Little Snitch is a macOS program that intercepts network traffic at the kernel level to let you know what connections your applications are making behind the scenes]
Recent political events have pushed governments and organizations to seriously question their dependence on foreign-controlled software. The core issue is simple and uncomfortable: through automatic updates, a vendor can run any code, with any privileges, on your machine, at any time. Most people know this, but prefer not to think about it. Linux is the obvious candidate for reducing that dependency: no single company controls it, no single country owns it. So I decided to explore it myself.
[...] Very soon after that, I felt kind of naked: being used to Little Snitch, it's a strange feeling to have no idea what connections your computer is making. I researched a bit and found OpenSnitch, several command-line tools, and various security systems built for servers. None of these gave me what I wanted: to see which process is making which connections and, ideally, to deny one with a single click.
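For a sense of the raw material a Linux connection monitor works with, here is a small sketch that decodes one entry from `/proc/net/tcp`, the kernel's table of IPv4 TCP sockets. This is just an illustration of the data involved; tools like OpenSnitch and Little Snitch for Linux hook the kernel (via eBPF) rather than polling this file.

```python
def parse_proc_tcp_entry(line: str):
    """Decode one /proc/net/tcp entry. IPv4 addresses are stored as
    little-endian hex, ports as big-endian hex."""
    fields = line.split()

    def decode(addr_port: str):
        addr_hex, port_hex = addr_port.split(":")
        # Reverse the byte order: "0100007F" -> 127.0.0.1
        octets = [str(int(addr_hex[i:i + 2], 16)) for i in range(6, -1, -2)]
        return ".".join(octets), int(port_hex, 16)

    # fields[1] is the local address:port, fields[2] the remote one.
    return decode(fields[1]), decode(fields[2])

# Example entry: a socket bound to 127.0.0.1:8080 with no remote peer yet.
local, remote = parse_proc_tcp_entry("0: 0100007F:1F90 00000000:0000 0A")
```

Mapping such sockets back to a process then requires matching inode numbers against every `/proc/<pid>/fd/` entry, which is exactly the clumsiness that kernel-level interception avoids.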
[...] To make a long story short: I decided to use eBPF for traffic interception at kernel level. It's high performance and much more portable than kernel extensions. The main application code is in Rust, a language I've wanted to explore for quite a while. And the user interface was built as a web application. That last choice might seem odd for a privacy tool, but it means you can monitor a remote Linux server's network connections from any device, including your Mac. Want to know what Nextcloud, Home Assistant, or Zammad are actually connecting to? Use Little Snitch on the server.
[...] The kernel component, written for eBPF, is open source, and you can look at how it's implemented, fix bugs yourself, or adapt it to different kernel versions. The UI is also open source under GPL v2; feel free to make improvements. The backend, which manages rules, block lists, and the hierarchical connection view, is free to use but not open source. That part carries more than twenty years of Little Snitch experience, and the algorithms and concepts in it are something we'd like to keep closed for the time being.
One important note: unlike the macOS version, Little Snitch for Linux is not a security tool. eBPF provides limited resources, so it's always possible to get around the firewall, for instance by flooding tables. Its focus is privacy: showing you what's going on and, where needed, blocking connections from legitimate software that isn't actively trying to evade it.
blog post: https://obdev.at/blog/
software: https://obdev.at/products/littlesnitch-linux/index.html
We have better, more open ways to build our walls:
There is a bit of a stir in the Linux community this week. Little Snitch, the venerable gatekeeper of macOS network traffic, has finally made its way to our shores. On paper, it is an impressive bit of engineering. It utilises eBPF for high-performance kernel-level monitoring and is written in Rust, which is enough to make any technical enthusiast's ears perk up. It even sports a fancy web UI for those who prefer a mouse to a terminal.
But as I looked closer, the gloss started to peel. While parts of the project are open, the core logic, the "brain" that actually decides what to block and how to analyse your traffic, is closed source.
For a FOSS enthusiast, this is a total non-starter. We don't migrate to Linux just to swap one proprietary black box for another. If I cannot audit the code that sits between my binaries and the internet, I am not interested. A security tool that asks for blind trust is an oxymoron. In my home lab, if the code isn't transparent, the binary doesn't get executed. It is that simple.
As I've detailed before on this blog in The DNS Safety Net, my primary line of defence is AdGuard Home. By handling privacy at the DNS level, I have a silent, network-wide shield that catches the vast majority of telemetry, trackers, and "phone home" attempts before they even leave my Proxmox nodes.
[...] Even at the application level, I already have better alternatives in place. For this blog, I use Wordfence. It acts as a localised firewall, monitoring for malicious traffic and unauthorised changes right at the source. Between network-wide DNS filtering and application-specific security, the layers are already there. Adding a proprietary binary into that mix adds complexity without adding meaningful trust.
[...] If I ever needed to track down which specific application is making suspicious outbound connections, I would turn to OpenSnitch, the fully open-source, community-driven application firewall for Linux. It is not as polished as the new Little Snitch port, but every line of its code is open for inspection and it does not ask for blind trust.
My network is quiet, my logs are clean, and my gatekeeper is a piece of transparent software I host myself. Until a tool comes along that respects both my privacy and the FOSS ethos I live by, that is not going to change. If you are serious about your own data, you should keep your gatekeepers open and your network controlled at the edge.
After two decades of continuous work, researchers in Japan have discovered a genetic 'dead end' to mammal cloning.
The study began in 2005, when researchers, led by scientists at the University of Yamanashi in Japan, cloned a single female mouse.
They then re-cloned that clone by transferring its nuclear DNA into an egg 'emptied' of nuclear DNA, and so on and so forth, for 57 more generations, producing more than 1,200 mice from that single original donor.
Two decades later, the team was on their 58th generation, and the re-cloned mice had accumulated so many genetic mutations that they died the day after they were born.
The study is the first peer-reviewed research to 'serially' clone a mammal to this extent.
"It has long been unclear whether mammals, unlike plants and some lower animals, could sustain their species through clonal reproduction alone," write the research team, led by geneticist Sayaka Wakayama.
"[O]ur results align closely with Muller's ratchet theory," they add. "This model predicts that in asexual lineages, deleterious mutations inevitably accumulate, ultimately producing mutational meltdown and extinction."
Since the first mammal, famously Dolly the sheep, was cloned in the mid-1990s, scientists have learned a great deal about the whole process, and how to recreate an animal using very few cells.
Some conservationists hope that the practice can one day help us bring back species from the brink of extinction, and a few celebrities have even started cloning their pets.
While this might work for a while, over time, as clones are re-cloned and then re-cloned again, dangerous mutations can accumulate in the genome. How long this takes to kill a creature is unknown, and scientists in Japan wanted to find out using mice.
For the team's first 25 cloning attempts, the re-cloned mice looked no different to the original genetic donor. In fact, success rates improved with each generation of clones, leading the authors to suspect "it may be possible to reclone animals indefinitely".
But then, something changed. Success rates gradually declined before cloning suddenly failed altogether.
It seemed that the mice had somehow lost their ability to efficiently eliminate chromosomal abnormalities and coding mutations.
Loss of the X chromosome became a prominent problem after the 25th generation of clones, and the frequency of deleterious mutations nearly doubled by the 57th generation.
Even those carrying mutations, however, lived normal lifespans – until generation 58, that is.
"Although serial cloning could not continue beyond the 58th generation (G58), the re-cloned mice remained healthy except G58, raising the possibility that subsequent generations could be produced via sexual reproduction," the authors suggest .
To test that idea, the team took female mice from the 20th, 50th, and 55th generations and mated them with normal male mice. The 20th-generation clones had similar litter sizes to control mice, but 50th- and 55th-generation clones had dramatically smaller litters.
Still, when those offspring were themselves bred with normal mice, producing grandchildren of the clones, litter sizes rebounded to healthy numbers.
The findings suggest that mammal species can be surprisingly tolerant of genetic mutations, remaining fit and able to reproduce even in the face of widespread genetic alterations.
The study, the authors say, reaffirms "the evolutionary inevitability that sexual reproduction is indispensable for the long-term survival of mammalian species".
Journal Reference: Wakayama, S., Ito, D., Inoue, R. et al. Limitations of serial cloning in mammals. Nat Commun 17, 2495 (2026). https://doi.org/10.1038/s41467-026-69765-7
Some experts argue that AI was just used as an excuse for poor business decisions:
“I don’t know if they are directly related to actual productivity gains,” Hodjat told Nikkei in reference to the job cuts. “Sometimes, you know, AI becomes the scapegoat from a financial perspective, like when a company hired too many, or they want to resize, and it gets blamed on AI.” Despite that, he said that AI-driven layoffs could still happen, but that it would take another six months to a year “before companies start seeing real productivity gains from AI,” and that “it will be painful for all of us as we’re going through it, and simply because it’s a transition.”
[...] Despite all these analyses, some experts are pushing back against this narrative, arguing that AI-driven layoffs are just being used as an excuse for poor business performance. OpenAI CEO Sam Altman said during the India AI Impact Summit, “I don’t know what the exact percentage is, but there’s some AI washing where people are blaming AI for layoffs that they would otherwise do, and then there’s some real displacement by AI of different kinds of jobs.” While some of these layoffs would happen with or without AI, there’s still a consensus that the technology will have an impact on jobs and that we should be ready for disruption.
[...] "There's going to be a ton of people that are coming out of school that can't find a job and don't have the domain expertise,” Hodjat told Nikkei. “You have to bring them in. You have to have them learn on the job, on how to use AI within the various domains.”
Brady Frey did not realize that his daughter lied about her age when she set up her Discord account. He only found out after her account got hacked and he got trapped in a spiraling support nightmare while trying to stop the hacker from targeting dozens of her young friends with financial extortion scams.
When Frey's daughter signed up for Discord, she was 12 and technically not old enough to have an account.
[...]
Hiding her age, she created an account that listed her as over 18 years old.

Now 13, the teen had been happily using the app for months when she suddenly got locked out of her account after clicking on a link from an attacker posing as Discord support. Since she didn't enable two-factor authentication, the attacker was able to commandeer the account.
[...]
Discord's chatbot, Clyde, and a seeming human support member, Nelly, automatically closed her support tickets after telling her it would be best to report the issue from inside the app, which she could not access.

Frey told Ars he was shocked to see a platform as big as Discord relying on such poor support infrastructure.
"There's no pathway for a parent to step in and advocate for a minor whose account has been compromised," Frey told Ars.
[...]
the hacker wasn't booted until Ars intervened.

Logging back into the account and surveying the damage, Frey told Ars that 38 of his daughter's friends were targeted with a social engineering scam that Bitdefender reported in February is "widespread" on Discord.
[...]
Most of the friends seemingly did not fall for the scam, but two users appeared to have taken the bait, Frey told Ars.
[...]
In the future, Discord plans to roll out global age checks that would rely on AI and other methods to detect and verify users like Frey's daughter, who should be marked as a teen. But in the meantime, Frey's experience shows "what happens after a minor in real life is compromised and a parent tries to get help," Frey said.
On top of repeated issues with the support forum, "Discord's in-app reporting tools failed repeatedly," he told Ars.
[...]
Seeking answers he couldn't get from Discord's support forum, he requested her data from Discord and soon confirmed his suspicions: The platform had labeled his daughter as a teen internally days before the hack occurred.
[...]
After Frey received the data dump on his daughter's Discord account, a couple of things immediately stuck out as odd.

"There's no age recorded at signup, but there's something worth flagging: her data includes an age_group field set to '13–17,' confirming Discord's system knows she's a teen," Frey told Ars.
[...]
Additionally, Frey noticed that a separate field, "is_underage," was set to "false." He told Ars that he thinks that "discrepancy matters because the underage flag likely controls whether stricter ad protections" for kids are "applied."

Since his daughter set up the account with an 18+ setting, it's possible that the field corresponded to her self-reported age.
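The inconsistency Frey describes is the kind of thing a basic data-integrity check would surface. A hypothetical sketch: the field names (`age_group`, `is_underage`) come from the article's description of the data export, but the check itself is illustrative, not Discord's actual logic.

```python
def flag_age_mismatch(record: dict) -> bool:
    """Flag accounts whose internal age grouping contradicts their
    self-reported adult status. Field names are taken from the
    article's description of a Discord data export; this check is
    a hypothetical illustration, not Discord's code."""
    grouped_as_teen = record.get("age_group") == "13-17"
    marked_as_adult = record.get("is_underage") is False
    return grouped_as_teen and marked_as_adult

# The discrepancy Frey found: internally a teen, yet flagged as 18+.
assert flag_age_mismatch({"age_group": "13-17", "is_underage": False})
assert not flag_age_mismatch({"age_group": "13-17", "is_underage": True})
```

The point of the example is that reconciling the two fields is trivial once both exist, which is why the failure to do so reads as a policy choice rather than a technical limitation.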
[...]
Seemingly, that meant that the platform could create "a detailed behavioral ad profile" on the teen, even though its internal system had categorized her in the 13–17 age group, Frey said.

Samantha Baldwin, a policy and research staff technologist for the Electronic Frontier Foundation (EFF), told Ars that Discord's hesitancy to formally update the age setting is telling. Frey's case shows why privacy advocates believe that age verification laws aren't about "protecting children" but about "surveillance and censorship," she said.
"That they would not recategorize a minor's account demonstrates this clearly," Baldwin said.
[...]
EFF has long warned against age-gating the Internet
[...]
Discord plans to stop collecting as many IDs and rely on new technology, like on-device face scans and age signals, to detect when users are lying about their ages as global age checks roll out later this year. But any time a user appeals their age estimation, Discord would still require an ID.
[...]
After weeks of begging for support, the teen was clearly exasperated when she tried to share her passport, and Discord support did not accept it and instead asked for a face scan. The chatbot Clyde seemingly messed up when prompting her to verify her age with k-ID, which Discord uses in some regions but not in the US currently.

"Please reopen the ticket, it is not about the Face Scan," the teen said.
But the ticket wasn't reopened until Ars poked Discord one last time.
[...]
"Please reopen the ticket. The automatic close is incorrect, just like it was wrong on the other tickets over the past month."
One Microsoft product was approved despite years of concerns about its security:
In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings.
The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: "The package is a pile of shit."
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.
Such judgments would be damning for any company seeking to sell its wares to the US government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the US in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials.
[...] Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling—which included a kind of "buyer beware" notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.
[...] Today, key parts of the federal government, including the Justice and Energy departments, and the defense sector rely on this technology to protect highly sensitive information that, if leaked, "could be expected to have a severe or catastrophic adverse effect" on operations, assets, and individuals, the government has said.
Originally spotted on Schneier on Security.
MIT researchers recently published "Fully 3D-Printed electric motor manufactured via multi-modal, multi-material extrusion"; the full text, with nice illustrations, is available at https://www.tandfonline.com/doi/full/10.1080/17452759.2026.2613185. Doing this required modifying a 3D printer, a lot of experimentation with feedstock, and software changes to solve printing problems. When they were done, they had something they call a linear motor, though it looks more like a voice coil or solenoid (very short stroke) to me. Nonetheless, it is proof of concept that printing both conductors and magnetic materials is possible.
In this work, a commercial multi-material extrusion 3D printer was modified to process conductive inks, soft and hard magnetic composite pellets, and rigid and compliant polymeric filaments. Using this system, solenoids, hard magnets, and springs were fabricated. These components were combined through straightforward assembly to demonstrate the first fully 3D-printed electric motor — a linear actuator composed of five distinct functional materials: dielectric, electrically conductive, soft magnetic, hard magnetic, and flexible. The solenoids produced up to 2.03 mT magnetic fields, the magnets generated up to 71 mT magnetic fields, and the linear actuator attained a maximum displacement of 318 μm at its resonant frequency (41.6 Hz). This study demonstrates the capability of multi-modal, multi-material extrusion 3D printing to fabricate all critical components of electrical machines, with magnetization of the hard magnets being the only post-printing step.
[...]
The potential of material extrusion 3D printing for the fabrication of electronics has led to the development and commercialization of several electrically conductive filaments, often made of polymer-based composites doped with metallic (e.g. copper [Citation20]) and carbon-based (e.g. graphene [Citation65]) fillers. Among the commercially available electrically conductive filaments, Electrifi (Multi3D, Middlesex, NC, USA) – a copper-reinforced PLA filament – is, to the best of our knowledge, the most electrically conductive. However, its resistivity, in the order of 10−4 Ω·m [Citation20], is still considerably high – about four orders of magnitude larger than that of bulk copper. This circumstance suggests the need to consider other forms of printable feedstock, such as inks and pastes, to be able to attain higher conductivities.
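The paper's "about four orders of magnitude" comparison is easy to verify. A quick check using the resistivity quoted for Electrifi and the standard textbook value for bulk copper (~1.68 x 10^-8 ohm-m, an assumption not stated in the excerpt):

```python
import math

electrifi_resistivity = 1e-4   # ohm*m, Electrifi filament (per the paper)
copper_resistivity = 1.68e-8   # ohm*m, bulk copper (textbook value)

# Ratio between the two resistivities, expressed in orders of magnitude.
orders = math.log10(electrifi_resistivity / copper_resistivity)
print(f"{orders:.2f} orders of magnitude")  # ~3.77, i.e. "about four"
```

The gap is why the team moved to silver-loaded inks: even the best conductive filament is thousands of times more resistive than the metal it stands in for.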
They wound up using pellets of silver-loaded conductive ink, along with a special/overlapping printing pattern to limit local spots of higher resistance.
CrystalX RAT comes with a handful of prankware:
Cybersecurity researchers at Kaspersky have detailed CrystalX RAT, a new malware-as-a-service (MaaS) offering rather similar to the popular WebRAT.
For data theft and infostealing, it enables keylogging, clipboard jacking, browser data theft, and desktop app data theft (Steam, Discord, Telegram).
For surveillance, it enables video capture through the camera, as well as audio capture through the microphone.
At the same time, it can be seen as prankware as well. A handful of disturbance features are thrown into the mix, such as the ability to change desktop wallpapers, rotate the display to various angles, show fake notifications, move the cursor, hide desktop icons, the taskbar, Task Manager, and the Command Prompt executable, and remap the mouse.
Finally, it provides an attacker-victim chat window, allowing the attackers to tease, taunt, threaten, or demand money from their victims.
The PR campaign Kaspersky is mentioning is a series of fairly organized campaigns across different channels designed to entice potential buyers, since CrystalX RAT works on a tiered subscription model. Unfortunately, there was no word on how much a subscription costs. We only know that there are multiple tiers on offer.
The primary channel for promotions and subscriptions is Telegram, the famed instant chat platform. However, the MaaS is also being promoted on YouTube via a dedicated marketing channel which demonstrates its different features and capabilities.
Furthermore, Kaspersky argues that the prankware features are also, in a sense, a PR stunt, since such an offering will most likely stand out in a sea of various malware-as-a-service solutions.
Those include a detailed user panel, various customization options, as well as anti-analysis features. Some of its standout features include geoblocking, executable customization, anti-debugging, VM detection, and more.
Right now, it is difficult to say how many people fell victim to CrystalX RAT, or how they initially picked it up. It is likely that a social engineering campaign is at play, including things like fake software cracks, non-existent premium services, activators, and similar. The victims are predominantly located in Russia, and according to Leonid Bezvershenko, senior security researcher at Kaspersky GReAT, the RAT is “already affecting dozens of victims.”
“Such a diverse feature set effectively enables a 360-degree compromise of the victim and a complete loss of privacy. Beyond gaining access to account credentials, the stolen data could potentially be used for blackmail,” he said. “We expect the number of victims to grow significantly and its geographic spread to expand in the near future.”
After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now:
We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
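A back-of-envelope check of that "less than 3%" figure, taking midpoints of the ranges above (the midpoints themselves are our assumption), might look like:

```python
# 2018 midpoints: ~7.5 tweets/day and ~75M impressions/month (assumed midpoints)
posts_per_month_2018 = 7.5 * 30
views_per_post_2018 = 75e6 / posts_per_month_2018   # ~333,000 per tweet

# Last year: 1,500 posts earned roughly 13M impressions in total
views_per_post_now = 13e6 / 1500                    # ~8,700 per post

ratio = views_per_post_now / views_per_post_2018
print(f"per-post impressions today vs. 2018: {ratio:.1%}")  # ~2.6%
```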
When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.
We called for:
- Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
- Real security improvements: Including genuine end-to-end encryption for direct messages
- Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability
Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them.
TFA goes on to explain why they're remaining on Facebook and TikTok. They have vowed to keep fighting to protect digital rights.
Researchers at the Max Planck Institute for Quantum Optics (MPQ), Garching, in collaboration with Prof. Dr. Randolf Pohl from the Institute for Physics at Johannes Gutenberg University Mainz (JGU), have successfully conducted experiments on hydrogen atoms that allow the Standard Model of particle physics to be tested to the 13th decimal place. This is the most precise result to date for measurements using hydrogen atoms. Among other things, it allows researchers to test predictions in hydrogen and address the so-called proton radius puzzle, which has existed since measurements on two types of hydrogen indicated different proton radii. The new research results have recently been published in the journal Nature.
The Standard Model of particle physics encompasses the smallest-scale physics in a model consisting of particles and forces. One of its foundational components is quantum electrodynamics (QED), which describes how light and matter fundamentally interact. "Because hydrogen is relatively simple, it is well-suited for calculation. This means we can use it to test QED, and thus the Standard Model", explains Prof. Randolf Pohl. For their experiment, the researchers analyzed hydrogen's energetic structure using high-precision laser spectroscopy. They examined two different energy levels and determined the energy needed to transition from one level to the other, or, more specifically, their transition frequency. The measured transition frequency confirms the Standard Model with a deviation of less than one trillionth (0.7 parts per trillion). With this, the researchers have set a new benchmark in measuring the energy levels of hydrogen atoms. "This measurement is as good as the anomalous magnetic moment of the electron – the current gold standard for the confirmation of the Standard Model", says Pohl.
Thanks to this precision, the measurements taken confirm predictions made through the Standard Model which have never been confirmed in ordinary hydrogen before. "We are able to see very small, extremely interesting contributions that arise from the interaction with more complex particles called hadrons", says Dr. Lothar Maisenbacher from the MPQ, lead author of the study. Dr. Vitaly Wirthl, co-author and also from the MPQ, expands on this: "In the contributions to the transition frequency, we see muons in the electronic hydrogen for the first time. In theory, muon-antimuon particle pairs contribute to vacuum polarization, which is relevant for the precision of our measurement."
In addition to testing the Standard Model and QED, the scientists also used the hydrogen measurement to investigate the inconsistency with earlier measurements in muonic hydrogen. Those measurements, led by Prof. Pohl, use muonic hydrogen, in which a muon takes the place of the electron. This elementary particle is similar to the electron, carrying the same charge, but it is more than 200 times heavier and has a lifespan of just two microseconds. With the new measurement data, the discrepancy between the two types of hydrogen can, for the first time, be ruled out with high significance: both yield a proton radius of 0.8406 femtometers. However, it remains unclear how the discrepancy measured earlier can be explained.
Journal Reference: Maisenbacher, L., Wirthl, V., Matveev, A. et al. Sub-part-per-trillion test of the Standard Model with atomic hydrogen. Nature 650, 845–851 (2026). https://doi.org/10.1038/s41586-026-10124-3
https://news.mit.edu/2026/toward-cheaper-cleaner-hydrogen-production-0403
Hydrogen sits at the center of some of the world's most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.
But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such "green hydrogen" is made by running an electric current through water in an electrolyzer.
Green hydrogen won't scale through decarbonization alone. It also has to be cost-competitive with the traditional methods of production.
1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.
In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.
"Green hydrogen has been a hard industry to have success in so far," acknowledges 1s1 co-founder Dan Sobek '88, SM '92, PhD '97. "The difference with us is we've done very targeted customer discovery. We have a very strong value proposition that's not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That's a nice point of entry."
Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.
"We're at an inflection point for the company," Sobek says. "The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused."
Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.
"A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1," Sobek says. "A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking."
Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.
Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on "proton exchange membrane" electrolyzers. Such electrolyzers use large amounts of electricity to split water into hydrogen and oxygen. At their center is a membrane whose electrical resistance causes voltage losses that sap efficiency.
On top of the efficiency challenge, electricity is more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.
Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer's electrochemical cell.
"I asked Sukanta how we could improve the efficiency and durability of that element," Sobek recalls. "He gave me a one-word answer: boron."
Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.
The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its proton exchange membranes.
"These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts," Sobek says.
In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.
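As a rough illustration of what that efficiency target implies, a hedged back-of-envelope conversion (assuming, as is common for such figures, that the target is quoted against hydrogen's higher heating value of about 39.4 kWh/kg):

```python
HHV_H2_KWH_PER_KG = 39.4   # higher heating value of hydrogen, ~141.8 MJ/kg
TARGET_EFFICIENCY = 0.77   # DOE's 2031 goal for PEM electrolysis (from the text)

# Electricity needed per kilogram of hydrogen at the target efficiency
energy_per_kg = HHV_H2_KWH_PER_KG / TARGET_EFFICIENCY
print(f"~{energy_per_kg:.1f} kWh of electricity per kg of H2")
```

At 77 percent efficiency this works out to roughly 51 kWh of electricity per kilogram of hydrogen, which is why electricity price is such a large part of green hydrogen's cost.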
"It's not just the technology, but the way we're applying it," Sobek says. "We're making hydrogen viable for use in the production of different industrial chemicals."
1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It's also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.
"We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals," Sobek says. "It's gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We're trying to enable low-cost, responsible mining."
As 1s1 scales its membrane technology, Sobek says the goal is to deploy it wherever the technology can improve processes.
"We have a large number of potential customers because this technology is really foundational," Sobek says. "Creating high-impact technologies is always fun."
Honda is deepening its retreat from an aggressive electric vehicle rollout, canceling three U.S.-bound EVs and warning that the shifting market could result in major financial losses as it pivots toward hybrids:
The automaker recently announced it will halt development and launch plans for the 0 Series SUV, 0 Series Saloon and the Acura RSX. Those models had been slated for U.S. production as early as this year following factory retooling tied to Honda’s next-generation EV strategy.
Honda said the decision reflects a rapidly changing business environment, including slower EV demand, tariff pressures and weaker-than-expected product performance in key markets.
“Honda determined that starting production and sales of these three models in the current business environment where the demand for EVs is declining significantly would likely result in further losses over the long term,” the company said in a statement.
The financial impact is substantial. Honda said total losses tied to the move could reach as much as $15.8 billion. That includes operating expenses projected between roughly $5.2 billion and $7.1 billion in the current fiscal year, reversing what had been an operating profit forecast just one month ago into an expected operating loss.
[...] Honda’s challenges extend beyond North America. The automaker acknowledged it has fallen behind competitors in China, particularly in EV cost competitiveness and in-vehicle software. Sales in China dropped sharply last year, further weighing on overall performance.
Previously:
Parts of the ancient Earth may have formed continents and recycled crust through subduction far earlier than previously thought.
New research led by scientists at the University of Wisconsin–Madison has uncovered chemical signatures in zircons, the planet's oldest minerals, that are consistent with subduction and extensive continental crust during the Hadean Eon, more than 4 billion years ago. The findings challenge models that have long considered Earth's earliest times as dominated by a rigid, unmoving "stagnant lid" and no continental crust, with potential implications for the timing of the origin of life on the planet.
The study, published Feb. 4 in the journal Nature, is based on chemical analyses of ancient zircons found in the Jack Hills of Western Australia. These sand-sized grains preserve the only direct records of Earth's first 500 million years and offer rare insight into how the planet's surface and interior interacted as continents first formed.
[...] These elements are essentially fingerprints of the environments where the zircons formed, allowing the scientists to distinguish zircons that formed in magmas originating in the mantle beneath Earth's crust from those associated with subduction and continental crust. Because zircons lock in their chemistry when they crystallize and are highly resistant to alteration, they preserve uniquely reliable records of early Earth processes, even after several billion years.
"They're tiny time capsules and they carry an enormous amount of information," says John Valley, a professor emeritus of geoscience at UW–Madison who led the research.
Valley says that the chemistry of zircons found in the Jack Hills clearly shows that they originated from a much different source than other Hadean zircons found in South Africa, which carry a chemical signature typical of more primitive rocks originating within the Earth's mantle.
"What we found in the Jack Hills is that most of our zircons don't look like they came from the mantle," says Valley. "They look like continental crust. They look like they formed above a subduction zone."
Together, the two groups of zircons suggest that early Earth was not dominated by a single tectonic style, according to Valley.
[...] The oldest accepted microfossils are about 3.5 billion years old, but the Jack Hills zircons push evidence for potentially habitable surface conditions much earlier.
"We propose that there was about 800 million years of Earth history where the surface was habitable, but we don't have fossil evidence and don't know when life first emerged on Earth," Valley says.
As scientists continue to hunt for evidence of what the earliest Earth was like, Valley says the latest results are an example of the power of improving and refining laboratory techniques.
"Our new analytical capabilities opened a window into these amazing samples," he says. "The Hadean zircons are literally so small you can't see them without a lens, and yet they tell us about the otherwise unknown story of the earliest Earth."
Journal Reference: Valley, J.W., Blum, T.B., Kitajima, K. et al. Contemporaneous mobile- and stagnant-lid tectonics on the Hadean Earth. Nature 650, 636–641 (2026). https://doi.org/10.1038/s41586-025-10066-2