Living worm towers are recorded in the wild for the first time, a rare example of collective hitchhiking in nature
- First evidence of "living towers" in nature: observed in rotting apples and pears from local orchards in Konstanz, Germany
- Tower function confirmed: towers can attach to passing insects and can bridge physical gaps to disperse
- A powerful model: C. elegans is a new tool for studying the ecology and evolution of collective dispersal
Nematodes are the most abundant animal on earth, but when times get tough, these tiny worms have a hard time moving up and out. So, they play to the strength of their clade. If food runs out and competition turns fierce, they slither towards their numerous kin. They climb onto each other and over one another until their bodies forge a living tower that twists skyward where they might hitch a ride on a passing animal to greener and roomier pastures.
At least that's what scientists assumed. For decades, these worm structures were more mythical than material. Such aggregations, in which animals link bodies for group movement, are rare in nature. Only slime molds, fire ants, and spider mites are known to move in this way. For nematodes, nobody had even seen the aggregations—known as towers—forming anywhere but within the artificial confines of laboratories and growth chambers; and nobody really knew what they were for. Did towers even exist in the real world?
Now, researchers in Konstanz, Germany, have recorded video footage of worms towering in fallen apples and pears from local orchards. The team from the Max Planck Institute of Animal Behavior (MPI-AB) and the University of Konstanz combined fieldwork with laboratory experiments to provide the first direct evidence that towering behavior occurs naturally and functions as a means of collective transport.
"I was ecstatic when I saw these natural towers for the first time," says senior author Serena Ding, group leader at the MPI-AB, of the moment when co-author Ryan Greenway sent her a video recording from the field. "For so long natural worm towers existed only in our imaginations. But with the right equipment and lots of curiosity, we found them hiding in plain sight."
Greenway, a technical assistant at the MPI-AB, spent months with a digital microscope combing through decaying fruit in orchards near the university to record natural occurrences and behavior of worm towers. Some of these whole towers were brought into the lab. What was inside the towers surprised the team. Although the fruits were crawling with many species of nematodes, natural towers were made only of a single species, all at the tough larval stage known as a "dauer."
"A nematode tower is not just a pile of worms," says the first author Daniela Perez, a postdoctoral researcher at MPI-AB. "It's a coordinated structure, a superorganism in motion."
(Ed. note: The term "dauer" refers to a developmental stage in nematodes, particularly in Caenorhabditis elegans, in which larvae enter a state of dormancy or stasis to survive harsh environmental conditions.)
The team observed the natural "dauer" towers waving in unison, much like individual nematodes do by standing on their tails to latch onto a passing animal. But their new findings showed that entire worm towers could respond to touch, detach from surfaces, and collectively attach to insects such as fruit flies—hitchhiking en masse to new environments.
To probe deeper, Perez built a controlled tower using laboratory cultures of C. elegans. When placed on food-free agar with a small vertical post—a toothbrush bristle—hungry worms began to self-assemble. Within two hours, living towers emerged, stable for over 12 hours, and capable of extending exploratory "arms" into surrounding space. Some even formed bridges across gaps to reach new surfaces.
"The towers are actively sensing and growing," says Perez. "When we touched them, they responded immediately, growing toward the stimulus and attaching to it."
This behavior, it turns out, is not restricted to the so-called "dauer" larval stage seen in the wild samples. Adult C. elegans and all larval stages in the lab also towered—an unexpected twist that suggests towering may be a more generalized strategy for group movement than previously assumed.
Yet despite the architectural complexity of these towers, the worms inside showed no obvious role differentiation. Individuals from the base and the apex were equally mobile, fertile, and strong, hinting at a form of egalitarian cooperation. So far, though, the authors point out, this has been shown only in the controlled conditions of the laboratory. "C. elegans is a clonal culture and so it makes sense that there is no differentiation within the tower. In natural towers, we might see separate genetic compositions and roles, which prompts fascinating questions about who cooperates and who cheats."
As researchers seek to understand how group behavior evolves—from insect swarms to bird migrations—these microscopic worm towers might rise to provide some of the answers.
"Our study opens up a whole new system for exploring how and why animals move together," says Ding who leads a research program on nematode behavior and genetics. "By harnessing the genetic tools available for C. elegans, we now have a powerful model to study the ecology and evolution of collective dispersal."
Journal Reference: DOI: 10.1016/j.cub.2025.05.026
Processed by jelizondo
Arthur T Knackerbracket has processed the following story:
We may already have had our first-ever encounter with dark matter, according to researchers who say a mysteriously high-energy particle detected in 2023 is not a neutrino after all, but something far stranger
An extremely high-energy particle that was spotted tearing through Earth has left scientists flummoxed ever since it was discovered. While many researchers believe the particle was an unusual neutrino, some are now suggesting it may be something even wilder: a particle of dark matter travelling across the cosmos.
The KM3NeT detector, off the coast of Italy, spotted this “impossible” neutrino in 2023 while it was still under construction. The particle in question was immensely energetic, 35 times more so than any seen before. Where it came from remains a mystery, with possible sources including a galaxy with a very active central black hole known as a blazar, or a background source of high-energy neutrinos pervading the universe.
[...] IceCube has seen evidence for hundreds of cosmic neutrinos since 2011, but never something as energetic as KM3NeT’s discovery. That was confusing, because whatever source KM3NeT was seeing, IceCube should have seen it too.
Dev says that if the incoming particle was dark matter and not a neutrino, it could explain this mystery. The shallow predicted path of the incoming particle meant that it had to travel through more of Earth to reach KM3NeT than IceCube, increasing the chance of it being scattered into a muon. “The dark matter goes through lots of Earth’s matter,” says Dev, “and we can explain why IceCube didn’t see it.”
The particle would have been produced in a blazar and then fired towards Earth in a beam. Dev favours this idea because high-energy protons in a blazar more efficiently transfer their energy into dark matter than neutrinos, he says. The vast majority of the other events detected by KM3NeT and IceCube would probably still have been neutrinos.
Not everyone is convinced just yet. “From an Occam’s razor perspective, this is probably just an ordinary neutrino that’s exceptional in energy,” says Dan Hooper at the University of Wisconsin–Madison. However, if correct, it would give us a method to find and study dark matter particles, which have never previously been detected. “Everybody would be pretty thrilled if these machines can study not only neutrinos but also dark matter,” says Hooper.
Journal Reference: arXiv DOI: 10.48550/arXiv.2505.22754
Arthur T Knackerbracket has processed the following story:
William Dana Atkinson was one of the core people on the teams that created the Lisa and then Macintosh computers at Apple in the late 1970s and early 1980s. He was one of the most important and influential computer programmers who has ever lived. It is no exaggeration to say that all computer UIs designed in the last 40 years were shaped and influenced by Atkinson's brilliance and originality.
He dropped out of a doctorate in neuroscience to join Apple in 1978, becoming employee #51. In 2018, he said:
Some say Steve used me, but I say he harnessed and motivated me, and drew out my best creative energy. It was exciting working at Apple, knowing that whatever we invented would be used by millions of people.
Since the weekend when the news first broke, a remarkable range of tributes to the man and his work have appeared. For the Reg FOSS desk, one of the things that has particularly struck us is the range of different aspects of Atkinson's creativity that resonated with different people.
For instance, those working mainly online, such as TechCrunch's Anthony Ha, call out HyperCard. Atkinson designed and wrote HyperCard, which introduced the wider world to the concept of hyperlinks, as invented by Ted Nelson for his Xanadu hypertext system. HyperCard made it easy to create "stacks" of documents and navigate through them by clicking on links. As The Register has described more than once, it has inspired JavaScript recreations and parts of the Windows 10 UI, but most significantly of all it inspired the creators of the World Wide Web: it is specifically mentioned in the original proposal. Atkinson himself described the inspiration for HyperCard as a 1985 LSD trip.
Anyone who creates or edits pictures on computers uses tools that follow in the footsteps of an earlier Bill Atkinson app: he wrote MacPaint. Between it and his earlier version, LisaSketch, this introduced ideas like tool palettes, which became toolbars; the lasso tool to select objects; zoomed-in pixel editing, which Atkinson called FatBits; and the paint-bucket fill tool, among many other things.
Programmers who know his work nod with respect to lower-level stuff. The story of Apple's visit to Xerox, where the Xerox Alto graphical workstation inspired the Apple Lisa's overlapping windows, is well known. What's less well understood is that Smalltalk could only write or draw into one window, the topmost; it was impossible to update the contents of background windows. Atkinson was on that visit, but he didn't know about that limitation. He simply assumed the Smalltalk team must have solved the problem, and that all he had to do was work out how. He called his resulting algorithm regions, and after a serious 1982 car accident and head injuries, Atkinson's first words to his visiting boss were "Don't worry, Steve, I still remember how to do regions."
Lower-level still, he is remembered for his remarkably efficient dithering algorithm, which you can try here.
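That algorithm is now known as Atkinson dithering: an error-diffusion scheme that, unlike Floyd-Steinberg, propagates only 6/8 of each pixel's quantization error, split evenly among six nearby pixels. A minimal sketch of the technique in Python (an illustration, not Atkinson's original code):

```python
import numpy as np

def atkinson_dither(gray):
    """Dither a 2-D array of grayscale floats in [0, 1] to 1-bit black/white.

    Atkinson's variant diffuses only 6/8 of the quantization error, 1/8 to
    each of six neighbours; the remaining quarter is simply discarded.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    # Offsets (dy, dx) of the six neighbours that each receive 1/8 of the error.
    offsets = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # threshold to black or white
            img[y, x] = new
            err = (old - new) / 8.0
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
    return img
```

Discarding a quarter of the error is what gives Atkinson-dithered images their characteristically crisp highlights and shadows on a 1-bit display.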
As "Apple acolyte" Steven Levy wrote in Rolling Stone, Atkinson also brought Burrell Smith, designer of the original Macintosh hardware, to the project. In his book Insanely Great, Levy described how Atkinson came up with regions:
Atkinson worked at the problem for months - not only in long hours at a desk, but literally in his dreams. Upon arising he would record his somnambulant labors in a notebook. Eventually wave after wave of Atkinson's brainpower eroded the problem. He had set out to reinvent the wheel; actually he wound up inventing it.
Unlike this vulture, Levy knew Atkinson personally, and we found his Wired obituary touching.
The New York Times obituary has some quotes that convey how people who knew his work saw it. Apple colleague Steve Perlman said: "Looking at his code was like looking at the ceiling of the Sistine Chapel. His code was remarkable. It is what made the Macintosh possible."
Atkinson persuaded Apple to spin off its radical Paradigm project as a separate company, and left Apple in 1990 to co-found General Magic. It developed the Magic Cap – essentially something like a smartphone, but a decade too early. Its website is still online and you can run the software in emulation today.
In 2007 he came out of semi-retirement to work with Palm Pilot inventor Jeff Hawkins's AI startup Numenta, saying "What Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet."
Later in his life, he became an accomplished nature photographer. In 2004, he published a book of extreme close-up mineral photographs called Within the Stone. In 2009, he wrote an app to electronically send postcards: it's called PhotoCard. You can download pictures from the book and his other photography on BillAtkinson.com.
A year and a half ago, the Computer History Museum hosted a panel discussion titled Insanely Great for the Mac's 40th birthday. Its two hours contain many spirited contributions from Atkinson.
He announced his illness on Facebook in late 2024, saying:
On October first, I was diagnosed with pancreatic cancer. Because of vascular involvement, surgery is not possible.
He married Jingwen Cai in January 2023. As his family said, "He is survived by his wife, two daughters, stepson, stepdaughter, two brothers, four sisters, and dog, Poppy."
This vulture learned of Atkinson's passing from a post on Daring Fireball. We cannot better John Gruber's closing words:
I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us.
Arthur T Knackerbracket has processed the following story:
US Navy Secretary John Phelan has told the Senate the service needs the right to repair its own gear, and will rethink how it writes contracts to keep control of intellectual property and ensure sailors can fix hardware, especially in a fight.
Speaking to the Senate Armed Services Committee on Tuesday, Phelan cited the case of the USS Gerald R. Ford, America's largest and most expensive nuclear-powered aircraft carrier, which carried a price tag of $13 billion. The ship was struggling to feed its crew of over 4,500 because six of its eight ovens were out of action, and sailors were barred by contract from fixing them themselves.
"I am a huge supporter of right to repair," Phelan told the politicians. "I went on the carrier; they had eight ovens — this is a ship that serves 15,300 meals a day. Only two were working. Six were out."
He pointed out that Navy personnel are capable of fixing their own gear but are blocked by contracts that reserve repairs for vendors, often due to IP restrictions. That drives up costs and slows down basic fixes. According to the Government Accountability Office, about 70 percent [PDF] of a weapon system's life-cycle cost goes to operations and support.
A similar issue plagued the USS Gerald R. Ford's weapons elevators, which move bombs from deep storage to the flight deck. They reportedly took more than four years after delivery to become fully operational, delaying the carrier's first proper deployment.
"They have to come out and diagnose the problem, and then they'll fix it," Phelan said. "It is crazy. We should be able to fix this."
The Navy is not alone in its concerns, as the US Army is peeved about the right to repair equipment it paid for too. In a rare display of bipartisanship, both Democrats and Republicans agreed that the Army shouldn't be waiting on contractors to fix its kit and Defense Secretary Pete Hegseth issued a memo directing the service to add right-to-repair provisions to its contracts.
"On a go-forward basis, we have been directed to not sign any contracts that don't give us a right to repair," Army Secretary Daniel Driscoll told the House Armed Services Committee on June 4. "On a go-back basis, we have been directed to go and do what we can to go get that right to repair."
Last year Senator Elizabeth Warren (D-MA) introduced the Servicemember Right-to-Repair Act [PDF] that would allow military personnel to repair the equipment they use. It's currently under consideration by Congress, but it seems the government is anxious to move fast.
"Our soldiers are immensely smart and capable and should not need to rely on a third party contractor to maintain their equipment. Oven repair is not rocket science: of course sailors should be able to repair their ovens," Kyle Wiens, CEO of repair specialists iFixit told The Register.
"It's gratifying to see Secretary Phelan echoing our work. The Navy bought it, the Navy should be able to fix it. Ownership is universal, and the same principles apply to an iPhone or a radar. Of course, the devil is in the details: the military needs service documentation, detailed schematics, 3D models of parts so they can be manufactured in the field, and so on. We're excited that the military is joining us on this journey to reclaim ownership."
Wiens has also been vocal in letting ordinary citizens have the same rights, despite frantic lobbying by the tech industry, which would generally prefer you just buy a new thing when the old one wears out. Several states, including California, New York, Massachusetts, Minnesota, Oregon, and Colorado, have already passed consumer right-to-repair laws, and now it seems like the military is leading the way to get it done on the federal level.
"We hope that anyone listening to us who hopes to pitch us a contract going forward will look back at their previous agreements they've signed with us, and if they're unwilling to give us that right to repair, I think we're going to have a hard time negotiating with them," Driscoll said.
We'll have to see if this trickles down to the rest of us.
The Steve Jobs Archive, Wired, and MacRumors are covering the 20th anniversary of the iconic commencement address Steve Jobs delivered at Stanford University in June 2005. In it he quoted the Whole Earth Catalog, encouraging the graduates to "stay hungry, stay foolish".
At that point in time, YouTube was only months old, Twitter didn't exist, and Facebook didn't even have its news feed. The national media hadn't covered the speech. Apple sent out no press releases. But Stanford published the transcript on its primitive website, and people began discovering it. I recently checked my inbox for June 2005 and found multiple copies sent to me from different mailing lists. As the weeks and months went by, more and more people found the speech. Berlin describes it as going "slow-motion viral."
— Wired
It's not an obvious candidate for a classic. A commencement address by a college dropout. A talk aimed at 22-year-olds that warns "You will gradually become the old and be cleared away." A text as shadowed by reality as soaring with inspiration: "Your time is limited, so don't waste it living someone else's life."
It is a speech by a tech founder who scarcely mentions technology. A few years earlier, Steve told an interviewer, "People sometimes forget that they are very unique and that they have very unique feelings and perspectives. The whole computer industry wants to forget about the humanist side." Steve had not forgotten. At Stanford, under the guise of great simplicity—"Today I want to tell you three stories from my life"—Steve touches on fundamental truths that make us human: love, death, fear, authenticity, hope.
The video, as published at The Steve Jobs Archive, has been recently enhanced from SD to HD.
For those that have attended commencement events, either their own or other people's, are there any memorable speakers who stand out?
Arthur T Knackerbracket has processed the following story:
Germany's long-awaited Jupiter supercomputer launched into the number four spot on the Top500 list of publicly ranked systems, dethroning Italy's HPC6 as Europe's biggest, baddest iron.
Developed as part of the EuroHPC Joint Undertaking and built in collaboration with Nvidia and French supercomputer builder Eviden (formerly Atos), the system is designed to support research across a number of fields, including biophysics, cellular neuroscience, nuclear and elementary particle physics, astrophysics, and climate science.
Boffins [scientists] are also exploring potential applications of machine learning algorithms, like the diffusion models used to generate images, to advance areas such as medical imaging and autonomous vehicles.
In its first appearance on the biannual ranking, the Jülich Supercomputing Center's (JSC) flagship system managed 793 petaFLOPS of double precision (FP64) grunt in the time-honored High-Performance Linpack (HPL) benchmark.
However, the run is just a teaser for what's to come. Much like the US Department of Energy's Aurora supercomputer in its first appearance on the Top500 in late 2023, Jupiter's debut is also a partial run. The system is widely expected to be the European continent's first true exascale supercomputer.
While Jupiter ranks as Europe's new supercomputing beast, it still falls far short of its American counterparts. The US DoE's El Capitan, Frontier, and Aurora systems remain uncontested as the only three exascale-capable supercomputers on the Top500 at 1.74, 1.35, and 1.01 exaFLOPS, respectively.
That's likely to change in the not-so-distant future once Jülich can bring the full force of the Jupiter system to bear on the semiannual benchmark. The machine will only need to pack on another 207 petaFLOPS to its next HPL run to herald Europe's entrance into the exascale era, and just 220 more petaFLOPS to fly above Aurora. That's assuming, of course, the boffins at Argonne don't find a way to eke out a bit more compute from the Intel-based monster.
[...] Based on Eviden's BullSequana XH3000 platform, the Jupiter Booster section will feature roughly 6,000 compute nodes, each packed with four Nvidia Grace Hopper Superchips (GH200) and InfiniBand NDR200 networking.
[...] With about 24,000 superchips, Jupiter's Booster section promises somewhere between 872 petaFLOPS (vector) and 1.6 exaFLOPS (matrix) performance.
However, Jupiter won't be limited to traditional HPC applications. Each superchip is capable of churning out roughly 2 petaFLOPS of dense FP8 compute (double that if you can take advantage of Nvidia's hardware sparsity).
On paper, that means Jupiter's Booster should deliver just over half the AI performance of Lawrence Livermore National Laboratory's (LLNL's) El Capitan system, at approximately 47 exaFLOPS of dense AI compute versus 87 exaFLOPS on the all-AMD system.
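Those headline figures are easy to sanity-check from the per-chip numbers quoted above. A back-of-the-envelope sketch using the article's rounded ~2 petaFLOPS dense FP8 per GH200 (the exact per-chip rating would land closer to the ~47 exaFLOPS cited):

```python
# Rough check of Jupiter Booster's AI compute from the rounded
# per-chip figures quoted in the article.
SUPERCHIPS = 24_000           # GH200 superchips in the Booster section
FP8_PFLOPS_PER_CHIP = 2.0     # ~2 petaFLOPS dense FP8 per superchip

dense_exaflops = SUPERCHIPS * FP8_PFLOPS_PER_CHIP / 1_000  # petaFLOPS -> exaFLOPS
sparse_exaflops = dense_exaflops * 2  # doubled if Nvidia's structured sparsity applies

print(dense_exaflops, sparse_exaflops)  # -> 48.0 96.0
```

The rounded inputs give 48 exaFLOPS dense, in line with the "approximately 47 exaFLOPS" figure quoted above.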
[...] Jupiter's launch underscores Europe's growing influence in the HPC arena. As of today, five of the ten most powerful systems in the Top500 are based in Europe, a first in the semiannual ranking's more than 30-year history.
[...] China, meanwhile, remains missing in action, having largely withdrawn from the Top500 over the past few years. We know the Middle Kingdom already has several home-grown exascale supercomputers deployed, but it simply hasn't released Linpack results. This continues to leave something of a blind spot on the time-honored ranking.
The company's chairman insists that going all-in on electric cars is wrong:
Akio Toyoda is a man who speaks his mind. He's been saying for years that forcing everyone to buy EVs isn't the way forward. Toyota's chairman is adamant that the transition can't be rushed and that going all-in on electric vehicles would have massive repercussions across the automotive industry. He believes millions of jobs throughout the supply chain could be at risk if the combustion engine is phased out too quickly. On the environmental front, Toyoda maintains that EVs are still much dirtier than hybrids.
The grandson of Toyota founder Kiichiro Toyoda claims the company has sold around 27 million hybrids since launching the first-generation Prius in 1997. According to him, those hybrids have had the same carbon footprint as nine million fully electric vehicles when adding battery and vehicle production into the equation.
Toyoda argues that a single EV is as dirty as three hybrids. However, while it's true that producing EVs and their batteries creates more carbon emissions than building gas cars, over their life cycles, EVs are responsible for far fewer overall emissions.
From InsideEVs:
The biggest anti-EV argument stems from the emissions generated during the mining, refining and processing of the raw materials used in high-voltage batteries. EV batteries use materials such as lithium, cobalt and nickel that require hazardous, water-intensive mining processes.
So when an EV rolls off a production line, it's already born "dirtier" than the average gas or hybrid vehicle, for now. It comes with a bigger "carbon debt," a term that researchers use to calculate the emissions vehicles gather before even hitting the road.
A research paper published in an IOP Science journal says that gas and hybrid vehicles create six to nine metric tons of carbon dioxide emissions in their manufacturing, depending on the vehicle segment. EVs, on the other hand, generate 11 to 14 metric tons of CO2 emissions before going into the hands of customers.
But that's only part of the story. Once EVs hit the road, they begin paying off that carbon debt and their overall "emissions" start decreasing. Hybrids and gas vehicles, on the other hand, head in the opposite direction, growing their carbon emissions over time. After a certain number of miles, an EV can potentially clear that debt entirely.
How long that takes, exactly, can depend on who you ask. A 2023 Argonne National Laboratory study found that it can take an electric car 19,500 miles to mitigate the emissions made during manufacturing. That's less than two years of typical American driving, according to FactCheck.org. Another study in the journal Nature put that number higher, with carbon reductions beginning around 28,000 miles. Either way, considering how long Americans keep their cars, EVs become the far cleaner option over time.
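The break-even logic above is simple arithmetic. A rough sketch using the manufacturing midpoints quoted earlier, with illustrative per-mile operating emissions (assumed for this example, not from the article) of about 400 g CO2/mile for a gas car and 200 g CO2/mile for a grid-charged EV:

```python
# Rough break-even sketch. Build figures are midpoints of the manufacturing
# estimates quoted above; the per-mile rates are illustrative assumptions.
EV_BUILD_T = 12.5         # midpoint of the 11-14 t CO2 manufacturing estimate
GAS_BUILD_T = 7.5         # midpoint of the 6-9 t CO2 estimate
EV_PER_MILE_T = 200e-6    # assumed tonnes CO2 per mile, grid-charged EV
GAS_PER_MILE_T = 400e-6   # assumed tonnes CO2 per mile, gas car

def break_even_miles(ev_build, gas_build, ev_rate, gas_rate):
    """Miles at which the EV's larger 'carbon debt' is repaid by its
    lower per-mile emissions."""
    return (ev_build - gas_build) / (gas_rate - ev_rate)

miles = break_even_miles(EV_BUILD_T, GAS_BUILD_T, EV_PER_MILE_T, GAS_PER_MILE_T)
print(f"break-even after roughly {miles:,.0f} miles")  # -> 25,000 miles
```

With these assumed rates the debt clears around 25,000 miles, which sits between the Argonne (19,500 miles) and Nature (28,000 miles) figures; real numbers vary with grid mix and vehicle class.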
Arthur T Knackerbracket has processed the following story:
By affecting cows’ diets, climate change can affect cheese’s nutritional value and sensory traits such as taste, color and texture. This is true at least for Cantal — a firm, unpasteurized cheese from the Auvergne region in central France, researchers report February 20 in the Journal of Dairy Science.
Cows in this region typically graze on local grass. But as climate change causes more severe droughts, some dairy producers are shifting to other feedstocks for their cows, such as corn, to adapt. “Farmers are looking for feed with better yields than grass or that are more resilient to droughts,” but they also want to know how dietary changes affect their products, says animal scientist Matthieu Bouchon.
For almost five months in 2021, Bouchon and colleagues at France’s National Research Institute for Agriculture, Food and Environment tested 40 dairy cows from two different breeds — simulating a drought and supplementing grass with other fodder, largely corn, in varying amounts.
[...] They found that a corn-based diet did not affect milk yield and even led to an estimated reduction in the greenhouse gas methane coming from cows’ belching. But grass-fed cows’ cheese was richer and more savory than that from cows mostly or exclusively fed corn. Grass-based diets also yielded cheese with more heart-healthy omega-3 fatty acids and higher counts of probiotic lactic acid bacteria. The authors suggest that to maintain cheese quality, producers should include fresh vegetation in cows’ fodder when it is based on corn.
Experts not involved with the study point out that warming climates impact cattle physiology as well as feed quality. “Cows produce heat to digest food — so if they are already feeling hot, they’ll eat less to lower their temperature,” says Marina Danes, a dairy scientist at the Federal University of Lavras in Brazil.
[...] “The problem with the study is they increased the starch levels in the feed,” says Marcus Vinícius Couto, technical coordinator at the Central Cooperative of Rural Producers, an association of agricultural producers in Belo Horizonte. Starch is a challenge to digest for the first and largest compartment of a cow’s stomach — the rumen — where food ferments and plant fibers get broken down.
“We’re using feed with controlled starch levels,” as well as fat, hay and cottonseed fibers, to improve the milk’s composition, Couto says.
French producers will possibly need different strategies to fit their environment and cow breeds. But Bouchon is certain of one thing: “If climate change progresses the way it’s going, we’ll feel it in our cheese.”
Journal Reference: M. Bouchon et al. Adaptation strategies to manage summer forage shortages improve animal performance and better maintain milk and cheese quality in grass- versus corn-based dairy systems. Journal of Dairy Science, Volume 108, Issue 5. Published online February 20, 2025. doi: 10.3168/jds.2024-25730
Mistral releases a vibe coding client, Mistral Code:
French AI startup Mistral is releasing its own "vibe coding" client, Mistral Code, to compete with incumbents like Windsurf, Anysphere's Cursor, and GitHub Copilot.
Mistral Code, a fork of the open source project Continue, is an AI-powered coding assistant that bundles Mistral's models, an "in-IDE" assistant, local deployment options, and enterprise tools into a single package. A private beta is available as of Wednesday for JetBrains development platforms and Microsoft's VS Code.
"Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped, on-prem GPUs," Mistral wrote in a blog post provided to TechCrunch.
AI programming assistants are growing increasingly popular. While they still struggle to produce quality software, their promise to boost coding productivity is pushing companies and developers to adopt them rapidly. One recent poll found that 76% of developers had used, or planned to use, AI tools in their development processes in the past year.
Mistral Code is said to be powered by a combination of in-house models including Codestral (for code autocomplete), Codestral Embed (for code search and retrieval), Devstral (for "agentic" coding tasks), and Mistral Medium (for chat assistance). The client supports more than 80 programming languages and a number of third-party plug-ins, and can reason over things like files, terminal outputs, and issues, the company said.
[...] Mistral said it plans to continue making improvements to Mistral Code and contribute at least a portion of those upgrades to the Continue open source project.
Another day, another whatever these things are.
Arthur T Knackerbracket has processed the following story:
But it could just be a coincidence...
Just days after a new Washington crackdown on semiconductor design software exports to China, which banned companies like Synopsys from offering their services to clients in the country, access to some vital services appears to have been quietly restored. Notably, the turnabout comes within days of a high-level phone call between President Trump and Xi Jinping, according to Digitimes.
Digitimes reports that following the call, which took place on June 5, there has been a shift in the semiconductor market pertaining to the software used by companies in semiconductor design. Notably, several local Chinese IC design engineers and companies have reported that access to Synopsys' SolvNetPlus platform and Cadence's Support Portal has now been restored.
The report notes that it's unclear at this stage whether the change marks an isolated dispensation for certain clients or a broader relaxing of tension and restrictions between China and the US.
At the end of May, Synopsys paused its sales and services offerings in China and suspended its financial guidance after receiving a letter from the Bureau of Industry and Security (BIS) of the U.S. Department of Commerce. The letter reportedly disclosed "new export restrictions related to China," and further reports claimed Synopsys had told staff to halt services and sales in the country and stop taking new orders to comply.
The EDA ban was expected to hit Chinese companies, notably Xiaomi and Lenovo, hard, since Chinese firms rely on American software for the design and production of more advanced semiconductors like those used in AI processing. Reports at the time indicated that while China did have some homegrown EDA capacity, it was only "usable" on 7nm nodes and older, a weighty concern for future production.
These concerns appear to have been short-lived, however. Digitimes reports that following the phone call on June 5 (it is unclear if EDA access was specifically discussed), multiple Chinese IC firms reported successfully logging into SolvNetPlus with no issues.
Digitimes cites industry analysts who speculate that the call might have prompted a softer approach from the U.S. toward technology export restrictions and a gradual restoration of some services.
The House Committee on Oversight and Government Reform recently (June 5) held a hearing on, ahem, Artificial Intelligence, and its usage within the federal government.
We stand at the dawn of an intelligent age, a transformative period rivaling the industrial and nuclear eras, where AI—the new electricity, the engine of global change—is redrawing the very architecture of global power. It is clear that the nation that masters and fully adopts this foundational technology will not only lead but also write the rules for this new epoch. The breathtaking adoption of AI, exemplified by ChatGPT's rapid rise, underscores that for the United States, widespread federal adoption and deployment are not merely options but a strategic imperative essential for national competitiveness, national security, and effective governance.
(First witness, Mr. Yll Bajraktari, Competitive Studies Project.)
Today, AI is fundamentally transforming how work gets done across America's $30 trillion economy. AI solves a universal problem for public and private entities by transforming employee experience, providing instant support, reducing the toil of manual and tedious tasks, and allowing employees to focus on activities and jobs that provide significantly more value to the organization, leading to more efficient and effective organizations.
(Second witness, Mr. Bhavin Shah, Moveworks.)
AI has evolved dramatically in just a few years and today Generative AI holds enormous promise in radically improving the delivery of government services. The meteoric rise of the newest form of Generative AI— Agentic AI— offers the alluring opportunity to use AI for task automation, not just generating on-demand content, like ChatGPT and its rival chatbots. With these rapid developments, the government stands to realize massive cost savings and enormous gains in effectiveness in scores of programs while at the same time preserving the integrity of taxpayer dollars.
(Third witness, Ms. Linda Miller, TrackLight.)
Proposals to regulate AI systems are proliferating rapidly, with over 1,000 AI-related bills already introduced just five months into 2025. The vast majority of these are state bills, and many of them propose a very top-down, bureaucratic approach to preemptively constraining algorithmic systems. As these mandates expand, they will significantly raise the cost of deploying advanced AI systems, because complicated, confusing compliance regimes would hamstring developers, especially smaller ones.
Such a restrictive, overlapping regulatory regime would represent a reversal of the policy formula that helped America become the global leader in personal computing, digital technologies, and the internet.
(Fourth witness, Mr. Adam Thierer, R Street Institute.)
Then they made the mistake of calling their final witness, a man named Bruce Schneier. I'll leave you the pleasure of reading the full 31 pages of his testimony here, but I'd like to finish with a couple of money quotes of his, as cited in El Reg:
"You all need to assume that adversaries have copies of all the data DOGE has exfiltrated and has established access into all the networks that DOGE has removed security controls from ... DOGE's affiliates have spread across government ... They are still exfiltrating massive US databases, processing them with AI and offering them to private companies such as Palantir. These actions are causing irreparable harm to the security of our country and the safety of everyone, including everyone in this room, regardless of political affiliation."
Oddly enough, Mr. Schneier was the only witness not quoted, or even mentioned, in the wrap-up of that hearing. Maybe that wrap-up was AI generated?
Arthur T Knackerbracket has processed the following story:
United has switched off Starlink service on its United Express regional aircraft following reports of radio interference. According to The Points Guy, Starlink connectivity has been turned off across its fleet "out of an abundance of caution," a move the carrier confirmed in a statement.
As noted by the report, United has installed Starlink on nearly two dozen Embraer E175 aircraft. United announced the rollout on March 7, outlining plans to fit 40+ regional aircraft each month beginning in May through the end of 2025. The installation takes around 8 hours per aircraft, and United eventually plans to roll out Starlink to its entire fleet.
TPG reports that United has received reports of radio interference caused by Starlink, affecting the VHF antennas pilots use to contact air traffic control. As such, the aforementioned E175 aircraft carrying Starlink have been operating offline for the past few days, including a flight Tom's Hardware took on Monday, June 9.
United has issued a statement to TPG noting "Starlink is now installed on about two dozen United regional aircraft. United and Starlink teams are working together to address a small number of reports of static interference during the operation of the Wi-Fi system." United says this is "fairly common" with any new airline Wi-Fi provider, and says it expects the service to be back up and running "soon."
TPG reports that United and Starlink have already identified a solution and are rolling out the fix to affected aircraft. Allegedly, one-third of the affected planes have had the fix applied and are now operating with Starlink restored, with the remaining planes set for reconnection once they've had the fix applied.
Arthur T Knackerbracket has processed the following story:
A researcher has exposed a flaw in Google's authentication systems, opening them to a brute-force attack that left users' mobile numbers up for grabs.
The security hole, discovered by a white-hat hacker operating under the handle Brutecat, left the phone numbers of any Google user who'd logged in open to exposure. The issue was a code slip that allowed brute-force attacks against accounts, potentially enabling SIM-swapping attacks.
"This Google exploit I disclosed just requires the email address of the victim and you can get the phone number tied to the account," Brutecat told The Register.
Brutecat found that Google's account recovery process provided partial phone number hints, which could be exploited. By using cloud services and a Google Looker Studio account, the attacker was able to bypass security systems and launch a brute-force attack.
They explained in the post that "after looking through random Google products, I found out that I could create a Looker Studio document, transfer ownership of it to the victim, and the victim's display name would leak on the home page, with 0 interaction required from the victim."
The researcher also found an old-school username recovery form that worked without JavaScript, which allowed them to check whether a recovery email or phone number was associated with a specific display name using just two HTTP requests.
After this, they could go "through forgot password flow for that email and get the masked phone."
Finally, a brute-forcing tool they developed, called gpb, would run with the display name and masked phone number to unmask the full number, using real-time libphonenumber validation to filter out invalid candidates before querying Google's API.
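To make the brute-forcing step concrete, here is a toy sketch of how a masked phone hint collapses the search space. This is not Brutecat's actual gpb tool; the mask format and digits below are invented for illustration, and the real tool additionally used libphonenumber's per-country formatting rules to discard impossible numbers before ever hitting the API.

```python
# Toy illustration (hypothetical, not the real gpb tool): enumerate every
# digit string that fits a mask where '?' marks an unknown digit.
import itertools

def candidates(mask: str):
    """Yield every digit string matching the mask ('?' = unknown digit)."""
    unknown = [i for i, ch in enumerate(mask) if ch == "?"]
    digits = list(mask)
    for combo in itertools.product("0123456789", repeat=len(unknown)):
        for pos, d in zip(unknown, combo):
            digits[pos] = d
        yield "".join(digits)

mask = "555??03"  # invented mask: known prefix '555', known suffix '03'
cands = list(candidates(mask))
print(len(cands))           # 100 candidates for two unknown digits
print(cands[0], cands[-1])  # 5550003 5559903
```

Two unknown digits mean only 10^2 candidates; a country-aware validity filter (as libphonenumber provides) shrinks the set further, which is why leaking even a partial hint is dangerous.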
[...] Surprisingly, Google didn't consider this a serious flaw, awarding Brutecat $5,000 under its bug bounty scheme.
"Google was pretty receptive and promptly patched the bug," the researcher said. "By depreciating the whole form compared to my other disclosures, this was done much more quickly. That being said, the bounty is pretty low when taking into account the impact of this bug."
Arthur T Knackerbracket has processed the following story:
New imagery encompassing nearly 800,000 galaxies.
The Cosmic Evolution Survey (COSMOS) has just released the “largest look ever into the deep universe.” Even more importantly, it has made the data publicly available and accessible “in an easily searchable format.” Possibly the star attraction from this massive 1.5TB of James Webb Space Telescope (JWST) data is the interactive viewer, where you can gawp at stunning space imagery encompassing nearly 800,000 galaxies. At the same site, you can find the complete set of NIRCam and MIRI mosaics and tiles, plus a full photometric catalog.
The COSMOS-Web program is a NASA-backed project supported by scientists from the University of California, Santa Barbara (UCSB), and the Rochester Institute of Technology (RIT). With this significant data release, the public at large is getting its largest-ever view deep into the universe.
According to the press release announcement, the published survey maps 0.54 square degrees of the sky, or “about the area of three full moons,” with NIRCam (near-infrared imaging), and a 0.2 square degree area with MIRI (mid-infrared imaging).
To help Joe Public make sense of this 1.5TB data deluge, COSMOS-Web has thoughtfully provided a full aperture and model-based photometric catalog. Using this reference, those interested can observe “photometry, structural measurements, redshifts, and physical parameters for nearly 800,000 galaxies.” More excitingly for amateur astrophysics enthusiasts, the researchers claim that the new JWST imaging, combined with previous COSMOS data, “opens many unexplored scientific avenues.”
Before you head over to the linked resources, it might be useful to familiarize yourself with some of the terms and units used by COSMOS-Web. If you look at the JWST NIRCam mosaics, for example, you will see that the newly surveyed area is mapped into 20 zones with reference codes. Each of the mosaics is available in four NIRCam filters (F115W, F150W, F277W, F444W). In terms of scale, mosaics are available at both 30 mas and 60 mas per pixel. ‘Mas’ is short for milliarcseconds, a unit of angular measurement commonly used in astronomy.
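For a feel of how small a milliarcsecond is, a quick unit conversion (nothing COSMOS-specific here beyond the quoted 30 mas pixel scale) goes as follows:

```python
# A milliarcsecond (mas) is 1/1000 of an arcsecond,
# and one degree contains 3600 arcseconds.
MAS_PER_DEG = 3600 * 1000  # 3,600,000 mas per degree

pixel_scale_mas = 30                  # NIRCam mosaic pixel scale
pixel_deg = pixel_scale_mas / MAS_PER_DEG
print(pixel_deg)                      # ~8.33e-06 degrees per pixel

# Pixels needed to span one degree of sky at this scale:
print(MAS_PER_DEG / pixel_scale_mas)  # 120,000 pixels per degree
```

At roughly 120,000 pixels per degree, it is easy to see why a single 30 mas mosaic can run to a 174GB download.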
Both mosaics (created by stitching together multiple tiles) and tiles (individual images, as captured by the telescope) are available for download and study. For example, a single 30mas pixel scale mosaic from NIRCam might require a download of up to 174GB, while the individual tiles are a ‘mere’ 7-10GB (compressed). You would also need specialized astronomical software to open these FITS data maps, but there are many options available, including some free and open-source software.
The COSMOS project has made use of most of the major telescopes on Earth and in space. It began with its use of the Hubble Space Telescope to cover what has now become known as the COSMOS field, a 2-square-degree field which appears to cover approximately 2 million galaxies. The initial Hubble survey took 640 orbits of the Earth. Ultimately, it is hoped that the research team will be able to study the formation and evolution of galaxies across cosmic time.
Study shows making hydrogen with soda cans and seawater is scalable and sustainable:
An MIT study shows that making hydrogen with aluminum soda cans and seawater is both scalable and sustainable.
Hydrogen has the potential to be a climate-friendly fuel since it doesn't release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a "green" fuel over its entire life cycle.
A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.
Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.
Now, the researchers have carried out a "cradle-to-grave" life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.
In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.
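A quick back-of-envelope check on the two reported figures shows how large the gap is:

```python
# Reported life-cycle figures: 1.45 vs 11 kg CO2 per kg of hydrogen.
aluminum_process = 1.45   # kg CO2 per kg H2, MIT aluminum-seawater process
fossil_process = 11.0     # kg CO2 per kg H2, conventional production

reduction = 1 - aluminum_process / fossil_process
print(f"{reduction:.1%}")  # 86.8% lower life-cycle emissions
```

In other words, the aluminum-seawater route emits roughly one-eighth the carbon dioxide of the fossil-fuel-based processes it would replace.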
Now the question is how to avoid a Hindenburg every now and then.