Wendell Berry's list from 1987 is more relevant than ever before:
What do you want from new technology?
[...] Wendell Berry provided a list of nine reasonable requirements for new tech back in 1987, and they're still appropriate today.
Berry's list is actually more relevant than ever before. And the failure of tech companies to meet his modest demands is now painfully evident to everybody.
It wasn't always this bad.
[...]
- The new tool should be cheaper than the one it replaces.
- It should be at least as small in scale as the one it replaces.
- It should do work that is clearly and demonstrably better than the one it replaces.
- It should use less energy than the one it replaces.
- If possible, it should use some form of solar energy, such as that of the body.
- It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.
- It should be purchasable and repairable as near to home as possible.
- It should come from a small, privately owned shop or store that will take it back for maintenance and repair.
- It should not replace or disrupt anything good that already exists, and this includes family and community relationships.
[...] The curious fact is that the most up-to-date and forward-looking thing in this whole article is Berry's list from 1987. Nothing on it is obsolescent or inappropriate or dysfunctional or harmful.
TFA discusses each rule and provides examples of how the opposite is what's actually happening today.
Arthur T Knackerbracket has processed the following story:
In 2013, dozens of dolphins living in Florida’s Indian River Lagoon mysteriously began to die. Their remains washed up, showing the animals had been emaciated. Now, over a decade later, ecologists believe they’ve figured out the cause of the bizarre die-off.
While the deaths have long been linked to gigantic algae blooms in the water, it took until now to determine exactly how the two events were connected, and it turns out, it’s mostly humanity’s fault. This might be hard to believe, but apparently dumping massive amounts of human waste and fertilizer into waterways can be bad.
As the ecologists note in the journal Frontiers in Marine Science, the long-lasting phytoplankton blooms began in 2011. The spread of the tiny plant-like organisms led to a widespread change in the Indian River Lagoon’s ecology. Their presence caused the amount of seagrass in the water to decrease by over 50%, and a 75% loss of macroalgae (better known as seaweed).
That alone wouldn’t have killed off the dolphins, but when the ecologists examined isotopic ratios in teeth samples taken from the carcasses, and compared them to teeth taken from 44 dolphins that hadn’t been part of the die-off, they realized their diets had been drastically altered. The dolphins had eaten 14% to 20% fewer ladyfish, a key dolphin prey animal, but had eaten up to 25% more sea bream, a less nutritious fish. In essence, the presence of such large amounts of phytoplankton had reduced the amount of food available for the dolphins’ usual prey. As the prey numbers dwindled, the dolphins had to catch more prey to consume the same amount of energy.

The effects weren’t felt just by those dolphins that died, but by the area’s dolphin population as a whole. At the time, 64% of observed dolphins were underweight, while 5% were classified as emaciated.
“In combination, the shift in diets and the widespread presence of malnourishment suggest that dolphins were struggling to catch enough prey of any type,” said Wendy Noke Durden, a research scientist at the Hubbs-SeaWorld Research Institute, who worked on the research, in a statement. “The loss of key structural habitats may have reduced overall foraging success by causing changes in the abundance and distribution of prey.”
The historical record bears this out. According to records of the causes of death for stranded dolphins in the area, starvation was the cause in 17% of recorded dolphin deaths between 2000 and 2020. That number spiked to 61% in 2013.
“Blooms of phytoplankton are part of productive ecological systems,” said Charles Jacoby, strategic program director at the University of South Florida, who also worked on the study. “Detrimental effects arise when the quantities of nutrients entering a system fuel unusually intense, widespread, or long-lasting blooms. In most cases, people’s activities drive these excess loads. Managing our activities to keep nutrients at a safe level is key to preventing blooms that disrupt ecological systems.”
There is a small silver lining to this grisly finding. As the researchers noted, waste and other crap dumped into Indian River Lagoon is being gradually reduced and is expected to hit safe levels in 2035.
Journal Reference: https://doi.org/10.3389/fmars.2025.1531742
https://phys.org/news/2025-04-bird-stay-unravel-entanglement-stiff.html
The concept of constructing a self-supporting structure made of rods—without the use of nails, ropes, or glue—dates back to Leonardo da Vinci. In the Codex Atlanticus, da Vinci illustrated a design for a self-supporting bridge across a river, which can be easily demonstrated using toothpicks, matches, or chopsticks. However, this design is fragile—pulling one of the rods or pushing the bridge from below can cause it to collapse.
In contrast, bird nests—which are also self-supporting structures consisting of rigid sticks and twigs—are remarkably stable despite continuous disturbances such as wind, ground vibrations, and the landing or takeoff of birds. What makes bird nests so sturdy?
This was the question at the center of a recent paper from L. Mahadevan and his team at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). The research is published in the Proceedings of the National Academy of Sciences.
Mahadevan is the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at SEAS and the Faculty of Arts and Sciences at Harvard. The paper was co-authored by Thomas Plumb-Reyes and Hao-Yu Greg Lin.
While entanglement in small, flexible systems, such as polymers, is well understood, less is known about how stiff, macroscale components entangle, especially when they are densely packed.
"When we think about entanglement, we typically think about flexible, individual constituents wrapping around each other, as exemplified in tangled headphone cords or entangling vines," said Mahadevan. "Contrary to this common intuition, stiff and straight rods can also entangle themselves—if they are long or thin enough."
To understand how, the researchers used X-ray tomography—a technique that creates a detailed cross-section of an object—as well as computer simulation and experimentation to peer inside and reconstruct the complex structure of bird nests.
The team collaborated with the Harvard Museum of Comparative Zoology, which provided a real bird's nest made from steel wires.
"Pigeons have been known to nest near construction sites and use scrap metal to make their nests, which worked out for us because X-ray scanning on metals provides a clear image to work with," said Yeonsu Jung, a postdoctoral fellow in applied mathematics at SEAS and first author of the paper.
After imaging and mapping the real birds' nests, the researchers created their own, using steel rods with varying length-to-diameter ratios, or aspect ratios. The team found that the degree of entanglement within a pile of rods depended on this ratio. Rods with a low aspect ratio (short and wide) showed weak entanglement, localized at separate spots, while rods with a high aspect ratio (long and thin) showed stronger entanglement throughout the entire structure.
"By looking inside these structures, we could see the percolations of entanglement," said Jung. "For rods with a low aspect ratio, there could be pockets of entanglement, but those would still fall apart and stay unconnected. But for high aspect ratio rods, things are really connected inside and the nest would stay together."
The team also found that, unlike in polymers and other microscopic filaments, friction and gravity play a role in keeping these systems entangled. Nests built with lower-aspect-ratio packing could become more entangled when exposed to force, in this case being bounced up and down.
Journal Reference: https://doi.org/10.1073/pnas.2401868122
Arthur T Knackerbracket has processed the following story:
Melting occurs despite Corsair's first-party 600W 12VHPWR cable being used.
Another Blackwell GPU bites the dust, as the meltdown reaper has reportedly struck a Redditor's MSI GeForce RTX 5090 Gaming Trio OC, with the impact tragically extending to the power supply as well. Ironically, the user avoided third-party cables and specifically used the original power connector, the one that was supplied with the PSU, yet both sides of the connector melted anyway.
Nvidia's GeForce RTX 50 series GPUs face an inherent design flaw where all six 12V pins are internally tied together. The GPU has no way of knowing if all cables are seated properly, preventing it from balancing the power load. In the worst-case scenario, five of the six pins may lose contact, resulting in almost 500W (41A) being drawn from a single pin. Given that PCI-SIG originally rated these pins for a maximum of 9.5A, this is a textbook fire/meltdown risk.
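The worst-case figure quoted above is simple Ohm's-law arithmetic. As a rough sketch (assuming the nominal 12 V rail, the 9.5 A per-pin rating cited for PCI-SIG, and a ~500 W draw through the connector):

```python
def per_pin_current(power_w: float, rail_v: float = 12.0, active_pins: int = 6) -> float:
    """Current through each pin if the load splits evenly across the active pins."""
    return power_w / rail_v / active_pins

# All six 12V pins sharing a 500 W load: roughly 6.9 A per pin, within the 9.5 A rating.
balanced = per_pin_current(500, active_pins=6)

# Worst case described above: five pins lose contact and one pin carries everything,
# roughly 41.7 A, more than four times the per-pin rating.
worst = per_pin_current(500, active_pins=1)
```

Because the card cannot sense per-pin current, nothing prevents the load from collapsing onto one pin like this; that is the design flaw the article describes.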
The GPU we're looking at today is the MSI RTX 5090 Gaming Trio OC, which, on purchase, set the Redditor back a hefty $2,900. That's still a lot better than the average price of an RTX 5090 from sites like eBay, currently sitting around $4,000. Despite using Corsair's first-party 600W 12VHPWR cable, the user was left with a melted GPU-side connector, a fate which extended to the PSU.
The damage, in the form of a charred contact point, is quite visible and clearly looks as if excess current was drawn from one specific pin, corresponding to the same design flaw mentioned above. The user is weighing an RMA for their GPU and PSU, but a GPU replacement is quite unpredictable due to persistent RTX 50 series shortages. Sadly, these incidents are still rampant despite Nvidia's assurances before launch.
With the onset of enablement drivers (R570) for Blackwell, both RTX 50 and RTX 40 series GPUs began suffering from instability and crashes. Despite multiple patches from Nvidia, RTX 40 series owners haven't seen a resolution and are still reliant on reverting to older 560-series drivers. Moreover, Nvidia's decision to discontinue 32-bit OpenCL and PhysX support with RTX 50 series GPUs has left the fate of many legacy applications and games in limbo.
As of now, the only foolproof method to secure your RTX 50 series GPU is to ensure optimal current draw through each pin. You might want to consider Asus' ROG Astral GPUs as they can provide per-pin current readings, a feature that's absent in reference RTX 5090 models. Alternatively, if feeling adventurous, maybe develop your own power connector with built-in safety measures and per-pin sensing capabilities?
The dire wolf has been extinct for over 10,000 years. These two wolves were brought back from extinction:
The dire wolf, an animal that has been extinct for over 10,000 years, has in a sense come back after scientists at Colossal Biosciences edited the DNA of a more modern wolf to give it the appearance and features of the dire wolf, a type of wolf made famous by "Game of Thrones."
Colossal Biosciences posted a video clip to X of two small wolf cubs barking, captioned: "You're hearing the first howl of a dire wolf in over 10,000 years. Meet Romulus and Remus—the world's first de-extinct animals, born on October 1, 2024."
[...] The Dallas-based company, which has also taken on the challenges of bringing back the dodo bird and the woolly mammoth, was able to obtain DNA from fossils of dire wolves in 2021 and then edit the DNA of grey wolves in order to weave the key features of the dire wolf into the grey wolf cubs. The edited embryos were placed into surrogate mothers. Three wolves were born as a result, two male and one female, the New York Times reported.
From the AP News report:
Colossal scientists learned about specific traits that dire wolves possessed by examining ancient DNA from fossils. The researchers studied a 13,000 year-old dire wolf tooth unearthed in Ohio and a 72,000 year-old skull fragment found in Idaho, both part of natural history museum collections.
Then the scientists took blood cells from a living gray wolf and used CRISPR to genetically modify them in 20 different sites, said Colossal's chief scientist Beth Shapiro. They transferred that genetic material to an egg cell from a domestic dog. When ready, embryos were transferred to surrogates, also domestic dogs, and 62 days later the genetically engineered pups were born.
We're used to updating Windows, macOS, and Linux systems at least once a month (and usually more), but people with ancient DOS-based PCs still get to join in the fun every once in a while. Over the weekend, the team that maintains FreeDOS officially released version 1.4 of the operating system, containing a list of fixes and updates that have been in the works since the last time a stable update was released in 2022.
[...]
The release has "a focus on stability" and includes an updated installer, new versions of common tools like fdisk and format, and the edlin text editor.
[...]
FreeDOS founder Jim Hall talked with Ars about several of these changes when we interviewed him about FreeDOS in 2024. The team issued the first release candidate for FreeDOS 1.4 back in January.
[...]
The standard install image includes all the files and utilities you need for a working FreeDOS install, and a separate "BonusCD" download is also available for those who want development tools, the OpenGEM graphical interface, and other tools.
People think the em dash is a dead giveaway you used AI – are they right?
ChatGPT is rapidly changing how we write, how we work – and maybe even how we think. So it makes sense that it stirs up strong emotions and triggers an instinct to figure out what's real and what's not.
But on LinkedIn, the hunt for AI-generated content has gone full Voight-Kampff. According to some, there's now a surefire way to spot ChatGPT use: the em dash.
Yes, the punctuation mark officially defined by the width of one "em." A favorite of James Joyce, Stephen King, and Emily Dickinson. A piece of punctuation that's been around since at least the 1830s. So why is it suddenly suspicious? Is it really an AI tell or punctuation paranoia?
Rebecca Harper, Head of Content Marketing at auditing compliance platform ISMS.online, doesn't think so: "I find the idea that it's some kind of AI tell ridiculous. If we start policing good grammar out of fear of AI, we're only making human writing worse!"
She's right. The em dash isn't some fringe punctuation mark. Sure, it's used less often than its siblings – the en dash and the humble hyphen – and it's more common in the US than the UK. But that doesn't make it automatically suspicious.
Robert Andrews, a Senior Editor, explains that this is a difference in style rather than a smoking gun: "It's not just a marker of AI, but of US English and AP Style. It's quite alien to UK journalism training and style, at least my own, albeit long ago. But increasingly encountered in AP Style environments - (or –, or —) unsurprising that this would flow into LLMs."
[...] Still, because it's slightly less common in some circles, people have latched onto it as a tell. Chris McNabb, Chief Technology Officer at eGroup Communications, makes this case: "I think it's a strong indicator, especially when you see it being used often by one person. Typically most people aren't going to long press the dash key to even use the en dash BUT AI such as ChatGPT uses it by default in a lot of cases. So yes when you do see an em dash particularly more than one in a message it's a pretty safe bet for a majority of posts."
So now, some people are actively scrubbing their em dashes to avoid suspicion. Editors, marketers, and content folks are switching them out for commas or full stops just to avoid being mistaken for a ChatGPT user.
[...] Maybe we'll look back on this moment and laugh. Or cringe. Maybe the AI bubble will burst, and human-made content will feel valuable again. Or maybe AI will become so deeply embedded, so seamless, that trying to tell the difference will feel quaint.
Until then, let's stop blaming punctuation. Because what we're really afraid of isn't the em dash. It's the slow, creeping erosion of what's real. And honestly? It's painful to live in fear. Isn't it?
I find this LinkedIn-based paranoia all very amusing, as I have been using all of these punctuation marks for years – nay, decades – in my technical writing. I seriously doubt that this means I am a machine—or does it??
Author and developer Scott Chacon has reflected that twenty years ago, as of April 7th, Linus Torvalds made the first commit to Git, the free and open source distributed version control system he was building at the time. Linus has long since passed the baton onward. As a developer tool, Git is known for its quirks and idiosyncrasies as much as for its ability to handle everything from small to very large projects with speed and efficiency.
Over these last 20 years, Git went from a small, simple, personal project to the most massively dominant version control system ever built.
I have personally had a hell of a ride on this particular software roller coaster.
I started using Git for something you might not imagine it was intended for, only a few months after its first commit. I then went on to found GitHub, write arguably the most widely read book on Git, build the official website of the project, start the annual developer conference, etc. This little project has changed the world of software development, but more personally, it has massively changed the course of my life.
I thought it would be fun today, as the Git project rolls into its third decade, to remember the earliest days of Git and explain a bit why I find this project so endlessly fascinating.
Although Git is often used as part of a set of services like those provided by Codeberg, GitLab, and others more or less infamous, it is perfectly easy to run it in-house. Either way, it has become virtually synonymous with version control. Over the years, Git has gradually pushed aside its predecessors and even many (if not all) of its contemporary competitors.
Previously:
(2024) Beyond Git: How Version Control Systems Are Evolving For Devops
(2022) Give Up GitHub: The Time Has Come!
(2017) Git 2.13 Released
The following are top facial recognition companies packaging technology to simplify identity verification for businesses, consumers, and government: Cognitec, Sensory, iProov, HyperVerge, Clarifai, and Amazon Rekognition, among many others. They use one or a combination of traditional algorithms, deep learning, optical and infrared sensors, 3D scans, and other technologies, along with hybrids of the many approaches.
Mother Jones is known for long political stories. This one covers Clearview, a successful facial recognition company: how it got its technology widely deployed (and highly remunerated) in a short time, and the underlying political ideology that drove the developers in their mission:
an interesting idea...The United States of America was founded on the idea that all men are created equal. And Curtis simply asked a question, as I remember it: 'What if they're not? What do you do?...How do you govern that?'...That's what we talked about all the time."
Clearview is riding a wave of demand in the sea of identity tracking technology, and they don't look likely to wipe out anytime soon:
Since Clearview's existence first came to light in 2020, the secretive company has attracted outsize controversy for its dystopian privacy implications. Corporations like Macy's allegedly used Clearview on shoppers, according to legal records; law enforcement has deployed it against activists and protesters; and multiple government investigations have found federal agencies' use of the product failed to comply with privacy requirements. Many local and state law enforcement agencies now rely on Clearview as a tool in everyday policing, with almost no transparency about how they use the tech. "What Clearview does is mass surveillance, and it is illegal," the privacy commissioner of Canada said in 2021. In 2022, the ACLU settled a lawsuit with Clearview for allegedly violating an Illinois state law that prohibits unauthorized biometric harvesting. Data protection authorities in France, Greece, Italy, and the Netherlands have also ruled that the company's data collection practices are illegal. To date, they have fined Clearview around $100 million.
It's amazing what impact a small group of technology oriented people can have in today's society.
https://spectrum.ieee.org/sound-waves
A group of international researchers has developed a way to use sound to generate different types of wave patterns on the surface of water, and to use those patterns to precisely control and steer floating objects. Though still in the early stages of developing this technique, the scientists say further research could lead to wave patterns that help corral and clean up oil spills and other pollutants. Further out, at the micrometer scale, light waves based on the research could be used to manipulate cells for biological applications; and by scaling up the research to generate water waves hundreds of times larger using mechanical means, optimally designed water wave patterns might be used to generate electricity.
The team conducted laboratory experiments where they generated topological wave structures such as vortices, where the water swirls around a center point; Möbius strips that cause the water to twist and loop around in a circle; and skyrmions, where the waves twist and turn in 3D space.
"We were able to use these patterns to control the movement of objects as small as a grain of rice to as large as a ping-pong ball, which has never been done before," says Yijie Shen, an assistant professor at Nanyang Technological University in Singapore who co-led the research. "Some patterns can act like invisible tweezers to hold an object in place on the water, while other patterns caused the objects to move along circular or spiral paths."
Commenting on the findings, Usama Kadri, a reader in applied and computational mathematics at Cardiff University in Wales, noted that "The research is conceptually innovative and represents a significant development in using sound to generate water waves." Kadri, who is researching the effects of acoustic-gravity waves (sound waves influenced by gravity and buoyancy), added, "The findings can be a bridge between disciplines such as fluid dynamics, wave physics, and topological field theory, and open up a new way for remote manipulation and trapping of particles of different sizes."
The lab set-up consisted of carefully designed 3D-printed plastic structures based on computer simulations, including a hexagonal structure and a ring-shaped structure, each partially submerged in a tank of water. Rubber tubing from individual off-the-shelf speakers was attached to precisely sited nozzles protruding from the tops of the structures and was used to deliver a continuous low-frequency 6.8 hertz sound to the hexagonal device, or a 9 hertz sound to the ring device. The sounds cause the surface of the water to oscillate and create the desired wave patterns. A particular sound's amplitude, phase, and frequency can be adjusted using a laptop computer, so that when the waves meet and combine in the tank, they create the complex patterns previously worked out using computer simulations. The findings were published in February in Nature.
The wave patterns apply forces similar to those seen in optical and acoustic systems, including gradient forces that change in intensity, and which can attract objects towards the strongest part of the wave, like leaves moving to the center of a whirlpool; and radiation pressure that pushes objects in the same direction the wave is moving.
"The wave patterns we generated are topological and stable, so they keep their shape even when there is some disturbance in the water," says Shen. "This is something we want to study further to better understand what's happening."
No one can seem to agree on what an AI agent is:
Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will "join the workforce" this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said that Salesforce's goal is to be "the number one provider of digital labor in the world" via the company's various "agentic" services.
But no one can seem to agree on what an AI agent is, exactly.
In the last few years, the tech industry has boldly proclaimed that AI "agents" — the latest buzzword — are going to change everything. In the same way that AI chatbots like OpenAI's ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.
That may be true. But it also depends on how one defines "agents," which is no easy task. Much like other AI-related jargon (e.g. "multimodal," "AGI," and "AI" itself), the terms "agent" and "agentic" are becoming diluted to the point of meaninglessness.
That threatens to leave OpenAI, Microsoft, Salesforce, Amazon, Google, and the countless other companies building entire product lineups around agents in an awkward place. An agent from Amazon isn't the same as an agent from Google or any other vendor, and that's leading to confusion — and customer frustration.
[...] So why the chaos?
Well, agents — like AI — are a nebulous thing, and they're constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI's Operator, Google's Project Mariner, and Perplexity's shopping agent — and their capabilities are all over the map.
Rich Villars, GVP of worldwide research at IDC, noted that tech companies "have a long history" of not rigidly adhering to technical definitions.
"They care more about what they are trying to accomplish" on a technical level, Villars told TechCrunch, "especially in fast-evolving markets."
But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai.
"The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning," Ng said in a recent interview, "but about a year ago, marketers and a few big companies got a hold of them."
[...] "Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes," Rowan said. "This can result in varied interpretations of what AI agents should deliver, potentially complicating project goals and results. Ultimately, while the flexibility can drive creative solutions, a more standardized understanding would help enterprises better navigate the AI agent landscape and maximize their investments."
Unfortunately, if the unraveling of the term "AI" is any indication, it seems unlikely the industry will coalesce around one definition of "agent" anytime soon — if ever.
The Overpopulation Project has an English translation of Frank Götmark's short essay exploring the idea that Homo sapiens is an invasive species. The essay was originally published on March 30th in Svenska Dagbladet and has been very slightly modified.
An invasive species can be defined as an alien, non-native species that spreads and causes various forms of damage. Such species are desirable to regulate and, in the best case, eliminate from a country. But compared to our population growth they are a minor problem, at least in Sweden and many European countries. In North America and Australia, they are a larger problem. But again, they cause a lot less damage than Homo sapiens, who is in any case the cause of their spread.
Invasive species tend to appear near buildings and infrastructure; for example, on roadsides and other environments that are easily colonized, or in the sea via ballast in ships. It is often difficult to draw boundaries in time and space for invasive species. For example, in Sweden several species came in via seeds in agriculture during the 19th century and became common, such as certain weeds.
The idea has been explored before, for example back in 2015 by Scientific American. It's also relevant to note that the global population might be underestimated substantially.
Previously:
(2019) July 11 is World Population Day
(2016) Bioethicist: Consider Having Fewer Children in the Age of Climate Change
(2015) Poll Shows Giant Gap Between what US Public and Scientists Think
(2014) The Climate-Change Solution No One Will Talk About
NASA's Webb Exposes Complex Atmosphere of Starless Super-Jupiter - NASA Science:
An international team of researchers has discovered that previously observed variations in brightness of a free-floating planetary-mass object known as SIMP 0136 must be the result of a complex combination of atmospheric factors, and cannot be explained by clouds alone.
Using NASA's James Webb Space Telescope to monitor a broad spectrum of infrared light emitted over two full rotation periods by SIMP 0136, the team was able to detect variations in cloud layers, temperature, and carbon chemistry that were previously hidden from view.
The results provide crucial insight into the three-dimensional complexity of gas giant atmospheres within and beyond our solar system. Detailed characterization of objects like these is essential preparation for direct imaging of exoplanets, planets outside our solar system, with NASA's Nancy Grace Roman Space Telescope, which is scheduled to begin operations in 2027.
SIMP 0136 is a rapidly rotating, free-floating object roughly 13 times the mass of Jupiter, located in the Milky Way just 20 light-years from Earth. Although it is not classified as a gas giant exoplanet — it doesn't orbit a star and may instead be a brown dwarf — SIMP 0136 is an ideal target for exo-meteorology: It is the brightest object of its kind in the northern sky. Because it is isolated, it can be observed with no fear of light contamination or variability caused by a host star. And its short rotation period of just 2.4 hours makes it possible to survey very efficiently.
Prior to the Webb observations, SIMP 0136 had been studied extensively using ground-based observatories and NASA's Hubble and Spitzer space telescopes.
"We already knew that it varies in brightness, and we were confident that there are patchy cloud layers that rotate in and out of view and evolve over time," explained Allison McCarthy, doctoral student at Boston University and lead author on a study published today in The Astrophysical Journal Letters. "We also thought there could be temperature variations, chemical reactions, and possibly some effects of auroral activity affecting the brightness, but we weren't sure."
To figure it out, the team needed Webb's ability to measure very precise changes in brightness over a broad range of wavelengths.
Using NIRSpec (Near-Infrared Spectrograph), Webb captured thousands of individual 0.6- to 5.3-micron spectra — one every 1.8 seconds over more than three hours as the object completed one full rotation. This was immediately followed by an observation with MIRI (Mid-Infrared Instrument), which collected hundreds of spectroscopic measurements of 5- to 14-micron light — one every 19.2 seconds, over another rotation.
The result was hundreds of detailed light curves, each showing the change in brightness of a very precise wavelength (color) as different sides of the object rotated into view.
"To see the full spectrum of this object change over the course of minutes was incredible," said principal investigator Johanna Vos, from Trinity College Dublin. "Until now, we only had a little slice of the near-infrared spectrum from Hubble, and a few brightness measurements from Spitzer."
The team noticed almost immediately that there were several distinct light-curve shapes. At any given time, some wavelengths were growing brighter, while others were becoming dimmer or not changing much at all. A number of different factors must be affecting the brightness variations.
"Imagine watching Earth from far away. If you were to look at each color separately, you would see different patterns that tell you something about its surface and atmosphere, even if you couldn't make out the individual features," explained co-author Philip Muirhead, also from Boston University. "Blue would increase as oceans rotate into view. Changes in brown and green would tell you something about soil and vegetation."
To figure out what could be causing the variability on SIMP 0136, the team used atmospheric models to show where in the atmosphere each wavelength of light was originating.
"Different wavelengths provide information about different depths in the atmosphere," explained McCarthy. "We started to realize that the wavelengths that had the most similar light-curve shapes also probed the same depths, which reinforced this idea that they must be caused by the same mechanism."
One group of wavelengths, for example, originates deep in the atmosphere where there could be patchy clouds made of iron particles. A second group comes from higher clouds thought to be made of tiny grains of silicate minerals. The variations in both of these light curves are related to patchiness of the cloud layers.
A third group of wavelengths originates at very high altitude, far above the clouds, and seems to track temperature. Bright "hot spots" could be related to auroras that were previously detected at radio wavelengths, or to upwelling of hot gas from deeper in the atmosphere.
Some of the light curves cannot be explained by either clouds or temperature, but instead show variations related to atmospheric carbon chemistry. There could be pockets of carbon monoxide and carbon dioxide rotating in and out of view, or chemical reactions causing the atmosphere to change over time.
"We haven't really figured out the chemistry part of the puzzle yet," said Vos. "But these results are really exciting because they are showing us that the abundances of molecules like methane and carbon dioxide could change from place to place and over time. If we are looking at an exoplanet and can get only one measurement, we need to consider that it might not be representative of the entire planet."
This research was conducted as part of Webb's General Observer Program 3548.
Arthur T Knackerbracket has processed the following story:
[CFD: Computational Fluid Dynamics]
CFD simulation time is cut from almost 40 hours to less than two using 1,024 Instinct MI250X accelerators paired with EPYC CPUs.
AMD processors were instrumental in achieving a new world record during a recent Ansys Fluent computational fluid dynamics (CFD) simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory (ORNL). According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly.
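The headline speedup follows directly from the two runtimes quoted above (a back-of-the-envelope check, not Ansys's own benchmark methodology):

```python
# Speedup of the GPU-accelerated Frontier run over the CPU-only baseline,
# using the runtimes quoted in the press release.
cpu_only_hours = 38.5   # original run on 3,700 CPU cores
gpu_hours = 1.5         # 1,024 Instinct MI250X accelerators + EPYC CPUs

speedup = cpu_only_hours / gpu_hours
print(f"Speedup: {speedup:.1f}x")   # 25.7x, i.e. "more than 25 times faster"
```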
Frontier was once the fastest supercomputer in the world, and it was also the first one to break into exascale performance. It replaced the Summit supercomputer, which was decommissioned in November 2024. However, the El Capitan supercomputer, located at the Lawrence Livermore National Laboratory, broke Frontier’s record at around the same time. Both Frontier and El Capitan are powered by AMD GPUs, with the former boasting 9,408 AMD EPYC processors and 37,632 AMD Instinct MI250X accelerators. On the other hand, the latter uses 44,544 AMD Instinct MI300A accelerators.
Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia’s market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
“By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges, enabling breakthroughs in efficiency, sustainability, and innovation,” said Brad McCredie, AMD Senior Vice President for Data Center Engineering.
Even though AMD can deliver top-tier performance at a much cheaper price than Nvidia, many AI data centers prefer Team Green because of software issues with AMD’s hardware.
One high-profile example was Tiny Corp's TinyBox system, which suffered instability problems with its AMD Radeon RX 7900 XTX graphics cards. The problem was so bad that Dr. Lisa Su had to step in to fix the issues. And even though it was purportedly fixed, the company still released two versions of the TinyBox AI accelerator — one powered by AMD and the other by Nvidia. Tiny Corp also recommended the more expensive Team Green version, with its six RTX 4090 GPUs, because of its driver quality.
If Team Red can fix the software support on its great hardware, then it could likely get more customers for its chips and get a more even footing with Nvidia in the AI GPU market.
The temporary structure likely consisted of debris from a broken-up asteroid:
Earth may have sported a Saturn-like ring system 466 million years ago, after it captured and wrecked a passing asteroid, a new study suggests.
The debris ring, which likely lasted tens of millions of years, may have led to global cooling and even contributed to the coldest period on Earth in the past 500 million years.
That's according to a fresh analysis of 21 crater sites around the world that researchers suspect were all created by falling debris from a large asteroid between 488 million and 443 million years ago, an era in Earth's history known as the Ordovician, during which our planet witnessed dramatically increased asteroid impacts.
A team led by Andy Tomkins, a professor of planetary science at Monash University in Australia, used computer models of how our planet's tectonic plates moved in the past to map out where the craters were when they first formed over 400 million years ago. The team found that all the craters had formed on continents that floated within 30 degrees of the equator, suggesting they were created by the falling debris of a single large asteroid that broke up after a near-miss with Earth.
"Under normal circumstances, asteroids hitting Earth can hit at any latitude, at random, as we see in craters on the moon, Mars and Mercury," Tomkins wrote in The Conversation. "So it's extremely unlikely that all 21 craters from this period would form close to the equator if they were unrelated to one another."
The chain of crater locations hugging the equator is consistent with a debris ring orbiting Earth, scientists say. That's because such rings typically form above planets' equators, as occurs with those circling Saturn, Jupiter, Uranus and Neptune. The chances that these impact sites were created by unrelated, random asteroid strikes are about 1 in 25 million, the new study found.
[...] "Over millions of years, material from this ring gradually fell to Earth, creating the spike in meteorite impacts observed in the geological record," Tomkins added in a university statement. "We also see that layers in sedimentary rocks from this period contain extraordinary amounts of meteorite debris."
The team found that this debris, which represented a specific type of meteorite and was found to be abundant in limestone deposits across Europe, Russia and China, had been exposed to a lot less space radiation than meteorites that fall today. Those deposits also reveal signatures of multiple tsunamis during the Ordovician period, all of which can be best explained by a large, passing asteroid capture-and-break-up scenario, the researchers argue.
The new study is a "new and creative idea that explains some observations," Birger Schmitz of Lund University in Sweden told New Scientist. "But the data are not yet sufficient to say that the Earth indeed had rings."
Searching for a common signature in specific asteroid grains across the newly studied impact craters would help test the hypothesis, Schmitz added.
Earth may have had a ring system 466 million years ago:
In a discovery that challenges our understanding of Earth's ancient history, researchers have found evidence suggesting that Earth may have had a ring system, which formed around 466 million years ago, at the beginning of a period of unusually intense meteorite bombardment known as the Ordovician impact spike.
This surprising hypothesis, published today in Earth and Planetary Science Letters, stems from plate tectonic reconstructions for the Ordovician period noting the positions of 21 asteroid impact craters. All these craters are located within 30 degrees of the equator, despite over 70 per cent of Earth's continental crust being outside this region, an anomaly that conventional theories cannot explain.
The research team believes this localised impact pattern was produced after a large asteroid had a close encounter with Earth. As the asteroid passed within Earth's Roche limit, it broke apart due to tidal forces, forming a debris ring around the planet—similar to the rings seen around Saturn and other gas giants today.
[...] "What makes this finding even more intriguing is the potential climate implications of such a ring system," he said.
The researchers speculate that the ring could have cast a shadow on Earth, blocking sunlight and contributing to a significant global cooling event known as the Hirnantian Icehouse.
This period, which occurred near the end of the Ordovician, is recognised as one of the coldest in the last 500 million years of Earth's history.
"The idea that a ring system could have influenced global temperatures adds a new layer of complexity to our understanding of how extra-terrestrial events may have shaped Earth's climate," Professor Tomkins said.
Normally, asteroids impact the Earth at random locations, so we see impact craters distributed evenly over the Moon and Mars, for example. To investigate whether the distribution of Ordovician impact craters is non-random and closer to the equator, the researchers calculated the continental surface area capable of preserving craters from that time.
They focused on stable, undisturbed cratons with rocks older than the mid-Ordovician period, excluding areas buried under sediments or ice, eroded regions, and those affected by tectonic activity. Using a GIS approach (Geographic Information System), they identified geologically suitable regions across different continents. Regions like Western Australia, Africa, the North American Craton, and small parts of Europe were considered well-suited for preserving such craters. Only 30 per cent of the suitable land area was determined to have been close to the equator, yet all the impact craters from this period were found in this region. The chances of this happening are like tossing a three-sided coin (if such a thing existed) and getting tails 21 times.
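The coin-toss analogy can be made concrete. Treating each impact as an independent event with a 30 per cent chance of landing in the crater-preserving equatorial band (a simplification of the study's actual statistical model, which accounts for the real preservation areas), the naive probability is:

```python
# Naive probability that all 21 Ordovician craters fall in the equatorial
# band by chance, assuming each impact independently has a 30% chance
# of landing in the suitable near-equator area.
p_equatorial = 0.30
n_craters = 21

p_all = p_equatorial ** n_craters
print(f"P(all 21 near the equator) = {p_all:.2e}")   # ≈ 1.05e-11
```

That is roughly 1 in 100 billion under this simplified model, which illustrates why the researchers consider a common origin far more plausible than 21 unrelated random strikes.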
The implications of this discovery extend beyond geology, prompting scientists to reconsider the broader impact of celestial events on Earth's evolutionary history. It also raises new questions about the potential for other ancient ring systems that could have influenced the development of life on Earth.
Could similar rings have existed at other points in our planet's history, affecting everything from climate to the distribution of life? This research opens a new frontier in the study of Earth's past, providing new insights into the dynamic interactions between our planet and the wider cosmos.
Journal Reference: https://doi.org/10.1016/j.epsl.2024.118991