
posted by janrinok on Thursday November 20, @02:37PM

Use the right tool for the job:

In my first interview out of college I was asked the change counter problem:

Given a set of coin denominations, find the minimum number of coins required to make change for a given amount. E.g., for US coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
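
A minimal sketch of the dynamic-programming approach the question was fishing for might look like this (our illustration, not code from the article); unlike the greedy algorithm, it handles ill-behaved denominations such as [10, 9, 1]:

    # Minimum-coin change via dynamic programming (illustrative sketch).
    def min_coins(denoms, target):
        INF = float("inf")
        best = [0] + [INF] * target  # best[v] = fewest coins summing to v
        for v in range(1, target + 1):
            for d in denoms:
                if d <= v and best[v - d] + 1 < best[v]:
                    best[v] = best[v - d] + 1
        return best[target] if best[target] != INF else None

    assert min_coins([25, 10, 5, 1], 37) == 4  # quarter, dime, 2 pennies
    assert min_coins([10, 9, 1], 37) == 4      # 10 + 9 + 9 + 9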

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

[...] Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function corresponding to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems. Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.
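
The article's examples use MiniZinc; as a sketch of how small the solver formulation is, here is the same change-counter problem modeled in OR-Tools CP-SAT, one of the alternatives the author names (the model below is our illustration, not the article's code):

    # Coin change as a constraint-optimization model (OR-Tools CP-SAT).
    from ortools.sat.python import cp_model

    denoms, target = [10, 9, 1], 37

    model = cp_model.CpModel()
    # One decision variable per denomination: how many of that coin to use.
    counts = [model.NewIntVar(0, target, f"count_{d}") for d in denoms]
    model.Add(sum(c * d for c, d in zip(counts, denoms)) == target)
    model.Minimize(sum(counts))

    solver = cp_model.CpSolver()
    if solver.Solve(model) == cp_model.OPTIMAL:
        print(solver.ObjectiveValue())  # 4.0: one 10 and three 9s

The whole model is the two lines stating the constraint and the objective; the solver handles the search.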

[...] Now if I actually brought these questions to an interview, the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solvers' runtimes are unpredictable and almost always slower than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver.

[...] Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.


Original Submission

posted by janrinok on Thursday November 20, @09:52AM

Floating solar panels show promise, but environmental impacts vary by location, study finds:

Floating solar panels are emerging as a promising clean energy solution with environmental benefits, but a new study finds those effects vary significantly depending on where the systems are deployed.

Researchers from Oregon State University and the U.S. Geological Survey modeled the impact of floating solar photovoltaic systems on 11 reservoirs across six states. Their simulations showed that the systems consistently cooled surface waters and altered water temperatures at different layers within the reservoirs. However, the panels also introduced increased variability in habitat suitability for aquatic species.

"Different reservoirs are going to respond differently based on factors like depth, circulation dynamics and the fish species that are important for management," said Evan Bredeweg, lead author of the study and a former postdoctoral scholar at Oregon State. "There's no one-size-fits-all formula for designing these systems. It's ecology - it's messy."

While the floating solar panel market is established and growing in Asia, it remains limited in the United States, mostly to small pilot projects. However, a study released earlier this year by the U.S. Department of Energy's National Renewable Energy Laboratory estimated that U.S. reservoirs could host enough floating solar panel systems to generate up to 1,476 terawatt-hours annually, enough to power approximately 100 million homes.

Floating solar panels offer several advantages. The cooling effect of the water can boost panel efficiency by an estimated 5 to 15%. The systems can also be integrated with existing hydroelectric and transmission infrastructure. They may also help reduce evaporation, which is especially valuable in warmer, drier climates.

However, these benefits come with questions about potential impacts on aquatic ecosystems, an area that has received limited scientific attention.

[...] They found that changes in temperature and oxygen dynamics caused by floating solar panels can influence habitat availability for both warm-water and cold-water fish species. For instance, cooler water temperatures in summer generally benefit cold-water species, though this effect is most pronounced when panel coverage exceeds 50%.

The researchers note the need for continued research and long-term monitoring to ensure floating photovoltaic systems support clean energy goals without compromising aquatic ecosystems.

"History has shown that large-scale modifications to freshwater ecosystems, such as hydroelectric dams, can have unforeseen and lasting consequences," Bredeweg said.

Journal Reference: https://doi.org/10.1016/j.limno.2025.126293


Original Submission

posted by janrinok on Thursday November 20, @05:04AM
from the fly-me-to-the-moon dept.

Everybody knows Intel's 4004, designed for a calculator, was the first CPU on a chip. Everybody is wrong.

For a long time, what is now considered to be a prime candidate for the title of the 'world's first microprocessor' was a very well-kept secret. The MP944 is the inauspicious name of the chip we want to highlight today. It was developed to be the brains behind the U.S. Navy's F-14 Tomcat's Central Air Data Computer (CADC). Thus, it isn't surprising that the MP944 was a cut above the Intel 4004, the world's first commercial microprocessor, designed to power a desktop calculator.

The MP944 was designed by a team of engineers approximately 25-strong. Leading the two-year development of this microprocessor were Steve Geller and Ray Holt.

The processor began service in the aforementioned F-14 flight control computer in June 1970, over a year before Intel's 4004 became available in November 1971. The MP944 worked as part of a six-chip system for the real-time calculation of flight parameters such as altitude, airspeed, and Mach number, and was a key innovation enabling the Tomcat's articulated sweep-wing system.

By many accounts, the MP944 didn't just pre-date the 4004 by quite a margin; it was also significantly more performant. The tweet embedded in the original article suggests Geller and Holt's design was "8x faster than the Intel 4004." The complicated polynomial calculations required by the CADC likely dictated this degree of performance.

[...] As well as offering amazing performance for the early 1970s, the MP944 had to satisfy some stringent military-minded specifications. For example, it had to remain operational in temperatures spanning -55 to +125 degrees Celsius.

Being an essential component of a flight system also meant the military pushed for safety and failsafe measures. That was tricky, with such a cutting-edge development in a new industry. What ended up being provided to the F-14 Tomcats was a system that could constantly self-diagnose issues while executing its flight computer duties. These MP944 systems could apparently switch to an identical backup unit, fitted as standard, within 1/18th of a second of a fault being flagged by the self-test system.

As mentioned above, this processor of many firsts seems to be of largely academic interest nowadays. However, if Holt's attempts to publish the research paper outlining the architecture of the F-14's MP944-powered CADC system had been cleared back in 1971, we'd surely now all be living in a different future.


Original Submission

posted by janrinok on Thursday November 20, @12:18AM
from the What-is-your-major-malfunction-numbnuts? dept.

Task and Purpose has a short article on a traveling art exhibit of photos taken during the filming of Full Metal Jacket (1987). The actor Matthew Modine played Pvt. Joker in the war film directed by Stanley Kubrick. While playing the role of a war correspondent, Modine also ended up taking behind-the-scenes photos on set.

"If you're going to take pictures on my set, this is the camera you need to get," Kubrick said.

Those instructions, Modine realized, included an unspoken permission slip: to capture behind-the-scenes pictures of the iconic war film as it was being made (which perhaps made sense for the film: Pvt. Joker, after all, is a combat correspondent in the Marines, and snaps photos throughout).

Modine's photographs and a journal he kept during the filming are now the heart of "Full Metal Jacket Diary," an exhibit at the National Veterans Memorial and Museum in Columbus, Ohio. The photographs and other pieces spent much of the year at the National Museum of the Marine Corps in Quantico, Virginia, as the exhibit "Full Metal Modine."

The Internet Movie Database has a detailed entry, as usual, on Stanley Kubrick's Full Metal Jacket.


Original Submission

posted by janrinok on Wednesday November 19, @07:36PM
from the Altman-Bezos-Gates-and-Musk-again dept.

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html
https://archive.ph/mgZRE

As neural implant technology and A.I. advance at breakneck speeds, do we need a new set of rights to protect our most intimate data — our minds?

On a recent afternoon in the minimalist headquarters of the M.I.T. Media Lab, the research scientist Nataliya Kosmyna handed me a pair of thick gray eyeglasses to try on. They looked almost ordinary aside from the three silver strips on their interior, each one outfitted with an array of electrical sensors. She placed a small robotic soccer ball on the table before us and suggested that I do some "basic mental calculus." I started running through multiples of 17 in my head. After a few seconds, the soccer ball lit up and spun around. I seemed to have made it move with the sheer force of my mind, though I had not willed it in any sense. My brain activity was connected to a foreign object. "Focus, focus," Kosmyna said. The ball swirled around again. "Nice," she said. "You will get better."

Kosmyna, who is also a visiting research scientist at Google, designed the glasses herself. They are, in fact, a simple brain-computer interface, or B.C.I., a conduit between mind and machine. As my mind went from 17 to 34 to 51, electroencephalography (EEG) and electrooculography (EOG) sensors picked up heightened electrical activity in my eyes and brain. The ball had been programmed to light up and rotate whenever my level of neural "effort" reached a certain threshold. When my attention waned, the soccer ball stood still.

For now, the glasses are solely for research purposes. At M.I.T., Kosmyna has used them to help patients with A.L.S. (Amyotrophic Lateral Sclerosis) communicate with caregivers — but she said she receives multiple purchase requests a week. So far she has declined them. She's too aware that they could easily be misused.

Neural data can offer unparalleled insight into the workings of the human mind. B.C.I.s are already frighteningly powerful: Using artificial intelligence, scientists have used B.C.I.s to decode "imagined speech," constructing words and sentences from neural data; to recreate mental images (a process known as brain-to-image decoding); and to trace emotions and energy levels. B.C.I.s have allowed people with locked-in syndrome, who cannot move or speak, to communicate with their families and caregivers and even play video games. Scientists have experimented with using neural data from fMRI imaging and EEG signals to detect sexual orientation, political ideology and deception, to name just a few examples.

Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to "write" the brain as well, potentially altering human understanding and behavior. Optogenetic implants are already able to partially restore vision to patients with genetic eye disorders; lab experiments have shown that the same technique can be used to implant false memories in mammal brains, as well as to silence existing recollections and to recover lost ones.

Neuralink, Elon Musk's neural technology company, has so far implanted 12 people with its rechargeable devices. "You are your brain, and your experiences are these neurons firing," Musk said at a Neuralink presentation in June. "We don't know what consciousness is, but with Neuralink and the progress that the company is making, we'll begin to understand a lot more."

Musk's company aims to eventually connect the neural networks inside our brains to artificially intelligent ones on the outside, creating a two-way path between mind and machine. Neuroethicists have criticized the company for ethical violations in animal experiments, for a lack of transparency and for moving too quickly to introduce the technology to human subjects, allegations the company dismisses. "In some sense, we're really extending the fundamental substrate of the brain," a Neuralink engineer said in the presentation. "For the first time we are able to do this in a mass market product."

The neurotechnology industry already generates billions of dollars of revenue annually. It is expected to double or triple in size over the next decade. Today, B.C.I.s range from neural implants to wearable devices like headbands, caps and glasses that are freely available for purchase online, where they are marketed as tools for meditation, focus and stress relief. Sam Altman founded his own B.C.I. start-up, Merge Labs, this year, as part of his effort to bring about the day when humans will "merge" with machines. Jeff Bezos and Bill Gates are investors in Synchron, a Neuralink competitor.

Even if Kosmyna's glasses aren't for sale, similar technology is on the market. In 2023, Apple patented an AirPods prototype equipped with similar sensors, which would allow the device to monitor brain activity and other so-called biosignals. Last month, Meta unveiled a pair of new smart glasses and a "neural band," which lets users text and surf the web with small gestures alone. Overseas, China is fast-tracking development of the technology for medical and consumer use, and B.C.I.s are among the priorities of its new five-year plan for economic development.

"What's coming is A.I. and neurotechnology integrated with our everyday devices," said Nita Farahany, a professor of law and philosophy at Duke University who studies emerging technologies. "Basically, what we are looking at is brain-to-A.I. direct interactions. These things are going to be ubiquitous. It could amount to your sense of self being essentially overwritten."

To prevent this kind of mind-meddling, several nations and states have already passed neural privacy laws. In 2021, Chile amended its constitution to include explicit protections for "neurorights"; Spain adopted a nonbinding list of "digital rights" that protects individual identity, freedom and dignity from neurotechnologies. In 2023, European nations signed the León Declaration on neurotechnology, which prioritizes a "rights oriented" approach to the sector. The legislatures of Mexico, Brazil and Argentina have debated similar measures. California, Colorado, Montana and Connecticut have each passed laws to protect neural data.

The federal government has started taking an interest, too. In September, three senators introduced the Management of Individuals' Neural Data (MIND) Act, which would direct the Federal Trade Commission to examine how neural data should be defined and protected. The Uniform Law Commission, a nonprofit that authors model legislation, has convened lawyers, philosophers and scientists who are working on developing a standard law on mental privacy that states could choose to adopt.

Without regulations governing the collection of neural data and the commercialization of B.C.I.s, there is the real possibility that we might find ourselves becoming even more beholden to our devices and their creators than we all already are. In clinical trials, patients have sometimes been left in the lurch; some have had to have their B.C.I.s surgically explanted because funding for their trial ran out.

And the possibility that therapeutic neurotechnologies could one day be weaponized for political purposes looms heavily over the field. Musk, for example, has expressed a desire to "destroy the woke mind virus." As Quinn Slobodian and Ben Tarnoff argue in a forthcoming book, it does not require a great logical leap to suspect that he sees Neuralink as part of a way to do so.

In the 1920s, the German psychiatrist Hans Berger took the first EEG measurements, celebrating the fact that he could detect brain waves "from the unscathed skull." In the 1940s and '50s, scientists experimented with the use of electrodes to alleviate tremors and epilepsy. The Spanish neurophysiologist José Delgado made headlines in 1965, after he used implanted electrodes to stop a charging bull in its tracks; he bragged that he could "play" the minds of monkeys and cats like "electronic toys."

In a 1970 interview with The New York Times, Delgado prophesied that we would soon be able to alter our own "mental functions" as a result of genetic and neuroscientific advances. "The question is what sort of humans would we like, ideally, to construct?" he asked. The notion that a human being could be "constructed" had been troubling philosophers, scientists and writers since at least the late 18th century, when scientists first manipulated electric currents inside animal bodies. The language of electrification quickly seeped out of science and into politics: The historian Samantha Wesner has shown that in France, Jacobin revolutionaries spoke of "electrifying" people to recruit them to their cause and writers toyed with the possibility that political sentiment could be electrically controlled.

Two centuries later, when Delgado and his colleagues showed that it had become technically possible to use electricity to alter the workings of the animal mind, this too was accompanied by an explosion of political concern about the relation between the citizen and the state. Because the thinking subject is by definition a political subject — "the very presence of mind is a political presence," argues Danielle Carr, a historian of neuroscience who researches the political and cultural history of B.C.I.s and related technologies — the potential to alter the human brain was also understood as a threat to liberal politics itself.

In the U.S., where the Cold War fueled anxiety about potential brainwashing technologies, Delgado's work was at first approached with wonder and confusion, but it soon fell under increasing suspicion. In 1953, the director of the C.I.A., Allen Dulles, warned that the Soviet government was conducting a form of "brain warfare" to control minds. In a forthcoming book, Carr traces how the liberal doctrine of universal human rights and freedoms, including the freedom of thought, was positioned as a protective umbrella against communist mind-meddling, co-opting pre-existing struggles against psychiatric experimentation. While the United States warned of brain warfare abroad, it also worked to deploy it at home. Dulles authorized the creation of the C.I.A.'s clandestine MK-Ultra program, which for 20 years conducted psychiatric and mind-control experiments, often on unwitting and incarcerated subjects, until it was abruptly shut down in 1973.

Around this time, the University of California, Los Angeles, sought to create a Center for the Study and Reduction of Violence, leading to widespread speculation that the center would screen people in prisons and mental hospitals for indications of aggression and then subject them to brain surgery. An outcry, led in part by the Black Panthers, shut down funding for the initiative. These developments raised public awareness of neural technologies and contributed to the elevation of laws and rights as a stopgap against their worst uses. "We believe that mind control and behavior manipulation are contrary to the ideas laid down in the Bill of Rights and the American Constitution," the Republican lawmaker Steven Symms argued in a 1974 speech. Over the next decades, the development of neurotechnology drastically slowed.

By the 1990s, the end of the Cold War dispelled concerns about communist mind-meddling, and the political climate was ripe for reconsideration of the promises and perils of neurotech. In 2013, President Barack Obama created the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) program, which poured hundreds of millions of dollars into neuroscience. In 2019, the Pentagon's Defense Advanced Research Projects Agency announced that it was funding several teams working to develop nonsurgical neurotechnologies that could, for example, allow service members to control "swarms of unmanned aerial vehicles" with their minds.

As experimentation progressed, so did the medical and therapeutic uses of B.C.I.s. In 2004, Matthew Nagle, a tetraplegic, became the first human to be implanted with a sophisticated B.C.I. For individuals who cannot move or speak — those who are living with degenerative disease or paralysis, for example — advances in neural implants have been transformative.

Earlier this year, Bradford Smith, who lives with A.L.S. and was the third person to receive a Neuralink implant, used the A.I. chatbot Grok to draft his X posts. "Neuralink does not read my deepest thoughts or words I think about," Smith explains in an A.I.-generated video about his experience. "It just reads how I want to move and moves the cursor where I want." Because he received the implant as part of a clinical trial, Smith's neural data is protected by HIPAA rules governing private health information. But for mass-market consumer devices — like EEG headbands and glasses that can be used to enhance cognition, focus and productivity rather than simply restore brain functions that have been compromised — there are very few data protections. "The conflation of consumer and medical devices, and the lack of a consistent definition of neural data itself, adds to the confusion about what is at stake," said Leigh Hochberg, a neurointensive care physician, neuroscientist and director of BrainGate clinical trials. "It's a good societal conversation to have, to reflect on what we individually believe should remain private."

In 2017, driven by a sense of responsibility and terror about the implications of his own research, Rafael Yuste, a neuroscientist at Columbia University, convened scientists, philosophers, engineers and clinicians to create a set of ethical guidelines for the development of neurotechnologies.

One of the group's primary recommendations was that neurorights protecting individual identity, agency and privacy, as well as equal access and protection from bias, should be recognized as basic human rights and protected under the law. Yuste worked with the lawyer Jared Genser to create the Neurorights Foundation in 2021. Together, they surveyed 30 consumer neurotech companies and found that all but one had "no meaningful limitations" to retrieving or selling user neural data. There is consensus that some regulation is necessary given the risks of companies and governments having unfettered access to neural data, and that existing human rights already offer a small degree of protection. But neuroscientists, philosophers, developers and patients disagree about what kinds of regulations should be in place, and about how neurorights should be translated into written laws.

"If we keep inventing new rights, there is a risk that we won't know where one ends and the other begins," said Andrea Lavazza, an ethicist and a philosopher at Pegaso University in Italy who supports the creation of new rights. The United Nations, UNESCO and the World Economic Forum have each convened groups to investigate the implications of neurotechnologies on human rights; dozens of guidance documents on the ethics of the field have been published.

One of the fundamental purposes of law, at least in the United States, is to protect the individual from unwarranted interference. If neurotechnologies have the potential to decode or even change patterns of thought and action, advocates believe that law has the distinct capacity to try to restrain its reach into the innermost chambers of the mind. And while freedom of thought, conscience, opinion, expression and privacy are all recognized as basic human rights in international law, some philosophers and lawyers argue that these fundamental freedoms need to be updated and reinterpreted if they have any hope of protecting individuals against interference from neural devices, because they were conceived when the technology was only a distant dream. Farahany, the law and philosophy professor at Duke, argues that we need to recognize a fundamental right to "cognitive liberty," which scholars have defined as "the right and freedom to control one's own consciousness and electrochemical thought process" — to protect our minds. For Farahany, this kind of liberty "is a precondition to any other concept of liberty, in that, if the very scaffolding of thought itself is manipulated, undermined, interfered with, then any other way in which you would exercise your liberties is meaningless, because you are no longer a self-determined human at that point."

To call for the recognition of a new fundamental right, or even for the enhancement of existing human rights, is, at the moment, a countercultural move. Over the past several years, human rights and the international laws designed to protect them have been gravely weakened, while technologies that underlie surveillance capitalism have grown only more widespread. We already live in a world awash with personal data, including sensitive financial and biological information. We leave behind reams of revealing data wherever we go, in both the physical and digital worlds. Our computer cameras are powerful enough to capture our heart rates and make medical diagnoses. Adding neural data on top of this might not constitute such an immense shift. Or it might change everything, offering external actors a portal into our most intimate — and often unarticulated — thoughts and desires.

The emergence of B.C.I.s during the mid-20th century was greeted and ultimately torpedoed by Cold War liberalism — Dulles, the C.I.A. director, warned that mind-control techniques could thwart the American project of "spreading the gospel of freedom." Today, we lack a corresponding language with which to push back against the data economy's expanding reach into our minds. In a world where everything can be reduced to data to be bought and sold, where privacy regulations offer only a modicum of protection and where both domestic and international law have been weakened, there are few tools to shield our innermost selves from financialization.

In this sense, the debate over neurorights is a kind of last-ditch effort to ensure that the hard-won protections of the past century carry over into this one — to try to prevent the freedom of thought, conscience and opinion, for example, from being effectively suppressed by the increasingly algorithmic experience of everyday life. How much privacy might we, as a society, be willing to trade in exchange for augmented cognition? "In three years, we will have large-scale models of neural data that companies could put on a device or stream to the cloud, to try to make predictions," said Mackenzie Mathis, a neuroscientist at the Swiss Federal Institute of Technology, Lausanne. How those kinds of data transfers should be regulated, she said, is an urgent question. "We are changing people, just like social media, or large-language models changed people." Under these conditions, the challenge of ensuring that individuals retain the ability to manage access to their innermost selves, however defined, becomes all the more acute. "Our mental life is the core of our self, and we used to be sure that no one could break this barrier," said Lavazza. The collapse of that surety could be an invitation to dread a future in which the unrestricted use of these technologies will have destroyed society as we know it. Or it could be an occasion to rethink the politics that got us here in the first place.


Original Submission

posted by hubie on Wednesday November 19, @02:49PM

https://bit-hack.net/2025/11/10/fpga-based-ibm-pc-xt/

Recently I undertook a hobby project to recreate an IBM XT Personal Computer from the 1980s using a mix of authentic parts and modern technology. I had a clear goal in mind: I wanted to be able to play the EGA version of Monkey Island 1 on it, with no features missing. This means I need mouse support, hard drive with write access for saving the game, and Adlib audio, my preferred version of the game's musical score.

The catalyst for this project was the discovery that there are low-power versions of the NEC V20 CPU available (UPD70108H), which is compatible with the Intel 8088 used in the XT. Being a low-power version significantly simplifies its connection to an FPGA, which typically operates with 3.3-volt IO. Couple that with a low-power 1MB SRAM chip (CY62158EV30) to provide the XT with its 640KB of memory, and I started to have the bones of a complete system worked out.
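
As background on those memory figures: the V20, like the 8088, forms 20-bit physical addresses from 16-bit segment:offset pairs, giving the 1MB address space of which the XT uses the bottom 640KB as conventional RAM. A quick sketch of that address arithmetic (our illustration, not code from the project):

    # 8088/V20 real-mode addressing (illustrative sketch, not project code).
    def physical_address(segment, offset):
        # 20-bit physical address = segment * 16 + offset, wrapping at 1MB.
        return ((segment << 4) + offset) & 0xFFFFF

    # 0xA000:0x0000 is the 640KB boundary; everything below it is the
    # conventional RAM that the 1MB SRAM chip provides in this build.
    assert physical_address(0xA000, 0x0000) == 640 * 1024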

Source code, schematics and gerber files: https://github.com/bit-hack/iceXt


Original Submission

posted by hubie on Wednesday November 19, @10:01AM
from the that's-a-long-time-to-have-systemd-around dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20094

Canonical has announced that it will extend support for long-term support (LTS) versions of Ubuntu, supplying security updates for 15 years.

"Today, Canonical announced the expansion of the Legacy add-on for Ubuntu Pro, extending total coverage for Ubuntu LTS releases to 15 years. Starting with Ubuntu 14.04 LTS (Trusty Tahr), this extension brings the full benefits of Ubuntu Pro – including continuous security patching, compliance tooling and support for your OS – to long-lived production systems."

The extended support is provided as part of Canonical's Ubuntu Pro service.

Editor's Comment: Ubuntu Pro is free for personal use on up to 5 computers. There is also a pricing system for professional and enterprise use.


Original Submission

posted by hubie on Wednesday November 19, @05:16AM

https://itsfoss.com/news/mozilla-ai-window-plans/

Planned browsing mode will let users chat with an AI assistant while surfing the web.

Firefox has been pushing AI features for a while now. Over the past year, they've added AI chatbots in the sidebar, automatic alt text generation, and AI-enhanced tab grouping. It is basically their way of keeping up with Chrome and Edge, both of which have gone all-in on AI.

Of course not everyone is thrilled about AI creeping into their web browsers, and Mozilla (the ones behind Firefox) seems to understand that. Every AI feature in Firefox is opt-in. You can keep using the browser as you always have, or flip on AI tools when you actually need them.

Now, they are taking this approach a step further with something called AI Window.

Mozilla has announced it's working on AI Window, a new browsing mode that comes with a built-in AI assistant. Think of it as a third option alongside the Classic browsing mode and Private Window mode.

Before you get angry, know that it will be fully optional. Switch to AI Window when you want help, or just ignore it entirely. Try it, hate it, disable it. Mozilla's whole pitch is that you stay in control.

On the transparency front, they are making three commitments:

        A fully opt-in experience.
        Features that protect your choice.
        More transparency around how your data is used.

Why bother with all this, you ask? Mozilla sees AI as part of the web's future and wants to shape it their way. They figure ignoring AI while it reshapes the web doesn't help anyone, so they want to steer it toward user control rather than watch browsers from AI companies (read: Big Tech) lock people in.

Ajit Varma, the Vice President and Head of Product at Firefox, put it like this:

"We believe standing still while technology moves forward doesn't benefit the web or humanity. That's why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less."

The feature isn't live yet. Mozilla's building it "in the open" and wants feedback to shape how it turns out. If you want early access, there's a waitlist at firefox.com/ai to get updates and first dibs on testing.


Original Submission

posted by hubie on Wednesday November 19, @12:31AM

https://www.scientificamerican.com/article/raccoons-are-showing-early-signs-of-domestication/
https://archive.fo/HF0AV

City-dwelling raccoons seem to be evolving a shorter snout—a telltale feature of our pets and other domesticated animals

With dexterous childlike hands and cheeky "masks," raccoons are North America's ubiquitous backyard bandits. The critters are so comfortable in human environments, in fact, that a new study finds that raccoons living in urban areas are physically changing in response to life around humans—an early step in domestication.

The study lays out the case that the domestication process is often wrongly thought of as initiated by humans—with people capturing and selectively breeding wild animals. But the study authors claim that the process begins much earlier, when animals become habituated to human environments.

"One thing about us humans is that, wherever we go, we produce a lot of trash," says the study's co-author and University of Arkansas at Little Rock biologist Raffaela Lesch. Piles of human scraps offer a bottomless buffet to wildlife, and to access that bounty, animals need to be bold enough to rummage through human rubbish but not so bold as to become a threat to people. "If you have an animal that lives close to humans, you have to be well-behaved enough," Lesch says. "That selection pressure is quite intense."

Proto-dogs, for example, would have dug through human trash heaps, and cats were attracted to the mice that gathered around refuse. Over time, individual animals that had a reduced fight-or-flight response could feed more successfully around humans and pass their nonreactive behavior on to their offspring.

Oddly, tameness has also long been associated with traits such as a shorter face, a smaller head, floppy ears and white patches on fur—a pattern that Charles Darwin noted in the 1800s. The occurrence of these characteristics is known as domestication syndrome, but scientists didn't have a comprehensive theory to explain how the traits were connected until 2014. That's when a team of evolutionary biologists noticed that many of the physical traits that co-occur with domestication trace back to an important group of cells during embryonic development called neural crest cells. In early development, these form along an organism's back and migrate to different parts of the body, where they become important for the development of different types of cells. The biologists hypothesized that mutations that hamper the proliferation and development of neural crest cells could later result in a shorter muzzle, a lack of cartilage in the ears, a loss of pigmentation in the coat and a dampened fear response—leading to a better chance of survival in proximity to humans.

Lesch says the neural crest cells are the most salient hypothesis scientists have to explain domestication syndrome right now, but they are still gathering and evaluating evidence for or against it. One piece of the puzzle would be seeing if domestication syndrome was observable in real time with wild animals. For the new study, she and 16 graduate and undergraduate students gathered nearly 20,000 photographs of raccoons across the contiguous U.S. from the community science platform iNaturalist. The team found that raccoons in urban environments had a snout that was 3.5 percent shorter than that of their rural cousins.

Journal Reference: Apostolov, A., Bradley, A., Dreher, S. et al. Tracking domestication signals across populations of North American raccoons (Procyon lotor) via citizen science-driven image repositories. Front Zool 22, 28 (2025). https://doi.org/10.1186/s12983-025-00583-1


Original Submission

posted by hubie on Tuesday November 18, @07:47PM
from the dystopia-is-now! dept.

https://arstechnica.com/tech-policy/2025/11/dhs-wants-to-use-biometrics-to-track-immigrant-kids-throughout-their-lives/

Civil and digital rights experts are horrified by a proposed rule change that would allow the Department of Homeland Security to collect a wide range of sensitive biometric data on all immigrants, without age restrictions, and store that data throughout each person's "lifecycle" in the immigration system.

If adopted, the rule change would allow DHS agencies, including Immigration and Customs Enforcement (ICE), to broadly collect facial imagery, finger and palm prints, iris scans, and voice prints. They may also request DNA, which DHS claimed "would only be collected in limited circumstances," like to verify family relations.
[...]
Alarming critics, the update would allow DHS for the first time to collect biometric data of children under 14, which DHS claimed would help reduce human trafficking and other harms by making it easier to identify kids crossing the border unaccompanied or with a stranger.

Jennifer Lynch, general counsel for a digital rights nonprofit called the Electronic Frontier Foundation, told Ars that EFF joined Democratic senators in opposing a prior attempt by DHS to expand biometric data collection in 2020.
[...]
By maintaining a database, the US also risks chilling speech, as immigrants weigh risks of social media comments—which DHS already monitors—possibly triggering removals or arrests.

"People will be less likely to speak out on any issue for fear of being tracked and facing severe reprisals, like detention and deportation, that we've already seen from this administration," Lynch told Ars.
[...]
EFF previously noted that DHS's biometric database was already the second largest in the world. By expanding it, DHS estimated that the agency would collect "about 1.12 million more biometrics submissions" annually, increasing the current baseline to about 3.19 million.

As the data pool expands, DHS plans to hold onto the data until an immigrant who has requested benefits or otherwise engaged with DHS agencies is either granted citizenship or removed.
[...]
DHS said it "recognizes" that its sweeping data collection plans that remove age restrictions don't conform with Department of Justice policies. But the agency claimed there was no conflict since "DHS regulatory provisions control all DHS biometrics collections" and "DHS is not authorized to operate or collect biometrics under DOJ authorities."
[...]
Currently, DHS is seeking public comments on the rule change, which can be submitted over the next 60 days ahead of a deadline on January 2, 2026. The agency suggests it "welcomes" comments, particularly on the types of biometric data DHS wants to collect, including concerns about the "reliability of technology."
[...]
However, DHS claims that's now appropriate, including in cases where children were trafficked or are seeking benefits under the Violence Against Women Act and, therefore, are expected to prove "good moral character."

"Generally, DHS plans to use the biometric information collected from children for identity management in the immigration lifecycle only, but will retain the authority for other uses in its discretion, such as background checks and for law enforcement purposes," DHS's proposal said.

The changes will also help protect kids from removals, DHS claimed, by making it easier for an ICE attorney to complete required "identity, law enforcement, or security investigations or examinations."
[...]
It's possible that some DHS agencies may establish an age threshold for some data collection, the rule change noted.

A day after the rule change was proposed, 42 comments had been submitted. Most were critical, but as Lynch warned, speaking out seemed risky, with many choosing to anonymously criticize the initiative as violating people's civil rights and making the US appear more authoritarian.

One anonymous user cited guidance from the ACLU and the Electronic Privacy Information Center, while warning that "what starts as a 'biometrics update' could turn into widespread privacy erosion for immigrants and citizens alike."
[...]
"You pitch it as a tool against child trafficking, which is a real issue, but does swabbing a newborn really help, or does it just create a lifelong digital profile starting at day one?" the commenter asked. "Accuracy for growing kids is questionable, and the [ACLU] has pointed out how this disproportionately burdens families. Imagine the hassle for parents—it's not protection; it's preemptively treating every child like a data point in a government file."


Original Submission

posted by jelizondo on Tuesday November 18, @03:04PM

Wired has a story about the growing resistance to data center deployment. It seems that data centers have exceptionally bad track records with regard to adverse effects on the local communities on which they are inflicted.

The new report was released by Data Center Watch, a project run by AI security company 10a Labs that tracks community opposition to data centers across the country. The company has been keeping eyes on this topic since 2023, and released its first public findings earlier this year. (While 10a Labs does offer risk analysis for AI companies, report author Miquel Vila says that the Data Center Watch project is separate from the company's main work, and is not paid for by any clients.) But this week's report finds that the tide has turned sharply in the months since the group's first public output. The second quarter of this year, the new report finds, represented "a sharp escalation" in data center opposition across the country.

Data Center Watch's first report covered the period from May 2024 to March of 2025; in that period, it found, local opposition from residents had blocked or delayed a total of $64 billion in data center projects (six projects were blocked entirely, while 10 were delayed). But Data Center Watch's new report found that opposition blocked or delayed $98 billion in projects from March to June of 2025 alone—eight projects, including two in Indiana and Kentucky, were blocked in those three months, while nine were delayed. One of those projects, a $17 billion development in the Atlanta suburbs, was put on hold in May after the county imposed a 180-day moratorium on data center development, following significant pushback.

Are data centers in any way useful or are they just another layer riding on top of the LLM tulipomania?

Previously:
(2025) China Submerges a Data Center in the Ocean to Conserve Water, is That Even a Good Idea?
(2025) How AI is Subsidized by Your Utility Bills and Drives Copper Prices
(2025) 'a Black Hole of Energy Use': Meta's Massive AI Data Center is Stressing Out a Louisiana Community
(2024) The True Cost of Data Centers
(2024) AI Demand Is Fueling A Data Center Development Boom In North America
(2022) Amazon and Microsoft Want to Go Big on Data Centres, but the Power Grid Can't Support Them
(2020) Private Equity Firms are Gobbling Up Data Centers
(2015) Why is Google Opening a New Data Center in a Former Coal-Fired Power Plant?
and many more ...


Original Submission

posted by jelizondo on Tuesday November 18, @10:23AM
from the actual-good-news-for-consumers?!? dept.

https://arstechnica.com/gadgets/2025/11/google-settlement-with-epic-caps-play-store-fees-boosts-other-android-app-stores/

Google has spent the last few years waging a losing battle against Epic Games, which accused the Android maker of illegally stifling competition in mobile apps.
[...]
Late last month, Google was forced to make the first round of mandated changes to the Play Store to comply with the court's ruling. It grudgingly began allowing developers to direct users to alternative payment options and app downloads outside of Google's ecosystem.
[...]
These changes were only mandated for three years and in the United States. The new agreement includes a different vision for third-party stores on Android—one that Google finds more palatable and that still gives Epic what it wants. If approved, the settlement will lower Google's standard fee for developers. There will also be new support in Android for third-party app stores that will reduce the friction of leaving the Google bubble. Under the terms of the settlement, Google will support these changes through at least June 2032.

Google's Android chief, Sameer Samat, and Epic CEO Tim Sweeney announced the deal late on November 4. Sweeney calls it an "awesome proposal" that "genuinely doubles down on Android's original vision as an open platform."
[...]
The changes detailed in the settlement are not as wide-ranging as Judge Donato's original order but still mark a shift toward openness. Third-party app stores are getting a boost, developers will enjoy lower fees, and Google won't drag the process out for years. The parties claim in their joint motion that the agreement does not seek to undo the jury verdict or sidestep the court's previous order. Rather, it aims to reinforce the court's intent while eliminating potential delays in realigning the app market.

Google and Epic are going to court on Thursday to ask Judge Donato to approve the settlement, and Google could put the billing changes into practice by late this year.

Previously on SoylentNews:
After Two Rejections, Apple Approves Epic Games Store App for iOS - 20240716
Epic's Proposed Google Reforms to End Android App Market Monopoly - 20240414
"You a—Holes": Court Docs Reveal Epic CEO's Anger at Steam's 30% Fees - 20240316


Original Submission

posted by jelizondo on Tuesday November 18, @05:37AM

In a blunt assessment that sent shockwaves through the tech and policy worlds, Nvidia CEO Jensen Huang has warned that China is poised to dominate the artificial intelligence (AI) race – not because of superior technology, but due to crippling energy costs and regulatory burdens hobbling Western competitors:

The prolific tech leader was speaking on the sidelines of the FT's Future of AI Summit, where he warned that China would beat the U.S. in artificial intelligence thanks to lower energy costs and looser regulations.

The comments, which CNBC could not verify independently, would represent Huang's starkest warning yet that the U.S. is at risk of losing its global lead in advanced AI technologies.

After the FT published their report, the Nvidia chief softened his tone on X:

"As I have long said, China is nanoseconds behind America in AI. It's vital that America wins by racing ahead and winning developers worldwide."


Original Submission

posted by janrinok on Tuesday November 18, @12:53AM

https://arstechnica.com/tech-policy/2025/11/us-spy-satellites-built-by-spacex-send-signals-in-the-wrong-direction/

About 170 Starshield satellites built by SpaceX for the US government's National Reconnaissance Office (NRO) have been sending signals in the wrong direction, a satellite researcher found.

The SpaceX-built spy satellites are helping the NRO greatly expand its satellite surveillance capabilities, but the purpose of these signals is unknown. The signals are sent from space to Earth in a frequency band that's allocated internationally for Earth-to-space and space-to-space transmissions.

There have been no public complaints of interference caused by the surprising Starshield emissions. But the researcher who found them says they highlight a troubling lack of transparency in how the US government manages the use of spectrum and a failure to coordinate spectrum usage with other countries.

Scott Tilley, an engineering technologist and amateur radio astronomer in British Columbia, discovered the signals in late September or early October while working on another project. He found them in various parts of the 2025–2110 MHz band, and from his location, he was able to confirm that 170 satellites were emitting the signals over Canada, the United States, and Mexico. Given the global nature of the Starshield constellation, the signals may be emitted over other countries as well.

"This particular band is allocated by the ITU [International Telecommunication Union], the United States, and Canada primarily as an uplink band to spacecraft on orbit—in other words, things in space, so satellite receivers will be listening on these frequencies," Tilley told Ars. "If you've got a loud constellation of signals blasting away on the same frequencies, it has the potential to interfere with the reception of ground station signals being directed at satellites on orbit."

In the US, users of the 2025–2110 MHz portion of the S-Band include NASA and the National Oceanic and Atmospheric Administration (NOAA), as well as nongovernmental users like TV news broadcasters that have vehicles equipped with satellite dishes to broadcast from remote locations.

Experts told Ars that the NRO likely coordinated with the US National Telecommunications and Information Administration (NTIA) to ensure that signals wouldn't interfere with other spectrum users. A decision to allow the emissions wouldn't necessarily be made public, they said. But conflicts with other governments are still possible, especially if the signals are found to interfere with users of the frequencies in other countries.

Tilley previously made headlines in 2018 when he located a satellite that NASA had lost contact with in 2005. For his new discovery, Tilley published data and a technical paper describing the "strong wideband S-band emissions," and his work was featured by NPR on October 17.

Tilley's technical paper said emissions were detected from 170 satellites out of the 193 known Starshield satellites. Emissions have since been detected from one more satellite, making it 171 out of 193, he told Ars. "The apparent downlink use of an uplink-allocated band, if confirmed by authorities, warrants prompt technical and regulatory review to assess interference risk and ensure compliance" with ITU regulations, Tilley's paper said.

Tilley said he uses a mix of omnidirectional antennas and dish antennas at his home to receive signals, along with "software-defined radios and quite a bit of proprietary software I've written or open source software that I use for analysis work." The signals did not stop when the paper was published. Tilley said the emissions are powerful enough to be received by "relatively small ground stations."

Tilley's paper said that Starshield satellites emit signals with a width of 9 MHz and signal-to-noise ratios (SNR) of 10 to 15 decibels. "A 10 dB SNR means the received signal power is ten times greater than the noise power in the same bandwidth," while "20 dB means one hundred times," Tilley told Ars.
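
For the arithmetic behind those figures: decibels express the signal-to-noise power ratio on a logarithmic scale, so converting back is a one-liner (a standard formula, not code from Tilley's paper):

    # Convert an SNR in decibels to a signal/noise power ratio.
    def db_to_power_ratio(db):
        return 10 ** (db / 10)

    assert db_to_power_ratio(10) == 10.0   # 10 dB: signal 10x the noise power
    assert db_to_power_ratio(20) == 100.0  # 20 dB: 100x
    # The observed 10-15 dB signals are thus 10x to ~32x above the noise floor.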

Other Starshield signals that were 4 or 5 MHz wide "have been observed to change frequency from day to day with SNR exceeding 20dB," his paper said. "Also observed from time to time are other weaker wide signals from 2025–2110 MHz that may be artifacts or actual intentional emissions."

The 2025–2110 MHz band is used by NASA for science missions and by other countries for similar missions, Tilley noted. "Any other radio activity that's occurring on this band is intentionally limited to avoid causing disruption to its primary purpose," he said.

The band is used for some fully terrestrial, non-space purposes. Mobile service is allowed in 2025–2110 MHz, but ITU rules say that "administrations shall not introduce high-density mobile systems" in these frequencies. The band is also licensed in the US for non-federal terrestrial services, including the Broadcast Auxiliary Service, Cable Television Relay Service, and Local Television Transmission Service.

While Earth-based systems using the band, such as TV links from mobile studios, have legal protection against interference, Tilley noted that "they normally use highly directional and local signals to link a field crew with a studio... they're not aimed into space but at a terrestrial target with a very directional antenna." A trade group representing the US broadcast industry told Ars that it hasn't observed any interference from Starshield satellites.

[...] While Tilley doesn't know exactly what the emissions are for, his paper said the "signal characteristics—strong, coherent, and highly predictable carriers from a large constellation—create the technical conditions under which opportunistic or deliberate PNT exploitation could occur."

PNT refers to Positioning, Navigation, and Timing (PNT) applications. "While it is not suggested that the system was designed for that role, the combination of wideband data channels and persistent carrier tones in a globally distributed or even regionally operated network represents a practical foundation for such use, either by friendly forces in contested environments or by third parties seeking situational awareness," the paper said.

Much more information in the linked source.


Original Submission

posted by janrinok on Monday November 17, @08:14PM

Microsoft: the Company Doesn't Have Enough Electricity to Install All the AI GPUs in its Inventory

Microsoft CEO says the company doesn't have enough electricity to install all the AI GPUs in its inventory - 'you may actually have a bunch of chips sitting in inventory that I can't plug in':

Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years.

"I think the cycles of demand and supply in this particular case, you can't really predict, right? The point is: what's the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's power — it's sort of the ability to get the builds done fast enough close to power," Satya said in the podcast. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into." [Emphasis added]

Nadella's mention of 'shells' refers to a data center shell, which is effectively an empty building with all of the necessary ingredients, such as power and water, needed to immediately begin production.

AI's power consumption has been a topic many experts have discussed since last year. This came to the forefront as soon as Nvidia fixed the GPU shortage, and many tech companies are now investing in research in small modular nuclear reactors to help scale their power sources as they build increasingly large data centers.

This has already caused consumer energy bills to skyrocket, showing how the AI infrastructure being built out is negatively affecting the average American. OpenAI has even called on the federal government to build 100 gigawatts of power generation annually, saying that it's a strategic asset in the U.S.'s push for supremacy in its AI race with China. This comes after some experts said Beijing is miles ahead in electricity supply due to its massive investments in hydropower and nuclear power.

Aside from the lack of power, they also discussed the possibility of more advanced consumer hardware hitting the market. "Someday, we will make a[n] incredible consumer device that can run a GPT-5 or GPT-6-capable model completely locally at a low power draw — and this is like so hard to wrap my head around," Altman said. Gerstner then commented, "That will be incredible, and that's the type of thing that scares some of the people who are building, obviously, these large, centralized compute stacks."

This highlights another risk that companies must bear as they bet billions of dollars on massive AI data centers. While the infrastructure would still be needed to train new models, the data center demand that many estimate will come from the widespread use of AI might not materialize if semiconductor advancements enable models to run locally.

This could hasten the popping of the AI bubble, which some experts like Pat Gelsinger say is still several years away. But if and when that happens, we will be in for a shock as even non-tech companies would be hit by this collapse, exposing nearly $20 trillion in market cap.

We Did the Math on AI's Energy Footprint. Here's the Story You Haven't Heard.

We did the math on AI's energy footprint. Here's the story you haven't heard.:

[...] Now that we have an estimate of the total energy required to run an AI model to produce text, images, and videos, we can work out what that means in terms of emissions that cause climate change.

First, a data center humming away isn't necessarily a bad thing. If all data centers were hooked up to solar panels and ran only when the sun was shining, the world would be talking a lot less about AI's energy consumption. That's not the case. Most electrical grids around the world are still heavily reliant on fossil fuels. So electricity use comes with a climate toll attached.

"AI data centers need constant power, 24-7, 365 days a year," says Rahul Mewawalla, the CEO of Mawson Infrastructure Group, which builds and maintains high-energy data centers that support AI.

That means data centers can't rely on intermittent technologies like wind and solar power, and on average, they tend to use dirtier electricity. One preprint study from Harvard's T.H. Chan School of Public Health found that the carbon intensity of electricity used by data centers was 48% higher than the US average. Part of the reason is that data centers currently happen to be clustered in places that have dirtier grids on average, like the coal-heavy grid in the mid-Atlantic region that includes Virginia, West Virginia, and Pennsylvania. They also run constantly, including when cleaner sources may not be available.

Tech companies like Meta, Amazon, and Google have responded to this fossil fuel issue by announcing goals to use more nuclear power. Those three have joined a pledge to triple the world's nuclear capacity by 2050. But today, nuclear energy accounts for only about 20% of electricity supply in the US and powers just a fraction of AI data centers' operations. In Virginia, which has more data centers than any other US state, natural gas accounts for more than half of the electricity generated. What's more, new nuclear operations will take years, perhaps decades, to materialize.

In 2024, fossil fuels including natural gas and coal made up just under 60% of electricity supply in the US. Nuclear accounted for about 20%, and a mix of renewables accounted for most of the remaining 20%.

Gaps in power supply, combined with the rush to build data centers to power AI, often mean shortsighted energy plans. In April, Elon Musk's X supercomputing center near Memphis was found, via satellite imagery, to be using dozens of methane gas generators to supplement grid power; the Southern Environmental Law Center alleges the generators are not approved by energy regulators and violate the Clean Air Act.

The key metric used to quantify the emissions from these data centers is called the carbon intensity: how many grams of carbon dioxide emissions are produced for each kilowatt-hour of electricity consumed. Nailing down the carbon intensity of a given grid requires understanding the emissions produced by each individual power plant in operation, along with the amount of energy each is contributing to the grid at any given time. Utilities, government agencies, and researchers use estimates of average emissions, as well as real-time measurements, to track pollution from power plants.
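
As a rough illustration of that weighting (with entirely made-up plant numbers, since the article gives none), the grid-average intensity is just a generation-weighted mean of each plant's emission rate:

    # Hypothetical sketch: grid-average carbon intensity is the
    # generation-weighted mean of each plant's emission rate.
    # All numbers below are invented for illustration.
    plants = [
        # (generation_mwh, emission_rate_g_per_kwh)
        (500, 980),  # coal plant
        (800, 430),  # natural gas plant
        (700, 0),    # solar/nuclear (near-zero operational emissions)
    ]

    total_generation = sum(gen for gen, _ in plants)
    intensity = sum(gen * rate for gen, rate in plants) / total_generation
    print(f"Grid-average intensity: ~{intensity:.0f} g CO2/kWh")  # ~417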

This intensity varies widely across regions. The US grid is fragmented, and the mixes of coal, gas, renewables, or nuclear vary widely. California's grid is far cleaner than West Virginia's, for example.

Time of day matters too. For instance, data from April 2024 shows that California's grid can swing from under 70 grams per kilowatt-hour in the afternoon, when there is plenty of solar power available, to over 300 grams per kilowatt-hour in the middle of the night.

This variability means that the same activity may have very different climate impacts, depending on your location and the time you make a request. Take that charity marathon runner, for example. The text, image, and video responses they requested add up to 2.9 kilowatt-hours of electricity. In California, generating that amount of electricity would produce about 650 grams of carbon dioxide pollution on average. But generating that electricity in West Virginia might inflate the total to more than 1,150 grams.
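
That comparison is simple to check. Here is a minimal sketch in Python; the regional intensities are back-calculated from the article's own totals (650 g and roughly 1,150 g for 2.9 kWh), not taken from official grid data:

    # Minimal sketch of emissions = energy consumed x carbon intensity.
    # The intensities below are implied by the article's own figures,
    # not official measurements.
    ENERGY_KWH = 2.9  # the marathon-runner example's text + image + video

    intensity_g_per_kwh = {
        "California (avg)": 650 / ENERGY_KWH,   # ~224 g/kWh
        "West Virginia": 1150 / ENERGY_KWH,     # ~397 g/kWh
    }

    for region, intensity in intensity_g_per_kwh.items():
        grams = ENERGY_KWH * intensity
        print(f"{region}: ~{grams:.0f} g CO2 for {ENERGY_KWH} kWh")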

What we've seen so far is that the energy required to respond to a query can be relatively small, but it can vary a lot, depending on the type of query and the model being used. The emissions associated with that given amount of electricity will also depend on where and when a query is handled. But what does this all add up to?

ChatGPT is now estimated to be the fifth-most visited website in the world, just after Instagram and ahead of X. In December, OpenAI said that ChatGPT receives 1 billion messages every day, and after the company launched a new image generator in March, it said that people were using it to generate 78 million images per day, from Studio Ghibli–style portraits to pictures of themselves as Barbie dolls.

Given the direction AI is headed—more personalized, able to reason and solve complex problems on our behalf, and everywhere we look—it's likely that our AI footprint today is the smallest it will ever be.

One can do some very rough math to estimate the energy impact. In February the AI research firm Epoch AI published an estimate of how much energy is used for a single ChatGPT query—an estimate that, as discussed, makes lots of assumptions that can't be verified. Still, they calculated about 0.3 watt-hours, or 1,080 joules, per message. This falls in between our estimates for the smallest and largest Meta Llama models (and experts we consulted say that if anything, the real number is likely higher, not lower).

One billion of these every day for a year would mean over 109 gigawatt-hours of electricity, enough to power 10,400 US homes for a year. If we add images and imagine that generating each one requires as much energy as it does with our high-quality image models, it'd mean an additional 35 gigawatt-hours, enough to power another 3,300 homes for a year. This is on top of the energy demands of OpenAI's other products, like video generators, and that for all the other AI companies and startups.
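
The arithmetic is easy to reproduce. In the sketch below, the ~10,500 kWh/year household figure is an assumption consistent with the article's "10,400 homes" result; it is not stated in the text:

    # Reproducing the article's totals from its own inputs.
    WH_PER_MESSAGE = 0.3        # Epoch AI's per-message estimate
    MESSAGES_PER_DAY = 1e9      # OpenAI's reported daily volume
    IMAGES_PER_DAY = 78e6       # reported daily image generations
    HOME_KWH_PER_YEAR = 10_500  # assumed average US household usage

    text_wh_per_year = WH_PER_MESSAGE * MESSAGES_PER_DAY * 365
    print(f"Text: ~{text_wh_per_year / 1e9:.0f} GWh/yr, "
          f"~{text_wh_per_year / 1e3 / HOME_KWH_PER_YEAR:,.0f} homes")

    # The extra 35 GWh/yr for images implies a per-image energy cost of:
    print(f"Implied per-image cost: ~{35e9 / (IMAGES_PER_DAY * 365):.2f} Wh")

Note that the implied ~1.2 Wh per image is about four times Epoch AI's per-message estimate, yet images add only about a third to the annual total because their volume is so much lower.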

But here's the problem: These estimates don't capture the near future of how we'll use AI. In that future, we won't simply ping AI models with a question or two throughout the day, or have them generate a photo. Instead, leading labs are racing us toward a world where AI "agents" perform tasks for us without our supervising their every move. We will speak to models in voice mode, chat with companions for 2 hours a day, and point our phone cameras at our surroundings in video mode. We will give complex tasks to so-called "reasoning models" that work through tasks logically but have been found to require 43 times more energy for simple problems, or "deep research" models that spend hours creating reports for us. We will have AI models that are "personalized" by training on our data and preferences.

This future is around the corner: OpenAI will reportedly offer agents for $20,000 per month and will use reasoning capabilities in all of its models moving forward, and DeepSeek catapulted "chain of thought" reasoning into the mainstream with a model that often generates nine pages of text for each response. AI models are being added to everything from customer service phone lines to doctor's offices, rapidly increasing AI's share of national energy consumption.

"The precious few numbers that we have may shed a tiny sliver of light on where we stand right now, but all bets are off in the coming years," says Luccioni.

Every researcher we spoke to said that we cannot understand the energy demands of this future by simply extrapolating from the energy used in AI queries today. And indeed, the moves by leading AI companies to fire up nuclear power plants and create data centers of unprecedented scale suggest that their vision for the future would consume far more energy than even a large number of these individual queries.

"The precious few numbers that we have may shed a tiny sliver of light on where we stand right now, but all bets are off in the coming years," says Luccioni. "Generative AI tools are getting practically shoved down our throats and it's getting harder and harder to opt out, or to make informed choices when it comes to energy and climate."

To understand how much power this AI revolution will need, and where it will come from, we have to read between the lines.

Original Submission #1 | Original Submission #2 | Original Submission #3