American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that in the near future, all school textbooks would be replaced by film strips, because text was 2% efficient, but film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists, while also being inept education reformers.
I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that's about to wash over schools and society.
At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for its students. The first districts to encourage students to bring mobile phones to class did not prepare youth for the future any better than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.
New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.
It takes years for educators to develop new practices and norms, for students to adopt new routines, and for families to identify new support mechanisms in order for a novel invention to reliably improve learning. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.
[...] Today, there is a cottage industry of consultants, keynoters and "thought leaders" traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.
There is a better approach than making overconfident guesses: rigorously testing new practices and strategies and only widely advocating for the ones that have robust evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.
But there's a difference this time. AI is what I have called an "arrival technology." AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard – it crashes the party and then starts rearranging the furniture. That means schools have to do something. Teachers feel this urgently. Yet they also need support: Over the past two years, my team has interviewed nearly 100 educators from across the U.S., and one widespread refrain is "don't make us go it alone."
[...] First, regularly remind students and teachers that anything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students might hear that what they were first taught about using AI has since proved to be quite wrong. We all need to be ready to revise our thinking.
Second, schools need to examine their students and curriculum, and decide what kinds of experiments they'd like to conduct with AI. Some parts of your curriculum might invite playfulness and bold new efforts, while others deserve more caution.
[...] Third, when teachers do launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Every time schools launch a new AI policy or teaching practice, educators should collect a pile of related student work that was developed before AI was used during teaching. If you let students use AI tools for formative feedback on science labs, grab a pile of circa-2022 lab reports. Then, collect the new lab reports. Review whether the post-AI lab reports show an improvement on the outcomes you care about, and revise practices accordingly.
Between local educators and the international community of education scientists, people will learn a lot by 2035 about AI in schools. We might find that AI is like the web: a place with some risks but ultimately so full of important, useful resources that we continue to invite it into schools. Or we might find that AI is like cellphones, where the negative effects on well-being and learning ultimately outweigh the potential gains, and that it is best treated with more aggressive restrictions.
Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don't need a race to generate answers first – we need a race to be right.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
While drones flying over different parts of Europe have raised concerns in many countries, some are worried about a more dystopian future with the technology.
Russia's full-scale invasion of Ukraine could lead to a new arms race — one not defined by big submarines or loud missiles, but by small, silent drones.
Ukrainian President Volodymyr Zelenskyy addressed the prospect during his speech at the United Nations General Assembly, where he warned that it is cheaper to stop Russia now "than wondering who will be the first to create a simple drone carrying a nuclear warhead".
"We must use everything we have, together, to force the aggressor to stop. And only then do we have a real chance that this arms race won't end in catastrophe for all of us," he said. "Otherwise, [Russian President Vladimir] Putin will keep driving the war forward — wider and deeper."
Experts warn drones carrying nuclear weapons might already exist.
TASS, the Russian state-owned news agency, reported in 2023 on the manufacture of a nuclear-armed underwater drone called Poseidon.
Previously, in 2018, the US Department of Defense also publicly acknowledged Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo", or underwater drone.
Mick Ryan, a retired Australian Army major general and senior fellow for military studies at the Lowy Institute, said drones with nuclear warheads "may already be a reality".
"It's something that we should be concerned about," Ryan, who is also a strategic adviser at a US drone company, Skydio, told SBS News.
"Particularly since detecting a drone underwater that's capable of very long ranges would be a significant threat to Western countries, including Australia," Ryan said.
[...] Nuclear warheads are not the only possible future predicted for drones, as politicians are warning about the use of artificial intelligence (AI) to control drones.
During his speech at the UN, Zelenskyy said "it's only a matter of time" before drones operate "all by themselves, fully autonomous, and no human involved, except the few who control AI systems".
Earlier in September, The Wall Street Journal reported that AI-powered drones were introduced on the battlefield, with Ukraine utilising technology that allows groups of drones to make decisions independently.
Ryan said the use of AI might actually help reduce civilian casualties in future warfare.
"AI might actually make them more deadly for the military and less deadly for civilians. Now, that's a perfect scenario, of course, and it's theoretical," he said.
On the other hand, there are concerns about AI gaining access to nuclear weapons.
Foreign Minister Penny Wong told the UN on Thursday: "AI's potential use in nuclear weapons and unmanned systems challenges the future of humanity."
"Decisions of life and death must never be delegated to machines", she said and offered to other leaders to set rules and standards on the use of AI.
Some others have also expressed concerns about ethical and regulatory challenges related to autonomous drones.
Ryan said: "If you have AI controlling a drone that has a nuclear weapon, we should be very concerned about that."
"I think AI for conventional weapons and AI for nuclear weapons are two very different conversations with two very different forms of risk."
[...] The risk posed by drones, however, is not limited to war zones: Europe has recently seen a series of drone incursions.
On Saturday, drones were spotted near military facilities in Denmark, following reports of drones being seen over Danish airports. There were also reports of drone observation in Germany, Norway and Lithuania.
Danish defence minister Troels Lund Poulsen described the incident as "systematic" and a "hybrid attack".
The Russian government has dismissed any claims of involvement in the drone incidents.
"The European Union formally announced on the weekend it will focus on developing a drone wall system in its eastern defences to defend against incursions.
[...] "The drones are the threat of today and will remain the threat of tomorrow. Definitely, no country can afford to ignore this threat and has to take action at different levels ... [and] learn from the partners, including Ukraine, who are at the forefront of developing these systems in modern warfare."
Comets Lemmon and SWAN may be visible around the same time as they race across the solar system:
Skywatchers, rejoice. This month, not one but two comets are set to soar into our night skies for your viewing pleasure.
The two comets, C/2025 R2 (SWAN) and C/2025 A6 (Lemmon), were both discovered in 2025. The celestial visitors are gearing up for a close flyby of Earth in October, becoming more visible as they approach our planet. SWAN will be closest to Earth on October 19, while Lemmon is set for its own close approach on October 21. Both icy comets may even be visible to the naked eye around that time.
Astronomers spotted Lemmon in January using the Mt. Lemmon SkyCenter observatory in Arizona's Santa Catalina Mountains. The comet was traveling toward the inner solar system at up to 130,000 miles per hour (209,000 kilometers per hour).
Later, in September, amateur astronomer Vladimir Bezugly discovered comet SWAN in images from the SWAN instrument on NASA's SOHO satellite. The comet became significantly brighter as it emerged from the Sun's direction.
At its closest approach, SWAN will be at a distance of approximately 24 million miles (39 million kilometers) from our planet, or about a quarter of the distance between the Sun and Earth. SWAN is now at a brightness magnitude of around 5.9, according to EarthSky. The unexpectedly bright comet is currently in the southern skies, but it is slowly moving north, according to NASA.
Following SWAN's closest approach, comet Lemmon will be right behind. The comet will pass within about half the distance between the Sun and Earth before rounding the Sun on November 8 and beginning its next journey around the star. Lemmon will continue to brighten as it approaches the Sun, and it should remain visible, possibly growing even brighter, around October 31 to November 1, according to EarthSky.
SWAN is best viewed in the Southern Hemisphere. The comet crossed into the Libra constellation on September 28, and will make its way across Scorpius on October 10. Around October 9-10, it will appear near Beta Librae, the brightest star in the Libra constellation, EarthSky reports.
It may, however, be a bit tricky to spot because its position in the sky will be close to the setting Sun. Sky watchers hoping to catch a glimpse of SWAN need to look toward the west just after sunset.
Conditions are more favorable for Lemmon. The comet is best viewed in the Northern Hemisphere, where it will be positioned near the Big Dipper for most of October. Sky watchers should look to the eastern skies just before sunrise to spot the comet.
By mid-October, the comet may be easier to view. On October 16, Lemmon will pass near Cor Caroli, a binary star system in the northern constellation of Canes Venatici, according to EarthSky. Around that time, the comet could be visible to the naked eye.
This time three years ago, most people had never heard of generative AI. Today, the technology is a cultural behemoth, and businesses across virtually every industry are facing huge pressure to embrace it.
At least at first glance, customer service would seem to be a field that's particularly ripe for AI-powered automation. Chatbots specialize in fielding simple queries, while newer and more powerful agents can access a business's internal files to provide up-to-date information, send follow-up emails, and perform other complex tasks. Little wonder that a fleet of companies like Salesforce and Microsoft have been replacing human customer service reps with AI.
New research, however, suggests this could turn out to be a mistake -- that despite the huge amount of marketing gusto that's been poured into selling generative AI-powered customer service tools to businesses, the technology could in fact be doing more harm than good.
You know that relief you feel when you finally get past a customer service bot and an actual person picks up the phone? Turns out most other people seem to feel that way too, even in the age of AI.
[...] "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing."
He added that the backlash has spread to social media.
Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media.
More recently, fintech company Klarna started hiring human customer service employees again after realizing that AI was delivering "lower quality," as company CEO Sebastian Siemiatkowski told Bloomberg. (Siemiatkowski told CNBC in May that his company's investments in AI had contributed to an employee headcount reduction of about 40%.)
A global survey of 2,000 CEOs conducted by IBM early this year found that only about one in four internal AI business initiatives has delivered expected ROI. Even more jarringly, an MIT study published in August showed that 95% of businesses' experiments with AI have not delivered any real returns.
[...] In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation...was a mistake."
Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch. At least for the time being, its shortcomings very well may overshadow its benefits.
To cat owners, a litter box is a nuisance. But to scientists, it's a trove of information. A team of researchers at Nestlé Purina PetCare decided to investigate litter boxes as records of behavior: the pre-squat scratch, the whirl, the precise geometry of the bury.
The scientists built a painstaking dictionary of these gestures—a full "ethogram," or catalog, of species-specific behaviors—and then identified the distinct moves in feline bathroom habits: grooming, digging, sniffing litter. "We landed on 39 different behaviors that cats do in a litter box, with the understanding that depending on their satisfaction with the litter box, the environment and the dynamics around them, those behaviors will shift," says Ragen McGowan, director of digital and AI product development at Purina and one of the authors of a paper published recently in Applied Animal Behaviour Science on the development of Purina's AI-powered litter box monitor. "We realized this ethogram could be a window into their health."
And Imma gonna leave a link to where I found it, on Fark
As a very long-time user of MythTV and free OTA ATSC 1.0 TV, reading this one did not make my day:
CordCutters published news of a recent FCC decision to allow broadcasters flexibility on switching to ATSC 3.0 technology:
In a major shift for American television viewers, the Federal Communications Commission (FCC) has decided against setting a hard deadline to end the old digital TV system that powers most broadcasts and cable services today. [...] The agency, now headed by Brendan Carr, had initially pushed for a quicker switch to the advanced ATSC 3.0 technology, known as NextGen TV. But after hearing concerns from consumer groups, cable companies, and satellite providers, the FCC is choosing a more flexible, voluntary approach to make the change easier for everyone involved.
The new proposal would "tentatively conclude that television stations should be allowed to choose when to stop broadcasting in 1.0 and start broadcasting exclusively in 3.0."
To understand this, it's helpful to step back and explain the basics. For over 15 years, U.S. TV stations and multichannel video programming distributors (MVPDs)—think cable giants like Comcast or satellite services like DirecTV and DISH—have relied on ATSC 1.0. This is the standard digital TV technology that replaced fuzzy analog signals in 2009, delivering clearer pictures and more channels. It's the "original" digital TV, or what some call the "OG" of modern broadcasting. ATSC 1.0 works universally across free over-the-air antennas, cable boxes, and satellite dishes, reaching nearly every household without special upgrades.
NextGen TV, built on ATSC 3.0, promises even better features: sharper 4K video, interactive apps, and stronger signals that can cut through buildings or bad weather. It's like upgrading from a reliable old smartphone to one with a bigger screen and faster apps. The transition started voluntarily during the Biden administration, with a handful of cities testing it out. But since Trump's return in January 2025—about nine and a half months ago—the push has intensified. FCC leaders wanted a nationwide shutdown of ATSC 1.0 by a set date to speed things up, arguing it would modernize broadcasting and free up airwaves for new uses.
This aggressive stance hit a wall of opposition. Consumer advocates, led by the Consumer Technology Association (CTA) and its president Gary Shapiro, warned that forcing the change too fast could leave millions of viewers in the dark. Older TVs and set-top boxes might stop working, forcing families to buy new equipment they can't afford. Cable and satellite lobbies echoed these fears, pointing out the massive costs of rewiring their networks to carry the new signals. For context, imagine every home suddenly needing a software update or new hardware just to watch local news—disruptive and expensive, especially for low-income or rural households.
The FCC's latest move, outlined in a document called the Fifth Further Notice of Proposed Rulemaking (FNPRM), listens to these voices. Instead of a mandatory cutoff, the agency proposes keeping the transition market-driven and optional. Broadcasters—the TV stations that send out signals—would get to decide when, or even if, they fully drop ATSC 1.0. Many are already "simulcasting," meaning they beam both the old and new signals at the same time, like offering two radio stations on one frequency. The FCC wants to ease rules around this, removing red tape that currently limits how long stations can keep the old signal running. This builds on policies from the Democratic-led FCC, extending the grace period without a strict timeline.
The plan also calls for ways to cut costs and smooth the ride for all players. For consumers, that could mean subsidies or incentives to upgrade TVs or antennas without breaking the bank. Manufacturers might get breaks on producing hybrid devices that handle both standards. Smaller broadcasters in rural areas, who often operate on tight budgets, would benefit from fewer mandates. And MVPDs could phase in NextGen support at their own pace, avoiding a sudden overhaul that might raise monthly bills.
But the FCC isn't stopping at flexibility—it's opening the floor for public input on trickier issues. One big question: Should new TVs sold in stores be required to receive ATSC 3.0 signals right out of the box? This echoes a famous FCC rule from the 1960s, when regulators under Chairman Newton Minow mandated UHF tuners in TVs. That move helped spark the growth of companies like Sinclair Inc., now a leading cheerleader for NextGen TV. Yet today, the CTA and others are pushing back hard, saying it could hike prices for basic sets and slow sales.
This compromise feels like a win for balance. Proponents of NextGen, like Sinclair, get regulatory green lights to experiment and expand. Critics, including the cable industry, avoid the chaos of a rushed shutdown. For everyday viewers, it means no panic-buying of new gear tomorrow. The transition, which began quietly years ago at events like a 2019 FCC symposium, can now evolve naturally. Back then, questions about integrating NextGen into cable systems lingered unanswered by groups like Pearl TV or the ATSC standards body. Today's proposal nods to those gaps, seeking fresh input.
Reflecting on history adds irony. A quarter-century ago, ATSC 1.0 was hailed as revolutionary, even as early tech from firms like Sinclair hinted at what 3.0 could become. Now, with costs in mind, the FCC is ensuring the next leap doesn't repeat past disruptions. As comments roll in over the coming months, this could shape TV for the next generation—literally. For now, Americans can keep flipping channels without fear of a digital cliff.
Hardware requirements aside, ATSC 3.0 will have DRM, which, as I understand it, will make recording impossible. I know there are far worse things going on in Washington now, but wow, this sucks.
On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven't heard of this earthquake; even if you had been living in Calipatria, you wouldn't have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.
Over the past seven years, AI tools adapted from image-recognition techniques have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes.
[...]
"In the best-case scenario, when you adopt these new techniques, even on the same old data, it's kind of like putting on glasses for the first time, and you can see the leaves on the trees," said Kyle Bradley, co-author of the Earthquake Insights newsletter.
[...]
Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven't materialized yet. "It really was a revolution," said Joe Byrnes, a professor at the University of Texas at Dallas. "But the revolution is ongoing."
[...]
The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.
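To make that data concrete, here is a minimal Python sketch using ObsPy, a common open-source seismology toolkit (the article doesn't name any particular software, so the library choice is an assumption):

```python
# Minimal look at three-component seismometer data with ObsPy, a widely
# used (but here merely illustrative) Python seismology toolkit.
from obspy import read

stream = read()  # with no arguments, read() returns a small built-in example Stream

for trace in stream:
    print(trace.id,                    # network.station.location.channel
          trace.stats.sampling_rate,   # samples per second
          trace.stats.npts)            # number of samples in the trace

# The final letter of the channel code gives the orientation:
# Z = vertical (up-down), N = north-south, E = east-west.
```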
[...]
Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that "traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms."
[...]
"The field of seismology historically has always advanced as computing has advanced," Bradley told me.There's a big challenge with traditional algorithms, though: They can't easily find smaller quakes, especially in noisy environments.
[...]
Earthquakes have a characteristic "shape." The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance. So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it's almost certainly an earthquake.
Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross' lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known.
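As a toy illustration of the idea (not the Caltech pipeline), here is a minimal NumPy sketch of template matching via normalized cross-correlation; the synthetic waveform, amplitudes, and threshold are all invented:

```python
import numpy as np

def ncc(signal, template):
    """Normalized cross-correlation of `template` against every window
    of `signal`; values near 1 mean a near-perfect shape match."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    scores = np.zeros(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        sd = w.std()
        if sd > 0:
            scores[i] = np.dot(t, (w - w.mean()) / sd) / n
    return scores

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 12 * np.pi, 200)) * np.hanning(200)  # toy "quake" shape
signal = rng.normal(0, 1.0, 5000)      # background noise
signal[3000:3200] += 1.5 * template    # bury a faint repeat of the template

scores = ncc(signal, template)
print("best match at sample", scores.argmax(), "score", round(scores.max(), 2))
# A detection is declared wherever the score clears a chosen threshold.
```

Production template matching does essentially this correlation, but against years of continuous data and thousands of templates, which is why it is so computationally hungry.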
[...]
Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.
[...]
AI detection models solve all of these problems:
- They are faster than template matching.
- Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.
- AI models generalize well to regions not represented in the original dataset.
[...]
To train an AI model, scientists take large amounts of labeled data, like what's above, and do supervised training.
[...]
Earthquake Transformer was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor. Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.
[...]
Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred. The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection.
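The article describes the architecture only at a high level, so the following PyTorch sketch is a loose, illustrative rendering of that convolution / attention / deconvolution pattern; the layer sizes are invented, and this is not the actual Earthquake Transformer code:

```python
import torch
import torch.nn as nn

class TinyQuakeDetector(nn.Module):
    """Toy model in the spirit of Earthquake Transformer: a 1-D
    convolutional encoder, a self-attention layer to mix information
    across the time series, and a transposed-convolution decoder that
    emits a per-timestep earthquake probability."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.attention = nn.MultiheadAttention(hidden, num_heads=4,
                                               batch_first=True)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(hidden, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # probability of "earthquake" at each timestep
        )

    def forward(self, x):            # x: (batch, 3 components, time)
        h = self.encoder(x)          # downsampled features: (batch, hidden, time/4)
        h = h.transpose(1, 2)        # attention expects (batch, seq, features)
        h, _ = self.attention(h, h, h)
        h = h.transpose(1, 2)
        return self.decoder(h)       # back to full resolution: (batch, 1, time)

waveform = torch.randn(1, 3, 1024)   # one synthetic 3-component recording
print(TinyQuakeDetector()(waveform).shape)  # torch.Size([1, 1, 1024])
```

A model this size has on the order of tens of thousands of parameters, which is why such detectors run comfortably on consumer CPUs.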
[...]
Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology. Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration.) Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.
[...]
The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate. You might think AI tools would help predict earthquakes, but that doesn't seem to have happened yet.
[...]
As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research. "The schools want you to put the word AI in front of everything," Byrnes said. "It's a little out of control."
This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they've seen a lot of papers based on AI techniques that "reveal a fundamental misunderstanding of how earthquakes work."
[...]
While these are real issues, and ones Understanding AI has reported on before, I don't think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better. That's pretty cool.
Earthquake in SoylentNews stories:
Earthquake search on SoylentNews
Kessler syndrome is bad; atmospheric incineration may be worse:
If you had to guess how many Starlink satellites burn up in Earth's atmosphere on an average day, how many would you pick? This isn't a trick question - SpaceX is deorbiting about one or two satellites daily, and that number is only going to grow.
What that means for our planet isn't entirely clear, says Harvard astrophysicist and space tracker Jonathan McDowell. Even so, Starlink isn't the space junk risk that some other satellite operations are.
McDowell commented on the massive volume of reentering Starlink satellites to science news site EarthSky last week. He explained that once Starlink and other planned low Earth orbit constellations together total about 30,000 satellites, roughly five could reenter the atmosphere each day, given an average replacement cycle of around five years.
[...] Starlink isn't the biggest concern when it comes to passing the Kessler tipping point, McDowell told us – but it is still a source of worry.
"Active satellite maneuvers to avoid collisions will help avoid Kessler," McDowell said in a phone conversation. "If they're successful. And that's a big if."
The current strategy to de-orbit Starlink satellites, which operate in a low orbit below 600 kilometers, is to use the satellites' thrusters to move them to such a low orbit that they eventually catch drag in the atmosphere and burn up in what McDowell calls an "uncontrolled but assisted" reentry.
Purposeful de-orbiting, plus successful dodging, mean we can avoid Kessler syndrome, McDowell told us.
[...] Excepting the possibility of unplanned disaster, Starlink's operations aren't the biggest concern, McDowell added. China's satellite plans are far more worrying.
"The region of space closest to Kessler is the 600 to 1,000 kilometer range," McDowell said. "It's full of old Soviet rocket stages and other stuff, and the more we add there, the more likely it is for Kessler syndrome to occur."
While many of China's proposed satellite constellations are going to be in low Earth orbit at the same altitude as Starlink, McDowell noted that a number are planning to fly above 1,000 kilometers. Were something to go wrong up there, McDowell noted, "we're probably screwed."
"That higher altitude means the atmosphere won't drag them down for centuries," McDowell added. "And I haven't seen [China] demonstrate any retirement plans for those satellites."
Kessler's bad, but destroying the atmosphere is worse
It would be a tragedy if humanity polluted Earth's orbit so much that we were effectively cut off from space, but were we to poison ourselves by filling the atmosphere with the remnants of burned-up satellites and die before we reached Kessler syndrome, that would arguably be worse.
McDowell is definitely worried about both, explaining that the effects on our planet of "using the upper atmosphere as an incinerator" are largely unknown, and a massive, dangerous blind spot. Not a lot of research has been done on what the growing number of atmospheric reentries could do to Earth and the life it harbors, but it's already shocking how much stuff is floating around above our heads.
According to the US National Oceanic and Atmospheric Administration, around 10 percent of the aerosol particles in the stratosphere (the second layer of Earth's atmosphere where the ozone layer lives) contain aluminum and exotic metals believed to be from rockets and satellites that have burned up on reentry. NOAA believes that number could grow to as much as 50 percent as space launches and reentries increase.
What little research has been done into the effects of so much foreign material burning up in Earth's atmosphere has been inconclusive, McDowell explained.
"So far answers have ranged from 'this is too small to be a problem' to 'we're already screwed,'" McDowell told us. "But the uncertainty is large enough that there's already a possibility we're damaging the upper atmosphere."
Discord has revealed that one of its customer service providers has suffered a data breach. The attackers gained access to government-ID images and user details.
Discord doesn't actually mention when the breach took place; it only says it "recently discovered an incident". The fact that government-ID images were stolen is important: the U.K.'s Online Safety Act came into effect on July 25, 2025, which is when Discord began collecting ID images for age verification. So the data breach happened sometime between then and October 3rd, when the news about the incident was revealed. It's also worth noting that the victim of the hack was a third-party customer service provider that has not been named.
As for the attack, the incident involved an unauthorized party compromising one of the messaging service's customer service providers, which in turn gave the hackers access to limited customer data pertaining to those who had contacted Discord's Customer Support and/or Trust & Safety teams. Discord says it revoked the breached service provider's access to its ticketing system. It is investigating the matter with the help of a computer forensics firm and is working with law enforcement. Users who were impacted by the incident are being notified via an email sent from [email protected]
Here's what Discord says the hackers managed to access: name, Discord username, email, and other contact details provided to customer support; billing information such as payment type, the last four digits of credit cards, and purchase history; IP addresses; messages with customer service agents; and limited corporate data (training materials, internal presentations).
There was something else.
"The unauthorized party also gained access to a small number of government?ID images (e.g., driver's license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive."
The story continues:
Covert Eavesdropping through Computer Mice
The abstract from the arXiv paper states:
High-Performance Optical Sensors in Mice expose a critical vulnerability — one where confidential user speech can be leaked. Attackers can exploit these sensors' ever-increasing polling rate and sensitivity to emulate a makeshift microphone and covertly eavesdrop on unsuspecting users. We present an attack vector that capitalizes on acoustic vibrations propagated through the user's work surface, and we show that existing consumer-grade mice can detect these vibrations. However, the collected signal is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization. We introduce Mic-E-Mouse, a pipeline consisting of successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech. We measure Mic-E-Mouse against consumer-grade sensors on the VCTK and AudioMNIST speech datasets, and we achieve an SI-SNR increase of +19 dB, a speaker-recognition accuracy of 80% on the automated tests, and a WER of 16.79% on the human study.
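The paper's headline metric, SI-SNR (scale-invariant signal-to-noise ratio), has a standard definition in the speech-enhancement literature. Here is that textbook formula as a small NumPy function (my sketch, not code from the paper):

```python
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant SNR in dB between an estimated and a clean signal.

    The estimate is decomposed into a component parallel to the clean
    reference ("target") and a residual ("noise"); the ratio of their
    energies is reported in decibels. A +19 dB increase means the
    reconstruction moved much closer to the clean speech.
    """
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    s_target = (np.dot(estimate, reference) /
                (np.dot(reference, reference) + eps)) * reference
    e_noise = estimate - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps) /
                         (np.dot(e_noise, e_noise) + eps))
```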
Additional details: Computer mice can eavesdrop on private conversations, researchers discover
High-end computer mice can be used to eavesdrop on the voice conversations of nearby PC users, researchers from the University of California, Irvine, have shown in a new proof-of-concept demonstration.
Given the catchy name 'Mic-E-Mouse' (Microphone-Emulating Mouse), the ingenious technique outlined in Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors is based on the discovery that some optical mice pick up incredibly small sound vibrations reaching them through the desk surfaces on which they are being used.
These vibrations could then be captured by different types of software on PC, Mac or Linux computers, including non-privileged 'user space' programs such as web browsers or games engines or, failing that, privileged components at OS kernel level.
Although the captured signals were inaudible at first, the team were able to enhance them using Wiener and neural network statistical filtering to boost signal strength relative to noise.
As the video demonstration of this process shows, this made it possible to extract spoken words from an eavesdropped data stream that at first sounded impossibly muffled.
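For intuition about the filtering step, here is a minimal sketch using SciPy's generic Wiener filter on an invented test signal; the researchers' actual pipeline adds non-uniform-sampling correction and neural-network enhancement on top of anything this simple:

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(42)
fs = 8000                                    # sample rate, Hz
t = np.arange(fs) / fs                       # one second of "audio"
clean = np.sin(2 * np.pi * 50 * t)           # slow tone standing in for speech energy
noisy = clean + rng.normal(0, 1.0, t.size)   # buried in broadband noise

# Adaptive local-mean Wiener filter: smooths regions where the local
# variance looks like noise, and preserves regions where signal dominates.
denoised = wiener(noisy, mysize=15)

def snr_db(x, ref):
    """SNR of x against the known clean reference, in dB."""
    err = x - ref
    return 10 * np.log10(np.dot(ref, ref) / np.dot(err, err))

print(f"SNR before: {snr_db(noisy, clean):5.1f} dB")
print(f"SNR after : {snr_db(denoised, clean):5.1f} dB")  # noticeably higher
```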
"Through our Mic-E-Mouse pipeline, vibrations detected by the mouse on the victim user's desk are transformed into comprehensive audio, allowing an attacker to eavesdrop on confidential conversations," the researchers wrote.
Moreover, they said, this type of attack would be undetectable by defenders: "This process is stealthy since the vibrations signals collection is invisible to the victim user and does not require high privileges on the attacker's side."
[...] However, there are important caveats that limit the scope of Mic-E-Mouse. The noise level of the environment being eavesdropped upon must be low, with desks no more than 3cm thick, and with the mouse mostly stationary to isolate voice vibrations.
The researchers also used mice with a DPI of at least 20,000, significantly above that of the average mouse in use today.
Under real-world conditions, extracting voice data would be possible but challenging. Attackers would likely only be able to capture some conversation, rather than everything being said.
Another weakness is that defending against it wouldn't be difficult: using a rubber pad or mouse mat under a mouse would stop vibrations from being picked up.
Like you, I'm sick to the back teeth of talking about AI. Like you, I keep getting dragged into discussions of AI. Unlike you, I spent the summer writing a book about why I'm sick of writing about AI, which Farrar, Straus and Giroux will publish in 2026.
A week ago, I turned that book into a speech, which I delivered as the annual Nordlander Memorial Lecture at Cornell, where I'm an A.D. White Professor-at-Large. This was my first-ever speech about AI and I wasn't sure how it would go over, but thankfully, it went great and sparked a lively Q&A. One of those questions came from a young man who said something like "So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable, and that this is a bubble that's going to burst and take the whole economy with it?"
I said, "Yes, that's right."
He said, "OK, but what can we do about that?"
So I re-iterated the book's thesis: that the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector, e.g. "pivot to video," crypto, blockchain, NFTs, AI, and now "super-intelligence." Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations:
The only thing (I said) that we can do about this is to puncture the AI bubble as soon as possible, to halt this before it progresses any further and to head off the accumulation of social and economic debt. To do that, we have to take aim at the material basis for the AI bubble (creating a growth story by claiming that defective AI can do your job).
"OK," the young man said, "but what can we do about the crash?" He was clearly very worried.
"I don't think there's anything we can do about that. I think it's already locked in. I mean, maybe if we had a different government, they'd fund a jobs guarantee to pull us out of it, but I don't think Trump'll do that, so –"
[...] I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.
[...] The data-center buildout has genuinely absurd finances – there are data-center companies that are collateralizing their loans by staking their giant Nvidia GPUs as collateral. This is wild: there's pretty much nothing (apart from fresh-caught fish) that loses its value faster than silicon chips. That goes triple for GPUs used in AI data-centers, where it's normal for tens of thousands of chips to burn out over a single, 54-day training run.
That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in Openai by giving the company free access to its servers. Openai reports this as a ten billion dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.
That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).
[...] Industry darlings like Coreweave (a middleman that rents out data-centers) are sitting on massive piles of debt, secured by short-term deals with tech companies that run out long before the debts can be repaid. If they can't find a bunch of new clients in a couple short years, they will default and collapse.
[...] Plan for a future where you can buy GPUs for ten cents on the dollar, where there's a buyer's market for hiring skilled applied statisticians, and where there's a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.
[...] The most important thing about AI isn't its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn't going to wake up, become superintelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer.
Last week, U.S. Education Secretary Linda McMahon issued the latest attack on academia, the "Compact for Academic Excellence in Higher Education," which was addressed to a small group of well-known US universities. If you missed it, there is a description at https://en.wikipedia.org/wiki/Compact_for_Academic_Excellence_in_Higher_Education
Today (10/10/2025), MIT became the first of the group to reject the offer. Here is the letter from MIT's president, https://orgchart.mit.edu/letters/regarding-compact
It's not long and is worth a read; here is the punch line:
In our view, America's leadership in science and innovation depends on independent thinking and open competition for excellence. In that free marketplace of ideas, the people of MIT gladly compete with the very best, without preferences. Therefore, with respect, we cannot support the proposed approach to addressing the issues facing higher education.
And here's one of the letter's bullet points:
MIT opens its doors to the most talented students regardless of their family's finances. Admissions are need-blind. Incoming undergraduates whose families earn less than $200,000 a year pay no tuition. Nearly 88% of our last graduating class left MIT with no debt for their education. We make a wealth of free courses and low-cost certificates available to any American with an internet connection. Of the undergraduate degrees we award, 94% are in STEM fields. And in service to the nation, we cap enrollment of international undergraduates at roughly 10%.
Baseload power is functionally extinct:
Much has been made of the notion that "renewables can't supply baseload power". This line suggests we need to replace Australia's ageing coal fleet with new coal or nuclear. The fact of the matter is that, already, "baseload" is an outdated concept and baseload generators face extinction.
Traditional utility grid management suggests there are three types of load: baseload, shoulder, and peak. Baseload is the underlying 24/7 energy demand. Peak load consists of regular but short-lived periods of high demand, and shoulder load is what lies in between. Under this model, system planning is straightforward – assign different types of energy generation to the different loads according to their price and qualitative characteristics.
[Figure: Traditional, simple dispatch of generation technologies according to cost and flexibility.]
Historically in Australia, coal has supplied most baseload demand since it is relatively cheap and very slow to ramp its output up or down. In some countries, baseload is met with nuclear, which is even less flexible than coal, but only two countries generate more than 50% of their energy from nuclear.
With the roles of different generators clearly delineated, power planners' jobs are much easier in this idealised system than in today's grid.
In a system with lots of solar, prices fall dramatically around midday because solar has no fuel cost. Because much of Australian solar is on rooftops, grid demand also falls. For those hours, baseload generators must either operate at a loss or shut down, since continuing to generate produces more energy than the grid requires at very low or negative prices. This is not a conscious choice—it is the structure of the market: the cheapest bid gets dispatched first.
In practice, most baseload generators are simply not capable of ramping up and down fast enough – they must bear loss-making prices in the middle of the day and try to make it up with high prices at peak periods. Moreover, this daily up/down ramp (called "load-following") brings efficiency losses and extra maintenance costs.
[Figure: The situation in modern Australia – because baseload generators cannot be turned off, cheap solar is curtailed in the middle of the day.]
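The merit-order mechanics behind all of this are easy to sketch: offers are stacked from cheapest to most expensive until demand is met, so zero-fuel-cost solar always dispatches first and squeezes baseload out at midday. A toy Python illustration, with all bids and capacities invented:

```python
# Toy merit-order dispatch. All bids ($/MWh) and capacities (MW) are
# invented for illustration; real market dispatch is far more complex.
bids = {"solar": 0, "coal": 40, "gas_peaker": 120}

def dispatch(demand_mw, available_mw):
    """Fill demand from the cheapest available offers first."""
    schedule, remaining = {}, demand_mw
    for name in sorted(available_mw, key=bids.get):
        take = min(available_mw[name], remaining)
        schedule[name] = take
        remaining -= take
    return schedule

# Midday: abundant sun and modest grid demand (rooftop solar serves much
# of the load directly), so coal runs well below capacity.
print(dispatch(9000, {"solar": 6000, "coal": 8000, "gas_peaker": 3000}))
# Evening peak: no solar at all; coal and peakers carry the load.
print(dispatch(10000, {"solar": 0, "coal": 8000, "gas_peaker": 3000}))
```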
As solar increases, this dynamic makes baseload generators impractical and unprofitable. Already, this is the situation in South Australia – in the last week of winter 2024, SA ran on more than 100% net renewables. SA now instantaneously meets 100% of demand from solar alone on most days. It is no surprise that SA's last coal-fired power plant shut nearly a decade ago, in 2016, after years of being operated only seasonally.
The rest of Australia has not yet caught up to SA and Tasmania in terms of renewables and there is still a case for coal in the national energy market. However, the trend in solar uptake is abundantly clear and there will be no economic case for coal in just a few short years' time anywhere in Australia.
Excess energy in the middle of the day is useless if no-one wants to use it, or if they want to use it overnight; this is where firming is required. When variable renewables are paired with enough storage or back-up power, the combination is called "firm". For a utility grid, this means large amounts of storage such as batteries and pumped hydro energy storage, as well as flexible generation such as hydro and possibly open cycle gas turbines.
In our transitioning grid, baseload generators run at a loss during the day while storage offtakes cheap solar to sell at peak times. This is called energy arbitrage — buying low and selling high — and it is extremely profitable. It is tempting to think this arrangement could continue, but it cannot: as more batteries come online, the economics of baseload generators only get worse.
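A back-of-the-envelope sketch, with invented figures, shows why the arbitrage is so attractive, and why its growth erodes the very peak prices baseload relies on:

```python
# Toy battery arbitrage: charge on near-free midday solar, discharge
# into the evening peak. All figures are invented for illustration.
capacity_mwh = 100           # battery size
round_trip_efficiency = 0.9  # energy lost charging and discharging
midday_price = 5             # $/MWh during the solar glut
peak_price = 150             # $/MWh at the evening peak

cost = capacity_mwh * midday_price
revenue = capacity_mwh * round_trip_efficiency * peak_price
print(f"gross margin per daily cycle: ${revenue - cost:,.0f}")  # $13,000

# Every extra battery chasing this spread bids the peak price down,
# which is precisely what destroys the baseload business model.
```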
We are set for a storage surge as:
- utility batteries come online,
- electric vehicles integrate with the grid,
- Albanese offers household battery subsidies, and
- battery prices continue to plummet.

In this future, midday energy is still practically free because storage cannot consume it all, and peak power prices are reduced because of battery arbitrage. Without profitable peak power prices, the economics of baseload generation are well and truly dead.
Power-hungry data centres have been meeting planning roadblocks because they consume more power than local infrastructure can handle. Rather than waiting for third parties to build out infrastructure, big tech companies want to take matters into their own hands. The possibility of big tech companies commissioning or commandeering nuclear reactors to supply new data centres with 24/7 power has created a media buzz.
It is unlikely that a self-reliant data centre would look to 100% renewables. This is not because renewables are unreliable, it is because firming renewables is easier at larger scales – wide geography helps to smooth out locally variable weather. Although nuclear is the most expensive option, big tech has cash to burn. The bigger hurdle to new nuclear is a 10-year-plus build timeline.
But whether or not data centres adopt nuclear is irrelevant for civil electricity, because utility grids are not data centres. If big tech builds nuclear to power data centres, it neither proves nor disproves that the technology is a good option for the whole grid.
Peter Dutton, if he wins the upcoming election, faces an uphill battle to enact his nuclear energy policy. Not only must he overturn federal and state bans on nuclear power, he also has to figure out how the plants would make money. If Dutton were to build a nuclear plant, it would require a forever-subsidy to compete in the market.
The industry is aware of this. Daniel Westerman, chief executive of the market operator AEMO, was recently quoted as saying: "Australia's operational paradigm is no longer 'baseload-and-peaking'." AEMO has said competition from renewables is a key reason why coal has been retiring faster than announced.
The market is aware, and the industry is aware: baseload is not merely endangered, it is already functionally extinct. If the Coalition do build a nuclear power plant, Australian taxpayers will be the proud owners of an unprofitable, uncompetitive, expensive and unsellable liability.
David C Brock interviewed Ken Thompson for the Computer History Museum. It's a long interview with a video with a written transcript. The video is just over 4.5 hours long. The transcript weighs in at 64 pages as a downloadable PDF locked behind a CPU- and RAM-chewing web app.
This is an oral history interview with Ken Thompson, created in partnership by the Association for Computing Machinery and the Computer History Museum, in connection with his 1983 A.M. Turing Award. The interview begins with Thompson's family background and youth, detailing the hobbies he pursued intently, from electronics and radio projects to music, cars, and chess. He describes his experience at the University of California, Berkeley, and his deepening engagement with computers and computer programming there.
The interview then moves to his recruitment to the Bell Telephone Laboratories, and his experience of the Multics project. Thompson next describes his development of Unix and, with Dennis Ritchie, the programming language C. He describes the development of Unix and the Unix community at Bell Labs, and then details his work using Unix for the Number 5 Electronic Switching System. Thompson details his Turing Award lecture, the work on compromised compilers that led to it, and his views on computer security.
Next, he details his career in computer chess and work he did for Bell Labs artist Lillian Schwartz. Thompson describes his work on the Plan 9 operating system at Bell Labs with Rob Pike, and his efforts to create a digital music archive. He then details his post-Bell Labs career at Entrisphere and then Google, including his role in Google Books and the creation of the Go programming language.
Previously:
(2025) Why Bell Labs Worked
(2022) Unix History: A Mighty Origin Story
(2019) Vintage Computer Federation East 2019 -- Brian Kernighan Interviews Ken Thompson
An interesting article about software quality over the years - by Denis Stetskov
The Apple Calculator leaked 32GB of RAM.
Not used. Not allocated. Leaked. A basic calculator app is haemorrhaging more memory than most computers had a decade ago.
Twenty years ago, this would have triggered emergency patches and post-mortems. Today, it's just another bug report in the queue.
We've normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn't about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.
The Numbers Nobody Wants to Discuss:
I've been tracking software quality metrics for three years. The degradation isn't gradual—it's exponential.
Memory consumption has lost all meaning:
- VS Code: 96GB memory leaks through SSH connections
- Microsoft Teams: 100% CPU usage on 32GB machines
- Chrome: 16GB consumption for 50 tabs is now "normal"
- Discord: 32GB RAM usage within 60 seconds of screen sharing
- Spotify: 79GB memory consumption on macOS
These aren't feature requirements. They're memory leaks that nobody bothered to fix.
This isn't sustainable. Physics doesn't negotiate. Energy is finite. Hardware has limits.
The companies that survive won't be those who can outspend the crisis. They'll be those who remember how to engineer.

We're living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems.