Hubble Space Telescope captures rare collision in nearby planetary system:
In an unprecedented celestial event, NASA's Hubble Space Telescope (HST) captured the dramatic aftermath of colliding space rocks within a nearby planetary system.
When astronomers initially spotted a bright object in the sky, they assumed it was a dust-covered exoplanet, reflecting starlight. But when the "exoplanet" disappeared and a new bright object appeared, the international team of astrophysicists — including Northwestern University's Jason Wang — realized these were not planets at all. Instead, they were the illuminated remains of a cosmic fender bender.
Two distinct, violent collisions generated two luminous clouds of debris in the same planetary system. The discovery offers a unique real-time glimpse into the mechanisms of planet formation and the composition of materials that coalesce to form new worlds.
The study was published in the journal Science.
"Spotting a new light source in the dust belt around a star was surprising. We did not expect that at all," Wang said. "Our primary hypothesis is that we saw two collisions of planetesimals — small rocky objects, like asteroids — over the last two decades. Collisions of planetesimals are extremely rare events, and this marks the first time we have seen one outside our solar system. Studying planetesimal collisions is important for understanding how planets form. It also can tell us about the structure of asteroids, which is important information for planetary defense programs like the Double Asteroid Redirection Test (DART)."
"This is certainly the first time I've ever seen a point of light appear out of nowhere in an exoplanetary system," said lead author Paul Kalas, an astronomer at the University of California, Berkeley. "It's absent in all of our previous Hubble images, which means that we just witnessed a violent collision between two massive objects and a huge debris cloud unlike anything in our own solar system today."
[...] For years, astronomers have puzzled over a bright object called Fomalhaut b, an exoplanet candidate residing just outside the star Fomalhaut. Located a mere 25 light-years from Earth in the Piscis Austrinus constellation, Fomalhaut is more massive than the sun and encircled by an intricate system of dusty debris belts.
"The system has one of the largest dust belts that we know of," said Wang, who is part of the team that has monitored the system for two decades. "That makes it an easy target to study."
Since discovering Fomalhaut b in 2008, astronomers have struggled to determine whether it is, indeed, an actual planet or a large expanding cloud of dust. In 2023, researchers used the HST to further examine the strange light source. Surprisingly, it was no longer there. But another bright point of light emerged in a slightly different location within the same system.
"With these observations, our original intention was to monitor Fomalhaut b, which we initially thought was a planet," Wang said. "We assumed the bright light was Fomalhaut b because that's the known source in the system. But, upon carefully comparing our new images to past images, we realized it could not be the same source. That was both exciting and caused us to scratch our heads."
The disappearance of Fomalhaut b (now called Fomalhaut cs1) supports the hypothesis that it was a dissipating dust cloud, likely produced by a collision. The appearance of a second point of light (now called Fomalhaut cs2) further supports the theory that neither object is a planet; both are the dusty remnants of dramatic smashups between planetesimals, the rocky building blocks of planets.
The location and brightness of Fomalhaut cs2 bear striking similarities to the initial observations of Fomalhaut cs1 two decades prior. By imaging the system, the team was able to calculate how frequently such planetesimal collisions occur.
"Theory suggests that there should be one collision every 100,000 years, or longer. Here, in 20 years, we've seen two," Kalas said. "If you had a movie of the last 3,000 years, and it was sped up so that every year was a fraction of a second, imagine how many flashes you'd see over that time. Fomalhaut's planetary system would be sparkling with these collisions."
[...] "Fomalhaut cs2 looks exactly like an extrasolar planet reflecting starlight," Kalas said. "What we learned from studying cs1 is that a large dust cloud can masquerade as a planet for many years. This is a cautionary note for future missions that aim to detect extrasolar planets in reflected light."
Although Fomalhaut cs1 has faded from view, the research team will continue to observe the Fomalhaut system. They plan to track the evolution of Fomalhaut cs2 and potentially uncover more details about the dynamics of collisions in the stellar neighborhood.
Journal Reference: https://www.science.org/doi/10.1126/science.adu6266
https://spectrum.ieee.org/alan-turings-delilah
A collection of documents was recently sold at auction for almost half a million dollars. The documents detail a top-secret voice-encryption project led by Alan Turing, culminating in the creation of the Delilah machine.
It was 8 May 1945, Victory in Europe Day. With the German military's unconditional surrender, the European part of World War II came to an end. Alan Turing and his assistant Donald Bayley celebrated victory in their quiet English way, by taking a long walk together. They had been working side by side for more than a year in a secret electronics laboratory, deep in the English countryside. Bayley, a young electrical engineer, knew little about his boss's other life as a code breaker, only that Turing would set off on his bicycle every now and then to another secret establishment about 10 miles away along rural lanes, Bletchley Park. As Bayley and the rest of the world would later learn, Bletchley Park was the headquarters of a vast, unprecedented code-breaking operation.
When they sat down for a rest in a clearing in the woods, Bayley said, "Well, the war's over now—it's peacetime, so you can tell us all."
"Don't be so bloody silly," Turing replied. That was the end of that conversation," Bayley recalled 67 years later.
Turing's incredible code-breaking work is now no longer secret. What's more, he is renowned both as a founding father of computer science and as a pioneering figure in artificial intelligence. He is not so well-known, however, for his work in electrical engineering. This may be about to change.
In November 2023, a large cache of his wartime papers—nicknamed the "Bayley papers"—was auctioned in London for almost half a million U.S. dollars. The previously unknown cache contains many sheets in Turing's own handwriting, telling of his top-secret "Delilah" engineering project from 1943 to 1945. Delilah was Turing's portable voice-encryption system, named after the biblical deceiver of men. There is also material written by Bayley, often in the form of notes he took while Turing was speaking. It is thanks to Bayley that the papers survived: He kept them until he died in 2020, 66 years after Turing passed away.
When the British Government learned about the sale of these papers at auction, it acted swiftly to put a ban on their export, declaring them to be "an important part of our national story," and saying "It is right that a UK buyer has the opportunity to purchase these papers." I was lucky enough to get access to the collection prior to the November sale, when the auction house asked for my assistance in identifying some of the technical material. The Bayley papers shine new light on Turing the engineer.
At the time, Turing was traveling from the abstract to the concrete. The papers offer intriguing snapshots of his journey from his prewar focus on mathematical logic and number theory, into a new world of circuits, electronics, and engineering math.
During the war, Turing realized that cryptology's new frontier was going to be the encryption of speech. The existing wartime cipher machines—such as the Japanese "Purple" machine, the British Typex, and the Germans' famous Enigma and teletypewriter-based SZ42—were all for encrypting typewritten text. Text, though, is scarcely the most convenient way for commanders to communicate, and secure voice communication was on the military wish list.
Bell Labs' pioneering SIGSALY speech-encryption system was constructed in New York City, under a U.S. Army contract, during 1942 and 1943. It was gigantic, weighing over 50,000 kilograms and filling a room. Turing was familiar with SIGSALY and wanted to miniaturize speech encryption. The result, Delilah, consisted of three small units, each roughly the size of a shoebox. Weighing just 39 kg, including its power pack, Delilah would be at home in a truck, a trench, or a large backpack.
In 1943, Turing set up bench space in a Nissen hut and worked on Delilah in secret. The hut was at Hanslope Park, a military-run establishment in the middle of nowhere, England. Today, Hanslope Park is still an ultrasecret intelligence site known as His Majesty's Government Communications Centre. In the Turing tradition, HMGCC engineers supply today's British intelligence agents with specialized hardware and software.
Turing seems to have enjoyed the two years he spent at Hanslope Park working on Delilah. He made an old cottage his home and took meals in the Army mess. The commanding officer recalled that he "soon settled down and became one of us." In 1944, Turing acquired his young assistant, Bayley, who had recently graduated from the University of Birmingham with a bachelor's degree in electrical engineering. The two became good friends, working together on Delilah until the autumn of 1945. Bayley called Turing simply "Prof," as everyone did in the Bletchley-Hanslope orbit.
"I admired the originality of his mind," Bayley told me when I interviewed him in the 1990s. "He taught me a great deal, for which I have always been grateful."
In return, Bayley taught Turing bench skills. When he first arrived at Hanslope Park, Bayley found Turing wiring together circuits that resembled a "spider's nest," he said. He took Turing firmly by the hand and dragged him through breadboarding boot camp.
A year later, as the European war ground to a close, Turing and Bayley got a prototype system up and running. This "did all that could be expected of it," Bayley said. He described the Delilah system as "one of the first to be based on rigorous cryptographic principles."
How Turing's Voice-Encryption System Worked
Turing drew inspiration for the voice-encryption system from existing cipher machines for text. Teletypewriter-based cipher machines such as the Germans' sophisticated SZ42—broken by Turing and his colleagues at Bletchley Park—worked differently from the better known Enigma machine. Enigma was usually used for messages transmitted over radio in Morse code. It encrypted the letters A through Z by lighting up corresponding letters on a panel, called the lampboard, whose electrical connections with the keyboard were continually changing. The SZ42, by contrast, was attached to a regular teletypewriter that used a 5-bit telegraph code and could handle not just letters, but also numbers and a range of punctuation. Morse code was not involved. (This 5-bit telegraph code was a forerunner of ASCII and Unicode and is still used by some ham radio operators.)
The SZ42 encrypted the teletypewriter's output by adding a sequence of obscuring telegraph characters, called key (the singular form "key" was used by the codebreakers and codemakers as a mass noun, like "footwear" or "output"), to the plain message. For example, if the German plaintext was ANGREIFEN UM NUL NUL UHR (Attack at zero hundred hours), and the obscuring characters being used to encrypt these five words (and the spaces between them) were Y/RABV8WOUJL/H9VF3JX/D5Z, then the cipher machine would first add "Y" to "A"—that is to say, add the 5-bit code of the first letter of the key to the 5-bit code of the first letter of the plaintext—then add "/" to "N", then "R" to "G", and so on. Under the SZ42's rules for character addition (which were hardwired into the machine), these 24 additions would produce PNTDOOLLHANC9OAND9NK9CK5, which was the encrypted message. This principle of generating the obscuring key and then adding it to the plain message was the concept that Turing extended to the new territory of speech encryption.
Inside the SZ42, the key was produced by a key generator, consisting of a system of 12 wheels. As the wheels turned, they churned out a continual stream of seemingly random characters. The wheels in the receiver's machine were synchronized with the sender's, and so produced the same characters—Y/RABV8WOUJL/H9VF3JX/D5Z in our example. The receiving machine subtracted the key from the incoming ciphertext PNTDOOLLHANC9OAND9NK9CK5, revealing the plaintext ANGREIFEN9UM9NUL9NUL9UHR (a space was always typed as "9").
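In modern terms, the SZ42's character addition was bitwise XOR of the 5-bit codes. Here is a minimal sketch that reproduces the worked example above; the bit assignments follow one common rendering of the teleprinter alphabet, with the Bletchley names ("/" for null, "9" for space, "3" and "4" for carriage return and line feed, "8" and "5" for the shift characters) for the non-letter codes, so treat the table as illustrative:

    # The SZ42's character "addition": bitwise XOR of 5-bit teleprinter codes.
    ITA2 = {
        'A': 0b11000, 'B': 0b10011, 'C': 0b01110, 'D': 0b10010, 'E': 0b10000,
        'F': 0b10110, 'G': 0b01011, 'H': 0b00101, 'I': 0b01100, 'J': 0b11010,
        'K': 0b11110, 'L': 0b01001, 'M': 0b00111, 'N': 0b00110, 'O': 0b00011,
        'P': 0b01101, 'Q': 0b11101, 'R': 0b01010, 'S': 0b10100, 'T': 0b00001,
        'U': 0b11100, 'V': 0b01111, 'W': 0b11001, 'X': 0b10111, 'Y': 0b10101,
        'Z': 0b10001, '/': 0b00000, '9': 0b00100, '3': 0b00010, '4': 0b01000,
        '8': 0b11111, '5': 0b11011,
    }
    CODE_TO_CHAR = {v: k for k, v in ITA2.items()}

    def add_key(text: str, key: str) -> str:
        """XOR each character's code with the matching key character's code."""
        return ''.join(CODE_TO_CHAR[ITA2[t] ^ ITA2[k]] for t, k in zip(text, key))

    plain = 'ANGREIFEN9UM9NUL9NUL9UHR'     # "9" marks each space
    key = 'Y/RABV8WOUJL/H9VF3JX/D5Z'
    cipher = add_key(plain, key)
    print(cipher)                          # PNTDOOLLHANC9OAND9NK9CK5
    assert add_key(cipher, key) == plain   # XOR is self-inverse

Because XOR undoes itself, the receiver's "subtraction" is the very same operation as the sender's addition.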
Applying a similar principle, Delilah added the obscuring key to spoken words. In Delilah's case, the key was a stream of pseudorandom numbers—that is, random-seeming numbers that were not truly random. Delilah's key generator contained five rotating wheels and some fancy electronics concocted by Turing. As with the SZ42, the receiver's key generator had to be synchronized with the sender's, so that both machines produced identical key. In their once highly secret but now declassified report, Turing and Bayley commented that the problem of synchronizing the two key generators had presented them with "formidable difficulties." But they overcame these and other problems, and eventually demonstrated Delilah using a recording of a speech given by Winston Churchill, successfully encrypting, transmitting, and decrypting it.
The encryption-decryption process began with discretizing the audio signal, which today we'd call analog-to-digital conversion. This produced a sequence of individual numbers, each corresponding to the signal's voltage at a particular point in time. Then numbers from Delilah's key were added to these numbers. During the addition, any digits that needed to be carried over to the next column were left out of the calculation—called "noncarrying" addition, this helped scramble the message. The resulting sequence of numbers was the encrypted form of the speech signal. This was transmitted automatically to a second Delilah at the receiving end. The receiving Delilah subtracted the key from the incoming transmission, and then converted the resulting numbers to voltages to reproduce the original speech.
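To make that pipeline concrete, here is a minimal sketch, assuming two-digit decimal samples and per-digit mod-10 arithmetic; the real machine's sample width and modulus are not given here, so both are illustrative choices:

    import random

    def noncarry_add(a: int, b: int) -> int:
        """Add two 2-digit numbers digit by digit, discarding carries."""
        return ((a // 10 + b // 10) % 10) * 10 + (a % 10 + b % 10) % 10

    def noncarry_sub(a: int, b: int) -> int:
        """Inverse of noncarry_add: subtract digit by digit, mod 10."""
        return ((a // 10 - b // 10) % 10) * 10 + (a % 10 - b % 10) % 10

    # Stand-in for Delilah's synchronized key generators: sender and
    # receiver seed identical pseudorandom streams.
    sender_key = random.Random(1944)
    receiver_key = random.Random(1944)

    samples = [12, 87, 45, 3, 99]          # quantized speech samples (made up)
    cipher = [noncarry_add(s, sender_key.randrange(100)) for s in samples]
    decoded = [noncarry_sub(c, receiver_key.randrange(100)) for c in cipher]
    assert decoded == samples              # receiver recovers the speech

Dropping the carries keeps the arithmetic confined to each digit, so with a uniformly random key digit every ciphertext digit is itself uniformly distributed, the digit-wise analogue of a one-time pad.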
The result was "whistly" and full of background noise, but usually intelligible—although if things went wrong, there could be "a sudden crack like a rifle shot," Turing and Bayley reported cheerfully.
But the war was winding down, and the military was not attracted to the system. Work on the Delilah project stopped not long after the war ended, when Turing was hired by the British National Physical Laboratory to design and develop an electronic computer. Delilah "had little potential for further development," Bayley said, and "was soon forgotten." Yet it offered a very high level of security, and was the first successful demonstration of a compact portable device for voice encryption.
What's more, Turing's two years of immersion in electrical engineering stood him in good stead, as he moved on to designing electronic computers.
Turing's Lab Notebook
The two years Turing spent on Delilah produced the Bayley papers. The papers comprise a laboratory notebook, a considerable quantity of loose sheets (some organized into bundles), and—the jewel of the collection—a looseleaf ring binder bulging with pages.
The greenish-gray quarto-size lab notebook, much of it in Turing's handwriting, details months of work. The first experiment Turing recorded involved measuring a pulse emitted by a multivibrator, which is a circuit that can be triggered to produce a single voltage pulse or a chain of pulses. In the experiment, the pulse was fed into an oscilloscope and its shape examined. Multivibrators were crucial components of Turing's all-important key generator, and the next page of the notebook, labeled "Measurement of 'Heaviside function,' " shows the voltages measured in part of the same multivibrator circuit.
Today, there is intense interest in the use of multivibrators in cryptography. Turing's key generator, the most original part of Delilah, contained eight multivibrator circuits, along with the five-wheel assembly mentioned previously. In effect the multivibrators were eight more very complicated "wheels," and there was additional circuitry for enhancing the random appearance of the numbers the multivibrators produced.
Subsequent experiments recorded in the lab book tested the performance of all the main parts of Delilah—the pulse modulator, the harmonic analyzer, the key generator, the signal and oscillator circuits, and the radio frequency and aerial circuits. Turing worked alone for approximately the first six months of the project, before Bayley's arrival in March 1944, and the notebook is in Turing's handwriting up to and including the testing of the key generator. After this, the job of recording experiments passed to Bayley.
The Bandwidth Theorem
Among the piles of loose sheets covered with Turing's riotously untidy handwriting, one page is headed "Bandwidth Theorem." Delilah was in effect an application of a bandwidth theorem that today is known as the Nyquist-Shannon sampling theorem. Turing's proof of the theorem is scrawled over two sheets. Most probably he wrote the proof out for Bayley's benefit. The theorem—which expresses what the sampling rate needs to be if sound waves are to be reproduced accurately—governed Delilah's conversion of sound waves into numbers, done by sampling vocal frequencies several thousand times a second.
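In modern notation, the theorem says that a signal containing no frequencies above B hertz is completely determined by samples taken at a rate of at least 2B, and can be perfectly reconstructed from them. A standard statement of the result (not a transcription of Turing's own proof):

    % Nyquist-Shannon sampling theorem (standard modern form).
    % If x(t) is band-limited to B hertz and f_s >= 2B, then
    \[
      x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{f_s}\right)
             \operatorname{sinc}\bigl(f_s t - n\bigr),
      \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
    \]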
At Bell Labs, Claude Shannon had written a paper sketching previous work on the theorem and then proving his own formulation of it. Shannon wrote the paper in 1940, although it was not published until 1949. Turing worked at Bell Labs for a time in 1943, in connection with SIGSALY, before returning to England and embarking on Delilah. It seems likely that he and Shannon would have discussed sampling rates.
Turing's "Red Form" Notes
During the war, Hanslope Park housed a large radio-monitoring section. Shifts of operators continuously searched the airwaves for enemy messages. Enigma transmissions, in Morse code, were identified by their stereotypical military format, while the distinctive warble of the SZ42's radioteletype signals was instantly recognizable. After latching onto a transmission, an operator filled out an Army-issue form (preprinted in bright red ink). The frequency, the time of interception, and the letters of ciphertext were noted down. This "red form" was then rushed to the code breakers at Bletchley Park.
Writing paper was in short supply in wartime Britain. Turing evidently helped himself to large handfuls of red forms, scrawling out screeds of notes about Delilah on the blank reverse sides. In one bundle of red forms, numbered by Turing at the corners, he considered a resistance-capacitance network into which a "pulse of area A at time 0" is input. He calculated the charge as the pulse passes through the network, and then calculated the "output volts with pulse of that area." The following sheets are covered with integral equations involving time, resistance, and charge. Then a scribbled diagram appears, in which a wavelike pulse is analyzed into discrete "steps"—a prelude to several pages of Fourier-type analysis. Turing appended a proof of what he termed the "Fourier theorem," evidence that these pages may have been a tutorial for Bayley.
The very appearance of these papers speaks to the character and challenging nature of the Delilah project. The normally top-secret Army red forms, the evidence of wartime shortages, the scribbled formulas, the complexity of the mathematics, the tutorials for Bayley—all contribute to the picture of the Prof and his young assistant working closely together at a secret military establishment on a device that pushed the engineering envelope.
Turing's Lectures for Electrical Engineers
The cover of the looseleaf ring binder is embossed in gilt letters "Queen Mary's School, Walsall," where Bayley had once been a pupil. It is crammed with handwritten notes taken by Bayley during a series of evening lectures that Turing gave at Hanslope Park. The size of Turing's audience is unknown, but there were numerous young engineers like Bayley at Hanslope.
These notes can reasonably be given the title Turing's Lectures on Advanced Mathematics for Electrical Engineers. Running to 180 pages, they are the most extensive noncryptographic work by Turing currently known, vying in length with his 1940 write-up about Enigma and the Bombe, affectionately known at Bletchley Park as "Prof's Book."
Stepping back a little helps to put this important discovery into context. The traditional picture of Turing held him to be a mathematician's mathematician, dwelling in a realm far removed from practical engineering. In 1966, for instance, Scientific American ran an article by the legendary computer scientist and AI pioneer John McCarthy, in which he stated that Turing's work did not play "any direct role in the labors of the men who made the computer a reality." It was a common view at the time.
As we now know, though, after the war Turing himself designed an electronic computer, called the Automatic Computing Engine, or ACE. What's more, he designed the programming system for the Manchester University "Baby" computer, as well as the hardware for its punched-tape input/output. Baby came to life in mid-1948. Although small, it was the first truly stored-program electronic computer. Two years later, the prototype of Turing's ACE ran its first program. The prototype was later commercialized as the English Electric DEUCE (Digital Electronic Universal Computing Engine). Dozens of DEUCEs were purchased—big sales in those days—and so Turing's computer became a major workhorse during the first decades of the Digital Age.
Yet the image has persisted of Turing as someone who made fundamental yet abstract contributions, rather than as someone whose endeavors sometimes fit onto the spectrum from bench electronics through to engineering theory. The Bayley papers bring a different Turing into focus: Turing the creative electrical engineer, with blobs of solder all over his shoes—even if his soldered joints did have a tendency to come apart, as Bayley loved to relate.
Turing's lecture notes are in effect a textbook, terse and selective, on advanced math for circuit engineers, although now very out-of-date, of course.
There is little specifically about electronics in the lectures, aside from passing mentions, such as a reference to cathode followers. When talking about the Delilah project, Bayley liked to say that Turing had only recently taught himself elementary electronics, by studying an RCA vacuum tube manual while he crossed the Atlantic from New York to Liverpool in March 1943. This cannot be entirely accurate, however, because in 1940 Turing's "Prof's Book" described the use of some electronics. He detailed an arrangement of 26 thyratron tubes powered by a 26-phase supply, with each tube controlling a double-coil relay "which only trips if the thyratron fails to fire."
Turing's knowledge of practical electronics was probably inferior to his assistant's, initially anyway, since Bayley had studied the subject at university and then was involved with radar before his transfer to Hanslope Park. When it came to the mathematical side of things, however, the situation was very different. The Bayley papers demonstrate the maturity of Turing's knowledge of the mathematics of electrical circuit design—knowledge that was essential to the success of the Delilah project.
The unusual breadth of Turing's intellectual talents—mathematician, logician, code breaker, philosopher, computer theoretician, AI pioneer, and computational biologist—is already part and parcel of his public persona. To these must now also be added an appreciation of his idiosyncratic prowess in electrical engineering.
When you're swaying in a beachside hammock on a lazy summer day, take a moment to thank the Indigenous cultures that invented it.
Native to South America and the Caribbean, hammocks were traditionally woven by women, who were frequently fiber-workers in Indigenous cultures, said Binghamton University Associate Professor of English John Kuhn, who recently co-authored an article on the topic.
"The oldest preserved specimen is 4,000 years old, but they may actually be much older," said Kuhn, who also directs the Institute for Advanced Studies in the Humanities at Binghamton. "We just don't know; textiles don't preserve well in the tropics."
[...] Portable, versatile and easy to clean, hammocks are a comfortable way to sleep in a hot climate. They also protect the user from insects, especially when compared to the ground-based bedding common to European colonizers.
"Colonists basically adopt them right from the jump," Kuhn said. "They learn to use them because the hammock was a major component in hospitality rituals that are being extended to them by Indigenous groups who are seeking alliance and friendship."
The technology proved useful for military expeditions in the Americas and was adopted by figures such as English explorer Sir Walter Raleigh. As colonial settlements began to develop, their use was adopted by a wider population, from elites to slaves.
[...] The spread of hammock use among colonizers belies the common belief that European technology was far superior to that of Indigenous people. It's far from the only example of cultural borrowing; take chocolate and tobacco, which originated as stimulants developed by Indigenous cultures.
Kuhn is currently working on a book about another Indigenous technology: birchbark canoes, which North American colonists immediately adopted for their own use.
"Sometimes people have this idea that Indigenous cultures were just destroyed, and they aren't necessarily seen as huge technological contributors to the Atlantic world that emerges out of colonization," Kuhn said. "The next time you see a hammock, just take a minute to marvel at the ingenuity of the cultures that it sprang from!"
Journal Reference: Norton, M., Kuhn, J. Towards a history of the hammock: An Indigenous technology in the Atlantic world. [OPEN] Postmedieval (2025). https://doi.org/10.1057/s41280-025-00379-w
Nvidia reportedly plans 30-40% cut in GeForce GPU production in early 2026:
Recent reports have claimed that Nvidia intends to reduce its production capacity for GeForce RTX 50 series GPUs in the first half of 2026. These cuts are reportedly due to shortages of memory, not just GDDR7, but all memory types.
Between 30 and 40 percent of Nvidia's GeForce GPU production could be axed. This implies that Nvidia cannot get enough GDDR7 memory to produce GPUs at its current rate. Alternatively, it implies that Nvidia expects significantly reduced GPU sales in 2026, possibly due to rising NAND and DRAM costs and their impact on PC prices.
Note that the reports make no mention of Nvidia's professional RTX PRO series GPUs. If GDDR7 memory supply is indeed limited, Nvidia may be allocating its limited memory stocks to its more profitable RTX PRO GPU lineup, sacrificing its GeForce lineup.
[...] Benchlife has claimed that Nvidia plans to start its cuts by targeting its RTX 5060 Ti 16GB and RTX 5070 Ti. Targeting the RTX 5060 Ti 16GB makes sense, as this GPU has the same amount of memory as an RTX 5080, a much more expensive GPU. The same is true for the RTX 5070 Ti. Its memory could also be used for more profitable RTX 5080 GPUs.
If this is true, Nvidia wants to allocate its memory to its most profitable products. This makes sense from a business perspective. However, this tactic will hit consumers hard. Nvidia's RTX 5060 Ti 16GB is a much better product than its 8GB counterpart. Why? It has enough VRAM to run modern games without compromises. Nvidia's shift in production will force more consumers to purchase its 8GB GPU models.
[...] DDR5 memory prices are already through the roof, and it's likely that these price increases will soon impact the GPU market. Manufacturers will prioritise GPUs with less memory, reserving larger memory allocations for higher-margin models. That's bad news for gamers who want graphics cards with plenty of VRAM.
With Nvidia reportedly reducing its GPU production, one has to wonder whether this will cause a GPU shortage and drive up GPU prices.
A recent review suggests that gifted education and talent programs have been based on false premises:
Traditional research into giftedness and expertise assumes that the key factors to develop outstanding achievements are early performance (e.g., in a school subject, sport, or in concerts) and corresponding abilities (e.g., intelligence, motor skills, musicality) along with many years of intensive training in a discipline. Accordingly, talent programs typically aim to select the top-performing youth and then seek to further accelerate their performance through intensive discipline-specific training. However, this is apparently not the ideal way to promote young talent, as a team led by Arne Güllich, professor of sports science at RPTU University of Kaiserslautern-Landau, has recently discovered.
The starting point: Until recently, research into giftedness and expertise has focused on young and sub-elite performers: school and college students, young athletes and chess players, or musicians at conservatories. The conclusions drawn from this research have recently been called into question by evidence from adult world-class athletes. "Traditional research into giftedness and expertise did not sufficiently consider the question of how world-class performers at peak performance age developed in their early years," Arne Güllich summarizes. His research intention in the current Review was, therefore, to investigate the development of these top performers. [...]
[...] A key finding: top performers undergo a different development pattern than previous research assumed. "And a common pattern emerges across the different disciplines," Arne Güllich emphasizes. He identifies three key findings. The first is that the best at a young age and the best later in life are mostly different individuals. Second, those who reached the world-class level showed rather gradual performance development in their early years and were not yet among the best of their age group. And the third finding is that those who later achieved peak performance did not specialize in a single discipline at an early age, but engaged in various disciplines (e.g., different subjects of study, genres of music, sports, or professions).
How can these findings, which deviate from the prevailing opinion, be explained? "We propose three explanatory hypotheses for discussion," says Güllich. The search-and-match hypothesis suggests that experiences with different disciplines improve one's chances of finding an optimal discipline for oneself over the years. The enhanced-learning-capital hypothesis implies that varied learning experiences in different disciplines enhance one's learning capital, which improves the performer's subsequent ongoing learning at the highest level in a discipline. And the limited-risks hypothesis suggests that multidisciplinary engagement mitigates risks of career-hampering factors, such as misbalanced work-rest ratios, burnout, being stuck in a discipline one ceases to enjoy, or injuries in psychomotor disciplines (sports, music). Arne Güllich: "Those who find an optimal discipline for themselves, develop enhanced potential for long-term learning, and have reduced risks of career-hampering factors, have improved chances of developing world-class performance."
Considering the latest findings, what can Arne Güllich recommend today? How should society promote young talented people to develop into future top performers? "Here's what the evidence suggests: Don't specialize in just one discipline too early. Encourage young people and provide them with opportunities to pursue different areas of interest. And promote them in two or three disciplines." These may be disciplines that are not directly related to one another: language and mathematics, for example, or geography and philosophy. Or just think of Albert Einstein and his violin—one of the most important physicists, who was also passionate about music from an early age.
Journal Reference: https://doi.org/10.1126/science.adt7790
All 1,600 workers currently employed at the BlueOval SK battery plant in Glendale, Kentucky, will be laid off before its upcoming conversion:
BlueOval SK CEO Michael Adams informed workers at the Glendale plant of this move in a video statement, saying that Ford's shift would result in "the end of all BlueOval SK Positions in Kentucky." As of this writing, it's unclear when the layoffs will take place, though workers will continue to receive paychecks and benefits for the next 60 days. Employees will also be able to apply for a position at the revamped site, expected to open in 2027, where Ford will hire 2,100 workers.
"Right now, our primary focus is helping the affected BlueOval employees find new jobs," said Kentucky Governor Andy Beshear. "Team Kentucky is coordinating with company and community leaders to directly support these employees, in addition to planning job fairs and creating a website offering resources." Kentucky officials are renegotiating terms of the state's incentive agreement with Ford as well.
From WDRB.com:
According to the Wall Street Journal, Ford has lost $13 billion on its EV business since 2023.
Wall Street Journal automotive reporter Chris Otts, who covered Ford extensively for years at WDRB, said not all batteries are created equally.
"They built the wrong kind of battery and the wrong chemistry for that here in Kentucky, so they have to change the entire plant to make a different product," Otts said. "That's why you're seeing this long lead time and this mass layoff of all the employees here."
Previously: Ford Cancels Electric F-150
https://www.extremetech.com/internet/google-search-now-lets-you-upload-images-and-pdfs-for-analysis
The new plus button sits to the left of the search bar.
Google Search now allows users to upload images and documents directly via a new plus button. After a file is uploaded, the user can ask questions about it or enter AI Mode, where Gemini breaks down the file's content. The plus button is live on Google's homepage for US-based desktop users and will roll out to all countries where AI Mode is available later this week.
"It's another step in making it easier for you to ask anything, any way," a Google spokesperson told CNET.
The plus button is to the left of the Google search bar and, when clicked, opens a menu to upload an image or file. Any questions the user asks about their file are answered conversationally by Gemini, which can offer summaries, explanations, tips, and follow-up questions.
Adding AI Mode to Google's search bar is another step toward making AI accessible to a wider audience, especially those who are not familiar with AI tools such as ChatGPT, Perplexity, and others. Most people already know Google and how to use it, and these small additions make it easier for the general public to use AI Mode—or AI in general—without having to learn an entirely new platform or interface.
https://9to5linux.com/firefox-will-ship-with-an-ai-kill-switch-to-completely-disable-all-ai-features
Mozilla said that all the AI features that are or will be included in Firefox will also be opt-in.
After the controversial news shared earlier this week by Mozilla's new CEO that Firefox will evolve into "a modern AI browser," the company has now revealed that it is working on an AI kill switch for the open-source web browser.
On Tuesday, Anthony Enzor-DeMeo was named the new CEO of Mozilla Corporation, the company behind the beloved Firefox web browser used by almost all GNU/Linux distributions as the default browser.
In his message as new CEO, Anthony Enzor-DeMeo stated that Firefox will grow from a browser into a broader ecosystem of trusted software while remaining the company's anchor, and that Firefox will evolve into a modern AI browser and support a portfolio of new and trusted software additions.
What was not made clear at the time is that Firefox will also ship with an AI kill switch that will let users completely disable all of the browser's AI features. Mozilla shared this update earlier today to make clear that Firefox will remain a trusted web browser.
"Something that hasn't been made clear: Firefox will have an option to completely disable all AI features. We've been calling it the AI kill switch internally. I'm sure it'll ship with a less murderous name, but that's how seriously and absolutely we're taking this," said Firefox developer Jake Archibald on Mastodon.
In addition, Mozilla said that all the AI features that are or will be included in Firefox will also be opt-in. "I think there are some grey areas in what 'opt-in' means to different people (e.g. is a new toolbar button opt-in?), but the kill switch will absolutely remove all that stuff, and never show it in future. That's unambiguous."
Personally, I do hope Firefox will remain the same web browser I've been using for the past 20 years. As long as AI remains opt-in and it's not shoved down our throats, I have no problem with that. The upcoming release, Firefox 147, is expected on January 13th, 2026, with support for the XDG Base Directory Specification.
Previously: Mozilla's New CEO: Firefox Will Become an "AI Browser"
https://www.bamsoftware.com/hacks/zipbomb/
This article shows how to construct a non-recursive zip bomb that achieves a high compression ratio by overlapping files inside the zip container. "Non-recursive" means that it does not rely on a decompressor's recursively unpacking zip files nested within zip files: it expands fully after a single round of decompression. The output size increases quadratically in the input size, reaching a compression ratio of over 28 million (10 MB → 281 TB) at the limits of the zip format. Even greater expansion is possible using 64-bit extensions. The construction uses only the most common compression algorithm, DEFLATE, and is compatible with most zip parsers.
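Some context on why the overlap matters: a single DEFLATE stream cannot expand by much more than a factor of 1,032 (each 258-byte match costs at least two bits to encode), so the quadratic blow-up comes from pointing many zip file entries at the same compressed kernel rather than from the compressor itself. A quick sketch of the per-stream ceiling using Python's zlib:

    import zlib

    # Measure how close raw DEFLATE gets to its theoretical ~1032:1 ceiling
    # on maximally compressible input (a run of zeros).
    payload = b'\x00' * (100 * 1024 * 1024)           # 100 MiB of zeros
    comp = zlib.compressobj(9, zlib.DEFLATED, -15)    # raw DEFLATE, no header
    compressed = comp.compress(payload) + comp.flush()
    print(f"{len(payload) / len(compressed):.0f}:1")  # close to 1032:1

    # The overlapping construction sidesteps this ceiling: N central-directory
    # entries reference (nearly) the same compressed data, so the unpacked
    # size scales with N times the kernel size while the archive grows only
    # by N small headers, making the output quadratic in the input.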
Based on data sent back by NASA's Cassini mission, Saturn's largest natural satellite, Titan, is believed to have a sub-surface ocean containing liquid water. However, new analysis indicates that this might be slushy ice rather than liquid water, as an article in Gizmodo explains.
The Cassini spacecraft made 124 fly-bys of Titan, collecting radar and gravity measurements that scientists interpreted as indicating the existence of a sub-surface ocean of water and ammonia. The Huygens lander, deployed on Titan by Cassini, collected radio-signal data that further reinforced this hypothesis.
Due to the presence of liquid water, Titan became a candidate for the existence of life, and a target for future probes.
However, the Cassini data were inconclusive. Titan is deformed by tidal forces during its orbit of Saturn, which means that its interior cannot be completely solid. An alternative hypothesis proposes that under the solid crust there may be an ocean of slushy ice and pockets of liquid water, rather than a single, continuous liquid ocean.
Models of Titan predict that the liquid water may get as warm as 20 °C, and that convection would circulate minerals from the rocky core up to the crust.
The UEFI firmware implementation in some motherboards from ASUS, Gigabyte, MSI, and ASRock is vulnerable to direct memory access (DMA) attacks that can bypass early-boot memory protections.
The security issue has received multiple identifiers (CVE-2025-11901, CVE-2025-14302, CVE-2025-14303, and CVE-2025-14304) due to differences in vendor implementations.
DMA is a hardware feature that allows devices such as graphics cards, Thunderbolt devices, and PCIe devices to read and write directly to RAM without involving the CPU.
IOMMU is a hardware-enforced memory firewall that sits between devices and RAM, controlling which memory regions are accessible for each device.
During early boot, as the UEFI firmware initializes, the IOMMU must be activated before any DMA transfers can take place; otherwise, there is no protection in place to stop a device with physical access from reading or writing memory regions.
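For context, on a running Linux system you can get a rough read on whether the kernel brought the IOMMU up by looking for populated IOMMU groups in sysfs. This is only a heuristic sketch of the OS-visible state; it cannot prove the firmware had protection enabled during the early-boot window at issue here:

    import os

    # Heuristic Linux check: when the IOMMU (Intel VT-d or AMD-Vi) is active,
    # the kernel assigns every device to a group under this directory. An
    # empty or missing directory suggests DMA remapping is not in effect.
    GROUPS = "/sys/kernel/iommu_groups"

    try:
        groups = os.listdir(GROUPS)
    except FileNotFoundError:
        groups = []

    if groups:
        print(f"IOMMU active: {len(groups)} device groups")
    else:
        print("no IOMMU groups found; DMA remapping may be disabled")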
The vulnerability was discovered by Riot Games researchers Nick Peterson and Mohamed Al-Sharifi. It causes the UEFI firmware to show that the DMA protection is enabled even if the IOMMU did not initialize correctly, leaving the system exposed to attacks.
Peterson and Al-Sharifi disclosed the security issue responsibly and worked with CERT Taiwan to coordinate a response and reach affected vendors.
The researchers explain that when a computer system is turned on, it is "in its most privileged state: it has full, unrestricted access to the entire system and all connected hardware."
Protections become available only after the initial firmware (UEFI on most systems) loads and initializes hardware and software in a secure way. The operating system is among the last pieces to load in the boot sequence.
On vulnerable systems, some Riot Games titles, such as the popular Valorant, will not launch. This is due to the Vanguard system that works at the kernel level to protect against cheats.
"If a cheat loads before we do, it has a better chance of hiding where we can't find it. This creates an opportunity for cheats to try and remain undetected, wreaking havoc in your games for longer than we are ok with" - Riot Games
Although the researchers described the vulnerability from the perspective of the gaming industry, where cheats could be loaded early on, the security risk extends to malicious code that can compromise the operating system.
The attacks require physical access, where a malicious PCIe device needs to be connected for a DMA attack before the operating system starts. During that time, the rogue device may read or modify the RAM freely.
"Even though firmware asserts that DMA protections are active, it fails to properly configure and enable the IOMMU during the early hand-off phase in the boot sequence," reads the advisory from the Carnegie Mellon CERT Coordination Center (CERT/CC).
"This gap allows a malicious DMA-capable Peripheral Component Interconnect Express (PCIe) device with physical access to read or modify system memory before operating system-level safeguards are established."
Due to exploitation occurring before OS boot, there would be no warnings from security tools, no permission prompts, and no alerts to notify the user.
Carnegie Mellon CERT/CC confirmed that the vulnerability impacts some motherboard models from ASRock, ASUS, GIGABYTE, and MSI, but products from other hardware manufacturers may be affected.
The specific models impacted for each manufacturer are listed in the security bulletins and firmware updates from the makers (ASUS, MSI, Gigabyte, ASRock).
Users are recommended to check for available firmware updates and install them after backing up important data.
Riot Games has updated Vanguard, its kernel-level anti-cheat system that provides protection against bots and scripts in games like Valorant and League of Legends.
If a system is affected by the UEFI vulnerability, Vanguard will block Valorant from launching and prompt users with a pop-up providing details on what is required to start the game.
"Our VAN:Restriction system is Vanguard's way of telling you we cannot guarantee system integrity due to the outlined disabled security features," Riot Games researchers say.
Developers of apps that use end-to-end encryption to protect private communications could be considered hostile actors in the UK.
That is the stark warning from Jonathan Hall KC, the government's Independent Reviewer of State Threats Legislation and Independent Reviewer of Terrorism Legislation, in a new report on national security laws.
In his independent review of the Counter-Terrorism and Border Security Act and the newly implemented National Security Act, Hall KC highlights the incredibly broad scope of powers granted to authorities.
He warns that developers of apps like Signal and WhatsApp could technically fall within the legal definition of "hostile activity" simply because their technology "make[s] it more difficult for UK security and intelligence agencies to monitor communications."
He writes: "It is a reasonable assumption that this would be in the interests of a foreign state even if though the foreign state has never contemplated this potential advantage."
The report also notes that journalists "carrying confidential information" or material "personally embarrassing to the Prime Minister on the eve of important treaty negotiations" could face similar scrutiny.
While it remains to be seen how this report will influence future amendments, it comes at a time of increasing pressure from lawmakers against encryption.
While the report's strong wording may come as a shock, it doesn't exist in a vacuum. Encrypted apps are increasingly in the crosshairs of UK lawmakers, with several pieces of legislation targeting the technology.
Most notably, Apple was served with a technical capability notice under the Investigatory Powers Act (IPA) demanding it weaken the encryption protecting iCloud data. That legal standoff led the tech giant to disable its Advanced Data Protection instead of creating a backdoor.
The Online Safety Act is already well known for its controversial age verification requirements. However, its most contentious provisions have yet to be fully implemented, and experts fear these could undermine encryption even further.
On Monday, Parliament debated the Act following a petition calling for its repeal. Instead of rolling back the law, however, MPs pushed for stricter enforcement. During the discussion, lawmakers specifically called for a review of other encrypted tools, such as VPNs.
The potential risks of the Act's tougher stance on encryption were only briefly mentioned during the discussion, suggesting a stark disconnect between MPs and security experts.
Olivier Crépin-Leblond, of the Internet Society, told TechRadar he was disappointed by the outcome of the debate. "When it came to Client Side Scanning (CSS), most felt this could be one of the 'easy technological fixes' that could help law enforcement greatly, especially when they showed their frustration at Facebook rolling out end-to-end encryption," he said.
"It's clearly not understood that any such software could fall prey to hackers."
It is clear that for many lawmakers, encryption is viewed primarily as an obstacle to law enforcement. This stands in sharp contrast to the view of digital rights experts, who stress that the technology is vital for protecting privacy and security in an online landscape where cyberattacks are rising.
"The government signposts end-to-end encryption as a threat, but what they fail to consider is that breaking it would be a threat to our national security too," Jemimah Steinfeld, CEO of Index on Censorship, told TechRadar.
She also added that this ignores encryption's vital role for dissidents, journalists, and domestic abuse victims, "not to mention the general population who should be afforded basic privacy."
With the battle lines drawn, we can expect a challenging year ahead for services like Signal and WhatsApp. Both companies have previously pledged to leave the UK market rather than compromise their users' privacy and security.
https://www.theverge.com/science/841169/ai-data-center-opposition
If there's one thing Republicans and Democrats came together on in 2025 — at least at the local level — it was to stop big, energy-hungry data center projects.
For communities sick of rising electricity bills and pollution from power plants, data centers have become an obvious target. Fights against new data centers surged this year as grassroots groups, voters, and local lawmakers demanded more accountability from developers. Already, they've managed to block or stall tens of billions of dollars' worth of potential investment in proposed data centers. And they're not letting up.
"We expect that opposition is going to keep growing," says Miquel Vila, an analyst at the research firm Data Center Watch who's been tracking campaigns against data centers across the US since 2023.
The group's latest report found that developers either canceled or delayed 20 projects after facing pushback from locals, representing $98 billion in proposed investments in the second quarter of this year. In fact, from late March through June, $24.2 billion in projects were blocked and $73.7 billion delayed. That's an increase compared to 16 blocked or postponed projects from 2023 through the first quarter of this year, the group notes.
The number of proposed data center projects has grown, which is a big reason why opposition is also picking up steam. Inventory in the four biggest data center markets in North America — Northern Virginia, Chicago, Atlanta, and Phoenix — grew by 43 percent year-over-year in the first quarter of this year, according to commercial real estate company CBRE. But plans for massive new facilities have also sparked battles across the nation.
Data centers eat up a lot of electricity, particularly for more powerful chips used for new AI models. Power demand for data centers is expected to grow by 22 percent by the end of the year compared to last year. A high-density rack of servers in an AI data center might use as much as 80 to 100 homes' worth of power, or upward of 100 kilowatts, according to Dan Thompson, a principal research analyst at S&P Global. AI also requires a lot of water to keep servers cool and generate electricity and could use as much annually as the indoor needs of 18.5 million US households by 2028 by one estimate.
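The homes figure is easy to sanity-check, assuming an average US household uses roughly 10,500 kWh per year (a commonly cited figure, not from the article):

    # Sanity check: how many average US homes does a 100 kW rack equal?
    # Assumes ~10,500 kWh/year per household, a commonly cited average.
    HOURS_PER_YEAR = 8760
    avg_home_kw = 10_500 / HOURS_PER_YEAR        # about 1.2 kW average draw
    rack_kw = 100                                # high-density AI rack
    print(f"{rack_kw / avg_home_kw:.0f} homes")  # ~83, within 80 to 100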
Google dropped its plans for a new data center in Franklin Township, Indiana, in September after residents raised concerns about how much water and electricity the new data center would use. The Indianapolis City-County Council was reportedly expected to deny the project's rezoning application. That victory for residents in Indiana isn't captured in the Data Center Watch report, which is only updated with information through June.
Other data center projects that are moving forward or already operating still face resistance. Elon Musk's xAI, for example, faces a potential lawsuit from the NAACP and Southern Environmental Law Center over pollution from its data center in Memphis. Peak nitrogen dioxide concentration levels have jumped by 79 percent in the area surrounding the data center since it started operating in 2024, according to research from the University of Tennessee, Knoxville requested by Time magazine.
xAI, which is building a second, larger data center in Memphis, didn't immediately respond to a request for comment from The Verge, but its website says, "We're moving toward a future where we will harness our cluster's full power to solve intractable problems."
"No community should be forced to sacrifice clean air, clean water, or safe homes so that corporations and billionaires can build energy-hungry facilities," the NAACP said in guiding principles that it shared with The Verge in September for other grassroots groups working to hold data center developers accountable for their impact on nearby neighborhoods.
Meta is facing a backlash against its largest data center yet planned for Richland Parish, Louisiana. Local utility Entergy broke ground this month on two of three gas plants it's building to meet that facility's electricity demands, expected to reach triple the amount of power New Orleans uses in a year. "Entergy LA customers are now set to subsidize Meta's data center costs," the Union of Concerned Scientists says in a November blog post, including an estimated $3.2 billion for the three gas-fired plants and a new $550 million transmission line. Entergy, on the other hand, contends that "Meta's electric payments to Entergy will lower what customers pay for resilience upgrades by approximately 10%," according to communications manager Brandon Scardigli.
"Our agreement with Entergy was structured to ensure that other customers are not paying for our data center energy use," Meta spokesperson Ashley Settle says in an email to The Verge. Settle adds that Meta is contributing $15 million to Entergy's ratepayer support program and more than $200 million for local infrastructure improvements.
Rising electricity costs became a flashpoint during November elections in the US this year, helping to propel two Democrats to the governor's offices in New Jersey and Virginia. New Jersey residents have faced one of the steepest rises in power prices of any state in the nation, while Virginia is home to "data center alley," through which 70 percent of internet traffic passes.
"Now, we have a bogey man — data centers who are these large energy users who are coming in, and in many states, getting sweetheart deals on wholesale electricity prices, when regular consumers don't have that type of sway," Tony Reames, a professor of environmental justice at the University of Michigan and former Department of Energy official under President Biden, said to The Verge after the election.
States, both red and blue, are starting to set some limits on those sweetheart deals. After South Dakota lawmakers rejected a bill that would have offered developers sales tax refunds, Applied Digital paused plans for a $16 billion AI campus in the state. Virginia, Maryland, and Minnesota, meanwhile, have introduced legislation attempting to rein in tax incentives for data centers or energy costs for other consumers, the Data Center Watch report says.
Nationally, more than 230 health and environmental groups have called for a moratorium on data center construction. The organizations, led by the nonprofit Food & Water Watch, sent a letter to Congress with their demands in December. They argue that there aren't enough policies in place to prevent data centers from burdening nearby communities with higher bills and more pollution. President Donald Trump released an "AI Action Plan" in July that aims to speed data center development in part by rolling back environmental regulations.
With midterm elections next year, we're likely to see more data center fights playing into local politics, Vila expects. "It's going to be very interesting to track how this opposition impacts the regulatory framework," he says.
Researchers succeed in detecting and tracking microplastics across varying ocean depths:
Publishing in the journal Environmental Science & Technology, researchers at Kyushu University report that they have developed a new method to more accurately analyze the distribution of small microplastics in the ocean at various depths. Their findings showed that concentrations of small microplastics suspended in the ocean range from 1,000 to 10,000 particles per cubic meter. The team also discovered that small microplastics sink to the depths of the ocean in two distinct ways: some attain near-neutral buoyancy and drift at specific depths, while others sink rapidly to the deep sea.
Since the advent of plastic in the early 20th century, plastic waste and pollution have been a global issue. As plastics degrade, they break off into smaller pieces. When they reach less than 5 mm in size, they are called microplastics.
"When these microplastics degrade further to 10-300 µm, we call them small microplastics. Many researchers are investigating the distribution and movement of microplastics in the ocean. However, when they reach that size, they become harder to collect and analyze," explains Professor Atsuhiko Isobe of Kyushu University Research Institute for Applied Mechanics and one of the researchers who led the study. "There was no standardized protocol to evaluate the presence of small microplastics in the ocean that could minimize contamination, particle loss, and potential fragmentation."
Most ocean microplastics are made of polyethylene and polypropylene. These materials are less dense than seawater, so they float near the sea surface. However, over time, algae, bacteria, and other marine organisms attach to their surface in a process called biofouling. This results in the microplastic increasing in weight and sinking toward the seafloor.
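A small sketch makes the mechanism concrete: the fouled particle's overall density is the volume-weighted average of plastic and biofilm, so enough growth tips it past seawater. The densities below are illustrative assumptions, not values from the study:

    # Composite density of a biofouled particle: volume-weighted average of
    # plastic and biofilm. All densities are illustrative assumptions.
    RHO_PLASTIC = 950.0     # polyethylene, kg/m^3 (floats in seawater)
    RHO_BIOFILM = 1100.0    # attached organisms, kg/m^3 (assumed)
    RHO_SEAWATER = 1024.0   # middle of the 1,023-1,025 kg/m^3 drift layer

    def composite_density(biofilm_fraction: float) -> float:
        """Density of particle plus biofilm, by volume fraction of biofilm."""
        f = biofilm_fraction
        return (1 - f) * RHO_PLASTIC + f * RHO_BIOFILM

    for f in (0.0, 0.3, 0.5, 0.7):
        rho = composite_density(f)
        state = "sinks" if rho > RHO_SEAWATER else "floats or drifts"
        print(f"biofilm fraction {f:.0%}: {rho:4.0f} kg/m^3 -> {state}")

Near the 50 percent mark the particle sits close to neutral buoyancy, consistent with the drifting population the team describes; heavier fouling sends it to the seafloor.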
[...] "Our findings revealed that small microplastics reach sea depths via two distinct pathways: drifting and sinking. In the first pathway, small microplastics reach neutral buoyancy with the seawater. They then drift in an area of the ocean where water density is between 1,023 and 1,025 kilograms per cubic meter at depths of about 100 to 300 meters," Isobe continues. "These small microplastics will drift through this layer for approximately 20 to 40 years."
The other way small microplastics reach the depths of the sea is by increasing their density through biofouling, causing them to sink to the seafloor. The team observed that the concentration of small microplastics drifting in the ocean ranged from 1,000 to 10,000 particles per cubic meter of seawater.
Journal Reference: Mao Kuroda, Atsuhiko Isobe, Keiichi Uchida, Ryuichi Hagita, and Satoru Hamada, "Settling and Along-Isopycnal Subduction of Small Microplastics Into Subsurface Layers of the Western North Pacific Ocean", Environmental Science & Technology, https://doi.org/10.1021/acs.est.5c08983
A barely perceptible keystroke delay was the smoking gun that led to the uncovering of a malign imposter.
A North Korean imposter was uncovered, working as a sysadmin at Amazon U.S., after their keystroke input lag raised suspicions with security specialists at the online retail giant. Normally, a U.S.-based remote worker's computer would send keystroke data within tens of milliseconds. This suspicious individual's keyboard lag was "more than 110 milliseconds," reports Bloomberg.
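The underlying signal is simple: relaying keystrokes through a remote-control session adds a network round trip to each one. Here is a toy sketch of that kind of flagging logic; the 110 ms threshold mirrors the report, while the function names and data are invented for illustration:

    from statistics import median

    # Toy version of the latency signal: a keystroke relayed through a
    # remote-control session picks up an extra network round trip. The
    # 110 ms threshold mirrors the reported figure; the sample data are
    # invented for illustration.
    LOCAL_MAX_MS = 110

    def looks_relayed(latencies_ms: list[float]) -> bool:
        """Flag a session whose median keystroke latency exceeds the bound."""
        return median(latencies_ms) > LOCAL_MAX_MS

    local_worker = [18, 25, 31, 22, 40]   # tens of milliseconds: plausible
    suspect = [140, 155, 162, 149, 171]   # extra round trip to the real user

    print(looks_relayed(local_worker))    # False
    print(looks_relayed(suspect))         # True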
Amazon is commendably proactive in its pursuit of impostors, according to the source report. The news site talked with Amazon's Chief Security Officer, Stephen Schmidt, about this fascinating new case of North Koreans trying to infiltrate U.S. organizations to raise hard currency for the Democratic People's Republic of Korea (DPRK), and sometimes indulge in espionage and/or sabotage.
Schmidt says that Amazon has foiled more than 1,800 DPRK infiltration attempts since April 2024. Moreover, the rate of attempts continues to climb, with Amazon reckoning it is seeing a 27% quarter-over-quarter increase in North Koreans trying to infiltrate the company.
You have to look for them, to find them
However, Amazon's success can be almost entirely credited to the fact that it is actively looking for DPRK impostors, warns its Chief Security Officer. "If we hadn't been looking for the DPRK workers," Schmidt said, "we would not have found them."
With this company policy explained, a blip appeared on the Amazon security radar earlier this year when monitoring software on a new sysadmin's Amazon laptop alerted security personnel to unusual behavior.
Amazon security experts took a closer look at the flagged 'U.S. remote worker' and determined that their laptop was being controlled remotely, causing the extra keystroke input lag. Schmidt emphasizes that good-quality security software was key to this investigation.
It turns out that the DPRK had access to this Amazon laptop located in Arizona. A woman found to be facilitating this fraud on behalf of North Korean imposter workers was sentenced to several years in prison earlier this year.
As well as red-flag network symptoms, the fumbling use of American idioms and of English grammatical articles continues to be a giveaway when conversing with such impostors.
Tip of the iceberg
The problem of North Koreans infiltrating U.S. corporations for profit, mischief, and more is undoubtedly a serious one. We've covered sizable FBI seizures of equipment recently, perhaps showing just the tip of the iceberg. More successful infiltrations by the DPRK, as well as hostile nations like Iran, Russia, and China, are likely to be ongoing.