Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
https://www.bbc.com/news/articles/c62vl05rz0ko
A firm considered one of the leading global voices in encryption has cancelled the announcement of its leadership election results after an official lost the encrypted key needed to unlock them.
The International Association for Cryptologic Research (IACR) uses an electronic voting system which needs three members, each with part of an encrypted key, to access the results.
In a statement, the scientific organisation said one of the trustees had lost their key in "an honest but unfortunate human mistake", making it impossible for them to decrypt - and uncover - the final results.
The IACR said it would rerun the election, adding "new safeguards" to stop similar mistakes happening again.
The IACR is a global non-profit organisation which was founded in 1982 with the aim to "further research" in cryptology, the science of secure communication.
It opened votes for three Director and four Officer positions on 17 October, with the process closing on 16 November.
The Association used an open source electronic voting system called Helios for the process.
The browser-based system uses cryptography to encrypt votes, or keep them secret.
Three members of the association were chosen as independent trustees to each be given a third of the encrypted material, which when shared together would give the verdict.
Whilst two of the trustees uploaded their share of the encrypted material online, a third never did.
The IACR said in a statement that the lack of results was due to one of the trustees "irretrievably" losing their private key, leaving it "technically impossible" for the firm to know the final verdict.
It said it was therefore left with no choice but to cancel the election.
The association added it was "deeply sorry" for the mistake, which it took "very seriously".
American cryptographer Bruce Schneier told the BBC that failures in cryptographic systems often lie in the fact that "to provide any actual security" they have to be "operated by humans".
"Whether it's forgetting keys, improperly sharing keys, or making some other mistake," he said, "cryptographic systems often fail for very human reasons".
Voting for the IACR positions has been renewed and will run until 20 December.
The association said that it had replaced the initial trustee who lost the encrypted information and will now adopt a "2-out-of-3" threshold mechanism for the management of private keys, with a clear written procedure for trustees to follow.
Researchers invited over 100 people to complete escape room challenges in small groups, observing their interactions and behaviours throughout the tasks.
The findings have been published in the journal Behavioral Sciences.
"Although this took place in a fun, social setting, the teams still needed to build trust, share ideas and plan together to complete the challenges," explained Dr Reece Bush-Evans, Senior Lecturer in Psychology at Bournemouth University who led the study. "These are exactly the skills needed for success in real-world teams. Our results showed that when one person believes they're superior to their teammates, it can damage team dynamics and lead to failure."
Dr Bush-Evans and his team identified two distinct forms of narcissism among participants: Narcissistic Admiration – where individuals are charming, confident, and drawn to the spotlight, and Narcissistic Rivalry - where people are combative, competitive and quick to dismiss others' ideas or take offence.
Before and after the challenge, all participants rated themselves and their teammates on traits including friendliness, confidence, trustworthiness and aggression. The researchers then examined how these perceptions influenced team cohesion, team conflict, and overall performance (i.e., did they escape the rooms).
Teams with higher levels of narcissistic rivalry showed significantly less unity and performed worse in the escape room.
"We noticed that competitive and rivalrous individuals were more likely to ignore or dismiss their teammate's ideas, hold back information, and find the experience more frustrating. This wrecked the team bond that was needed to get the job done," Dr Bush-Evans explained.
In contrast, narcissistic admiration didn't seem to help or harm performance, though those individuals were increasingly viewed as less hardworking and more arrogant by their teammates as the challenge progressed.
"Their charisma may have impressed their colleagues at first, but this wore thin when it wasn't backed up with useful contributions," said Dr Bush-Evans.
The researchers believe these insights are relevant not just for social settings but for modern workplaces – especially in face-to-face, online and hybrid teams.
"Confidence and charm can easily be mistaken for competence," Dr Bush-Evans concluded. "Our study shows that these traits can actually limit what a team achieves. The most successful teams weren't the loudest, but the most cooperative. Leaders should value good listeners just as much as outspoken voices."
Journal Reference: Bush-Evans, Reece D., Claire M. Hart, Sylwia Z. Cisek, Liam P. Satchell, and Constantine Sedikides. 2025. "Narcissism in Action: Perceptions, Team Dynamics, and Performance in Naturalistic Escape Room Settings" Behavioral Sciences 15, no. 11: 1461. https://doi.org/10.3390/bs15111461
https://linuxiac.com/mozilla-resolves-21-year-old-bug-adds-full-xdg-directory-support/
Firefox 147 adds support for the XDG Base Directory Specification, ending a 21-year wait and aligning the browser's Linux file storage with modern standards.
The upcoming Firefox 147 will introduce a long-requested change for Linux users by finally adopting the XDG Base Directory Specification, closing a bug that has been open for more than 21 years.
The update modernizes how the browser stores files on Linux systems and aligns its behavior with that of most desktop applications, which have been doing so for years. Here's what I'm talking about.
Until now, Firefox placed nearly all of its user files—settings, profiles, data, and cache—inside a single folder called ~/.mozilla in the user's home directory. This approach worked, but it also contributed to the familiar clutter many Linux users see when applications each create their own hidden folders.
At the same time, the XDG Base Directory Specification is a widely used standard that aims to organize those files cleanly. Instead of placing everything directly in a single directory, applications are encouraged to use three dedicated locations: one for configuration files, one for application data, and one for cache files. These are typically found under ~/.config, ~/.local/share, and ~/.cache.
Starting with Firefox 147, newly created profiles on Linux will follow this structure. Configuration files, long-term data, and temporary cache files will now be stored in their proper locations.
It's important to note that this doesn't affect existing users immediately: if a legacy ~/.mozilla folder already exists, Firefox keeps using it to avoid breaking profiles. But for anyone installing Firefox fresh or creating new profiles, the browser will behave like other modern Linux applications.
As I said in the beginning, the change also marks the end of one of the browser's longest-standing issues. Believe it or not, bug 259356 was first reported in 2003, and the request to support XDG directories has resurfaced repeatedly among Linux users and distributions over the years.
It is expected that this change will finally simplify file management, reduce home-folder clutter, and, most importantly, align the browser with the expectations of today's Linux environments.
Here's a "grassroots" initiative bringing manufacturing back to the USA from Asia, https://reshorenow.org/ It was started by Harry Moser, the third generation of his family involved in USA manufacturing--primarily the Singer Sewing Machine Company... at one time a huge New Jersey factory of 5 million square feet. His main tool is free-to-use software, the
Total Cost of Ownership Estimator
Most companies make sourcing decisions based solely on price, oftentimes resulting in a 20 to 30 percent miscalculation of actual offshoring costs. The Total Cost of Ownership (TCO) Estimator is a free online tool that helps companies account for all relevant factors — overhead, balance sheet, risks, corporate strategy and other external and internal business considerations — to determine the true total cost of ownership. Using this information, companies can better evaluate sourcing, identify alternatives and even make a case when selling against offshore competitors. See Impact of Using TCO Instead of Price for further explanation.
The message makes sense to this AC, don't worry about national politics, work the cost numbers in detail and let the numbers guide purchasing decisions. He reports that many, many purchasing managers never look beyond the simple price quote--of course that will be cheaper from off-shore. The reality is that when all the relevant factors are included, in many cases it's actually cheaper to buy locally.
For a somewhat independent assessment, the Association for Manufacturing Technology (AMT) summarizes the Reshoring Initiative year-end report from 2024 here, https://www.amtonline.org/article/reshoring-initiative-annual-report-287-000-jobs-announced with a Figure 2 caption, "Reshoring Initiative Library: The cumulative number of jobs brought back since 2010 is nearing two million (Figure 2) - about 40% of what we lost to offshoring."
Team Xbox, and Activision are making Zork I, Zork II, and Zork III available under the MIT License.
In collaboration with Jason Scott, the well-known digital archivist of Internet Archive fame, they have officially submitted upstream pull requests to the historical source repositories of Zork I, Zork II, and Zork III. Those pull requests add a clear MIT LICENSE and formally document the open-source grant.
Each repository includes:
This release focuses purely on the code itself. It does not include commercial packaging or marketing materials, and it does not grant rights to any trademarks or brands, which remain with their respective owners. All assets outside the scope of these titles' source code are intentionally excluded to preserve historical accuracy.
Some Dell and HP laptop owners have been befuddled by their machines' inability to play HEVC/H.265 content in web browsers, despite their machines' processors having integrated decoding support.
Laptops with sixth-generation Intel Core and later processors have built-in hardware support for HEVC decoding and encoding. AMD has made laptop chips supporting the codec since 2015. However, both Dell and HP have disabled this feature on some of their popular business notebooks.
HP discloses this in the data sheets for its affected laptops, which include the HP ProBook 460 G11 [PDF], ProBook 465 G11 [PDF], and EliteBook 665 G11 [PDF].
"Hardware acceleration for CODEC H.265/HEVC (High Efficiency Video Coding) is disabled on this platform," the note reads.
Despite this notice, it can still be jarring to see a modern laptop's web browser eternally load videos that play easily in media players. As a member of a group for system administrators on Reddit recalled recently:
People with older hardware were not experiencing problems, whereas those with newer machines needed to either have the HEVC codec from the Microsoft Store removed entirely from [Microsoft Media Foundation] or have hardware acceleration disabled in their web browser/web app, which causes a number of other problems / feature [degradations]. For example, no background blurring in conference programs, significantly degraded system performance ...
Owners of some Dell laptops are also experiencing this, as the OEM has also disabled HEVC hardware decoding in some of its laptops. This information, however, isn't that easy to find. For example, the product page for the Dell 16 Plus 2-in-1, which has HEVC hardware decoding disabled, makes no mention of HEVC. There's also no mention of HEVC in the "Notes, cautions, and warnings" or specifications sections of the laptop's online owner's manual. The most easily identifiable information comes from a general support page that explains that Dell laptops only support HEVC content streaming on computer configurations with:
- An optional discrete graphics card
- An optional add-on video graphics card
- An integrated 4K display panel
- Dolby Vision
- A CyberLink Blu-ray playerWhen reached for comment, representatives from HP and Dell didn't explain why the companies disabled HEVC hardware decoding on their laptops' processors.
A statement from an HP spokesperson said:
In 2024, HP disabled the HEVC (H.265) codec hardware on select devices, including the 600 Series G11, 400 Series G11, and 200 Series G9 products. Customers requiring the ability to encode or decode HEVC content on one of the impacted models can utilize licensed third-party software solutions that include HEVC support. Check with your preferred video player for HEVC software support.
Dell's media relations team shared a similar statement:
HEVC video playback is available on Dell's premium systems and in select standard models equipped with hardware or software, such as integrated 4K displays, discrete graphics cards, Dolby Vision, or Cyberlink BluRay software. On other standard and base systems, HEVC playback is not included, but users can access HEVC content by purchasing an affordable third-party app from the Microsoft Store. For the best experience with high-resolution content, customers are encouraged to select systems designed for 4K or high-performance needs.
While HP's and Dell's reps didn't explain the companies' motives, it's possible that the OEMs are looking to minimize costs, since OEMs may pay some or all of the licensing fees associated with HEVC hardware decoding and encoding support, as well as some or all of the royalties per the number of devices that they sell with HEVC hardware decoding and encoding support [PDF]. Chipmakers may take some of this burden off of OEMs, but companies don't typically publicly disclose these terms.
The OEMs disabling codec hardware also comes as associated costs for the international video compression standard are set to increase in January, as licensing administrator Access Advance announced in July. Per a breakdown from patent pool administration VIA Licensing Alliance, royalty rates for HEVC for over 100,001 units are increasing from $0.20 each to $0.24 each in the United States. To put that into perspective, in Q3 2025, HP sold 15,002,000 laptops and desktops, and Dell sold 10,166,000 laptops and desktops, per Gartner.
Last year, NAS company Synology announced that it was ending support for HEVC, as well as H.264/AVC and VCI, transcoding on its DiskStation Manager and BeeStation OS platforms, saying that "support for video codecs is widespread on end devices, such as smartphones, tablets, computers, and smart TVs."
"This update reduces unnecessary resource usage on the server and significantly improves media processing efficiency. The optimization is particularly effective in high-user environments compared to traditional server-side processing," the announcement said.
Despite the growing costs and complications with HEVC licenses and workarounds, breaking features that have been widely available for years will likely lead to confusion and frustration.
"This is pretty ridiculous, given these systems are $800+ a machine, are part of a 'Pro' line (jabs at branding names are warranted – HEVC is used professionally), and more applications these days outside of Netflix and streaming TV are getting around to adopting HEVC," a Redditor wrote.
Is Matrix Multiplication Ugly?
A few weeks ago I was minding my own business, peacefully reading a well-written and informative article about artificial intelligence, when I was ambushed by a passage in the article that aroused my pique. That's one of the pitfalls of knowing too much about a topic a journalist is discussing; journalists often make mistakes that most readers wouldn't notice but that raise the hackles or at least the blood pressure of those in the know.
The article in question appeared in The New Yorker. The author, Stephen Witt, was writing about the way that your typical Large Language Model, starting from a blank slate, or rather a slate full of random scribbles, is able to learn about the world, or rather the virtual world called the internet. Throughout the training process, billions of numbers called weights get repeatedly updated so as to steadily improve the model's performance. Picture a tiny chip with electrons racing around in etched channels, and slowly zoom out: there are many such chips in each server node and many such nodes in each rack, with racks organized in rows, many rows per hall, many halls per building, many buildings per campus. It's a sort of computer-age version of Borges' Library of Babel. And the weight-update process that all these countless circuits are carrying out depends heavily on an operation known as matrix multiplication.
Witt explained this clearly and accurately, right up to the point where his essay took a very odd turn.
Here's what Witt went on to say about matrix multiplication:
"'Beauty is the first test: there is no permanent place in the world for ugly mathematics,' the mathematician G. H. Hardy wrote, in 1940. But matrix multiplication, to which our civilization is now devoting so many of its marginal resources, has all the elegance of a man hammering a nail into a board. It is possessed of neither beauty nor symmetry: in fact, in matrix multiplication, a times b is not the same as b times a."
The last sentence struck me as a bizarre non sequitur, somewhat akin to saying "Number addition has neither beauty nor symmetry, because when you write two numbers backwards, their new sum isn't just their original sum written backwards; for instance, 17 plus 34 is 51, but 71 plus 43 isn't 15."
The next day I sent the following letter to the magazine:
"I appreciate Stephen Witt shining a spotlight on matrices, which deserve more attention today than ever before: they play important roles in ecology, economics, physics, and now artificial intelligence ("Information Overload", November 3). But Witt errs in bringing Hardy's famous quote ("there is no permanent place in the world for ugly mathematics") into his story. Matrix algebra is the language of symmetry and transformation, and the fact that a followed by b differs from b followed by a is no surprise; to expect the two transformations to coincide is to seek symmetry in the wrong place — like judging a dog's beauty by whether its tail resembles its head. With its two-thousand-year-old roots in China, matrix algebra has secured a permanent place in mathematics, and it passes the beauty test with flying colors. In fact, matrices are commonplace in number theory, the branch of pure mathematics Hardy loved most."
[...] I'm guessing that part of Witt's confusion arises from the fact that actually multiplying matrices of numbers to get a matrix of bigger numbers can be very tedious, and tedium is psychologically adjacent to distaste and a perception of ugliness. But the tedium of matrix multiplication is tied up with its symmetry (whose existence Witt mistakenly denies). When you multiply two n-by-n matrices A and B in the straightforward way, you have to compute n2 numbers in the same unvarying fashion, and each of those n2 numbers is the sum of n terms, and each of those n terms is the product of an element of A and an element of B in a simple way. It's only human to get bored and inattentive and then make mistakes because the process is so repetitive. We tend to think of symmetry and beauty as synonyms, but sometimes excessive symmetry breeds ennui; repetition in excess can be repellent. Picture the Library of Babel and the existential dread the image summons.
G. H. Hardy, whose famous remark Witt quotes, was in the business of proving theorems, and he favored conceptual proofs over calculational ones. If you showed him a proof of a theorem in which the linchpin of your argument was a 5-page verification that a certain matrix product had a particular value, he'd say you didn't really understand your own theorem; he'd assert that you should find a more conceptual argument and then consign your brute-force proof to the trash. But Hardy's aversion to brute force was specific to the domain of mathematical proof, which is far removed from math that calculates optimal pricing for annuities or computes the wind-shear on an airplane wing or fine-tunes the weights used by an AI. Furthermore, Hardy's objection to your proof would focus on the length of the calculation, and not on whether the calculation involved matrices. If you showed him a proof that used 5 turgid pages of pre-19th-century calculation that never mentioned matrices once, he'd still say "Your proof is a piece of temporary mathematics; it convinces the reader that your theorem is true without truly explaining why the theorem is true."
If you forced me at gunpoint to multiply two 5-by-5 matrices together, I'd be extremely unhappy, and not just because you were threatening my life; the task would be inherently unpleasant. But the same would be true if you asked me to add together a hundred random two-digit numbers. It's not that matrix-multiplication or number-addition is ugly; it's that such repetitive tasks are the diametrical opposite of the kind of conceptual thinking that Hardy loved and I love too. Any kind of mathematical content can be made stultifying when it's stripped of its meaning and reduced to mindless toil. But that casts no shade on the underlying concepts. When we outsource number-addition or matrix-multiplication to a computer, we rightfully delegate the soul-crushing part of our labor to circuitry that has no soul. If we could peer into the innards of the circuits doing all those matrix multiplications, we would indeed see a nightmarish, Borgesian landscape, with billions of nails being hammered into billions of boards, over and over again. But please don't confuse that labor with mathematics.
A simple proposal on a 1982 electronic bulletin board helped sarcasm flourish online:
On September 19, 1982, Carnegie Mellon University computer science research assistant professor Scott Fahlman posted a message to the university's bulletin board software that would later come to shape how people communicate online. His proposal: use :-) and :-( as markers to distinguish jokes from serious comments. While Fahlman describes himself as "the inventor...or at least one of the inventors" of what would later be called the smiley face emoticon, the full story reveals something more interesting than a lone genius moment.
The whole episode started three days earlier when computer scientist Neil Swartz posed a physics problem to colleagues on Carnegie Mellon's "bboard," which was an early online message board. The discussion thread had been exploring what happens to objects in a free-falling elevator, and Swartz presented a specific scenario involving a lit candle and a drop of mercury.
That evening, computer scientist Howard Gayle responded with a facetious message titled "WARNING!" He claimed that an elevator had been "contaminated with mercury" and suffered "some slight fire damage" due to a physics experiment. Despite clarifying posts noting the warning was a joke, some people took it seriously.
The incident sparked immediate discussion about how to prevent such misunderstandings and the "flame wars" (heated arguments) that could result from misread intent.
"This problem caused some of us to suggest (only half seriously) that maybe it would be a good idea to explicitly mark posts that were not to be taken seriously," Fahlman later wrote in a retrospective post published on his CMU website. "After all, when using text-based online communication, we lack the body language or tone-of-voice cues that convey this information when we talk in person or on the phone."
On September 17, 1982, the next day after the misunderstanding on the CMU bboard, Swartz made the first concrete proposal: "Maybe we should adopt a convention of putting a star (*) in the subject field of any notice which is to be taken as a joke."
Within hours, multiple Carnegie Mellon computer scientists weighed in with alternative proposals. Joseph Ginder suggested using % instead of *. Anthony Stentz proposed a nuanced system: "How about using * for good jokes and % for bad jokes?" Keith Wright championed the ampersand (&), arguing it "looks funny" and "sounds funny." Leonard Hamey suggested {#} because "it looks like two lips with teeth showing between them."
Meanwhile, some Carnegie Mellon users were already using their own solution. A group on the Gandalf VAX system later revealed they had been using \__/ as "universally known as a smile" to mark jokes. But it apparently didn't catch on beyond that local system.
Two days after Swartz's initial proposal, Fahlman entered the discussion with his now-famous post: "I propose that the following character sequence for joke markers: :-) Read it sideways." He added that serious messages could use :-(, noting, "Maybe we should mark things that are NOT jokes, given current trends."
What made Fahlman's proposal work wasn't that he invented the concept of joke markers—Swartz had done that. It wasn't that he invented smile symbols at Carnegie Mellon, since the \__/ already existed. Rather, Fahlman synthesized the best elements from the ongoing discussion: the simplicity of single-character proposals, the visual clarity of face-like symbols, the sideways-reading principle hinted at by Hamey's {#}, and a complete binary system that covered both humor :-) and seriousness :-(.
[...] The emoticons spread quickly across ARPAnet, the precursor to the modern Internet, reaching other universities and research labs. By November 10, 1982—less than two months later—Carnegie Mellon researcher James Morris began introducing the smiley emoticon concept to colleagues at Xerox PARC, complete with a growing list of variations. What started as an internal Carnegie Mellon convention over time became a standard feature of online communication, often simplified without the hyphen nose to :) or :(, among many other variations.
[...] While Fahlman's text-based emoticons spread across Western online culture that remained text-character-based for a long time, Japanese mobile phone users in the late 1990s developed a parallel system: emoji. For years, Shigetaka Kurita's 1999 set for NTT DoCoMo was widely cited as the original. However, recent discoveries have revealed earlier origins. SoftBank released a picture-based character set on mobile phones in 1997, and the Sharp PA-8500 personal organizer featured selectable icon characters as early as 1988.
Unlike emoticons that required reading sideways, emoji were small pictographic images that could convey emotion, objects, and ideas with more detail. When Unicode standardized emoji in 2010 and Apple added an emoji keyboard to iOS in 2011, the format exploded globally. Today, emoji have largely replaced emoticons in casual communication, though Fahlman's sideways faces still appear regularly in text messages and social media posts.
As Fahlman himself notes on his website, he may not have been "the first person ever to type these three letters in sequence." Others, including teletype operators and private correspondents, may have used similar symbols before 1982, perhaps even as far back as 1648. Author Vladimir Nabokov suggested before 1982 that "there should exist a special typographical sign for a smile." And the original IBM PC included a dedicated smiley character as early as 1981 (perhaps that should be considered the first emoji).
What made Fahlman's contribution significant wasn't absolute originality but rather proposing the right solution at the right time in the right context. From there, the smiley could spread across the emerging global computer network, and no one would ever misunderstand a joke online again. :-)
https://edition.cnn.com/2025/11/12/science/bees-visual-stimulus-study-scli-intl
Bumblebees can process the duration of flashes of light and use the information to decide where to look for food, a new study has found.
This is the first evidence of such an ability in insects, according to doctoral student Alex Davidson and his supervisor Elisabetta Versace, a senior lecturer in psychology at Queen Mary University of London. The discovery could help settle a long-standing debate among scientists about whether insects are able to process complex patterns, Versace told CNN.
"In the past, it was thought that they were just very basic reflex machines that don't have any flexibility," she said.
To reach its finding, the team set up a maze through which individual bees would travel when they left their nest to forage for food.
Researchers presented the insects with two visual cues: one circle that would light up with a short flash and another with a long flash of light.
Approaching these respective circles, the bees would find a sweet food that they like at one, and a bitter food, which they don't, at the other.
The circles were in different positions at each room in the maze, but the bees still learned over varying amounts of time to fly toward the short flash of light associated with the sweet food.
Davidson and Versace then tested the bees' behavior when there was no food present, to rule out the possibility that the bees could see or smell the sugary food.
They found that the bees were able to tell the circles apart based on the duration of the flashes of light, rather than other cues.
"And so in this way, we show that the bee is actually processing the time difference between them to guide its foraging choice," Davidson said.
"We were happy to see that, in fact, the bees can process stimuli that, during the course of evolution, they have never seen before," Versace said, referring to the flashes of light.
"They're able to use novel stimulus they have never seen before to solve tasks in a flexible way," Versace added. "I think this is really remarkable."
The researchers say bumblebees are one of only a small number of animals, including humans and other vertebrates such as macaques and pigeons, that have been found to be able to differentiate between short and long flashes, in this case between 0.5 and five seconds.
For example, this ability helps humans to understand Morse code, in which a short flash is used to communicate the letter "E" and long flash the letter "T."
It is not clear how bees are able to judge time duration, but the team plans to investigate the neural mechanisms that allow the insects to do so.
The scientists are also planning to conduct similar research with bees that are able to move freely in colonies, rather than individually, and investigate the cognitive differences that allow some bees to learn to assess time duration faster than others.
Davidson hopes that the results will help people to appreciate that bees and other insects are not simple "machines essentially driven by instinct," but rather "complex animals with inner lives that have unique experiences."
"In fact, they do have complex cognition, this flexibility in learning and memory and behavior," he added.
This may help people to perceive bees as more than unthinking pollinators, Versace said.
"They are not just machines for our purposes," she said.
The findings also raise important ideas about our own understanding of time, according to Davidson.
"It's such a fundamental part of our lives and the lives of all animals," but we still don't really understand what time is and how we deal with it in our minds, he said.
"I think this study is really interesting because it shows that it's not just a human question," Davidson said.
The researchers reported their findings Wednesday in the journal Biology Letters.
The study shows "that bees possess a sophisticated sense of time," according to Cintia Akemi Oi, a postdoctoral research fellow at the Centre for Biodiversity and Environment Research at University College London. Oi was not involved in the new research.
"This finding makes perfect sense, as bees must carefully manage their time while foraging to maximize rewards and minimize the costs of returning to the nest," she said.
"Such studies not only help to understand insect cognition, but also shed light on the shared and unique features of their neuronal functions, offering valuable insights to the field."
Jolyon Troscianko, a visual ecologist at the University of Exeter in England, who was not involved in the study, told CNN that the results show that the bees "must be using learning that can measure the length of time."
The method shows that bees can learn using information from outside their usual ecological context, "which I find fascinating as it demonstrates how this type of general learning can be achieved with brains many orders of magnitude smaller than the birds and rodents that prior work has focused on," he said.
"Bigger brains are therefore not always necessary to show really impressive cognitive abilities."
Students are not just undermining their ability to learn, but to someday lead:
I have been in and out of college classrooms for the last 10 years. I have worked as an adjunct instructor at a community college, I have taught as a graduate instructor at a major research institution, and I am now an assistant professor of history at a small teaching-first university.
Since the spring semester of 2023, it has been apparent that an ever-increasing number of students are submitting AI-generated work. I am no stranger to students trying to cut corners by copying and pasting from Wikipedia, but the introduction of generative AI has enabled them to cheat in startling new ways, and many students have fully embraced it.
Plagiarism detectors have and do work well enough for what I might call "classical cheating," but they are notoriously bad at detecting AI-generated work. Even a program like Grammarly, which is ostensibly intended only to clean up one's own work, will set off alarms.
So, I set out this semester to look more carefully for AI work. Some of it is quite easy to notice. The essays produced by ChatGPT, for instance, are soulless, boring abominations. Words, phrases and punctuation rarely used by the average college student — or anyone for that matter (em dash included) — are pervasive.
But there is a difference between recognizing AI use and proving its use. So I tried an experiment.
A colleague in the department introduced me to the Trojan horse, a trick capable of both conquering cities and exposing the fraud of generative AI users. This method is now increasingly known (there's even an episode of "The Simpsons" about it) and likely has already run its course as a plausible method for saving oneself from reading and grading AI slop. To be brief, I inserted hidden text into an assignment's directions that the students couldn't see but that ChatGPT can.
I assigned Douglas Egerton's book "Gabriel's Rebellion," which tells the story of the thwarted rebellion of enslaved people in 1800, and asked the students to describe some of the author's main points. Nothing too in-depth, as it's a freshman-level survey course. They were asked to use either the suggestions I provided or to write about whatever elements of Egerton's argument they found most important.
I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.
The percentage was surprising and deflating. I explained my disappointment to the students, pointing out that they cheated on a paper about a rebellion of the enslaved — people who sacrificed their lives in pursuit of freedom, including the freedom to learn to read and write. In fact, Virginia made it even harder for them to do so after the rebellion was put down.
I'm not sure all of them grasped my point. Some certainly did. I received several emails and spoke with a few students who came to my office and were genuinely apologetic. I had a few that tried to fight me on the accusations, too, assuming I flagged them as AI for "well written sentences." But the Trojan horse did not lie.
There's a lot of talk about how educators have to train students to use AI as a tool and help them integrate it into their work. Recently, the American Historical Association even made recommendations on how we might approach this in the classroom. The AHA asserts that "banning generative AI is not a long-term solution; cultivating AI literacy is." One of their suggestions is to assign students an AI-generated essay and have them assess what it got right, got wrong or if it even understood the text in question.
But I don't know if I agree with the AHA. Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper "from a Marxist perspective." Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn't.
I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, "I thought it sounded smart."
[...] I have no doubt that many students are actively making the decision to cheat. But I also do not doubt that, because of inconsistent policies and AI euphoria, some were telling the truth when they told me they didn't realize they were cheating. Regardless of their awareness or lack thereof, each one of my students made the decision to skip one of the many challenges of earning a degree — assuming they are only here to buy it (a very different cultural conversation we need to have). They also chose to actively avoid learning because it's boring and hard.
Now, I'm not equipped to make deep sociological or philosophical diagnoses on these choices. But this is a problem. How do we solve it? Is it a return to analog? Do we use paper and pen and class time for everything? Am I a professor or an academic policeman?
The answer is the former. But students, society and administrations that are unwilling to take a hard stance (unless it's the promotion of AI) are crushing higher ed. A college degree is not just about a job afterward — you have to be able to think, solve problems and apply those solutions, regardless of the field. How do we teach that without institutional support? How do we teach that when a student doesn't want to and AI enables it?
[...] But a handful said something I found quite sad: "I just wanted to write the best essay I could." Those students in question, who at least tried to provide some of their own thoughts before mixing them with the generated result, had already written the best essay they could. And I guess that's why I hate AI in the classroom as much as I do.
Students are afraid to fail, and AI presents itself as a savior. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.
[...] We live in an era where personal expression is saturated by digital filters, hivemind thinking is promoted through endless algorithms and academic freedom itself is under assault by the weakest minds among us. AI has only made this worse. It is a crisis.
I can offer no solutions other than to approach it and teach about it that way. I'm sure angry detractors will say that is antiquated, and maybe it is.
But I am a historian, so I will close on a historian's note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognized that oppression is best maintained by keeping the masses illiterate, and those oppressed recognized that literacy is liberation. To my students and to anyone who might listen, I say: Don't surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.
https://phys.org/news/2025-11-simple-destroy-pfas-carbon.html
Researchers at Clarkson University have discovered a new way to destroy "forever chemicals," known as PFAS, using only stainless steel ball milling equipment. The method does not need added chemicals, heat, or solvents.
PFAS, or per- and polyfluoroalkyl substances, are a group of man-made chemicals used in products like nonstick pans, firefighting foam and water-resistant clothing. They are called "forever chemicals" because they do not break down easily in the environment and can build up in people, animals and water supplies.
The study, published in Environmental Science & Technology Letters, demonstrates that PFAS adsorbed on granular activated carbon can be completely destroyed by milling the material in a stainless steel ball mill.
"Granular activated carbon is widely used to remove PFAS from water," said Yang Yang, an associate professor of civil and environmental engineering at Clarkson University. "But dealing with used carbon filled with PFAS has become a big problem. Our new process offers a clean and simple way to get rid of these chemicals at room temperature."
Graduate students Jinyuan Zhu, Xiaotian Xu, and Nanyang Yang worked on the project with Yang. The team found that the collision of steel balls during the milling process generates triboelectrons, which facilitate the breakdown of PFAS through reactions with carbon.
The method worked on many types of PFAS found in both lab-made and real-world carbon samples. After treatment, no PFAS release was detected when the samples were tested under conditions similar to a landfill. This suggests the treated carbon may now be safe for disposal, which has been a long-standing challenge.
More information: Jinyuan Zhu et al, Additive-Free Ball Milling in Stainless Steel Mills Enables Destruction of PFAS on Granular Activated Carbon, Environmental Science & Technology Letters (2025). DOI: 10.1021/acs.estlett.5c00976
https://boingboing.net/2025/11/14/rings-new-feature-turns-your-doorbell-into-a-biometric-spy.html
Good news, everyone! According to the Electronic Frontier Foundation's EFFector newsletter, Amazon's already invasive Ring security cameras and doorbells may soon be monitoring you so closely that their surveillance will feel inescapable. The EFF reports that Amazon plans not only to photograph and record us on video without our permission but will also soon collect biometric data from us.
The feature, called "Familiar Faces," aims to identify specific people who come into view of the camera. When turned on, the feature will scan the faces of all people who approach the camera to try and find a match with a list of pre-saved faces. This will include many people who have not consented to a face scan, including friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even some people passing on the sidewalk.
Given that many computers can be unlocked using an individual's faceprint, it's easy to see why this concerns privacy and cybersecurity advocates. Should Amazon proceed with its Familiar Faces plans, we would be one data leak away from chaos. Worth noting: Amazon readily shares information collected from Ring hardware with law enforcement agencies and maintains a close relationship with a company that collaborates with ICE. These connections should give users pause.
It's somewhat encouraging that this collection of biometric data violates data privacy laws in multiple states and localities. Unfortunately, a well-funded company can easily circumvent these protections: Amazon can roll out Familiar Faces where it's legally permitted while disabling it in areas where it's prohibited. We'll be monitoring this development closely and will provide updates as soon as possible.
Phone thieves in London are increasingly selective, often returning Android phones to victims and keeping only iPhones, the newsletter London Centric reports.
In January, someone named Sam was walking past a Royal Mail depot in south London when eight men blocked his way, robbed him of his phone, camera, and hat, and then returned his Android after seeing it was not an iPhone. The thief bluntly told him, "Don't want no Samsung," and ran off, Sam told London Centric.
Quite a few Android users across the city have experienced the same thing. Some have had their phones taken only for thieves to discard them moments later, or hand them right back after checking the brand.
Experts say that the probable reason for this trend is the higher resale value of iPhones globally. An advisor at cybersecurity firm ESET told London Centric that thieves chase that Apple-driven profit, as Android often has a lower value on the secondhand market, and some criminals think it's not worth getting charged over something less valuable.
Reports over the last decade show this preference is long-standing. Previous data from the UK government's Home Office shows iPhones regularly top lists of models most likely to be stolen, years before criminal groups began focusing on shipping Apple devices abroad.
For Android owners, the current pattern may be a cold comfort: While your phone might be less desirable to some, it could save you a headache down the line.
https://www.theregister.com/2025/11/21/magician_password_hand_rfid
Storing credentials safely and securely is the real trick
It's important to have your login in hand, literally. Zi Teng Wang, a magician who implanted an RFID chip in his appendage, has admitted losing access to it because he forgot the password.
It seemed like such a neat idea – get an RFID chip implanted in your hand and then do magical stuff with it. Except it didn't work out that way. "It turns out," said Zi, "that pressing someone else's phone to my hand repeatedly, trying to figure out where their phone's RFID reader is, really doesn't come off super mysterious and magical and amazing."
Then there are the people who don't even have their phone's RFID reader enabled. Using his own phone would, in Zi's words, lack a certain "oomph."
Oh well, how about making the chip spit out a Bitcoin address? "That literally never came up either."
In the end, Zi rewrote the chip to link to a meme, "and if you ever meet me in person you can scan my chip and see the meme."
It was all suitably amusing until the Imgur link Zi was using went down. Not everything on the World Wide Web is forever, and there is no guarantee that a given link will work indefinitely. Indeed, access to Imgur from the United Kingdom was abruptly cut off on September 30 in response to the country's age verification rules.
Still, the link not working isn't the end of the world. Zi could just reprogram the chip again, right?
Wrong. "When I went to rewrite the chip, I was horrified to realize I forgot the password that I had locked it with."
The link eventually started working again, but if and when it stops, Zi's party piece will be a little less entertaining.
He said: "Techie friends I've consulted with have determined that it's too dumb and simple to hack, the only way to crack it is to strap on an RFID reader for days to weeks, brute forcing every possible combination."
Or perhaps some surgery to remove the offending hardware.
Zi's idea is not innovative – individuals such as Professor Kevin Warwick and his cyborg ambitions spring to mind – but forgetting the password certainly highlights one of the risks of inserting hardware under the skin.
Zi goes by the stage name "Zi the Mentalist" and, in addition to performing close-up magic, also refers to himself as "an accomplished scientist with a focus in biology."
"I'm living my own cyberpunk dystopia life right now, locked out of technology inside my body, and it's my own damn fault," said Zi. "And I can honestly say that I forgot the password to my own hand."
https://phys.org/news/2025-11-full-earth-simulation-tool-climate.html
Climate change is responsible for more extreme hurricanes, more destructive wildfires, severe droughts, and increased human disease, among other harmful outcomes. Experts warn that if carbon emissions are not significantly reduced within a few decades, the damage to Earth's ecosystem will be irreversible.
Among the most effective tools scientists have developed to understand climate change are digital simulations of Earth. These simulations are produced by developing specific algorithms to run on the world's most powerful supercomputers. But simulating how human activity influences the climate has been an extraordinarily difficult challenge.
A mind-boggling number of variables need to be taken into consideration—such as the cycles of water, energy, and carbon, how those factors relate to each other, and how diverse physical, biological, and chemical processes interact over space and time. For these reasons, previous state-of-the-art simulations have not been able to achieve what is referred to as a "Full Earth System" simulation.
The Gordon Bell Climate Prize-winning team reached a landmark this year by being the first team ever to develop a Full Earth Simulation at 1 km (extremely high) Resolution. In their introduction, they explain, "We present the first-ever global simulation of the full Earth system at 1.25 km grid spacing, achieving highest time compression with an unseen number of degrees of freedom.
"Our model captures the flow of energy, water, and carbon through key components of the Earth system: atmosphere, ocean, and land. To achieve this landmark simulation, the team harnessed the power of 8192 GPUs on Alps and 4096 GPUs on JUPITER, two of the world's largest GH200 superchip installations."
The innovations the team employed to make the Full Earth Simulation possible include: exploiting functional parallelism by efficiently mapping components to specialized heterogeneous systems and simplifying the implementation and optimization of an important component by separating its implementation in Fortran from the optimization details of the target architecture.
In the conclusion to their paper they write, "This has enormous and enduring potential to provide full global Earth system information on local scales about the implications of future warming for both people and eco-systems, information that otherwise would not exist."
More information: Daniel Klocke et al, Computing the Full Earth System at 1km Resolution [OPEN], Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2025). DOI: 10.1145/3712285.3771789