Beware of OpenAI's 'Grantwashing' on AI Harms:
This month, OpenAI announced "up to $2 million" in funding for research studies on AI safety and well-being. On its surface, this may seem generous, but following in the footsteps of other tech giants facing scrutiny over their products' mental health impacts, it's nothing more than grantwashing.
This industry practice commits a pittance to research that is doomed to be ineffective due to information and resources that companies hold back. When grantwashing works, it compromises the search for answers. And that's an insult to anyone whose loved one's death involved chatbots.
OpenAI's pledge came a week after the company's lawyers argued that it isn't to blame in the death of a California teenager whom ChatGPT encouraged to commit suicide. In its attempt to disclaim responsibility in court, the company even requested a list of invitees to the teen's memorial and video footage of the service and the people there. In the last year, OpenAI and other generative AI companies have been accused of causing numerous deaths and psychotic breaks by encouraging people toward suicide, feeding delusions, and giving them risky instructions.
As scientists who study developmental psychology and AI, we agree that society urgently needs better science on AI and mental health. Like so many other companies accused of causing harm, OpenAI has recruited a group of genuinely credible scientists to give it closed-door advice on the issue. But the funding announcement reveals how small a fig leaf the company thinks will persuade a credulous public.
Look at the size of the grants. High-quality public health research on mental health harms requires a sequence of studies, large sample sizes, access to clinical patients, and an ethics safety net that supports people at risk. The median research project grant from the National Institute of Mental Health in 2024 was $642,918. In contrast, OpenAI is offering a measly $5,000 to $100,000 to researchers studying AI and mental health, one sixth of a typical NIMH grant at best.
Despite the good ideas OpenAI suggests, the company is holding back the resource that would contribute most to science on those questions: records about its systems and how people use its products. OpenAI's researchers have purportedly developed ways to identify users who potentially face mental health distress. A well-designed data access program would accelerate the search for answers while preserving privacy and protecting vulnerable users. European regulators are still deciding whether OpenAI will face data access requirements under the Digital Services Act, but OpenAI doesn't have to wait for Europe.
We have seen this playbook before from other companies. In 2019, Meta announced a series of $50,000 grants to six scientists studying Instagram, safety, and well-being. Even as the company touted its commitment to science on user well-being, Meta's leaders were pressuring internal researchers to "amend their research to limit Meta's potential liability," according to a recent ruling in the D.C. Superior Court.
Whether or not OpenAI's leaders intend to muddy the waters of science, grantwashing hinders technology safety, as one of us recently argued in Science. It adds uncertainty and debate in areas where companies want to avoid liability, and that uncertainty gives the appearance of science. These underfunded studies inevitably produce inconclusive results, forcing other researchers to do more work to clean up the resulting misconceptions.
[...] Two decades of Big Tech funding for safety science has taught us that the grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self-regulation from a powerful industry and backpedal on calls for change. And journalists quote the corporate lobbyist and move on until the next round of deaths creates another news cycle.
The problem is that we do desperately need better, faster science on technology safety. Companies are pushing AI products out to hundreds of millions of people with limited safety guardrails, faster than safety science can keep pace. One idea, proposed by Dr. Alondra Nelson, borrows from the Human Genome Project. In 1990, the project's leadership allocated 3-5% of its annual research budget to independent "ethical, legal, and social inquiry" about genomics. The result was a scientific endeavor that kept on top of emerging risks from genetics, at least at moments when projects had the freedom to challenge the genomics establishment.
[...] We can't say whether specific deaths were caused by ChatGPT or whether generative AI will cause a new wave of mental health crises. The science isn't there yet. The legal cases are ongoing. But we can say that OpenAI's grantwashing is the perfect corporate action to make sure we don't find the answers for years.
The Register reports that UNIX V4, the first version with its kernel written in C, has been recovered, restored, and run.
The source code and binaries were recovered from a 1970s-vintage nine-track tape and posted to the Internet Archive, where they can be downloaded.
It's very small: it contains around 55,000 lines of code, of which about 25,000 lines are in C, with under 1,000 lines of comments. But then, the late Dennis M. Ritchie and co-creator Ken Thompson were very definitely Real Programmers, and as is recorded in ancient wisdom: "Real Programmers don't need comments – the code is obvious."
For those who don't already know:
UNIX started out as a quick hack by two geniuses in their spare time, so that they could use a spare computer – an extremely rare thing in the 1960s – to run a simulation game one of them had written, which flew the player around a 2D model of the Solar System. It was called Space Travel.
https://therecord.media/south-korea-facial-recognition-phones
South Korea will begin requiring people to submit to facial recognition when signing up for a new mobile phone number in a bid to fight scams, the Ministry of Science and ICT announced Friday.
The effort is meant to block people from illegally registering devices used for identity theft.
The plan reportedly applies to the country's three major mobile carriers and mobile virtual network operators. The new policy takes effect on March 23 after a pilot that will begin this week.
"By comparing the photo on an identification card with the holder's actual face on a real-time basis, we can fully prevent the activation of phones registered under a false name using stolen or fabricated IDs," the ministry reportedly said in a press release.
In August, South Korean officials unveiled a plan to combat voice phishing scams; harsher penalties for mobile carriers that do not act sufficiently to prevent the scams were reportedly a central feature of that plan.
South Korea has been plagued by voice phishing scams, with 21,588 cases reported as of November, the ministry said.
In April, South Korea's SK Telecom was hacked and SIM card data belonging to nearly 27 million subscribers was stolen.
Privacy regulators determined the telecom "did not even implement basic access control," allowing hackers to take authentication data and subscriber information on a mass basis.
https://linuxiac.com/phoenix-emerges-as-a-modern-x-server-written-from-scratch-in-zig/
Phoenix is a new X server written from scratch in Zig, aiming to modernize X11 without relying on Xorg code.
Although Wayland has largely replaced Xorg, and most major Linux distributions and desktop environments have either already dropped support for the aging display server or are in the process of doing so, efforts to extend Xorg's life or replace it with similar alternatives continue. Recent examples include projects such as XLibre Xserver and Wayback. And now, a new name is joining this group: Phoenix.
It is a new X server project that takes a fundamentally different approach to X11. Written entirely from scratch in the Zig programming language, it is not yet another fork of the Xorg codebase and does not reuse its legacy code. Instead, according to its developers, Phoenix aims to show that the X11 protocol itself is not inherently obsolete and can be implemented in a simpler, safer, and more modern way.
Phoenix is designed for real desktop and professional use, not for full protocol coverage. It supports only the X11 features that modern applications need, including older software like GTK2-based programs. By omitting rarely used or outdated parts, Phoenix keeps things simpler while still supporting many applications.
Right now, Phoenix is still experimental and not ready for daily use. It can run simple, hardware-accelerated apps using GLX, EGL, or Vulkan, but only in a nested setup under another X server. This will stay the case until the project is ready for more demanding use.
On the security side, one of the areas where Xorg draws the most criticism, Phoenix isolates applications by default, and access to sensitive capabilities such as screen recording or global hotkeys is mediated through explicit permission mechanisms. Importantly, this is done without breaking existing clients: unauthorized access attempts return dummy data rather than protocol errors.
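As a conceptual sketch only (Phoenix itself is written in Zig; this is illustrative Python with invented names), the "dummy data instead of errors" idea looks roughly like this:

```python
# Conceptual sketch of permission-mediated capability access: unauthorized
# screen-capture requests still "succeed" but receive dummy pixels, so
# legacy clients keep running instead of crashing on a protocol error.
from dataclasses import dataclass, field


@dataclass
class Client:
    name: str
    granted: set = field(default_factory=set)  # capabilities the user approved

DUMMY_FRAME = bytes(4)  # placeholder pixels; a real server would match the requested size


def handle_get_image(client: Client, real_frame: bytes) -> bytes:
    """Serve a screen-capture request, silently degrading for unauthorized clients."""
    if "screen-capture" in client.granted:
        return real_frame
    return DUMMY_FRAME  # deny without signalling the denial


recorder = Client("screen-recorder", granted={"screen-capture"})
snoop = Client("unvetted-app")
frame = b"\x12\x34\x56\x78"
assert handle_get_image(recorder, frame) == frame
assert handle_get_image(snoop, frame) == DUMMY_FRAME
```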
Under the hood, Phoenix includes a built-in compositor that enables tear-free rendering by default, supports disabling compositing for full-screen applications, and is designed to reduce compositor and vsync latency. Proper multi-monitor support is a priority, allowing different refresh rates, variable refresh rate displays, and future HDR support without relying on a single global framebuffer.
Phoenix is also looking at extending the protocol when needed. For example, new features like per-monitor DPI reporting are planned to ensure apps scale properly across mixed-DPI setups. If needed, Phoenix will add protocol extensions for things like HDR, while still working with existing software.
It is important to make it clear that the project does not aim to replace Xorg. Phoenix deliberately avoids supporting legacy hardware, obscure protocol features, and configurations such as multiple X11 screens or indirect GLX rendering, and focuses entirely on modern systems.
Wayland compatibility is part of the long-term plan. The developers say Phoenix might eventually support Wayland clients directly or use bridging tools to run Wayland-only apps in an X11 environment. Running Phoenix nested under Wayland, as an alternative to Xwayland, is also being considered.
Finally, as mentioned, the project is in its early stages, and it remains to be seen whether it will mature into stable, production-ready releases and find wider adoption. Until then, for more information about this new initiative, check here.
The experiment showed that physical violence is not necessary to scare off gulls:
Shouting at seagulls makes them more likely to leave your food alone, research shows.
University of Exeter researchers put a closed Tupperware box of chips on the ground to pique herring gulls' interest.
Once a gull approached, they played either a recording of a male voice shouting the words, "No, stay away, that's my food", the same voice speaking those words, or the 'neutral' birdsong of a robin.
They tested a total of 61 gulls across nine seaside towns in Cornwall and found that nearly half of those gulls exposed to the shouting voice flew away within a minute.
Only 15% of the gulls exposed to the speaking male voice flew away, while the rest walked away from the food, still sensing danger.
In contrast, 70% of gulls exposed to the robin song stayed near the food for the duration of the experiment.
"We found that urban gulls were more vigilant and pecked less at the food container when we played them a male voice, whether it was speaking or shouting," said Dr Neeltje Boogert of the Centre for Ecology and Conservation at Exeter's Penryn Campus in Cornwall.
"But the difference was that the gulls were more likely to fly away at the shouting and more likely to walk away at the speaking.
"So when trying to scare off a gull that's trying to steal your food, talking might stop them in their tracks but shouting is more effective at making them fly away."
The recordings, in which five male volunteers uttered the same phrase in a calm speaking voice and, separately, in a shouting voice, were adjusted to be at the same volume; the results therefore suggest gulls can detect differences in the acoustic properties of human voices beyond sheer loudness.
"Normally when someone is shouting, it's scary because it's a loud noise, but in this case all the noises were the same volume, and it was just the way the words were being said that was different," said Dr Boogert.
"So it seems that gulls pay attention to the way we say things, which we don't think has been seen before in any wild species, only in those domesticated species that have been bred around humans for generations, such as dogs, pigs and horses."
The experiment was designed to show that physical violence is not necessary to scare off gulls; the researchers used male voices because most crimes against wildlife are carried out by men.
"Most gulls aren't bold enough to steal food from a person, I think they've become quite vilified," said Dr Boogert.
"What we don't want is people injuring them. They are a species of conservation concern, and this experiment shows there are peaceful ways to deter them that don't involve physical contact."
Journal Reference: Céline M. I. Rémy, Christophoros Zikos, Laura Ann Kelley, Neeltje Janna Boogert et al.; Herring gulls respond to the acoustic properties of men's voices. Biol Lett 1 November 2025; 21 (11): 20250394. https://doi.org/10.1098/rsbl.2025.0394
E-ink enjoyers can upgrade old tablets into part of the desktop experience using a simple server setup
While the traditional computer monitor is a tried-and-true standard, many who use screens insist on a higher standard of readability and eye comfort. For these happy few screen enthusiasts, a recent project and tutorial from software engineer Alireza Alavi offers Linux users the ability to add an E-ink display to their desktop by reusing an existing E-ink tablet.
The project turns an E-ink tablet into a mirrored clone of an existing second display in a desktop setup. Using VNC for network remote control of a computer, this implementation turns the E-ink tablet into both a display and an input device, opening up options ranging from a dedicated reading and writing display to a drawing tablet or other screen-as-input-device use cases.
The example video above [in article] shows Alavi using the E-ink display as an additional monitor, first to write his blog post about the VNC protocol, then to read a lengthy document. The tablet runs completely without a wired connection, as VNC sharing happens over the network and thus enables a fully wireless monitor experience. The second screen, seen behind the tablet, reveals the magic by displaying the source of the tablet's output.
Many readers will correctly observe that the latency and lag on the E-ink tablet are a bit higher than desirable for a display. This is not a shortcoming of the streaming connection, but rather of the aging Boox tablet used by Alavi. A newer, more modern E-ink display, such as Modos Tech's upcoming 75Hz E-ink monitor, should get the job done without any such visible latency distractions.
For Linux users wanting to follow along and create a tertiary monitor of their very own, Alavi's blog has all the answers. Setting up a VNC connection between the computer and the tablet display took only around 20 minutes, with help from the Arch Linux wiki.
Very simply put, the order of operations is: first, install a VNC package of your choice (this project used TigerVNC); next, set up a list of permissions for the possible users of your new TigerVNC server, now being broadcast from the host PC; finally, set the screen limitations of the stream, ensuring that the host screen to be mirrored is set to exactly the same resolution and aspect ratio as the E-ink tablet being streamed to.
From here, connecting to the new TigerVNC server with the E-ink tablet is as simple as installing an Android VNC client and connecting to the correct port and IP address set up previously. Alavi's blog post goes into the nitty-gritty of setting up permissions and port options, as well as how to get the setup to start automatically or manually with scripts.
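For the curious, here is a minimal host-side sketch of those steps, assuming TigerVNC's x0vncserver (which shares an existing X display rather than creating a virtual one) and xrandr are available; the output name and tablet resolution below are placeholder assumptions, not values from Alavi's guide:

```python
# Minimal host-side sketch: pin the mirrored screen to the tablet's
# resolution, then share the running X session over VNC with TigerVNC.
import subprocess
from pathlib import Path

OUTPUT = "HDMI-1"          # assumption: the physical output to sacrifice as the mirror source
TABLET_MODE = "1872x1404"  # assumption: must match the E-ink tablet's resolution/aspect ratio

# 1. Force the to-be-mirrored screen to the tablet's resolution so the
#    VNC stream maps 1:1 onto the E-ink panel (the mode must already
#    exist in `xrandr` output for this to succeed).
subprocess.run(["xrandr", "--output", OUTPUT, "--mode", TABLET_MODE], check=True)

# 2. Share the running X session on the default VNC port (5900). Run
#    `vncpasswd ~/.vnc/passwd` once beforehand to create the password file.
subprocess.run([
    "x0vncserver",
    "-display", ":0",
    "-rfbport", "5900",
    "-passwordfile", str(Path.home() / ".vnc" / "passwd"),
], check=True)
```

On the tablet side, pointing AVNC (or another Android VNC client) at the host's IP address on port 5900 completes the loop.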
While this project is very cool, prospective copycats should take note of a few things. First, this specific implementation requires effectively sacrificing one of your existing screens to become a copy of what the tablet will output. This is not a solution for using an E-ink display as a third monitor, though having the E-ink display mirrored in color on a less laggy screen has its own perks for debugging and troubleshooting, among other things. Also, this system requires the E-ink display to be an Android tablet with its own brain that can install and run a VNC client like AVNC; a standalone display or a tablet with an incompatible operating system won't cut the mustard.
However, for those with a need for E-ink and the will to jump through some Linux terminal hoops, this application has serious promise. Be sure to see Alavi's original blog post for a complete setup guide and more notes from the inventor. For those considering the E-ink lifestyle without the same growing pains, the E-ink monitor market is getting better all the time; Boox's newest 23.5-inch, 1800p color E-ink monitor is an attractive prospect, though the sticker shock of its $1,900 price tag is enough to make many enthusiasts weep.
If your product is even a third as innovative and useful as you claim it is, you shouldn't have to go around trying a little too hard to convince people. The product's usefulness should speak for itself. And you definitely shouldn't be forcing people to use products they've repeatedly told you they don't actually appreciate or want.
LG and Microsoft learned that lesson recently when LG began installing Microsoft's Copilot "AI" assistant on people's televisions, without any way to disable it:
"According to affected users, Copilot appears automatically after installing the latest webOS update on certain LG TV models. The feature shows up on the home screen alongside streaming apps, but unlike Netflix or YouTube, it cannot be uninstalled."
To be clear, this isn't the end of the world. Users can apparently "hide" the app, but people are still generally annoyed at the lack of control, especially coming from two companies with a history of this sort of behavior.
Many people just generally don't like Copilot, much like they didn't really like a lot of the nosier features integrated into Windows 11. Or they don't like being forced to use Copilot when they'd prefer to use ChatGPT or Gemini.
You only have to peruse this Reddit thread to get a sense of the annoyance. You can also head over to the Microsoft forums to get a sense of how Microsoft customers are very, very tired of all the forced Copilot integration across Microsoft's other products, even though you can (sometimes) disable the integration.
But "smart" TVs are already a sector where user choice and privacy take a backseat to the primary goal of collecting and monetizing viewer behavior. And LG has been at the forefront of disabling features if you try to disconnect from the internet. So there are justifiable privacy concerns raised over this tight integration (especially in America, which is too corrupt to pass even a baseline internet privacy law).
This is also coming on the heels of widespread backlash over another Microsoft "AI" feature, Recall. Recall takes screenshots of your PC's activity every five seconds, giving you an "explorable timeline of your PC's past" that Microsoft's AI-powered assistant, Copilot, can then help you peruse.
Here, again, there was widespread condemnation over the privacy implications of such tight integration. Microsoft's response was to initially pretend to care, only to double down. It's worth noting that Microsoft's forced AI integration into its half-assed journalism efforts, like MSN, has also been a hot, irresponsible mess. So this is not a company likely to actually listen to its users.
It's not like Microsoft hasn't had some very intimate experiences surrounding the backlash of forcing products down customers' throats. But like most companies, Microsoft knows U.S. consumer protection and antitrust reform has been beaten to a bloody pulp, and despite the Trump administration's hollow and performative whining about the power of "big tech," big tech giants generally have carte blanche to behave like assholes for the foreseeable future, provided they're polite to the dim autocrats in charge.
According to the Oxford English Dictionary, the word recent is defined as "having happened or started only a short time ago." A simple, innocent-sounding definition. And yet, in the world of scientific publishing, it may be one of the most elastic terms ever used. What exactly qualifies as "a short time ago"? A few months? A couple of years? The advent of the antibiotic era?
In biomedical literature, "recent" is something of a linguistic chameleon. It appears everywhere: in recent studies, recent evidence, recent trials, recent literature, and so forth. It is a word that conveys urgency and relevance, while neatly sidestepping any commitment to a specific year—much like saying "I'll call you soon" after a first date: reassuring, yet infinitely interpretable. Authors wield it with confidence, often citing research that could have been published in the previous season or the previous century.
Despite its ubiquity, "recent" remains a suspiciously vague descriptor. Readers are expected to blindly trust the author's sense of time. But what happens if we dare to ask the obvious question? What if we take "recent" literally?
In this festive horological investigation, we decided to find out just how recent the recent studies really are. Armed with curiosity, a calendar, and a healthy disregard for academic solemnity, we set out to measure the actual age of those so-called fresh references. The results may not change the course of science, but they might make you raise an eyebrow the next time someone cites a recent paper from the past decade.
On 5 June 2025, we—that is, the junior author, while the senior author remained in supervisory orbit—performed a structured search in PubMed using the following terms: "recent advance*" or "recent analysis" or "recent article*" or "recent data" or "recent development" or "recent evidence" or "recent finding*" or "recent insights" or "recent investigation*" or "recent literature" or "recent paper*" or "recent progress" or "recent report*" or "recent research" or "recent result*" or "recent review*" or "recent study" or "recent studies" or "recent trial*" or "recent work*." These terms were selected on the basis that they appear frequently in the biomedical literature, convey an aura of immediacy, and are ideal for concealing the fact that the authors are citing papers from before the invention of UpToDate.
To avoid skewing the results towards only the freshest of publications (and therefore ruining the fun), we sorted the search results by best match rather than by date. This method added a touch of algorithmic chaos and ensured a more diverse selection of articles. We then included articles progressively until reaching a sample size of 1000, a number both sufficiently round and statistically unnecessary, but pleasing to the eye.
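As a rough illustration of the method (not the authors' actual code), the search can be reproduced against PubMed's public E-utilities API; the term list below is abridged:

```python
# Illustrative sketch: run the "recent ..." search via PubMed's esearch
# endpoint, sorted by best match as in the paper, capped at 1000 results.
import json
import urllib.parse
import urllib.request

TERMS = [
    '"recent advance*"', '"recent analysis"', '"recent data"',
    '"recent evidence"', '"recent finding*"', '"recent study"',
    '"recent studies"', '"recent trial*"', '"recent work*"',
]  # abridged; the paper lists twenty such expressions
query = " OR ".join(TERMS)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "sort": "relevance",  # "best match" rather than date, as in the paper
    "retmax": 1000,       # the round, "statistically unnecessary" sample size
    "retmode": "json",
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(result["count"], "matches; first PMIDs:", result["idlist"][:5])
```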
We—again, the junior author, while the senior author offered moral support and the occasional pat on the back—reviewed the full text of each article to identify expressions involving the word "recent," ensuring they were directly linked to a bibliographic reference. [...]
For every eligible publication, we—still the junior author, whose dedication was inversely proportional to his contract stability—manually recorded the following: the DOI of the article, its title, the journal of publication, the year it was published, the country where the article's first author was based, the broad medical specialty to which the article belonged, the exact "recent" expression used, the reference cited immediately after that expression, the year in which that reference was published, and the journal's impact factor as of 2024 (as reported in the Journal Citation Reports, Clarivate Analytics). [...]
[...] The final analysis comprised 1000 articles. The time lag between the citing article and the referenced "recent" publication ranged from 0 to 37 years, with a mean of 5.53 years (standard deviation 5.29) and a median of 4 years (interquartile range 2-7). The most frequent citation lag was one year, which was observed for 159 publications. The distribution was right skewed (skewness=1.80), with high kurtosis (4.09), indicating a concentration of values around the lower end with a long tail of much older references. A total of 177 articles had a citation lag of 10 years or longer, 26 articles had a lag of 20 years or longer, and four articles cited references that were at least 30 years old. The maximum lag observed was 37 years, found in one particularly ambitious case.
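For readers inclined to replicate the arithmetic, here is an illustrative snippet that computes the same summary statistics (on synthetic, made-up lag data, not the authors' dataset):

```python
# Illustrative only: synthetic, right-skewed citation-lag data (years
# between the citing article and its "recent" reference), summarized the
# same way the paper reports its 1000 hand-coded articles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lags = rng.gamma(shape=1.1, scale=5.0, size=1000).astype(int)  # made-up data

print("mean:", round(lags.mean(), 2))
print("median:", np.median(lags))
print("IQR:", np.percentile(lags, [25, 75]))
print("skewness:", round(stats.skew(lags), 2))            # >0 means a long right tail
print("excess kurtosis:", round(stats.kurtosis(lags), 2))  # heavy tail of old references
print("share with lag >= 10 years:", (lags >= 10).mean())
```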
[...] Our investigation confirms what many readers have long suspected, but none have dared to quantify: in the land of biomedical publishing, "recent" is less a measure of time than a narrative device. With a mean citation lag of 5.5 years and a median of 4, the average "recent" reference is just about old enough to have survived two guideline updates and a systematic review debunking its relevance. Our findings align with longstanding concerns over vague or imprecise terminology in scientific writing, which technical editors have highlighted for decades.
To be fair, some references were genuinely fresh—barely out of the editorial oven. But then there were the mavericks: 177 articles cited works 10 or more years old, 26 drew on sources more than 20 years old, and in a moment of true historical boldness, four articles described "recent" studies that predated the launch of the first iPhone. The record holder clocked in at a 37 year lag, leaving us to wonder whether the authors confused recent with renaissance.
[...] The lexicon of "recent" expressions also revealed fascinating differences. Recent publication and recent article showed reassuringly tight timelines, suggesting that for these terms, recent still means what the dictionary intended. Recent trial, recent guidelines, recent paper, and recent result also maintained a commendable sense of urgency, as if they had checked the calendar before going to press. At the other end of the spectrum, recent study, the most commonly used expression, behaved more like recent-ish study, with a median lag of 5.5 years and a long tail stretching into academic antiquity. Recent discovery and recent approach performed even worse, reinforcing the suspicion that some authors consider "recent" a purely ornamental term. Readers may be advised to handle these terms with protective gloves.
[...] In this study, we found that the term "recent" in biomedical literature can refer to anything from last month's preprint to a study published before the invention of the mobile phone. Despite the rhetorical urgency such expressions convey, the actual citation lag often suggests a more relaxed interpretation of time. Although some fields and phrases showed more temporal discipline than others, the overall picture is one of creative elasticity.
The use of vague temporal language appears to be a global constant, transcending specialties, regions, and decades. Our findings do not call for the abolition of the word "recent," but perhaps for a collective pause before using it—a moment to consider whether it is truly recent or just rhetorically convenient. Authors may continue to deploy "recent" freely, but readers and reviewers might want to consider whether it is recent enough to matter.
Journal Reference: BMJ 2025; 391 doi: https://doi.org/10.1136/bmj-2025-086941 (Published 11 December 2025)
[Ed. note: Dec. 30 - The headline has been updated to reflect the fact that the Microsoft researcher who posted the original LinkedIn post stating they want to rewrite all Windows code in Rust later qualified his statement by saying that this is a research project --hubie]
The Register reports that Microsoft wants to replace all of its C and C++ code bases with Rust rewrites by 2030, developing new technology to do the translation along the way.
"Our strategy is to combine AI and Algorithms to rewrite Microsoft's largest codebases," he added. "Our North Star is '1 engineer, 1 month, 1 million lines of code.'"
The article goes on to quote much management-speak drivel from official Microsoft sources making grand claims about magic bullets and the end of all known software vulnerabilities with many orders of magnitude productivity improvements promised into the bargain.
Unlike C and C++, Rust is a memory-safe language, meaning it uses automated memory management to avoid out-of-bounds reads and writes, and use-after-free errors, as both offer attackers a chance to control devices. In recent years, governments have called for universal adoption of memory-safe languages – and especially Rust – to improve software security.
Automated memory management? Is the magic not in the compiler rather than the runtime? Do these people even know what they're talking about? And anyway, isn't C++2.0 going to solve all problems and be faster than Rust and better than Python? It'll be out Real Soon Now(TM). Plus you'll only have to half-rewrite your code.
Are we witnessing yet another expensive wild goose chase from Microsoft? Windows Longhorn, anyone?
Swearing boosts performance by helping people feel focused, disinhibited, study finds:
Letting out a swear word in a moment of frustration can feel good. Now, research suggests that it can be good for you, too: Swearing can boost people's physical performance by helping them overcome their inhibitions and push themselves harder on tests of strength and endurance, according to research published by the American Psychological Association.
"In many situations, people hold themselves back—consciously or unconsciously—from using their full strength," said study author Richard Stephens, PhD, of Keele University in the U.K. "Swearing is an easily available way to help yourself feel focused, confident and less distracted, and 'go for it' a little more."
Previous research by Stephens and others has found that when people swear, they perform better on many physical challenges, including how long they can keep their hand in ice water and how long they can support their body weight during a chair push-up exercise.
"That is now a well replicated, reliable finding," Stephens said. "But the question is—how is swearing helping us? What's the psychological mechanism?"
He and his colleagues believed that it might be that swearing puts people in a disinhibited state of mind. "By swearing, we throw off social constraint and allow ourselves to push harder in different situations," he said.
To test this, the researchers conducted two experiments with 192 total participants. In each, they asked participants to repeat either a swear word of their choice, or a neutral word, every two seconds while doing a chair push-up. After completing the chair push-up challenge, participants answered questions about their mental state during the task. The questions included measures of different mental states linked to disinhibition, including how much positive emotion participants felt, how funny they found the situation, how distracted they felt and how self-confident they felt. The questions also included a measure of psychological "flow," a state in which people become immersed in an activity in a pleasant, focused way.
Overall, and confirming earlier research, the researchers found that participants who swore during the chair push-up task were able to support their body weight significantly longer than those who repeated a neutral word. Combining the results of the two experiments as well as a previous experiment conducted as part of an earlier study, they also found that this difference could be explained by increases in participants' reports of psychological flow, distraction and self-confidence—all important aspects of disinhibition.
"These findings help explain why swearing is so commonplace," said Stephens. "Swearing is literally a calorie neutral, drug free, low cost, readily available tool at our disposal for when we need a boost in performance."
Journal Reference: Stephens, R., Dowber, H., Richardson, C., & Washmuth, N. B. (2025). "Don't hold back": Swearing improves strength through state disinhibition. American Psychologist. Advance online publication. https://doi.org/10.1037/amp0001650 [PDF]
Study finds built-in browsers across gadgets often ship years out of date
Web browsers for desktop and mobile devices tend to receive regular security updates, but that often isn't the case for those that reside within game consoles, televisions, e-readers, cars, and other devices. These outdated embedded browsers can leave users exposed to phishing and other attacks that exploit known vulnerabilities.
Researchers affiliated with the DistriNet Research Unit of KU Leuven in Belgium have found that newly released devices may contain browsers that are several years out of date and include known security bugs.
In a research paper [PDF] presented at the USENIX Symposium on Usable Privacy and Security (SOUPS) 2025 in August, computer scientists Gertjan Franken, Pieter Claeys, Tom Van Goethem, and Lieven Desmet describe how they created a crowdsourced browser evaluation framework called CheckEngine to overcome the challenge of assessing products with closed-source software and firmware.
The framework functions by providing willing study participants with a unique URL that they're asked to enter into the integrated browser of the device being evaluated. During the testing period between February 2024 and February 2025, the boffins received 76 entries representing 53 unique products and 68 unique software versions.
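The paper doesn't spell out CheckEngine's internals, but the core trick, inferring browser age server-side from the User-Agent header of a request to a unique URL, can be sketched in a few lines (the current-version constant here is an assumption for illustration):

```python
# Conceptual sketch (not CheckEngine code): estimate how far behind an
# embedded browser is from the Chrome/Chromium version in its User-Agent.
import re

CURRENT_CHROME_MAJOR = 130  # assumption: the up-to-date major release at test time


def chrome_major(user_agent: str) -> int | None:
    """Extract the Chrome/Chromium major version from a User-Agent string."""
    m = re.search(r"Chrom(?:e|ium)/(\d+)", user_agent)
    return int(m.group(1)) if m else None


# Example User-Agent of the kind an old smart-TV browser might send.
ua = ("Mozilla/5.0 (Linux; Android 9; SMART-TV) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/79.0.3945.117 Safari/537.36")

major = chrome_major(ua)
if major is not None:
    lag = CURRENT_CHROME_MAJOR - major
    print(f"Chrome {major}: roughly {lag} major releases behind current")
```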
In 24 of the 35 smart TVs and all 5 e-readers submitted for the study, the embedded browsers were at least three years behind current versions available to users of desktop computers. And the situation is similar even for newly released products.
"Our study shows that integrated browsers are updated far less frequently than their standalone counterparts," the authors state in their paper. "Alarmingly, many products already embed outdated browsers at the time of release; in fact, eight products in our sample included a browser that was over three years obsolete when it hit the market."
According to KU Leuven, the study revealed that some device makers don't provide security updates for the browser, even though they advertise free updates.
[...] In December 2024, the EU Cyber Resilience Act came into force, initiating a transition period through December 2027, when vendors will be fully obligated to tend to the security of their products. The KU Leuven researchers say that many of the devices tested are not yet compliant.
[...] The authors put some of the blame on development frameworks like Electron that bundle browsers with other components.
"We suspect that, for some products, this issue stems from the user-facing embedded browser being integrated with other UI components, making updates challenging – especially when bundled in frameworks like Electron, where updating the browser requires updating the entire framework," they said in their paper. "This can break dependencies and increase development costs."
But in other cases, they suggest the issue arises from inattention on the part of vendors or a choice not to implement essential security measures.
While they suggest mechanisms like product labels may focus consumer and vendor attention on updating embedded browsers, they conclude that broad voluntary compliance is unlikely and that regulations should compel vendors to take responsibility for the security of the browsers they embed in their products.
https://events.ccc.de/congress/2025/infos/index.html
The 39th Chaos Communication Congress (39C3) takes place in Hamburg on 27–30 Dec 2025, and is the 2025 edition of the annual four-day conference on technology, society and utopia organized by the Chaos Computer Club (CCC) and volunteers.
Congress offers lectures, workshops, and various other events on a multitude of topics including (but not limited to) information technology and, more generally, a critical-creative attitude towards technology and the discussion of the effects of technological advances on society.
Since 1984, Congress has been organized by the community and welcomes all kinds of participation. You are encouraged to contribute by volunteering, setting up and hosting hands-on and self-organized events with the other components of your assembly, or presenting your own projects to fellow hackers.
Information on how to get in contact and chat with other participants and the organizing teams can be found on our Communication page.
= More Information:
- Chaos Computer Club at Wikipedia
- Media
- 2025 Hub
Interesting talks, upcoming and previously recorded, available on their streams page --Ed.
With the popularity of AI coding tools rising among some software developers, their adoption has begun to touch every aspect of the process, including the improvement of AI coding tools themselves.
In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. "I think the vast majority of Codex is built by Codex, so it's almost entirely just being used to improve itself," said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.
Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user's code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT's web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.
The "Codex" name itself dates back to a 2021 OpenAI model based on GPT-3 that powered GitHub Copilot's tab completion feature. Embiricos said the name is rumored among staff to be short for "code execution." OpenAI wanted to connect the new agent to that earlier moment, which was crafted in part by some who have left the company.
"For many people, that model powering GitHub Copilot was the first 'wow' moment for AI," Embiricos said. "It showed people the potential of what it can mean when AI is able to understand your context and what you're trying to do and accelerate you in doing that."
It's no secret that the current command-line version of Codex bears some resemblance to Claude Code, Anthropic's agentic coding tool that launched in February 2025. When asked whether Claude Code influenced Codex's design, Embiricos parried the question but acknowledged the competitive dynamic. "It's a fun market to work in because there's lots of great ideas being thrown around," he said. He noted that OpenAI had been building web-based Codex features internally before shipping the CLI version, which arrived after Anthropic's tool.
OpenAI's customers apparently love the command-line version, though. Embiricos said Codex usage among external developers jumped twentyfold after OpenAI shipped the interactive CLI extension alongside GPT-5 in August 2025. On September 15, OpenAI released GPT-5 Codex, a specialized version of GPT-5 optimized for agentic coding, which further accelerated adoption.
It hasn't just been the outside world that has embraced the tool. Embiricos said the vast majority of OpenAI's engineers now use Codex regularly. The company uses the same open-source version of the CLI that external developers can freely download, suggest additions to, and modify themselves. "I really love this about our team," Embiricos said. "The version of Codex that we use is literally the open source repo. We don't have a different repo that features go in."
[...] The system runs many processes autonomously, addresses feedback, spins off and manages child processes, and produces code that ships in real products. OpenAI employees call it a "teammate" and assign it tasks through the same tools they use for human colleagues. Whether the tasks Codex handles constitute "decisions" or sophisticated conditional logic smuggled through a neural network depends on definitions that computer scientists and philosophers continue to debate. What we can say is that a semi-autonomous feedback loop exists: Codex produces code under human direction, that code becomes part of Codex, and the next version of Codex produces different code as a result.
[...] Despite OpenAI's claims of success with Codex in house, it's worth noting that independent research has shown mixed results for AI coding productivity. A METR study published in July found that experienced open source developers were actually 19 percent slower when using AI tools on complex, mature codebases—though the researchers noted AI may perform better on simpler projects.
Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. "You can add Codex, and you can basically assign issues to Codex now," Bayes told Ars. "Codex is literally a teammate in your workspace."
This integration means that when someone posts feedback in a Slack channel, they can tag Codex and ask it to fix the issue. The agent will create a pull request, and team members can review and iterate on the changes through the same thread. "It's basically approximating this kind of coworker and showing up wherever you work," Bayes said.
[...] Given this teammate approach, will there be anything left for humans to do? When asked, Embiricos drew a distinction between "vibe coding," where developers accept AI-generated code without close review, and what AI researcher Simon Willison calls "vibe engineering," where humans stay in the loop. "We see a lot more vibe engineering in our code base," he said. "You ask Codex to work on that, maybe you even ask for a plan first. Go back and forth, iterate on the plan, and then you're in the loop with the model and carefully reviewing its code."
He added that vibe coding still has its place for prototypes and throwaway tools. "I think vibe coding is great," he said. "Now you have discretion as a human about how much attention you wanna pay to the code."
Over the past year, "monolithic" large language models (LLMs) like GPT-4.5 have apparently become something of a dead end in terms of frontier benchmarking progress as AI companies pivot to simulated reasoning models and also agentic systems built from multiple AI models running in parallel. We asked Embiricos whether agents like Codex represent the best path forward for squeezing utility out of existing LLM technology.
He dismissed concerns that AI capabilities have plateaued. "I think we're very far from plateauing," he said. "If you look at the velocity on the research team here, we've been shipping models almost every week or every other week." He pointed to recent improvements where GPT-5-Codex reportedly completes tasks 30 percent faster than its predecessor at the same intelligence level. During testing, the company has seen the model work independently for 24 hours on complex tasks.
[...] But will tools like Codex threaten software developer jobs? Bayes acknowledged concerns but said Codex has not reduced headcount at OpenAI, and "there's always a human in the loop because the human can actually read the code." Similarly, the two men don't project a future where Codex runs by itself without some form of human oversight. They feel the tool is an amplifier of human potential rather than a replacement for it.
The practical implications of agents like Codex extend beyond OpenAI's walls. Embiricos said the company's long-term vision involves making coding agents useful to people who have no programming experience. "All humanity is not gonna open an IDE or even know what a terminal is," he said. "We're building a coding agent right now that's just for software engineers, but we think of the shape of what we're building as really something that will be useful to be a more general agent."
What happens when a computer can do your job better than you can? What happened to all those people who studied in school and trained to draft designs on huge desks with filing cabinets that would kill you if they fell? What happened, well, to any job that could be done faster, cheaper, or more effectively? Gone like the dodos. So, in this vein, how long do lawyers have before their profession is made redundant? If an LLM can find which law applies, determine how it applies, and write the legal argument needed, then why pay tens of thousands for a human to do it? Have lawyers had their day in the sun, and are they now the buggy whip makers of the 21st century?
The Texas Attorney General sued five major television manufacturers, accusing them of illegally collecting their users' data by secretly recording what they watch using Automated Content Recognition (ACR) technology.
The lawsuits [PDF files] target Sony, Samsung, LG, and the China-based companies Hisense and TCL Technology Group Corporation. Attorney General Ken Paxton's office also highlighted "serious concerns" about the two Chinese companies being required to follow China's National Security Law, which could give the Chinese government access to U.S. consumers' data.
According to complaints filed this Monday in Texas state courts, the TV makers can allegedly use ACR technology to capture screenshots of television displays every 500 milliseconds, monitor the users' viewing activity in real time, and send this information back to the companies' servers without the users' knowledge or consent.
Paxton's office described ACR technology as "an uninvited, invisible digital invader" designed to unlawfully collect personal data from smart televisions, alleging that the harvested information then gets sold to the highest bidder for ad targeting.
"Companies, especially those connected to the Chinese Communist Party, have no business illegally recording Americans' devices inside their own homes," Paxton said.
"This conduct is invasive, deceptive, and unlawful. The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries."
[...] Almost a decade ago, in February 2017, Walmart-owned smart TV manufacturer Vizio paid $2.2 million to settle charges brought by the U.S. Federal Trade Commission and the New Jersey Attorney General that it collected viewing data from 11 million consumers without their knowledge or consent using a "Smart Interactivity" feature.
The two agencies said that since February 2014, Vizio and an affiliated company had manufactured and sold smart TVs (and retrofitted older models by installing tracking software remotely) that captured detailed information on what was being watched, including content from cable, streaming services, and DVDs.
According to the complaint, Vizio also attached demographic information (such as sex, age, income, and education) to the collected data and sold it to third parties for targeted advertising purposes.
In August 2022, the FTC published a consumer alert on securing Internet-connected devices, advising Americans to adjust the tracking settings on their smart TVs to protect their privacy.
Related:
• Smart TVs Are Like "a Digital Trojan Horse" in People's Homes
• Vizio Settles With FTC, Will Pay $2.2 Million and Delete User Data