Just a few days ago, we reported on Jolla's push for a new Linux phone, which needed at least 2,000 pre-orders to move forward. Now, only days later, there's good news: interest in the device has surpassed all expectations.
Jolla's community-funded smartphone project has cleared its production threshold, securing more than 3,200 pre-orders. The strong response ensures that the new Linux phone, developed under the Do It Together (DIT) model, will move forward into manufacturing, with Batch #1 already sold out and Batch #2 now available.
The device is positioned as an independent European Linux phone shaped directly by its users. Pre-orders require a €99 refundable deposit, deducted from the final price of €549 for Batch #2. Markets include the EU, UK, Norway, and Switzerland. According to Jolla, Batch #1 sold out in under 48 hours.
Let's recall the device's hardware specifications. It features a 6.36-inch Full HD AMOLED display, 12GB of RAM, 256GB of storage with microSD expansion, and a high-performance 5G MediaTek platform. It runs Sailfish OS, a privacy-focused, Linux-based mobile operating system that offers an alternative to mainstream Android and iOS.
The phone also includes a 50MP main camera, a user-replaceable 5,500mAh battery, dual nano-SIM support, Wi-Fi 6, NFC, and a fingerprint reader. Jolla guarantees a minimum of five years of OS updates and the availability of spare components, including back covers and batteries.
The pre-order campaign remains open until January 4, 2026, though the funding goal has already been exceeded by a wide margin, reaching roughly 160 percent of the target. Production will commence once Batch #1 is delivered, with the first units expected by the end of the first half of 2026.
For more information, visit Jolla's website.
Previously: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone
The Trump administration's decision to close a world-leading research centre for atmospheric science is a blow to weather forecasting and climate modelling that could leave humanity more exposed to the impacts of global warming.
In a statement to USA Today, White House official Russ Vought said the National Center for Atmospheric Research (NCAR) is a source of "climate alarmism" and will be broken up. "Green new scam research" will be eliminated, while "vital functions" like weather modelling and supercomputing will be moved elsewhere, the White House said.
NCAR's models underpin the reports of the United Nations' Intergovernmental Panel on Climate Change, which countries rely on for decisions about how to reduce carbon emissions and adapt to extreme weather.
"Shutting it down would lead to greater uncertainty about what our climate future might be and leave us less able to prepare effectively," says Michael Meredith at the British Antarctic Survey. "It's hard to see this as anything other than shooting the messenger."
NCAR was started in 1960 to facilitate atmospheric science at a scale too large for individual universities. Its 830 employees are involved in research "from the ocean floor to the Sun's core", according to its unofficial motto, with programmes to monitor everything from flooding and wildfires to space weather.
At its hilltop laboratory in the Colorado Rockies, NCAR invented the GPS dropsonde, a sensor-laden device that is dropped into hurricanes, revolutionising our understanding of tropical storms. Its researchers developed wind-shear warning systems for airports that have prevented countless crashes.
But perhaps its greatest contribution has been providing data, modelling and supercomputing to other researchers. Weather Underground, which in the 1990s was one of the first to offer local forecasts online, wouldn't have existed without software and weather data from NCAR, according to its founder, meteorologist Jeff Masters.
NCAR develops and administers the Weather Research and Forecasting Model, which is widely used for both day-to-day forecasting and the study of regional climates. It also collaborates with the US National Oceanic and Atmospheric Administration to advance weather modelling, especially for predicting severe storms.
If this work is disrupted, it could halt improvements to forecasts on weather apps and television news, at a time when extreme weather is getting more frequent. Shutting down NCAR is like if, "on the eve of world war two, we decided to stop funding R&D into weapons", says Masters.
[...] NCAR administers the Community Earth System Model (CESM), the first global climate model designed for universities. CESM has supported a huge variety of research, from estimates of current global carbon emissions to future changes to ocean currents, heatwave frequency and glacier and sea ice melt.
"It's probably the most-used model in the world," says Richard Rood at the University of Michigan.
NCAR holds biannual meetings with users to decide how to improve the model, which can be run on its servers or downloaded and operated locally. Its closure is likely to end the further development of CESM, as well as maintenance to fix bugs.
[...] Its aircraft help monitor air pollution and calibrate satellite instruments, according to Rood.
Its research on aerosols would be vital to understanding the effects of geoengineering, he adds. Schemes like spreading aerosols to block sunlight have been proposed to avoid abrupt changes in the climate.
"Getting rid of climate research like this would really have us flying blind, more blindly, into decisions about geoengineering, as well as tipping points," says Rood.
A California-based aerospace startup, Reflect Orbital, has ignited intense debate within the scientific community by proposing an ambitious plan to "sell sunlight" using massive mirrors placed in low Earth orbit:
The company's concept involves deploying large reflective satellites that could redirect sunlight onto specific locations on Earth during nighttime hours.
According to Live Science Plus, this technology could provide artificial illumination to extend daylight, boost agricultural productivity, or allow solar panels to operate after sunset.
Reflect Orbital has filed an application with the U.S. Federal Communications Commission to launch its first experimental satellite, known as EARENDIL-1, as early as 2026. If approved and expanded, the company envisions deploying as many as 4,000 orbital mirrors by the end of the decade.
Each mirror would reportedly unfold to approximately 59 feet (18 meters) across and could illuminate an area on Earth roughly 3 miles (5 kilometers) wide.
Reflect Orbital claims the reflected light could be "up to four times brighter than the full moon," with future iterations potentially becoming even larger and more powerful.
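As a rough sanity check on that claim, consider spreading the sunlight intercepted by an 18-meter mirror over a 5-kilometer spot. The Python sketch below does exactly that with assumed reference values (roughly 100,000 lux for direct sunlight, 0.2 lux for a full moon, 90 percent reflectivity); it ignores atmospheric losses and beam divergence, so it is an order-of-magnitude illustration rather than the company's own math, but it lands within a small factor of the stated figure.

```python
# Back-of-envelope check of the brightness claim. All constants are assumed
# reference values, not figures from Reflect Orbital, and the model ignores
# atmospheric losses and beam spreading from the Sun's angular size.
import math

MIRROR_DIAMETER_M = 18.0          # ~59 ft, per the article
SPOT_DIAMETER_M = 5_000.0         # ~3 mile illuminated area, per the article
SUNLIGHT_LUX = 100_000.0          # typical direct-sun illuminance (assumption)
FULL_MOON_LUX = 0.2               # typical full-moon illuminance (assumption)
REFLECTIVITY = 0.9                # assumed mirror efficiency

mirror_area = math.pi * (MIRROR_DIAMETER_M / 2) ** 2
spot_area = math.pi * (SPOT_DIAMETER_M / 2) ** 2

# The sunlight intercepted by the mirror is spread over the ground spot.
ground_lux = SUNLIGHT_LUX * REFLECTIVITY * mirror_area / spot_area
print(f"estimated ground illuminance: {ground_lux:.2f} lux")
print(f"relative to a full moon:      {ground_lux / FULL_MOON_LUX:.1f}x")
```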
https://therecord.media/spotify-disables-scraping-annas
Spotify responded on Monday to an open-source group's decision to publish files over the weekend containing 86 million tracks scraped from the music streaming platform.
Anna's Archive, which calls itself the "largest truly open library in human history," said on Saturday that it discovered a way to scrape Spotify's files and subsequently released a database of metadata and songs.
A spokesperson for Spotify told Recorded Future News that it "has identified and disabled the nefarious user accounts that engaged in unlawful scraping."
"We've implemented new safeguards for these types of anti-copyright attacks and are actively monitoring for suspicious behavior," the spokesperson said. "Since day one, we have stood with the artist community against piracy, and we are actively working with our industry partners to protect creators and defend their rights."
The spokesperson added that Anna's Archive did not contact the company before publishing the files, and said Spotify does not consider the incident a "hack." The people behind the leaked database systematically violated Spotify's terms by stream-ripping some of the music from the platform over a period of months, the spokesperson said.
They did this through user accounts set up by a third party and not by accessing Spotify's business systems, they added.
Anna's Archive published a blog post about the cache this weekend, writing that while it typically focuses its efforts on text, its mission to preserve humanity's knowledge and culture "doesn't distinguish among media types."
"Sometimes an opportunity comes along outside of text. This is such a case. A while ago, we discovered a way to scrape Spotify at scale. We saw a role for us here to build a music archive primarily aimed at preservation," they said.
"This Spotify scrape is our humble attempt to start such a 'preservation archive' for music. Of course Spotify doesn't have all the music in the world, but it's a great start."
While the full release contains a music metadata database with 256 million tracks, Anna's Archive put together a bulk file a little under 300 terabytes in size featuring 86 million music files that account for about 99.6% of all listens on Spotify. There is another smaller file featuring the top 10,000 most popular songs.
The files cover all music posted on Spotify from 2007 to July 2025. Anna's Archive called it "by far the largest music metadata database that is publicly available."
"With your help, humanity's musical heritage will be forever protected from destruction by natural disasters, wars, budget cuts, and other catastrophes," the organization said.
The blog post outlines distinct trends from Spotify data. The top three songs on Spotify — Billie Eilish's "Birds of a Feather," Lady Gaga's "Die with a Smile" and Bad Bunny's "DtMF" — have a higher total stream count than the bottom 20-100 million songs combined.
Anna's Archive, which is banned in several countries for its repeated copyright violations, was created in the wake of the law enforcement shutdown of Z-Library in 2022. The Justice Department arrested and charged two Russian nationals in 2022 for running Z-Library, which at the time was "the world's largest library" and claimed to have at least 11 million e-books for download.
Anna's Archive emerged days after Z-Library was shut down and aggregated records from that site as well as several other free online libraries like the Internet Archive, Library Genesis and Sci-Hub.
As of December, Anna's Archive has more than 61 million books and 95 million papers. Copyright holders in multiple countries have tried to sue the organization, and Google in November said it removed nearly 800 million links to Anna's Archive from its search engine after publishers issued takedown requests.
Proton has confirmed the company has begun moving out of Switzerland due to "legal uncertainty" over the newly proposed surveillance law.
Proton's newly launched privacy-first AI chatbot, Lumo, has become the first product to change home. Yet "investing in Europe does not equate to leaving Switzerland," a company spokesperson told TechRadar, amid rumors that it's exiting the country for good.
The firm behind one of the best VPN and encrypted email services has been very critical of the Swiss government's proposed amendment of its surveillance law since the beginning, already sharing plans to quit Switzerland back in May.
If it passes, the Ordinance on the Surveillance of Correspondence by Post and Telecommunications (OSCPT) will introduce new obligations for virtual private networks (VPNs), messaging apps, and social networks. These measures include mandatory user identification and data retention of up to six months for all services with at least 5,000 users. Providers will also be required to decrypt communications at the authorities' request if they hold the encryption keys.
Lumo – the first to go
Proton launched its ChatGPT competitor, Lumo, in July 2025, to give its users an alternative to Big Tech solutions, one that truly protects their privacy.
In a blog post about the launch, Proton's Head of Anti-Abuse and Account Security, Eamonn Maguire, explains that the company has decided to invest outside Switzerland for fear of the looming legal changes.
He wrote: "Because of legal uncertainty around Swiss government proposals to introduce mass surveillance – proposals that have been outlawed in the EU – Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move."
Talking to a Swiss publication after the launch, Proton's CEO Andy Yen confirmed that the proposed changes to the Swiss surveillance law made the company opt for Germany to host Lumo's servers instead. Proton has also confirmed it's developing facilities in Norway.
While the company did not specify that Germany would become the new home of the majority of its infrastructure, Proton confirmed to TechRadar that investing in Europe doesn't equate to leaving Switzerland.
It's worth noting, however, that being based in the EU could make Proton, and similar companies, vulnerable to wider data retention or scanning obligations if proposals like the so-called ProtectEU or Chat Control were to pass.
We approached Proton for clarification on this point, and a company spokesperson pointed out that mandatory data retention has already been ruled illegal multiple times by European courts.
"However, we will, of course, continue to monitor developments in the EU closely, as we do elsewhere," Proton added.
What's next for the Swiss tech privacy industry?
Proton isn't the only provider that has been vocal against what critics have deemed Switzerland's "war against online anonymity."
Another VPN provider, NymVPN, confirmed back in May its intentions to leave Switzerland if the new surveillance rules are enforced.
Talking to TechRadar, Nym's co-founder and COO, Alexis Roussel, expressed support for Proton's decision to find a new home for its private AI chatbot.
He said, "Proton is in a position that they are expanding, so it totally makes sense. You cannot invest in privacy in Switzerland right now."
Roussel also confirmed to TechRadar that the company has already developed a strategy to move its VPN activities outside Switzerland and the EU, though this remains a last resort.
He also explained that because Nym runs on decentralised infrastructure and holds no encryption keys, it won't be affected by the law's encryption provision.
"Depending on how they modify things within the law, this will affect our decision to move. But we would like to resist the ordinance until the end and go to the tribunal," said Roussel.
As reported by Cyberinsider, the secure and private messaging app Session also said that, "while keeping a close eye on the situation," its decentralized structure means its services are less vulnerable to the changes.
Related: Hacker News Discussion
Beware of OpenAI's 'Grantwashing' on AI Harms:
This month, OpenAI announced "up to $2 million" in funding for research studies on AI safety and well-being. At its surface, this may seem generous, but following in the footsteps of other tech giants facing scrutiny over their products' mental health impacts, it's nothing more than grantwashing.
This industry practice commits a pittance to research that is doomed to be ineffective due to information and resources that companies hold back. When grantwashing works, it compromises the search for answers. And that's an insult to anyone whose loved one's death involved chatbots.
OpenAI's pledge came a week after the company's lawyers argued that the company isn't to blame in the death of a California teenager whom ChatGPT encouraged to commit suicide. In its attempt to disclaim responsibility in court, the company even requested a list of invitees to the teen's memorial and video footage of the service and the people there. In the last year, OpenAI and other generative AI companies have been accused of causing numerous deaths and psychotic breaks by encouraging people into suicide, feeding delusions, and giving them risky instructions.
As scientists who study developmental psychology and AI, we agree that society urgently needs better science on AI and mental health. The company has recruited a group of genuinely credible scientists to give them closed-door advice on the issue, like so many other companies accused of causing harm. But OpenAI's funding announcement reveals how small a fig leaf they think will persuade a credulous public.
Look at the size of the grants. High quality public health research on mental health harms requires a sequence of studies, large sample sizes, access to clinical patients, and an ethics safety net that supports people at risk. The median research project grant from the National Institute of Mental Health in 2024 was $642,918. In contrast, OpenAI is offering a measly $5,000 to $100,000 to researchers studying AI and mental health, one sixth of a typical NIMH grant at best.
Despite the good ideas OpenAI suggests, the company is holding back the resource that would contribute most to science on those questions: records about its systems and how people use its products. OpenAI's researchers have purportedly developed ways to identify users who potentially face mental health distress. A well-designed data access program would accelerate the search for answers while preserving privacy and protecting vulnerable users. European regulators are still deciding if OpenAI will face data access requirements under the Digital Services Act, but OpenAI doesn't have to wait for Europe.
We have seen this playbook before from other companies. In 2019, Meta announced a series of $50,000 grants to six scientists studying Instagram, safety, and well-being. Even as the company touted its commitment to science on user well-being, Meta's leaders were pressuring internal researchers to "amend their research to limit Meta's potential liability," according to a recent ruling in the D.C. Superior Court.
Whether or not OpenAI leaders intend to muddy the waters of science, grantwashing hinders technology safety, as one of us recently argued in Science. It adds uncertainty and debate in areas where companies want to avoid liability, and that uncertainty gives the appearance of science. These underfunded studies inevitably produce inconclusive results, forcing other researchers to do more work to clean up the resulting misconceptions.
[...] Two decades of Big Tech funding for safety science has taught us that the grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self regulation from a powerful industry and backpedal calls for change. And journalists quote the corporate lobbyist and move on until the next round of deaths creates another news cycle.
The problem is that we do desperately need better, faster science on technology safety. Companies are pushing out AI products to hundreds of millions of people with limited safety guardrails faster than safety science can match. One idea, proposed by Dr. Alondra Nelson, borrows from the Human Genome Project. In 1990, the project's leadership allocated 3-5% of its annual research budget to independent "ethical, legal, and social inquiry" about genomics. The result was a scientific endeavor that kept on top of emerging risks from genetics, at least at moments when projects had the freedom to challenge the genomics establishment.
[...] We can't say whether specific deaths were caused by ChatGPT or whether generative AI will cause a new wave of mental health crises. The science isn't there yet. The legal cases are ongoing. But we can say that OpenAI's grantwashing is the perfect corporate action to make sure we don't find the answers for years.
The Register reports that UNIX V4, the first with the kernel written in C, has been recovered, restored and run.
The source code and binaries were recovered from a 1970s-vintage nine-track tape and posted to the Internet Archive where it can be downloaded.
It's very small: it contains around 55,000 lines of code, of which about 25,000 lines are in C, with under 1,000 lines of comments. But then, the late Dennis M. Ritchie and co-creator Ken Thompson were very definitely Real Programmers, and as is recorded in ancient wisdom: "Real Programmers don't need comments – the code is obvious."
For those who don't already know:
UNIX started out as a quick hack by two geniuses in their spare time, so that they could use a spare computer – an extremely rare thing in the 1960s – to run a simulation game, written by one of them, that flew the player around a 2D model of the Solar System. It was called Space Travel.
https://therecord.media/south-korea-facial-recognition-phones
South Korea will begin requiring people to submit to facial recognition when signing up for a new mobile phone number in a bid to fight scams, the Ministry of Science and ICT announced Friday.
The effort is meant to block people from illegally registering devices used for identity theft.
The plan reportedly applies to the country's three major mobile carriers and mobile virtual network operators. The new policy takes effect on March 23 after a pilot that will begin this week.
"By comparing the photo on an identification card with the holder's actual face on a real-time basis, we can fully prevent the activation of phones registered under a false name using stolen or fabricated IDs," the ministry reportedly said in a press release.
In August, South Korean officials unveiled a plan to combat voice phishing scams. Harsher penalties for mobile carriers that fail to act sufficiently to prevent such scams were reportedly a central feature of that plan.
South Korea has been plagued by voice phishing scams, with 21,588 cases reported as of November, the ministry said.
In April, South Korea's SK Telecom was hacked and SIM card data belonging to nearly 27 million subscribers was stolen.
Privacy regulators determined the telecom "did not even implement basic access control," allowing hackers to take authentication data and subscriber information on a mass basis.
https://linuxiac.com/phoenix-emerges-as-a-modern-x-server-written-from-scratch-in-zig/
Phoenix is a new X server written from scratch in Zig, aiming to modernize X11 without relying on Xorg code.
Although Wayland has largely replaced Xorg, and most major Linux distributions and desktop environments have either already dropped support for the aging display protocol or are in the process of doing so, efforts to extend Xorg's life or replace it with similar alternatives continue. Recent examples include projects such as XLibre Xserver and Wayback. And now, a new name is joining this group: Phoenix.
It is a new X server project that takes a fundamentally different approach to X11. Written entirely from scratch in the Zig programming language, it is not yet another fork of the Xorg codebase and does not reuse its legacy code. Instead, according to its developers, Phoenix aims to show that the X11 protocol itself is not inherently obsolete and can be implemented in a simpler, safer, and more modern way.
Phoenix is designed for real desktop and professional use, not for full protocol coverage. It supports only the X11 features that modern applications need, including older software like GTK2-based programs. By omitting rarely used or outdated parts, Phoenix keeps things simpler while still supporting many applications.
Right now, Phoenix is still experimental and not ready for daily use. It can run simple, hardware-accelerated apps using GLX, EGL, or Vulkan, but only in a nested setup under another X server. This will stay the case until the project is ready for more demanding use.
On the security side, one of the areas for which Xorg receives the most criticism, Phoenix isolates applications by default, and access to sensitive capabilities such as screen recording or global hotkeys is mediated through explicit permission mechanisms. Importantly, this is done without breaking existing clients, as unauthorized access attempts return dummy data rather than protocol errors.
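That design choice can be illustrated with a small conceptual sketch. The Python below is purely illustrative: Phoenix is written in Zig, and none of these names come from its codebase. The point is only that a denied screen-capture request still returns well-formed (blank) data, so an unaware client keeps working instead of hitting a protocol error.

```python
# Purely conceptual sketch (not Phoenix's Zig code): an unauthorized capture
# request receives well-formed dummy data instead of a protocol error, so a
# legacy client keeps running rather than crashing on an unexpected failure.
BLANK = 0x000000
WHITE = 0xFFFFFF

class CaptureBroker:
    def __init__(self):
        self.granted = set()          # client ids with explicit user consent

    def grant(self, client_id):
        self.granted.add(client_id)

    def capture_screen(self, client_id, width, height):
        if client_id in self.granted:
            return self._real_capture(width, height)
        # Denied: return a blank but valid image; the client cannot tell
        # "permission refused" apart from "nothing on screen".
        return [BLANK] * (width * height)

    def _real_capture(self, width, height):
        return [WHITE] * (width * height)  # stand-in for real framebuffer access

broker = CaptureBroker()
broker.grant("screenshot-tool")
assert broker.capture_screen("screenshot-tool", 2, 2) == [WHITE] * 4
assert broker.capture_screen("unknown-client", 2, 2) == [BLANK] * 4
```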
Under the hood, Phoenix includes a built-in compositor that enables tear-free rendering by default, supports disabling compositing for full-screen applications, and is designed to reduce compositor and vsync latency. Proper multi-monitor support is a priority, allowing different refresh rates, variable refresh rate displays, and future HDR support without relying on a single global framebuffer.
Phoenix is also looking at extending the protocol when needed. For example, new features like per-monitor DPI reporting are planned to ensure apps scale properly across mixed-DPI setups. If needed, Phoenix will add protocol extensions for things like HDR, while still working with existing software.
It is important to make it clear that the project does not aim to replace Xorg. Phoenix deliberately avoids supporting legacy hardware, obscure protocol features, and configurations such as multiple X11 screens or indirect GLX rendering, and focuses entirely on modern systems.
Wayland compatibility is part of the long-term plan. The developers say Phoenix might eventually support Wayland clients directly or use bridging tools to run Wayland-only apps in an X11 environment. Running Phoenix nested under Wayland, as an alternative to Xwayland, is also being considered.
Finally, as I mentioned, the project is in its early stages, and it remains to be seen whether it will mature into production-ready, stable releases and gain wider adoption. Until then, for more information about this new initiative, check here.
The experiment showed that physical violence is not necessary to scare off gulls:
Shouting at seagulls makes them more likely to leave your food alone, research shows.
University of Exeter researchers put a closed Tupperware box of chips on the ground to pique herring gulls' interest.
Once a gull approached, they played one of three recordings: a male voice shouting the words "No, stay away, that's my food", the same voice speaking those words, or the 'neutral' birdsong of a robin.
They tested a total of 61 gulls across nine seaside towns in Cornwall and found that nearly half of those gulls exposed to the shouting voice flew away within a minute.
Only 15% of the gulls exposed to the speaking male voice flew away, while the rest walked away from the food, still sensing danger.
In contrast, 70% of gulls exposed to the robin song stayed near the food for the duration of the experiment.
"We found that urban gulls were more vigilant and pecked less at the food container when we played them a male voice, whether it was speaking or shouting," said Dr Neeltje Boogert of the Centre for Ecology and Conservation at Exeter's Penryn Campus in Cornwall.
"But the difference was that the gulls were more likely to fly away at the shouting and more likely to walk away at the speaking.
"So when trying to scare off a gull that's trying to steal your food, talking might stop them in their tracks but shouting is more effective at making them fly away."
For the recordings, five male volunteers recorded themselves uttering the same phrase in a calm speaking voice and, separately, in a shouting voice; all clips were then adjusted to the same volume. Because the volume was equal, the gulls' differing responses suggest they can detect differences in the acoustic properties of human voices.
"Normally when someone is shouting, it's scary because it's a loud noise, but in this case all the noises were the same volume, and it was just the way the words were being said that was different," said Dr Boogert.
"So it seems that gulls pay attention to the way we say things, which we don't think has been seen before in any wild species, only in those domesticated species that have been bred around humans for generations, such as dogs, pigs and horses."
The experiment was designed to show that physical violence is not necessary to scare off gulls, and the researchers used male voices because most crimes against wildlife are carried out by men.
"Most gulls aren't bold enough to steal food from a person, I think they've become quite vilified," said Dr Boogert.
"What we don't want is people injuring them. They are a species of conservation concern, and this experiment shows there are peaceful ways to deter them that don't involve physical contact."
Journal Reference: Céline M. I. Rémy, Christophoros Zikos, Laura Ann Kelley, Neeltje Janna Boogert et al.; Herring gulls respond to the acoustic properties of men's voices. Biol Lett 1 November 2025; 21 (11): 20250394. https://doi.org/10.1098/rsbl.2025.0394
E-ink enjoyers can upgrade old tablets into part of the desktop experience using a simple server setup
While the traditional computer monitor is a tried-and-true standard, many who use screens insist on a higher standard of readability and eye comfort. For these happy few screen enthusiasts, a recent project and tutorial from software engineer Alireza Alavi offers Linux users the ability to add an E-ink display to their desktop by reusing an existing E-ink tablet.
The project turns an E-ink tablet into a mirrored clone of an existing second display in a desktop setup. Using VNC, a protocol for remote control of a computer over a network, this implementation turns the E-ink tablet into both a display and an input device, opening up options ranging from a primarily reading-and-writing display to a drawing tablet or other screen-as-input-device use cases.
The example video above [in article] shows Alavi using the E-ink display as an additional monitor, first to write his blog post about the VNC protocol, then to read a lengthy document. The tablet runs completely without a wired connection, as VNC sharing happens over the network and thus enables a fully wireless monitor experience. The second screen, seen behind the tablet, reveals the magic by displaying the source of the tablet's output.
Many readers will correctly observe that the latency and lag on the E-ink tablet are a bit higher than desirable for a display. This is not a shortcoming of the streaming connection, but rather of the aging Boox tablet used by Alavi. A newer, more modern E-ink display, such as Modos Tech's upcoming 75Hz E-ink monitor, would get the job done without any such visible latency distractions.
For Linux users wanting to follow along and create a tertiary monitor of their very own, Alavi's blog has all the answers. Setting up a VNC connection between the computer and the tablet display took only around 20 minutes, with help from the Arch Linux wiki.
Very simply put, the order of operations is: install a VNC package of your choice (this project used TigerVNC), set up the list of permissions for the users allowed to connect to your new TigerVNC server being broadcast from the host PC, and set the screen limitations of the stream, ensuring that the host screen to be mirrored is set to the exact same resolution and aspect ratio as the E-ink tablet being streamed to.
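For the host side, a minimal sketch of those steps might look like the following. It is wrapped in Python purely for illustration, and the output name, tablet resolution, and x0vncserver options are assumptions rather than Alavi's actual commands; consult the TigerVNC documentation and the blog post for the real recipe.

```python
# Host-side sketch of the mirroring setup, wrapped in Python for illustration.
# Assumptions (not from Alavi's post): the output name, the tablet's native
# resolution, and the x0vncserver options; check `man x0vncserver` for your
# TigerVNC version. Requires an X11 session with TigerVNC installed.
import os
import subprocess

OUTPUT = "HDMI-1"                # hypothetical name of the monitor to mirror
MODE = "1872x1404"               # assumed E-ink tablet native resolution
PASSWD = os.path.expanduser("~/.vnc/passwd")  # created beforehand with `vncpasswd`

# 1. Pin the mirrored output to the tablet's resolution and aspect ratio.
subprocess.run(["xrandr", "--output", OUTPUT, "--mode", MODE], check=True)

# 2. Share the live X display with TigerVNC's x0vncserver (blocks while serving).
subprocess.run(
    ["x0vncserver", "-display", ":0", "-PasswordFile", PASSWD],
    check=True,
)
```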
From here, connecting to the new TigerVNC server with the E-ink tablet is as simple as installing an Android VNC client and connecting to the correct port and IP address set up previously. Alavi's blog post goes into the nitty-gritty of setting up permissions and port options, as well as how to get the setup to start automatically or manually with scripts.
While this project is very cool, prospective copycats should take note of a few things. First, this specific implementation requires effectively sacrificing one of your existing screens to become a copy of what the tablet will output. This is not a solution for using an E-ink display as a third monitor, though having the E-ink display mirrored in color on a less laggy screen has its own perks for debugging and troubleshooting, among other things. Also, this system requires the E-ink display to be an Android tablet with its own brain that can install and run a VNC client like AVNC; a standalone display or a tablet with an incompatible operating system won't cut the mustard.
However, for those with a need for E-ink and the will to jump through some Linux terminal hoops, this application has serious promise. Be sure to see Alavi's original blog post for a complete setup guide and more notes from the inventor. For those considering the E-ink lifestyle but without the same growing pains, the E-ink monitor market is getting better all the time; Boox's newest 23.5-inch, 1800p color E-ink monitor is an attractive prospect, though the sticker shock of its $1,900 price tag is enough to make many enthusiasts weep.
If your product is even a third as innovative and useful as you claim it is, you shouldn't have to go around trying a little too hard to convince people. The product's usefulness should speak for itself. And you definitely shouldn't be forcing people to use products they've repeatedly told you they don't actually appreciate or want.
LG and Microsoft learned that lesson recently when LG began installing Microsoft's Copilot "AI" assistant on people's televisions, without any way to disable it:
"According to affected users, Copilot appears automatically after installing the latest webOS update on certain LG TV models. The feature shows up on the home screen alongside streaming apps, but unlike Netflix or YouTube, it cannot be uninstalled."
To be clear, this isn't the end of the world. Users can apparently "hide" the app, but people are still generally annoyed at the lack of control, especially coming from two companies with a history of this sort of behavior.
Many people just generally don't like Copilot, much like they didn't really like a lot of the nosier features integrated into Windows 11. Or they don't like being forced to use Copilot when they'd prefer to use ChatGPT or Gemini.
You only have to peruse this Reddit thread to get a sense of the annoyance. You can also head over to the Microsoft forums to get a sense of how Microsoft customers are very very tired of all the forced Copilot integration across Microsoft's other products, even though you can (sometimes) disable the integration.
But "smart" TVs are already a sector where user choice and privacy take a backseat to the primary goal of collecting and monetizing viewer behavior. And LG has been at the forefront of disabling features if you try to disconnect from the internet. So there are justifiable privacy concerns raised over this tight integration (especially in America, which is too corrupt to pass even a baseline internet privacy law).
This is also coming on the heels of widespread backlash over another Microsoft "AI" feature, Recall. Recall takes screenshots of your PC's activity every five seconds, giving you an "explorable timeline of your PC's past," that Microsoft's AI-powered assistant, Copilot, can then help you peruse.
Here, again, there was widespread condemnation over the privacy implications of such tight integration. Microsoft's response was to initially pretend to care, only to double down. It's worth noting that Microsoft's forced AI integration into its half-assed journalism efforts, like MSN, has also been a hot, irresponsible mess. So this is not a company likely to actually listen to its users.
It's not like Microsoft hasn't had some very intimate experiences surrounding the backlash of forcing products down customers' throats. But like most companies, Microsoft knows U.S. consumer protection and antitrust reform has been beaten to a bloody pulp, and despite the Trump administration's hollow and performative whining about the power of "big tech," big tech giants generally have carte blanche to behave like assholes for the foreseeable future, provided they're polite to the dim autocrats in charge.
According to the Oxford English Dictionary, the word recent is defined as "having happened or started only a short time ago." A simple, innocent sounding definition. And yet, in the world of scientific publishing, it may be one of the most elastic terms ever used. What exactly qualifies as "a short time ago"? A few months? A couple of years? The advent of the antibiotic era?
In biomedical literature, "recent" is something of a linguistic chameleon. It appears everywhere: in recent studies, recent evidence, recent trials, recent literature, and so forth. It is a word that conveys urgency and relevance, while neatly sidestepping any commitment to a specific year—much like saying "I'll call you soon" after a first date: reassuring, yet infinitely interpretable. Authors wield it with confidence, often citing research that could have been published in the previous season or the previous century.
Despite its ubiquity, "recent" remains a suspiciously vague descriptor. Readers are expected to blindly trust the author's sense of time. But what happens if we dare to ask the obvious question? What if we take "recent" literally?
In this festive horological investigation, we decided to find out just how recent the recent studies really are. Armed with curiosity, a calendar, and a healthy disregard for academic solemnity, we set out to measure the actual age of those so-called fresh references. The results may not change the course of science, but they might make you raise an eyebrow the next time someone cites a recent paper from the past decade.
On 5 June 2025, we—that is, the junior author, while the senior author remained in supervisory orbit—performed a structured search in PubMed using the following terms: "recent advance*" or "recent analysis" or "recent article*" or "recent data" or "recent development" or "recent evidence" or "recent finding*" or "recent insights" or "recent investigation*" or "recent literature" or "recent paper*" or "recent progress" or "recent report*" or "recent research" or "recent result*" or "recent review*" or "recent study" or "recent studies" or "recent trial*," or "recent work*." These terms were selected on the basis that they appear frequently in the biomedical literature, convey an aura of immediacy, and are ideal for concealing the fact that the authors are citing papers from before the invention of UpToDate.
To avoid skewing the results towards only the freshest of publications (and therefore ruining the fun), we sorted the search results by best match rather than by date. This method added a touch of algorithmic chaos and ensured a more diverse selection of articles. We then included articles progressively until reaching a sample size of 1000, a number both sufficiently round and statistically unnecessary, but pleasing to the eye.
We—again, the junior author, while the senior author offered moral support and the occasional pat on the back—reviewed the full text of each article to identify expressions involving the word "recent," ensuring they were directly linked to a bibliographic reference. [...]
For every eligible publication, we—still the junior author, whose dedication was inversely proportional to his contract stability—manually recorded the following: the doi of the article, its title, the journal of publication, the year it was published, the country where the article's first author was based, the broad medical specialty to which the article belonged, the exact "recent" expression used, the reference cited immediately after that expression, the year in which that reference was published, and the journal's impact factor as of 2024 (as reported in the Journal Citation Reports, Clarivate Analytics). [...]
[...] The final analysis comprised 1000 articles. The time lag between the citing article and the referenced "recent" publication ranged from 0 to 37 years, with a mean of 5.53 years (standard deviation 5.29) and a median of 4 years (interquartile range 2-7). The most frequent citation lag was one year, which was observed for 159 publications. The distribution was right skewed (skewness=1.80), with high kurtosis (4.09), indicating a concentration of values around the lower end with a long tail of much older references. A total of 177 articles had a citation lag of 10 years or longer, 26 articles had a lag of 20 years or longer, and four articles cited references that were at least 30 years old. The maximum lag observed was 37 years, found in one particularly ambitious case.
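For readers who want to see the arithmetic, the lag is simply the citing article's publication year minus the cited reference's publication year; a toy Python example with made-up records (not the study's data) shows how the summary statistics fall out.

```python
# Illustrative computation of citation-lag statistics (made-up sample data,
# not the 1000-article dataset from the study).
import statistics

# (citing_year, cited_reference_year) pairs recorded for each article
records = [(2024, 2023), (2023, 2019), (2025, 2025), (2022, 2008), (2024, 1987)]

lags = [citing - cited for citing, cited in records]

print("mean lag:  ", round(statistics.mean(lags), 2))
print("median lag:", statistics.median(lags))
print("stdev:     ", round(statistics.stdev(lags), 2))
print("max lag:   ", max(lags), "years")
print(">=10 years:", sum(lag >= 10 for lag in lags), "articles")
```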
[...] Our investigation confirms what many readers have long suspected, but none have dared to quantify: in the land of biomedical publishing, "recent" is less a measure of time than a narrative device. With a mean citation lag of 5.5 years and a median of 4, the average "recent" reference is just about old enough to have survived two guideline updates and a systematic review debunking its relevance. Our findings align with longstanding concerns over vague or imprecise terminology in scientific writing, which technical editors have highlighted for decades [3].
To be fair, some references were genuinely fresh—barely out of the editorial oven. But then there were the mavericks: 177 articles cited works 10 years or older, 26 drew on sources more than 20 years old, and in a moment of true historical boldness, four articles described "recent" studies that predated the launch of the first iPhone. The record holder clocked in at a 37 year lag, leaving us to wonder whether the authors confused recent with renaissance.
[...] The lexicon of "recent" expressions also revealed fascinating differences. Recent publication and recent article showed reassuringly tight timelines, suggesting that for these terms, recent still means what the dictionary intended. Recent trial, recent guidelines, recent paper, and recent result also maintained a commendable sense of urgency, as if they had checked the calendar before going to press. At the other end of the spectrum, recent study, the most commonly used expression, behaved more like recent-ish study, with a median lag of 5.5 years and a long tail stretching into academic antiquity. Recent discovery and recent approach performed even worse, reinforcing the suspicion that some authors consider "recent" a purely ornamental term. Readers may be advised to handle these terms with protective gloves.
[...] In this study, we found that the term "recent" in biomedical literature can refer to anything from last month's preprint to a study published before the invention of the mobile phone. Despite the rhetorical urgency such expressions convey, the actual citation lag often suggests a more relaxed interpretation of time. Although some fields and phrases showed more temporal discipline than others, the overall picture is one of creative elasticity.
The use of vague temporal language appears to be a global constant, transcending specialties, regions, and decades. Our findings do not call for the abolition of the word "recent," but perhaps for a collective pause before using it: a moment to consider whether it is truly recent or just rhetorically convenient. Authors may continue to deploy "recent" freely, but readers and reviewers might want to consider whether it is recent enough to matter.
Journal Reference: BMJ 2025; 391 doi: https://doi.org/10.1136/bmj-2025-086941 (Published 11 December 2025)
[Ed. note: Dec. 30 - The headline has been updated to reflect the fact that the Microsoft researcher who posted the original LinkedIn post stating they want to rewrite all Windows code in Rust later qualified his statement by saying that this is a research project --hubie]
The Register reports that Microsoft wants to replace all of its C and C++ code bases with Rust rewrites by 2030, developing new technology to do the translation along the way.
"Our strategy is to combine AI and Algorithms to rewrite Microsoft's largest codebases," he added. "Our North Star is '1 engineer, 1 month, 1 million lines of code.'"
The article goes on to quote much management-speak drivel from official Microsoft sources making grand claims about magic bullets and the end of all known software vulnerabilities with many orders of magnitude productivity improvements promised into the bargain.
Unlike C and C++, Rust is a memory-safe language, meaning it uses automated memory management to avoid out-of-bounds reads and writes, and use-after-free errors, as both offer attackers a chance to control devices. In recent years, governments have called for universal adoption of memory-safe languages – and especially Rust – to improve software security.
Automated memory management? Is the magic not in the compiler rather than the runtime? Do these people even know what they're talking about? And anyway, isn't C++2.0 going to solve all problems and be faster than Rust and better than Python? It'll be out Real Soon Now(TM). Plus you'll only have to half-rewrite your code.
Are we witnessing yet another expensive wild goose chase from Microsoft? Windows Longhorn, anyone?
Swearing boosts performance by helping people feel focused, disinhibited, study finds:
Letting out a swear word in a moment of frustration can feel good. Now, research suggests that it can be good for you, too: Swearing can boost people's physical performance by helping them overcome their inhibitions and push themselves harder on tests of strength and endurance, according to research published by the American Psychological Association.
"In many situations, people hold themselves back—consciously or unconsciously—from using their full strength," said study author Richard Stephens, PhD, of Keele University in the U.K. "Swearing is an easily available way to help yourself feel focused, confident and less distracted, and 'go for it' a little more."
Previous research by Stephens and others has found that when people swear, they perform better on many physical challenges, including how long they can keep their hand in ice water and how long they can support their body weight during a chair push-up exercise.
"That is now a well replicated, reliable finding," Stephens said. "But the question is—how is swearing helping us? What's the psychological mechanism?"
He and his colleagues believed that it might be that swearing puts people in a disinhibited state of mind. "By swearing, we throw off social constraint and allow ourselves to push harder in different situations," he said.
To test this, the researchers conducted two experiments with 192 total participants. In each, they asked participants to repeat either a swear word of their choice, or a neutral word, every two seconds while doing a chair push-up. After completing the chair push-up challenge, participants answered questions about their mental state during the task. The questions included measures of different mental states linked to disinhibition, including how much positive emotion participants felt, how funny they found the situation, how distracted they felt and how self-confident they felt. The questions also included a measure of psychological "flow," a state in which people become immersed in an activity in a pleasant, focused way.
Overall, and confirming earlier research, the researchers found that participants who swore during the chair push-up task were able to support their body weight significantly longer than those who repeated a neutral word. Combining the results of the two experiments as well as a previous experiment conducted as part of an earlier study, they also found that this difference could be explained by increases in participants' reports of psychological flow, distraction and self-confidence, all important aspects of disinhibition.
"These findings help explain why swearing is so commonplace," said Stephens. "Swearing is literally a calorie neutral, drug free, low cost, readily available tool at our disposal for when we need a boost in performance."
Journal Reference: Stephens, R., Dowber, H., Richardson, C., & Washmuth, N. B. (2025). "Don't hold back": Swearing improves strength through state disinhibition. American Psychologist. Advance online publication. https://doi.org/10.1037/amp0001650 [PDF]