Scientists have been trying to develop safe and sustainable materials that can replace traditional plastics, which are non-sustainable and harm the environment. While some recyclable and biodegradable plastics exist, one big problem remains. Current biodegradable plastics like PLA often find their way into the ocean where they cannot be degraded because they are water insoluble. As a result, microplastics—plastic bits smaller than 5 mm—are harming aquatic life and finding their way into the food chain, including our own bodies.
In their new study, Aida and his team focused on solving this problem with supramolecular plastics—polymers with structures held together by reversible interactions. The new plastics were made by combining two ionic monomers that form cross-linked salt bridges, which provide strength and flexibility. In the initial tests, one of the monomers was a common food additive called sodium hexametaphosphate and the other was any of several guanidinium ion-based monomers. Both monomers can be metabolized by bacteria, ensuring biodegradability once the plastic is dissolved into its components.
"While the reversible nature of the bonds in supramolecular plastics has been thought to make them weak and unstable," says Aida, "our new materials are just the opposite." In the new material, the salt-bridge structure is irreversible unless exposed to electrolytes like those found in seawater. The key discovery was how to create these selectively irreversible cross-links.
As with oil and water, after mixing the two monomers together in water, the researchers observed two separated liquids. One was thick and viscous and contained the important structural cross-linked salt bridges, while the other was watery and contained salt ions. For example, when sodium hexametaphosphate and alkyl diguanidinium sulfate were used, sodium sulfate salt was expelled into the watery layer. The final plastic, alkyl SP2, was made by drying what remained in the thick, viscous layer.
The "desalting" turned out to be the critical step; without it, the resulting dried material was a brittle crystal, unfit for use. Resalting the plastic by placing it in salt water caused the interactions to reverse and the plastic's structure destabilized in a matter of hours. Thus, having created a strong and durable plastic that can still be dissolved under certain conditions, the researchers next tested the plastic's quality.
The new plastics are non-toxic and non-flammable (meaning no CO2 emissions) and can be reshaped at temperatures above 120°C like other thermoplastics. By testing different types of guanidinium sulfates, the team was able to generate plastics of varying hardness and tensile strength, all comparable to or better than conventional plastics. This means the new type of plastic can be customized as needed: hard scratch-resistant plastics, rubbery silicone-like plastics, strong weight-bearing plastics, and low-tensile flexible plastics are all possible. The researchers also created ocean-degradable plastics using polysaccharides that form cross-linked salt bridges with guanidinium monomers. Plastics like these can be used in 3D printing as well as in medical and health-related applications.
Lastly, the researchers investigated the new plastic's recyclability and biodegradability. After dissolving the initial new plastic in salt water, they were able to recover 91% of the hexametaphosphate and 82% of the guanidinium as powders, indicating that recycling is easy and efficient. In soil, sheets of the new plastic degraded completely over the course of 10 days, supplying the soil with phosphorus and nitrogen much like a fertilizer.
"With this new material, we have created a new family of plastics that are strong, stable, recyclable, can serve multiple functions, and importantly, do not generate microplastics," says Aida.
Journal Reference: Cheng et al. (2024) Mechanically strong yet metabolizable supramolecular plastics by desalting upon phase separation. Science. doi: 10.1126/science.ado1782
Players of Sacred and Gothic games can rejoice once again:
Vintage game emulation just got another slight boost, thanks to the release of D7VK version 1.1. This Direct3D-to-Vulkan translation layer makes it possible to run old Direct3D 7 games on contemporary hardware, and it got some meaty improvements, including a new front-end, and experimental support for Direct3D 6.
In case you're a little confused, D7VK is a translation layer that converts Direct3D 7 calls into Direct3D 9 calls handled by Proton's DXVK layer, thereby taking advantage of DXVK's tried-and-true infrastructure and software ecosystem. Being a mere translation layer, it carries only a minor performance penalty and can run several times faster than a full reimplementation like WineD3D.
Alongside a new front-end, the 1.1 update adds Direct3D 6 support as an experimental option. The author mentions that, judging by its documentation, adding this API shouldn't be a lot of work. That's in sharp contrast to the lawless lands of Direct3D version 5 and under. Even as it stands, in their own words, "D3D7 is a land of highly cursed interoperability", with many games mixing Direct3D calls with older Windows APIs like DirectDraw and even GDI for 2D graphics.
In turn, this means that support for games is hit-or-miss, depending on how "hacky" the game's original programming was. For example, this latest version adds a workaround specific to Sacrifice, which uses a wholly unsupported depth-buffer format. Likewise, support for strided primitive rendering makes Sacred playable, and fixes to mipmap swapping let gamers once again enjoy Gothic, Gothic 2, and Star Trek DS9: The Fallen as if they were just released.
Many popular Direct3D 6 titles have seen re-releases using modern APIs, including Final Fantasy VIII, Resident Evil 2, and Grand Theft Auto 2.
Additional fixes for games include workarounds for Conquest: Frontier Wars, Tomb Raider Chronicles, Drakan: Order of the Flame, Earth 2150, Tachyon: The Fringe, and Arabian Nights. If you have a particular game that doesn't run well, visit the issues section on the D7VK GitHub to lend your feedback. In the meantime, if your game doesn't run, or is too old to use even Direct3D 7, you can use Wine's WineD3D instead.
WineD3D ironically also works on Windows itself, making older games easy to run on contemporary versions of the OS. If your vintage title used old Glide or OpenGL instead, the author recommends nGlide.
https://mashable.com/article/study-ai-slop-youtube
If it feels like there's a lot of AI slop on YouTube, that's because there's a lot of AI slop on YouTube.
New research from video-editing company Kapwing, reported by the Guardian, found that more than one in every five videos that the YouTube Shorts algorithm shows new users is low-quality, AI-generated content.
One of the most interesting parts of the Kapwing study is that of the first 500 videos the YouTube Shorts algorithm served to a brand-new, untouched account, 104 were AI-generated and 165 were brainrot — a whopping 21 percent and 33 percent, respectively.
Of course, the love of AI slop differs by country. Kapwing found that AI slop channels in Spain have a combined 20.22 million subscribers, more than any other country, though Spain has fewer AI slop channels among its top 100 channels than other countries do. The U.S. has nine such channels among its top 100, and the third-most slop subscribers at 14.47 million.
YouTube isn't the only social media beast whose content is falling to the depths of AI slop despair, but the Kapwing study makes it clear that AI slop isn't going anywhere. As Mashable's Tim Marcin reported earlier this month, AI slop is taking over our feeds, from fake animals on surveillance tapes to heavy machinery cleaning barnacles off whales.
In June, U.S. insurance giant Aflac disclosed a data breach where hackers stole customers' personal information, including Social Security numbers and health information, without saying how many victims were affected.
On Tuesday, the company confirmed it has begun notifying around 22.65 million people whose data was stolen during the cyberattack.
In a filing [PDF] with the Texas attorney general, Aflac said that the stolen data includes customer names, dates of birth, home addresses, government-issued ID numbers (such as passport, state ID card, and driver's license numbers), and Social Security numbers, as well as medical and health insurance information.
And, in a filing with the Iowa attorney general, Aflac said that the cybercriminals responsible for the breach "may be affiliated with a known cyber-criminal organization; federal law enforcement and third-party cybersecurity experts have indicated that this group may have been targeting the insurance industry at large."
Given that Scattered Spider, an amorphous collective of primarily young English-speaking hackers, was targeting the insurance industry at the time of the breach, it's likely that this is the group Aflac is referring to.
A spokesperson for Aflac did not respond to TechCrunch's request for comment.
According to its official website, the company has around 50 million customers.
Some brains perform a complicated assessment while others seem to take a shortcut:
Are you a social savant who easily reads people's emotions? Or are you someone who leaves an interaction with an unclear understanding of another person's emotional state?
New UC Berkeley research suggests those differences stem from a fundamental way our brains compute facial and contextual details, potentially explaining why some people are better at reading the room than others — sometimes, much better.
Human brains use information from faces and background context, such as the location or expressions of bystanders, when making sense of a scene and assessing someone's emotional state. If someone's facial expression is clear, but the emotional information in the context is unclear, most people's brains will heavily weigh the clear facial expression and minimize the importance of the background context. Conversely, if a facial expression is ambiguous but the background context provides strong cues of how a person feels, they'll rely more on the context to understand the person's emotions.
Think of it like a close-up photo of a person crying. Without background context, you might assume they're sad. But with context — a wedding altar, perhaps — the meaning shifts significantly.
It adds up to a complex statistical assessment that weighs different cues based on their ambiguity.
But while most people are naturally able to make those judgment calls, Berkeley psychologists say that others seemingly treat every piece of information equally. This discrepancy between complex calculus and simple averages might explain our vast differences in understanding emotions, said Jefferson Ortega, lead author of the study published today (Dec. 16) in Nature Communications.
"We don't know exactly why these differences occur," said Ortega, a psychology Ph.D. student. "But the idea is that some people might use this more simplistic integration strategy because it's less cognitively demanding, or it could also be due to underlying cognitive deficits."
Ortega's team had 944 participants continuously infer the mood of a person in a series of videos. He likened it to a video call: Some of the clips contained hazy backgrounds — like blurring your background in a Zoom meeting. Others had hazy faces and clear context. This allowed his team to isolate the emotional information people get from a person's face and body and the information they get from the context.
Using the participants' scene assessments from those two conditions, Ortega used a model to predict what rating they would provide when they viewed all of the scene details — what he called the "ground truth."
He wanted to know if people really weighed different inputs differently, valuing facial expressions more when backgrounds were blurred, or backgrounds more when the faces were fuzzy. This process, called Bayesian integration, is a statistical way of combining different types of information weighted by their ambiguity.
He expected everyone would weigh the ambiguities, decide which field to rely more on, and make an assessment. That was true in about 70% of cases.
However, instead of assessing cue ambiguity, the remaining 30% of participants used a more simplistic strategy that essentially averaged the two cues.
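The contrast between the two strategies can be illustrated with a small sketch. The numbers below are hypothetical, purely for illustration and not from the study: each cue (face, context) provides a mood rating plus a variance expressing its ambiguity, and Bayesian integration weights each cue by its inverse variance, while the simplistic strategy ignores reliability entirely.

```python
# Hypothetical cue values, purely illustrative (not from the study):
# each cue provides an emotion rating plus a variance expressing its ambiguity.
face_rating, face_var = 7.0, 0.5        # clear facial expression -> low variance
context_rating, context_var = 3.0, 4.0  # hazy background context -> high variance

# Bayesian (inverse-variance-weighted) integration: the clearer cue dominates.
w_face, w_context = 1 / face_var, 1 / context_var
bayes = (w_face * face_rating + w_context * context_rating) / (w_face + w_context)

# Simplistic strategy: treat both cues equally, regardless of ambiguity.
simple = (face_rating + context_rating) / 2

print(f"Bayesian estimate: {bayes:.2f}")   # ~6.56, pulled toward the clear face cue
print(f"Simple average:    {simple:.2f}")  # 5.00, blind to cue reliability
```

Swap the variances (sharp context, blurry face) and the Bayesian estimate swings toward the context cue instead, while the simple average stays put, which is the behavioral signature the researchers looked for.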
"It was very surprising," Ortega said, adding that it's less cognitively demanding to take simple averages than to weigh different factors more or less heavily almost instantly. "The computational mechanisms — the algorithm that the brain uses to do that — is not well understood. That's where the motivation came for this paper. It's just an amazing feat."
[...] "Some observers are very good at integrating context and facial expressions to understand emotions," Whitney said of the strong individual differences shown in Ortega's research. "And some folks are not so good at it."
Journal Reference: Ortega, J., Murai, Y. & Whitney, D. Integration of affective cues in context-rich and dynamic scenes varies across individuals. Nat Commun (2025). https://doi.org/10.1038/s41467-025-67466-1
Best wishes to the SoylentNews community, whether you've just gotten here, been here since the early days, or perhaps even come back from an extended absence. This site exists because of your support and participation. Keep submitting interesting articles and posting your thoughts or topics of discussion in your journal. If you've never posted in your journal before, well there's a ready-made New Year's Resolution for you!
Special thanks goes out to those who volunteer and contribute their time and resources behind the scenes to maintain the existence of the Soylent Phoenix corporation, provide us with hardware and hosting services, fix our code, edit our stories, and just generally keep things running day in and day out.
* . *
. * . *
. *
* .
. * *
H A P P Y N E W Y E A R
2 0 2 6
. * *
* .
* .
. * . *
* . *
##################################
# #
# SOYLENTNEWS STATUS: ONLINE #
# TIME INDEX: 00:00:00 #
# EVENT FLAG: CELEBRATION #
# #
# WELCOME TO THE FUTURE #
# #
##################################
Just a few days ago, we reported on Jolla's push for a new Linux phone, which needed at least 2,000 pre-orders to move forward. Now, only days later, there's good news: interest in the device has surpassed all expectations.
Jolla's community-funded smartphone project has cleared its production threshold, securing more than 3,200 pre-orders. The strong response ensures that the new Linux phone, developed under the Do It Together (DIT) model, will move forward into manufacturing, with Batch #1 already sold out and Batch #2 now available.
The device is positioned as an independent European Linux phone shaped directly by its users. Pre-orders require a €99 refundable deposit, deducted from the final price of €549 for Batch #2. Markets include the EU, UK, Norway, and Switzerland. According to Jolla, Batch #1 sold out in under 48 hours.
Let's recall the device's hardware specifications. It features a 6.36-inch Full HD AMOLED display, 12GB of RAM, 256GB of storage with microSD expansion, and a high-performance 5G MediaTek platform, powered by Sailfish OS, a privacy-focused Linux-based mobile operating system that offers an alternative to mainstream Android and iOS.
The phone also includes a 50MP main camera, a user-replaceable 5,500mAh battery, dual nano-SIM support, Wi-Fi 6, NFC, and a fingerprint reader. Jolla guarantees a minimum of five years of OS updates and availability of spare components, including back covers and batteries.
The pre-order campaign remains open until January 4, 2026, but once again, the funding goal has already been exceeded by a wide margin, reaching roughly 160 percent. Production will commence once Batch #1 is delivered, with first units expected by the end of the first half of 2026.
For more information, visit Jolla's website.
Previously: New Jolla Phone Now Available for Pre-Order as an Independent Linux Phone
The Trump administration's decision to close a world-leading research centre for atmospheric science is a blow to weather forecasting and climate modelling that could leave humanity more exposed to the impacts of global warming.
In a statement to USA Today, White House official Russ Vought said the National Center for Atmospheric Research (NCAR) is a source of "climate alarmism" and will be broken up. "Green new scam research" will be eliminated, while "vital functions" like weather modelling and supercomputing will be moved elsewhere, the White House said.
NCAR's models underpin the reports of the United Nations' Intergovernmental Panel on Climate Change, which countries rely on for decisions about how to reduce carbon emissions and adapt to extreme weather.
"Shutting it down would lead to greater uncertainty about what our climate future might be and leave us less able to prepare effectively," says Michael Meredith at the British Antarctic Survey. "It's hard to see this as anything other than shooting the messenger."
NCAR was started in 1960 to facilitate atmospheric science at a scale too large for individual universities. Its 830 employees are involved in research "from the ocean floor to the Sun's core", according to its unofficial motto, with programmes to monitor everything from flooding and wildfires to space weather.
At its hilltop laboratory in the Colorado Rockies, NCAR invented the GPS dropsonde, a sensor-laden device that is dropped into hurricanes, revolutionising our understanding of tropical storms. Its researchers developed wind-shear warning systems for airports that have prevented countless crashes.
But perhaps its greatest contribution has been providing data, modelling and supercomputing to other researchers. Weather Underground, which in the 1990s was one of the first to offer local forecasts online, wouldn't have existed without software and weather data from NCAR, according to its founder, meteorologist Jeff Masters.
NCAR develops and administers the Weather Research and Forecasting Model, which is widely used for both day-to-day forecasting and the study of regional climates. It also collaborates with the US National Oceanic and Atmospheric Administration to advance weather modelling, especially for predicting severe storms.
If this work is disrupted, it could halt improvements to forecasts on weather apps and television news, at a time when extreme weather is getting more frequent. Shutting down NCAR is like if, "on the eve of world war two, we decided to stop funding R&D into weapons", says Masters.
[...] NCAR administers the Community Earth System Model (CESM), the first global climate model designed for universities. CESM has supported a huge variety of research, from estimates of current global carbon emissions to future changes to ocean currents, heatwave frequency and glacier and sea ice melt.
"It's probably the most-used model in the world," says Richard Rood at the University of Michigan.
NCAR holds biannual meetings with users to decide how to improve the model, which can be run on its servers or downloaded and operated locally. Its closure is likely to end the further development of CESM, as well as maintenance to fix bugs.
[...] Its aircraft help monitor air pollution and calibrate satellite instruments, according to Rood.
Its research on aerosols would be vital to understanding the effects of geoengineering, he adds. Schemes like spreading aerosols to block sunlight have been proposed to avoid abrupt changes in the climate.
"Getting rid of climate research like this would really have us flying blind, more blindly, into decisions about geoengineering, as well as tipping points," says Rood.
A California-based aerospace startup, Reflect Orbital, has ignited intense debate within the scientific community by proposing an ambitious plan to "sell sunlight" using massive mirrors placed in low Earth orbit:
The company's concept involves deploying large reflective satellites that could redirect sunlight onto specific locations on Earth during nighttime hours.
According to Live Science Plus, this technology could provide artificial illumination to extend daylight, boost agricultural productivity, or allow solar panels to operate after sunset.
Reflect Orbital has filed an application with the U.S. Federal Communications Commission to launch its first experimental satellite, known as EARENDIL-1, as early as 2026. If approved and expanded, the company envisions deploying as many as 4,000 orbital mirrors by the end of the decade.
Each mirror would reportedly unfold to approximately 59 feet (18 meters) across and could illuminate an area on Earth roughly 3 miles (5 kilometers) wide.
Reflect Orbital claims the reflected light could be "up to four times brighter than the full moon," with future iterations potentially becoming even larger and more powerful.
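A back-of-envelope check makes these two figures mutually consistent. The orbital altitude below is an assumption (a typical ~600 km low Earth orbit, not stated in the article); the mirror size and brightness claims come from the article.

```python
# Back-of-envelope check of the company's figures. The altitude is an
# assumption (~600 km low Earth orbit); the rest comes from the article.
SUN_ANGULAR_DIAMETER_RAD = 0.0093  # the Sun subtends ~0.53 degrees
SOLAR_ILLUMINANCE_LUX = 133_000    # direct overhead sunlight
FULL_MOON_LUX = 0.3                # a bright full moon, roughly

altitude_m = 600_000
mirror_diameter_m = 18

# A flat mirror cannot beat the Sun's own divergence, so the ground spot is
# at least altitude * (solar angular diameter) across.
spot_diameter_m = altitude_m * SUN_ANGULAR_DIAMETER_RAD

# Illuminance scales with the ratio of mirror area to spot area.
ground_lux = SOLAR_ILLUMINANCE_LUX * (mirror_diameter_m / spot_diameter_m) ** 2

print(f"spot diameter: {spot_diameter_m / 1000:.1f} km")           # ~5.6 km
print(f"ground illuminance: {ground_lux:.1f} lux")                 # ~1.4 lux
print(f"multiple of full moon: {ground_lux / FULL_MOON_LUX:.1f}x") # a few times
```

Under these assumptions, the minimum spot is about 5.6 km wide (matching the quoted "roughly 3 miles") and the illumination lands in the single-digit multiples of full moonlight, consistent with the "up to four times brighter" claim.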
https://therecord.media/spotify-disables-scraping-annas
Spotify responded on Monday to an open-source group's decision to publish files over the weekend containing 86 million tracks scraped from the music streaming platform.
Anna's Archive, which calls itself the "largest truly open library in human history," said on Saturday that it discovered a way to scrape Spotify's files and subsequently released a database of metadata and songs.
A spokesperson for Spotify told Recorded Future News that it "has identified and disabled the nefarious user accounts that engaged in unlawful scraping."
"We've implemented new safeguards for these types of anti-copyright attacks and are actively monitoring for suspicious behavior," the spokesperson said. "Since day one, we have stood with the artist community against piracy, and we are actively working with our industry partners to protect creators and defend their rights."
The spokesperson added that Anna's Archive did not contact Spotify before publishing the files, and said the company does not consider the incident a "hack" of Spotify. The people behind the leaked database systematically violated Spotify's terms by stream-ripping music from the platform over a period of months, the spokesperson said.
They did this through user accounts set up by a third party and not by accessing Spotify's business systems, they added.
Anna's Archive published a blog post about the cache this weekend, writing that while it typically focuses its efforts on text, its mission to preserve humanity's knowledge and culture "doesn't distinguish among media types."
"Sometimes an opportunity comes along outside of text. This is such a case. A while ago, we discovered a way to scrape Spotify at scale. We saw a role for us here to build a music archive primarily aimed at preservation," they said.
"This Spotify scrape is our humble attempt to start such a 'preservation archive' for music. Of course Spotify doesn't have all the music in the world, but it's a great start."
While the full release contains a music metadata database with 256 million tracks, Anna's Archive put together a bulk file a little under 300 terabytes in size featuring 86 million music files that account for about 99.6% of all listens on Spotify. There is another smaller file featuring the top 10,000 most popular songs.
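A quick arithmetic check shows the reported bulk-file size and track count are mutually plausible; both input figures are taken from the article, and the bitrate is an assumed typical streaming rate.

```python
# Quick sanity check on the reported figures (both taken from the article).
total_bytes = 300e12        # "a little under 300 terabytes"
track_count = 86_000_000    # 86 million music files

avg_mb = total_bytes / track_count / 1e6
print(f"average size per track: {avg_mb:.1f} MB")  # ~3.5 MB

# At an assumed typical streaming bitrate of ~160 kbit/s (20 kB/s),
# ~3.5 MB corresponds to roughly a three-minute track.
minutes = (total_bytes / track_count) / 20_000 / 60
print(f"implied track length at 160 kbit/s: {minutes:.1f} minutes")  # ~2.9
```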
The files cover all music posted on Spotify from 2007 to July 2025. Anna's Archive called it "by far the largest music metadata database that is publicly available."
"With your help, humanity's musical heritage will be forever protected from destruction by natural disasters, wars, budget cuts, and other catastrophes," the organization said.
The blog post outlines distinct trends from Spotify data. The top three songs on Spotify — Billie Eilish's "Birds of a Feather," Lady Gaga's "Die with a Smile" and Bad Bunny's "DtMF" — have a higher total stream count than the bottom 20-100 million songs combined.
Anna's Archive, which is banned in several countries for its repeated copyright violations, was created in the wake of the law enforcement shutdown of Z-Library in 2022. The Justice Department arrested and charged two Russian nationals in 2022 for running Z-Library, which at the time was "the world's largest library" and claimed to have at least 11 million e-books for download.
Anna's Archive emerged days after Z-Library was shut down and aggregated records from that site as well as several other free online libraries like the Internet Archive, Library Genesis and Sci-Hub.
As of December, Anna's Archive has more than 61 million books and 95 million papers. Copyright holders in multiple countries have tried to sue the organization, and Google in November said it removed nearly 800 million links to Anna's Archive from its search engine after publishers issued takedown requests.
Proton has confirmed the company has begun moving out of Switzerland due to "legal uncertainty" over the newly proposed surveillance law.
Proton's newly launched privacy-first AI chatbot, Lumo, has become the first product to change home. Yet "investing in Europe does not equate to leaving Switzerland," a company spokesperson told TechRadar, amid rumors it's exiting the country for good.
The firm behind one of the best VPN and encrypted email services has been very critical of the Swiss government's proposed amendment of its surveillance law since the beginning, already sharing plans to quit Switzerland back in May.
If it passes, the Ordinance on the Surveillance of Correspondence by Post and Telecommunications (OSCPT) will introduce new obligations for virtual private networks (VPNs), messaging apps, and social networks. These measures include mandatory user identification and data retention of up to six months for all services with at least 5,000 users. Providers will also be required to decrypt communications upon the authorities' request, should they hold the encryption keys.
Lumo – the first to go
Proton launched its ChatGPT competitor, Lumo, in July 2025, to give its users an alternative to Big Tech solutions that truly protect their privacy.
In a blog post about the launch, Proton's Head of Anti-Abuse and Account Security, Eamonn Maguire, explains that the company has decided to invest outside Switzerland for fear of the looming legal changes.
He wrote: "Because of legal uncertainty around Swiss government proposals to introduce mass surveillance – proposals that have been outlawed in the EU – Proton is moving most of its physical infrastructure out of Switzerland. Lumo will be the first product to move."
Talking to a Swiss publication after the launch, Proton's CEO Andy Yen confirmed that the proposed changes to the Swiss surveillance law led the company to choose Germany to host Lumo's servers. Proton has also confirmed it's developing facilities in Norway.
While the company did not specify that Germany would become the new home of the majority of its infrastructure, Proton confirmed to TechRadar that investing in Europe doesn't equate to leaving Switzerland.
It's worth noting, however, that being based in the EU could make Proton, and similar companies, vulnerable to wider data retention or scanning obligations if proposals like the so-called ProtectEU or Chat Control were to pass.
We approached Proton for clarification on this point, and a company spokesperson pointed out that mandatory data retention has already been ruled illegal multiple times by European courts.
"However, we will, of course, continue to monitor developments in the EU closely, as we do elsewhere," Proton added.
What's next for the Swiss tech privacy industry?
Proton isn't the only provider that has been vocal against what critics have deemed Switzerland's "war against online anonymity."
Another VPN provider, NymVPN, confirmed back in May its intentions to leave Switzerland if the new surveillance rules are enforced.
Talking to TechRadar, Nym's co-founder and COO, Alexis Roussel, shared support for Proton's decision to find a new home for its private AI chatbot.
He said, "Proton is in a position that they are expanding, so it totally makes sense. You cannot invest in privacy in Switzerland right now."
Roussel also confirmed to TechRadar that the company has already developed a strategy to move its VPN activities outside Switzerland and the EU. Yet, this remains the last resort.
He also explained that because Nym runs on a decentralised infrastructure, it won't be affected by the encryption provision, as the company doesn't hold any encryption keys.
"Depending on how they modify things within the law, this will affect our decision to move. But we would like to resist the ordinance until the end and go to the tribunal," said Roussel.
As reported by Cyberinsider, the secure and private messaging app Session also said that, while it is "keeping a close eye on the situation," its decentralized structure means its services are less vulnerable to the changes.
Related: Hacker News Discussion
Beware of OpenAI's 'Grantwashing' on AI Harms:
This month, OpenAI announced "up to $2 million" in funding for research studies on AI safety and well-being. At its surface, this may seem generous, but following in the footsteps of other tech giants facing scrutiny over their products' mental health impacts, it's nothing more than grantwashing.
This industry practice commits a pittance to research that is doomed to be ineffective due to information and resources that companies hold back. When grantwashing works, it compromises the search for answers. And that's an insult to anyone whose loved one's death involved chatbots.
OpenAI's pledge came a week after the company's lawyers argued that the company isn't to blame in the death of a California teenager who ChatGPT encouraged to commit suicide. In the company's attempt to disclaim responsibility in court, they even requested a list of invitees to the teen's memorial and video footage of the service and the people there. In the last year, OpenAI and other generative AI companies have been accused of causing numerous deaths and psychotic breaks by encouraging people into suicide, feeding delusions, and giving them risky instructions.
As scientists who study developmental psychology and AI, we agree that society urgently needs better science on AI and mental health. Like so many other companies accused of causing harm, OpenAI has recruited a group of genuinely credible scientists to give it closed-door advice on the issue. But OpenAI's funding announcement reveals how small a fig leaf they think will persuade a credulous public.
Look at the size of the grants. High quality public health research on mental health harms requires a sequence of studies, large sample sizes, access to clinical patients, and an ethics safety net that supports people at risk. The median research project grant from the National Institutes of Mental Health in 2024 was $642,918. In contrast, OpenAI is offering a measly $5,000 to $100,000 to researchers studying AI and mental health, one sixth of a typical NIMH grant at best.
Despite the good ideas OpenAI suggests, the company is holding back the resource that would contribute most to science on those questions: records about its systems and how people use its products. OpenAI's researchers have purportedly developed ways to identify users who potentially face mental health distress. A well-designed data access program would accelerate the search for answers while preserving privacy and protecting vulnerable users. European regulators are still deciding if OpenAI will face data access requirements under the Digital Services Act, but OpenAI doesn't have to wait for Europe.
We have seen this playbook before from other companies. In 2019, Meta announced a series of $50,000 grants to six scientists studying Instagram, safety, and well-being. Even as the company touted its commitment to science on user well-being, Meta's leaders were pressuring internal researchers to "amend their research to limit Meta's potential liability," according to a recent ruling in the D.C. Superior Court.
Whether or not OpenAI's leaders intend to muddy the waters of science, grantwashing hinders technology safety, as one of us recently argued in Science. It adds uncertainty and debate in areas where companies want to avoid liability, and that uncertainty takes on the appearance of science. These underfunded studies inevitably produce inconclusive results, forcing other researchers to do more work to clean up the resulting misconceptions.
[...] Two decades of Big Tech funding for safety science has taught us that the grantwashing playbook works every time. Internally, corporate leaders pacify passionate employees with token actions that seem consequential. External scientists take the money, get inconclusive results, and lose public trust. Policymakers see what looks like responsible self-regulation from a powerful industry and back off calls for change. And journalists quote the corporate lobbyist and move on until the next round of deaths creates another news cycle.
The problem is that we do desperately need better, faster science on technology safety. Companies are pushing out AI products with limited safety guardrails to hundreds of millions of people, faster than safety science can keep pace. One idea, proposed by Dr. Alondra Nelson, borrows from the Human Genome Project. In 1990, the project's leadership allocated 3-5% of its annual research budget to independent "ethical, legal, and social inquiry" about genomics. The result was a scientific endeavor that kept on top of emerging risks from genetics, at least at moments when projects had the freedom to challenge the genomics establishment.
[...] We can't say whether specific deaths were caused by ChatGPT or whether generative AI will cause a new wave of mental health crises. The science isn't there yet. The legal cases are ongoing. But we can say that OpenAI's grantwashing is the perfect corporate action to make sure we don't find the answers for years.
The Register reports that UNIX V4, the first with the kernel written in C, has been recovered, restored and run.
The source code and binaries were recovered from a 1970s-vintage nine-track tape and uploaded to the Internet Archive, where they can be downloaded.
It's very small: it contains around 55,000 lines of code, of which about 25,000 lines are in C, with under 1,000 lines of comments. But then, the late Dennis M. Ritchie and co-creator Ken Thompson were very definitely Real Programmers, and as is recorded in ancient wisdom: "Real Programmers don't need comments – the code is obvious."
For those who don't already know:
UNIX started out as a quick hack by two geniuses in their spare time, so that they could use a spare computer – an extremely rare thing in the 1960s – to run a game one of them had written that flew the player around a 2D model of the Solar System. It was called Space Travel.
https://therecord.media/south-korea-facial-recognition-phones
South Korea will begin requiring people to submit to facial recognition when signing up for a new mobile phone number in a bid to fight scams, the Ministry of Science and ICT announced Friday.
The effort is meant to block people from illegally registering devices used for identity theft.
The plan reportedly applies to the country's three major mobile carriers and mobile virtual network operators. The new policy takes effect on March 23 after a pilot that will begin this week.
"By comparing the photo on an identification card with the holder's actual face on a real-time basis, we can fully prevent the activation of phones registered under a false name using stolen or fabricated IDs," the ministry reportedly said in a press release.
In August, South Korean officials unveiled a plan to combat voice phishing scams; harsher penalties for mobile carriers that do not act sufficiently to prevent the scams were reportedly a central feature of that plan.
South Korea has been plagued by voice phishing scams, with 21,588 reported as of November, the ministry said.
In April, South Korea's SK Telecom was hacked and SIM card data belonging to nearly 27 million subscribers was stolen.
Privacy regulators determined the telecom "did not even implement basic access control," allowing hackers to take authentication data and subscriber information on a mass basis.
https://linuxiac.com/phoenix-emerges-as-a-modern-x-server-written-from-scratch-in-zig/
Phoenix is a new X server written from scratch in Zig, aiming to modernize X11 without relying on Xorg code.
Although Wayland has largely replaced Xorg, and most major Linux distributions and desktop environments have either already dropped support for the aging display protocol or are in the process of doing so, efforts to extend Xorg's life or replace it with similar alternatives continue. Recent examples include projects such as XLibre Xserver and Wayback, and now a new name is joining this group: Phoenix.
It is a new X server project that takes a fundamentally different approach to X11. Written entirely from scratch in the Zig programming language, it is not yet another fork of the Xorg codebase and does not reuse its legacy code. Instead, according to devs, Phoenix aims to show that the X11 protocol itself is not inherently obsolete and can be implemented in a simpler, safer, and more modern way.
Phoenix is designed for real desktop and professional use, not for full protocol coverage. It supports only the X11 features that modern applications need, including older software like GTK2-based programs. By omitting rarely used or outdated parts, Phoenix keeps things simpler while still supporting many applications.
Right now, Phoenix is still experimental and not ready for daily use. It can run simple, hardware-accelerated apps using GLX, EGL, or Vulkan, but only in a nested setup under another X server. This will stay the case until the project is ready for more demanding use.
On the security side, one of the areas where Xorg draws the most criticism, Phoenix isolates applications by default, and access to sensitive capabilities such as screen recording or global hotkeys is mediated through explicit permission mechanisms. Importantly, this is done without breaking existing clients, as unauthorized access attempts return dummy data rather than protocol errors.
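The dummy-data approach described above can be sketched in a few lines. This is purely an illustrative sketch, not Phoenix's actual code (Phoenix is written in Zig, and all names here are hypothetical): an unauthorized client receives plausible-but-empty data instead of a protocol error, so legacy clients keep running unmodified.

```python
# Illustrative sketch (hypothetical names, not Phoenix's real API):
# a permission mediator that hands unauthorized clients dummy data
# instead of a protocol error.

DUMMY_PIXELS = bytes(4)  # stand-in "blank" capture result


class PermissionMediator:
    def __init__(self):
        # (client_id, capability) pairs the user has explicitly granted
        self._granted = set()

    def grant(self, client_id, capability):
        self._granted.add((client_id, capability))

    def handle_screen_capture(self, client_id):
        # Authorized clients get real framebuffer contents; everyone
        # else gets dummy data, so the request still "succeeds" and
        # existing clients are not broken by an unexpected error.
        if (client_id, "screen_capture") in self._granted:
            return self._real_capture()
        return DUMMY_PIXELS

    def _real_capture(self):
        # Placeholder for reading actual screen contents.
        return b"\xff" * 4
```

The design choice this illustrates is that denial is indistinguishable from an empty result, which keeps older X11 clients working while still withholding sensitive data.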
Under the hood, Phoenix includes a built-in compositor that enables tear-free rendering by default, supports disabling compositing for full-screen applications, and is designed to reduce compositor and vsync latency. Proper multi-monitor support is a priority, allowing different refresh rates, variable refresh rate displays, and future HDR support without relying on a single global framebuffer.
Phoenix is also looking at extending the protocol when needed. For example, new features like per-monitor DPI reporting are planned to ensure apps scale properly across mixed-DPI setups. If needed, Phoenix will add protocol extensions for things like HDR, while still working with existing software.
It is important to make it clear that the project does not aim to replace Xorg. Phoenix deliberately avoids supporting legacy hardware, obscure protocol features, and configurations such as multiple X11 screens or indirect GLX rendering, and focuses entirely on modern systems.
Wayland compatibility is part of the long-term plan. The developers say Phoenix might eventually support Wayland clients directly or use bridging tools to run Wayland-only apps in an X11 environment. Running Phoenix nested under Wayland, as an alternative to Xwayland, is also being considered.
Finally, as I mentioned, the project is in its early stages, and it remains to be seen whether it will develop into production-ready stable versions and be accepted for wider use. Until then, for more information about this new initiative, check here.