Popular sugar substitute linked to brain cell damage and stroke risk:
From low-carb ice cream to keto protein bars to "sugar-free" soda, the decades-old sweetener erythritol is everywhere.
But new University of Colorado Boulder research shows the popular sugar substitute and specialty food additive comes with serious downsides, impacting brain cells in numerous ways that can boost risk of stroke.
"Our study adds to the evidence suggesting that non-nutritive sweeteners that have generally been purported to be safe, may not come without negative health consequences," said senior author Christopher DeSouza, professor of integrative physiology and director of the Integrative Vascular Biology Lab.
First approved by the Food and Drug Administration in 2001, erythritol is a sugar alcohol, often produced by fermenting corn and found in hundreds of products. It has almost no calories, is about 80% as sweet as table sugar, and has negligible impact on insulin levels, making it a favorite for people trying to lose weight, keep their blood sugar in check or avoid carbohydrates.
Recent research has begun to shed light on its risks.
One recent study involving 4,000 people in the U.S. and Europe found that men and women with higher circulating levels of erythritol were significantly more likely to have a heart attack or stroke within the next three years.
DeSouza and first author Auburn Berry, a graduate student in his lab, set out to understand what might be driving that increased risk.
Researchers in the lab treated human cells that line blood vessels in the brain for three hours with about the same amount of erythritol contained in a typical sugar-free beverage.
They observed that the treated cells were altered in numerous ways: They expressed significantly less nitric oxide, a molecule that relaxes and widens blood vessels, and more endothelin-1, a protein that constricts blood vessels. Meanwhile, when challenged with a clot-forming compound called thrombin, cellular production of the natural clot-busting compound t-PA was "markedly blunted." The erythritol-treated cells also produced more reactive oxygen species (ROS), a.k.a. "free radicals," metabolic byproducts which can age and damage cells and inflame tissue.
"Big picture, if your vessels are more constricted and your ability to break down blood clots is lowered, your risk of stroke goes up," said Berry. "Our research demonstrates not only that, but how erythritol has the potential to increase stroke risk."
DeSouza notes that their study used only a serving-size worth of the sugar substitute. For those who consume multiple servings per day, the impact, presumably, could be worse.
The authors caution that their study was a laboratory study, conducted on cells, and larger studies in people are needed.
That said, DeSouza encourages consumers to read labels, looking for erythritol or "sugar alcohol."
"Given the epidemiological study that inspired our work, and now our cellular findings, we believe it would be prudent for people to monitor their consumption of non-nutrient-sweeteners such as this one," he said.
Journal Reference:
Auburn R. Berry, Samuel T. Ruzzene, Emily I. Ostrander, et al. The non-nutritive sweetener erythritol adversely affects brain microvascular endothelial cell function, Journal of Applied Physiology (DOI: JAPPL-00276-2025)
The BBC has announced that Ozzy Osbourne died today.
From the Guardian:
Ozzy Osbourne, whose gleeful "Prince of Darkness" image made him one of the most iconic rock frontmen of all time, has died aged 76.
A statement from the Osbourne family reads: "It is with more sadness than mere words can convey that we have to report that our beloved Ozzy Osbourne has passed away this morning. He was with his family and surrounded by love. We ask everyone to respect our family privacy at this time." No cause of death was given, though Osbourne had experienced various forms of ill health in recent years.
A strange fossil at the edge of the solar system just shook up Planet Nine theories:
The object was found as part of the survey project FOSSIL (Formation of the Outer Solar System: An Icy Legacy), which takes advantage of the Subaru Telescope's wide field of view. The object was discovered through observations taken in March, May, and August 2023 using the Subaru Telescope. The object is currently designated 2023 KQ14; a more classical name will be assigned later by the International Astronomical Union. After that, follow-up observations in July 2024 with the Canada-France-Hawaii Telescope and a search for unrecognized sightings of the object in old data from other observatories allowed astronomers to track the object's orbit over 19 years. Due to its peculiar distant orbit, 2023 KQ14 has been classified as a "sednoid", making it only the fourth known example of this rare type of object.
[Editor's Note: A sednoid is a trans-Neptunian object with a large semi-major axis, a distant perihelion and a highly eccentric orbit, similar to that of the dwarf planet Sedna --JE]
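To put the note's orbital terms in concrete form, here is a minimal sketch (ours, not the paper's): the 66 au perihelion comes from the journal reference below, while the semi-major axis used here is only an illustrative placeholder, not the measured value.

```python
# Minimal sketch: how perihelion q, semi-major axis a, and eccentricity e
# relate for a highly eccentric orbit (q = a * (1 - e)).
# The 66 au perihelion is from the Nature Astronomy reference below;
# the semi-major axis is an assumed value for illustration only.

def eccentricity(q_au: float, a_au: float) -> float:
    """Eccentricity given perihelion q and semi-major axis a."""
    return 1.0 - q_au / a_au

def aphelion(q_au: float, a_au: float) -> float:
    """Aphelion distance Q = a * (1 + e) = 2a - q."""
    return 2.0 * a_au - q_au

q = 66.0    # perihelion [au], from the paper
a = 250.0   # semi-major axis [au] -- placeholder for illustration

print(f"e = {eccentricity(q, a):.2f}, aphelion ~ {aphelion(q, a):.0f} au")
```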
Numerical simulations conducted by the FOSSIL team, some of which used the PC cluster operated by the National Astronomical Observatory of Japan, indicate that 2023 KQ14 has maintained a stable orbit for at least 4.5 billion years. Although its current orbit differs from those of the other sednoids, the simulations suggest that their orbits were remarkably similar around 4.2 billion years ago.
The fact that 2023 KQ14 now follows an orbit different from the other sednoids indicates that the outer Solar System is more diverse and complex than previously thought. This discovery also places new constraints on the hypothetical Planet Nine. If Planet Nine exists, its orbit must lie farther out than typically predicted.
Dr. Yukun Huang of the National Astronomical Observatory of Japan who conducted simulations of the orbit comments, "The fact that 2023 KQ14's current orbit does not align with those of the other three sednoids lowers the likelihood of the Planet Nine hypothesis. It is possible that a planet once existed in the Solar System but was later ejected, causing the unusual orbits we see today."
Regarding the significance of this discovery, Dr. Fumi Yoshida states, "2023 KQ14 was found in a region far away where Neptune's gravity has little influence. The presence of objects with elongated orbits and large perihelion distances in this area implies that something extraordinary occurred during the ancient era when 2023 KQ14 formed. Understanding the orbital evolution and physical properties of these unique, distant objects is crucial for comprehending the full history of the Solar System. At present, the Subaru Telescope is among the few telescopes on Earth capable of making such discoveries. I would be happy if the FOSSIL team could make many more discoveries like this one and help draw a complete picture of the history of the Solar System."
Journal Reference:
Chen, Ying-Tung, Lykawka, Patryk Sofia, Huang, Yukun, et al. Discovery and dynamics of a Sedna-like object with a perihelion of 66 au [open], Nature Astronomy (DOI: 10.1038/s41550-025-02595-7)
Microsoft says it will no longer use engineers in China for Department of Defense work:
Following a ProPublica report that Microsoft was using engineers in China to help maintain cloud computing systems for the U.S. Department of Defense, the company said it's made changes to ensure this will no longer happen.
The existing system reportedly relied on "digital escorts" to supervise the China-based engineers. But according to ProPublica, those escorts — U.S. citizens with security clearances — sometimes lacked the technical expertise to properly monitor the engineers.
In response to the report, Secretary of Defense Pete Hegseth wrote on X, "Foreign engineers — from any country, including of course China — should NEVER be allowed to maintain or access DoD systems."
On Friday, Microsoft's chief communications officer Frank X. Shaw responded: "In response to concerns raised earlier this week about US-supervised foreign engineers, Microsoft has made changes to our support for US Government customers to assure that no China-based engineering teams are providing technical assistance for DoD Government cloud and related services."
Rolling Stone has an article about a concert tape with an interesting back story. The album, Thelonious Monk: Live at Palo Alto, eventually came out in September 2020. It is a recording of the jazz legend playing at a high school back in 1968; the school custodian recorded the show on reel-to-reel. When the tape resurfaced not too many years ago, it drew the ire of, and some dirty tricks from, a former record label.
The greatest lost concert in American history almost never happened at all. It was Oct. 27, 1968, in Palo Alto, California. Outside of his high school, Danny Scher, a 16-year-old, bushy-haired, jazz-obsessed, self-described “weirdo,” was pacing the parking lot waiting for his hero, and music’s most elusive and enigmatic genius, to show up: composer and pianist Thelonious Monk.
To the disbelief of most everyone — including his mother and girlfriend waiting alongside him — Scher claimed to have booked the jazz legend for an afternoon gig, the modern equivalent of securing Kendrick Lamar for prom. Pulling this off at a nearly all-white school during his racially divided town's explosive Civil Rights battle — when the predominantly Black community of East Palo Alto was fighting to rename itself "Nairobi" — made it even more unlikely. But the mixed crowd in the parking lot proved how music could bring them together. "It was really the only time I ever remember seeing that many Black people," Scher recalls. "Everyone was just there to see Monk."
Monk was playing a residency at the Jazz Workshop, a club in San Francisco. The city was only 35 miles away. Maybe, Scher thought, Monk would be willing to come down for a Sunday-afternoon show. After tracking down the number of Monk's manager, Harry Colomby, and calling him with the outrageous offer, he got an even more surprising response: Monk was in. Scher was duly mind-blown. But now he faced a new challenge, pulling off the show.
The kid promoting the Monk show, nonetheless, was having an unexpectedly hard time selling tickets. Despite Scher's booking, few people believed that the world's greatest jazz artist was really coming to town. To get the word out, Scher stuffed his newspaper-boy bag with rolled-up posters, and pedaled across Highway 101 to where he knew there were plenty of Monk fans like him: East Palo Alto.
It was a busy week for postering. After the killing of Martin Luther King Jr. that spring,[...] Tensions were high. Scher recalls a neighborhood cop seeing him taping up a poster. The cop warned him: "Hey, white boy, this isn't a safe place for you. You're going to get in trouble putting up posters." Scher told him, "I'm going to be in bigger trouble if the show doesn't do well."
SCHER'S MOVE PAID OFF. With Black and white kids buying up the tickets, the show sold out. Two days before the gig, Scher called the jazz club where Monk was playing to go over details with his manager — only to hear Monk himself pick up the phone instead. There was just one thing more shocking than talking to his hero for the first time — realizing Monk didn't know about the gig at all. As he told this kid on the phone, "What are you talking about?"
Scher's heart raced. He did his best to coolly fill in Monk, who'd either not been told about the gig by his manager or lost track. "How am I going to get there?" the piano great replied. Scher didn't have the budget for a limo, but he had something better: his older brother Les, who not only turned him on to Monk in the first place but also had a license. "My brother will pick you up!" Scher assured him. Yet without having received a fully executed contract back from Monk's manager, he didn't know if Monk would really show up at all.
Scher checked the school's piano. One of the custodians, a Black man in his thirties, knew how to tune it and offered to set it up. A fan himself, he just wanted one thing in return. "If I tune the piano," he said, "can I record the concert?" In all of Scher's meticulous planning, he hadn't thought about recording the show. But the custodian had access to a reel-to-reel tape recorder, and knew how to operate it, too. "Yeah, OK," Scher told him.
But he'd never heard the custodian's tape. The old reel-to-reel had been sitting in a box packed away until friends urged him to burn it onto a CD. When Scher popped it into his stereo, it was the first time he'd heard it since he was that bushy-haired 16-year-old listening from backstage. The custodian's raw tape captured Monk's performance in all its wonderful imperfections: the squeak of the piano bench as he shifted in his seat, the scratchy tap of his shoes swiping the piano pedals below. "It was really good," Scher says. It had to come out.
With Impulse Records on board, Thelonious Monk: Live at Palo Alto was slated to come out in July 2020. Scher, T.S. Monk, and the label prepared a lavish package for the vinyl release, including copies of the original program and poster. Impulse submitted it for six Grammy nominations.
But just as the advance raves were peaking two weeks before the release, they got a message from Monk's old sparring partner: his label. Sony, owners of Columbia, claimed the tape was contractually theirs. "They were saying that this recording was made during the period that Thelonious was on the contract to Columbia, and therefore they owned it," T.S. Monk says.
This wasn't the first time the Monk estate had battled with Sony. In 2002, the estate conducted a forensic accounting of Monk's catalog and discovered it was owed hundreds of thousands of dollars from the label. A settlement was reached in 2023. But now Sony was threatening to sue if the Palo Alto concert got released. Faced with a legal battle, Impulse pulled the LP. The momentum crashed. And with no way of knowing when or if the record would get released, the hypothetical Grammy nominations went away, too.
After searching through Monk's old paperwork, T.S. and the estate confirmed what they had known to be true: Monk's contract with Columbia had expired in 1967, a year before the Palo Alto High School show. Sony responded with another salvo: a contract extension through 1968 signed by Monk himself. But when his son eyed it 52 years later, he called bullshit. "That's not my father's signature," he said. Scher — who had one of Monk's rare autographs on his Palo Alto program — agreed. A forensic handwriting analyst confirmed their assessment. Sony seems to have decided this was a losing battle. According to T.S., the company soon settled the matter. Thelonious Monk: Live at Palo Alto eventually came out in September 2020.
Despite getting robbed of the momentum and the Grammy nominations, T.S. and Scher are happy the long-lost recording could finally be heard. "I know you think there's a bias because he's my father," T.S. says with a smile, "but it's not because he's my father. It's because he's Monk. His music does the same thing to me as it does to everybody else." For Scher, the legacy of the concert lives on, and so does his hero. He says, "I hear Monk every day."
Fortunately, Monk's contract with the label had expired in 1967, a year before the Palo Alto High School show, but the label's rip-off attempts almost derailed the release.
Previously:
(2024) Gershwin's "Rhapsody in Blue" at 100
(2019) The Internet Saved the Record Labels
Engadget reports that Meta is Building "Several" Multi-Gigawatt Compute Clusters
Meta is building several gigawatt-sized data centers to power AI, as reported by Bloomberg. CEO Mark Zuckerberg says the company will spend "hundreds of billions of dollars" to accomplish this feat, with an aim of creating "superintelligence."
The first center is called Prometheus and it comes online next year. It's being built in Ohio. Next up, there's a data center called Hyperion that's almost the size of Manhattan. This one should "be able to scale up to 5GW over several years." Some of these campuses will be among the largest in the world, as most data centers handle only hundreds of megawatts of capacity.
Meta has also been staffing up its Superintelligence Labs team, recruiting folks from OpenAI, Google's DeepMind and others. Scale AI's co-founder Alexandr Wang is heading up this effort.
However, these giant data centers do not exist in a vacuum. The complexes typically brush up against local communities. The centers are not only power hogs, but also water hogs. The New York Times just published a report on how Meta data centers impact local water supplies.
There's a data center east of Atlanta that has damaged local wells and caused municipal water prices to soar, which could lead to a shortage and rationing by 2030. The price of water in the region is set to increase by 33 percent in the next two years.
Typical data centers guzzle around 500,000 gallons of water each day, but these forthcoming AI-centric complexes will likely be even thirstier. The new centers could require millions of gallons per day, according to water permit applications reviewed by The New York Times. Mike Hopkins, the executive director of the Newton County Water and Sewerage Authority, says that applications are coming in with requests for up to six million gallons of water per day, which is more than the county's entire daily usage.
"What the data centers don't understand is that they're taking up the community wealth," he said. "We just don't have the water."
We're going to have to decide soon how to regulate the growing data center industry, which poses several issues for desert communities. "They consume large amounts of electricity and water 24 hours per day, seven days a week."

— Arizona Green Party 🌻 (@AZGreenParty) July 10, 2025

This same worrying story is playing out across the country. Data center hot spots in Texas, Arizona, Louisiana and Colorado are also taxing local water reserves. For instance, some Phoenix homebuilders have been forced to pause new construction due to droughts exacerbated by these data centers.
See also Meta Superintelligence – Leadership Compute, Talent, and Data for a detailed analysis of Meta AI.
Phys.org reports on how weird space weather seems to have influenced human behavior on Earth 41,000 years ago
[...] This near-collapse is known as the Laschamps Excursion, a brief but extreme geomagnetic event named for the volcanic fields in France where it was first identified. At the time of the Laschamps Excursion, near the end of the Pleistocene epoch, Earth's magnetic poles didn't reverse as they do every few hundred thousand years. Instead, they wandered, erratically and rapidly, over thousands of miles. At the same time, the strength of the magnetic field dropped to less than 10% of its modern-day intensity.
The magnetosphere normally deflects much of the solar wind and harmful ultraviolet radiation that would otherwise reach Earth's surface.
The skies 41,000 years ago may have been both spectacular and threatening. When we realized this, we two geophysicists wanted to know whether this could have affected people living at the time.
[...] In response, people may have adopted practical measures: spending more time in caves, producing tailored clothing for better coverage, or applying mineral pigment "sunscreen" made of ochre to their skin.
At this time, both Neanderthals and members of our species, Homo sapiens, were living in Europe, though their geographic distributions likely overlapped only in certain regions. The archaeological record suggests that different populations exhibited distinct approaches to environmental challenges, with some groups perhaps more reliant on shelter or material culture for protection.
Importantly, we're not suggesting that space weather alone caused an increase in these behaviors or, certainly, that the Laschamps caused Neanderthals to go extinct, which is one misinterpretation of our research. But it could have been a contributing factor—an invisible but powerful force that influenced innovation and adaptability.
The United Nations' Global E-waste Monitor estimates that the world generates over 60 million tonnes of e-waste annually. Furthermore, this number is rising five times as fast as e-waste recycling. Much of this waste comes from prematurely discarded electronic devices.
Many enterprises follow a standard three-year replacement cycle, assuming older computers are inefficient. However, many of these devices are still functional and could perform well with minor upgrades or maintenance. The issue is, no one knows what the weak points are for a particular machine, or what the needed maintenance is, and the diagnostics would be too costly and time-consuming. It's easier to just buy brand new laptops.
When buying a used car, dealerships and individual buyers can access each car's particular CarFax report, detailing the vehicle's usage and maintenance history. Armed with this information, dealerships can perform the necessary fixes or upgrades before reselling the car. And individuals can decide whether to trust that vehicle's performance. We at HP realized that, to prevent unnecessary e-waste, we need to collect and make available usage and maintenance data for each laptop, like a CarFax for used PCs.
There is a particular challenge to collecting usage data for a PC, however. We need to make sure to protect the user's privacy and security. So, we set out to design a data-collection protocol for PCs that manages to remain secure.
Luckily, the sensors that can collect the necessary data are already installed in each PC. There are thermal sensors that monitor CPU temperature, power-consumption monitors that track energy efficiency, storage health indicators that assess solid state drive (SSD) wear levels, performance counters that measure system utilization, fan-rotation-speed sensors that detect cooling efficiency, and more. The key is to collect and store all that data in a secure yet useful way.
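As a rough illustration of what a snapshot from such sensors might look like, here is a minimal sketch. The field names are our own assumptions rather than HP's schema, and psutil is used only as a convenient stand-in for firmware-level sensor access (some of these calls are Linux-only).

```python
# Illustrative only: a telemetry snapshot of the sort described above.
# Field names are assumptions for this sketch, not HP's actual schema.
import time
import psutil

def collect_snapshot() -> dict:
    temps = psutil.sensors_temperatures()   # thermal sensors (may be empty on some platforms)
    fans = psutil.sensors_fans()             # fan-rotation-speed sensors
    io = psutil.disk_io_counters()           # proxy for SSD activity/wear
    return {
        "timestamp": time.time(),
        "cpu_utilization_pct": psutil.cpu_percent(interval=1),
        "cpu_temps_c": {name: [s.current for s in sensors] for name, sensors in temps.items()},
        "fan_rpm": {name: [s.current for s in sensors] for name, sensors in fans.items()},
        "disk_io": io._asdict() if io else {},
    }

if __name__ == "__main__":
    print(collect_snapshot())
```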
We decided that the best way to do this is to integrate the life-cycle records into the firmware layer. By embedding telemetry capabilities directly within the firmware, we ensure that device health and usage data is captured the moment it is collected. This data is stored securely on HP SSDs, leveraging hardware-based security measures to protect against unauthorized access or manipulation.
The secure telemetry protocol we've developed at HP works as follows. We gather the critical hardware and sensor data and store it in a designated area of the SSD. This area is write-locked, meaning only authorized firmware components can write to it, preventing accidental modification or tampering. That authorized firmware component we use is the Endpoint Security Controller, a dedicated piece of hardware embedded in business-class HP PCs. It plays a critical role in strengthening platform-level security and works independently from the main CPU to provide foundational protection.
The endpoint security controller establishes a secure session by retaining the secret key within the controller itself. This mechanism enables read data protection on the SSD—where telemetry and sensitive data are stored—by preventing unauthorized access, even if the operating system is reinstalled or the system environment is otherwise altered.
Then, the collected data is recorded in a time-stamped file, stored within a dedicated telemetry log on the SSD. Storing these records on the SSD has the benefit of ensuring the data is persistent even if the operating system is reinstalled or some other drastic change in software environment occurs.
The telemetry log employs a cyclic buffer design, automatically overwriting older entries when the log reaches full capacity. Then, the telemetry log can be accessed by authorized applications at the operating system level.
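The cyclic-buffer design is simple to picture. Here is a minimal sketch (our illustration, not HP's firmware code) of a fixed-capacity telemetry log that overwrites its oldest entries once full and can be dumped by an authorized reader.

```python
# Minimal sketch of a cyclic (ring) buffer telemetry log: fixed capacity,
# oldest entries overwritten when full. Illustration of the design only,
# not HP's firmware implementation.
import json
import time
from collections import deque

class TelemetryLog:
    def __init__(self, capacity: int = 1024):
        # deque with maxlen drops the oldest entry automatically when full
        self._entries = deque(maxlen=capacity)

    def append(self, record: dict) -> None:
        self._entries.append({"ts": time.time(), **record})

    def dump(self) -> str:
        """Serialize the log, oldest entry first, e.g. for an authorized OS-level reader."""
        return json.dumps(list(self._entries), indent=2)

log = TelemetryLog(capacity=3)
for pct in (12, 55, 73, 91):   # the fourth entry evicts the first
    log.append({"cpu_utilization_pct": pct})
print(log.dump())
```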
The telemetry log serves as the foundation for a comprehensive device history report. Much like a CarFax report for used cars, this report, which we call PCFax, will provide both current users and potential buyers with crucial information.
The PCFax report aggregates data from multiple sources beyond just the on-device telemetry logs. It combines the secure firmware-level usage data with information from HP's factory and supply-chain records, digital-services platforms, customer-support service records, diagnostic logs, and more. Additionally, the system can integrate data from external sources including partner sales and service records, refurbishment partner databases, third-party component manufacturers like Intel, and other original equipment manufacturers. This multisource approach creates a complete picture of the device's entire life cycle, from manufacturing through all subsequent ownership and service events.
For IT teams within organizations, we hope the PCFax will bring simplicity and give opportunities for optimization. Having access to fine-grained usage and health information for each device in their fleet can help IT managers decide which devices are sent to which users, as well as when maintenance is scheduled. This data can also help device managers decide which specific devices to replace rather than issuing new computers automatically, enhancing sustainability. And this can help with security: With real-time monitoring and firmware-level protection, IT teams can mitigate risks and respond swiftly to emerging threats. All of this can facilitate more efficient use of PC resources, cutting down on unnecessary waste.
We also hope that, much as the CarFax gives people confidence in buying used cars, the PCFax can encourage resale of used PCs. For enterprises and consumers purchasing second-life PCs, it provides detailed visibility into the complete service and support history of each system, including any repairs, upgrades, or performance issues encountered during its initial deployment. By making this comprehensive device history readily available, PCFax enables more PCs to find productive second lives rather than being prematurely discarded, directly addressing the e-waste challenge while providing economic benefits to both sellers and buyers in the secondary PC market.
While HP's solutions represent a significant step forward, challenges remain. Standardizing telemetry frameworks across diverse ecosystems is critical for broader adoption. Additionally, educating organizations about the benefits of life-cycle records will be essential to driving uptake.
We are also working on integrating AI into our dashboards. We hope to use AI models to analyze historical telemetry data and predict failures before they happen, such as detecting increasing SSD write cycles to forecast impending failure and alert IT teams for proactive replacement, or predicting battery degradation and automatically generating a service ticket to ensure a replacement battery is ready before failure, minimizing downtime.
We plan to start rolling out these features at the beginning of 2026.
upstart writes:
Delta Air Lines is using AI to set the maximum price you're willing to pay:
Delta's president says the quiet part out loud.
Delta Air Lines is leaning into dynamic ticket pricing that uses artificial intelligence to individually determine the highest fee you'd willingly pay for flights, according to comments Fortune spotted in the company's latest earnings call. Following a limited test of the technology last year, Delta is planning to shift away from static ticket prices entirely after seeing "amazingly favorable" results.
"We will have a price that's available on that flight, on that time, to you, the individual," Delta president Glen Hauenstein told investors in November, having started to test the technology on one percent of its ticket prices. Delta currently uses AI to influence three percent of its ticket prices, according to last week's earnings call, and is aiming to increase that to 20 percent by the end of this year. "We're in a heavy testing phase," said Hauenstein. "We like what we see. We like it a lot, and we're continuing to roll it out."
While personalized pricing isn't unique to Delta, the airline has been particularly candid about embracing it. During that November call, Hauenstein said the AI ticketing system is "a full reengineering of how we price and how we will be pricing in the future," and described the rollout as "a multiyear, multi-step process." Hauenstein acknowledged that Delta was excited about the initial revenue results it saw in testing, but noted the shift to AI-determined pricing could "be very dangerous, if it's not controlled and it's not done correctly."
Delta's personalized AI pricing tech is provided by travel firm Fetcherr, which also partners with Virgin Atlantic, Azul, WestJet, and VivaAerobus. In Delta's case, the AI will act as a "super analyst" that operates 24/7 to determine custom ticket prices that should be offered to individual customers in real-time, per specific flights and times.
Airlines have varied their ticket prices for customers on the same routes for many years, depending on a range of factors, including how far in advance the booking is made, what website or service it's being booked with, and even the web browser the customer is using. Delta is no exception, but AI pricing looks set to supercharge the approach.
Delta has taken heat for charging customers different prices for flights, having rolled back, in May, a decision to price tickets higher for solo travelers than for groups. It's not entirely clear how invasive Delta's AI ticketing will be when it analyzes customers to figure out prices, but Fortune notes that it has privacy advocates concerned.
"They are trying to see into people's heads to see how much they're willing to pay," Justin Kloczko of Consumer Watchdog told the publication. "They are basically hacking our brains." Arizona Senator Ruben Gallego described it as "predatory pricing" that's designed to "squeeze you for every penny."
upstart writes:
For Algorithms, a Little Memory Outweighs a Lot of Time:
One of the most important classes goes by the humble name "P." Roughly speaking, it encompasses all problems that can be solved in a reasonable amount of time. An analogous complexity class for space is dubbed "PSPACE."
The relationship between these two classes is one of the central questions of complexity theory. Every problem in P is also in PSPACE, because fast algorithms just don't have enough time to fill up much space in a computer's memory. If the reverse statement were also true, the two classes would be equivalent: Space and time would have comparable computational power. But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren't in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn't as forgiving — once it passes, you can't get it back.
"The intuition is just so simple," Williams said. "You can reuse space, but you can't reuse time."
But intuition isn't good enough for complexity theorists: They want rigorous proof. To prove that PSPACE is larger than P, researchers would have to show that for some problems in PSPACE, fast algorithms are categorically impossible. Where would they even start?
Those definitions emerged from the work of Juris Hartmanis, a pioneering computer scientist who had experience making do with limited resources. He was born in 1928 into a prominent Latvian family, but his childhood was disrupted by the outbreak of World War II. Occupying Soviet forces arrested and executed his father, and after the war Hartmanis finished high school in a refugee camp. He went on to university, where he excelled even though he couldn't afford textbooks.
In 1960, while working at the General Electric research laboratory in Schenectady, New York, Hartmanis met Richard Stearns, a graduate student doing a summer internship. In a pair of groundbreaking papers they established precise mathematical definitions for time and space. These definitions gave researchers the language they needed to compare the two resources and sort problems into complexity classes.
As it happened, they started at Cornell University, where Hartmanis moved in 1965 to head the newly established computer science department. Under his leadership it quickly became a center of research in complexity theory, and in the early 1970s, a pair of researchers there, John Hopcroft and Wolfgang Paul, set out to establish a precise link between time and space.
Hopcroft and Paul knew that to resolve the P versus PSPACE problem, they'd have to prove that you can't do certain computations in a limited amount of time. But it's hard to prove a negative. Instead, they decided to flip the problem on its head and explore what you can do with limited space. They hoped to prove that algorithms given a certain space budget can solve all the same problems as algorithms with a slightly larger time budget. That would indicate that space is at least slightly more powerful than time — a small but necessary step toward showing that PSPACE is larger than P.
With that goal in mind, they turned to a method that complexity theorists call simulation, which involves transforming existing algorithms into new ones that solve the same problems, but with different amounts of space and time. To understand the basic idea, imagine you're given a fast algorithm for alphabetizing your bookshelf, but it requires you to lay out your books in dozens of small piles. You might prefer an approach that takes up less space in your apartment, even if it takes longer. A simulation is a mathematical procedure you could use to get a more suitable algorithm: Feed it the original, and it'll give you a new algorithm that saves space at the expense of time.
Algorithm designers have long studied these space-time trade-offs for specific tasks like sorting. But to establish a general relationship between time and space, Hopcroft and Paul needed something more comprehensive: a space-saving simulation procedure that works for every algorithm, no matter what problem it solves. They expected this generality to come at a cost. A universal simulation can't exploit the details of any specific problem, so it probably won't save as much space as a specialized simulation. But when Hopcroft and Paul started their work, there were no known universal methods for saving space at all. Even saving a small amount would be progress.
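As a toy example of such a task-specific space-time trade-off (ours, far simpler than the universal simulations discussed here, and not drawn from Hopcroft and Paul's work): the same duplicate-detection problem can be solved quickly with extra memory, or slowly with almost none.

```python
# Toy illustration of a space-time trade-off (not the simulation technique itself):
# the same problem solved fast-with-memory versus slow-with-almost-no-memory.

def has_duplicates_fast(items) -> bool:
    """O(n) time, O(n) extra space: remember everything seen so far."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

def has_duplicates_frugal(items) -> bool:
    """O(n^2) time, O(1) extra space: recompare instead of storing."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

data = [3, 1, 4, 1, 5, 9]
assert has_duplicates_fast(data) and has_duplicates_frugal(data)
```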
The breakthrough came in 1975, after Hopcroft and Paul teamed up with a young researcher named Leslie Valiant. The trio devised a universal simulation procedure that always saves a bit of space. No matter what algorithm you give it, you'll get an equivalent one whose space budget is slightly smaller than the original algorithm's time budget.
"Anything you can do in so much time, you can also do in slightly less space," Valiant said. It was the first major step in the quest to connect space and time.
But then progress stalled, and complexity theorists began to suspect that they'd hit a fundamental barrier. The problem was precisely the universal character of Hopcroft, Paul and Valiant's simulation. While many problems can be solved with much less space than time, some intuitively seemed like they'd need nearly as much space as time. If so, more space-efficient universal simulations would be impossible. Paul and two other researchers soon proved that they are indeed impossible, provided you make one seemingly obvious assumption: Different chunks of data can't occupy the same space in memory at the same time.
"Everybody took it for granted that you cannot do better," Wigderson said.
Paul's result suggested that resolving the P versus PSPACE problem would require abandoning simulation altogether in favor of a different approach, but nobody had any good ideas. That was where the problem stood for 50 years — until Williams finally broke the logjam. First, he had to get through college.
In 1996, the time came for Williams to apply to colleges. He knew that pursuing complexity theory would take him far from home, but his parents made it clear that the West Coast and Canada were out of the question. Among his remaining options, Cornell stood out to him for its prominent place in the history of his favorite discipline.
"This guy Hartmanis defined the time and space complexity classes," he recalled thinking. "That was important for me."
Williams was admitted to Cornell with generous financial aid and arrived in the fall of 1997, planning to do whatever it took to become a complexity theorist himself. His single-mindedness stuck out to his fellow students.
"He was just super-duper into complexity theory," said Scott Aaronson, a computer scientist at the University of Texas, Austin, who overlapped with Williams at Cornell.
For 50 years, researchers had assumed it was impossible to improve Hopcroft, Paul and Valiant's universal simulation. Williams' idea — if it worked — wouldn't just beat their record — it would demolish it.
"I thought about it, and I was like, 'Well, that just simply can't be true,'" Williams said. He set it aside and didn't come back to it until that fateful day in July, when he tried to find the flaw in the argument and failed. After he realized that there was no flaw, he spent months writing and rewriting the proof to make it as clear as possible.
Valiant got a sneak preview of Williams' improvement on his decades-old result during his morning commute. For years, he's taught at Harvard University, just down the road from Williams' office at MIT. They'd met before, but they didn't know they lived in the same neighborhood until they bumped into each other on the bus on a snowy February day, a few weeks before the result was public. Williams described his proof to the startled Valiant and promised to send along his paper.
"I was very, very impressed," Valiant said. "If you get any mathematical result which is the best thing in 50 years, you must be doing something right."
With his new simulation, Williams had proved a positive result about the computational power of space: Algorithms that use relatively little space can solve all problems that require a somewhat larger amount of time.
The difference is a matter of scale. P and PSPACE are very broad complexity classes, while Williams' results work at a finer level. He established a quantitative gap between the power of space and the power of time, and to prove that PSPACE is larger than P, researchers will have to make that gap much, much wider.
That's a daunting challenge, akin to prying apart a sidewalk crack with a crowbar until it's as wide as the Grand Canyon. But it might be possible to get there by using a modified version of Williams' simulation procedure that repeats the key step many times, saving a bit of space each time. It's like a way to repeatedly ratchet up the length of your crowbar — make it big enough, and you can pry open anything.
"It could be an ultimate bottleneck, or it could be a 50-year bottleneck," Valiant said. "Or it could be something which maybe someone can solve next week."
"I can never prove precisely the things that I want to prove," Williams said. "But often, the thing I prove is way better than what I wanted."
Journal References:
Dr. Juris Hartmanis Interview: July 26, 2009; Cornell University in Ithaca, New York
On Time Versus Space, Journal of the ACM (JACM)
Space bounds for a game on graphs, Journal of the ACM (JACM)
Tree Evaluation Is in Space O(log n · log log n), Journal of the ACM (JACM)
An Anonymous Coward writes:
Open, free, and completely ignored: The strange afterlife of Symbian
The result of the pioneering joint Psion and Nokia smartphone effort is still out there on GitHub.
Smartphones are everywhere. They are entirely commoditized now. Most of them run Android, which uses the Linux kernel. The rest run Apple's iOS, which uses the same XNU kernel as macOS. As we've said before, they're not Unix-like, they really are Unix™.
There have been a bunch of others. BlackBerry tried hard with BB10, but even a decade ago, it was over. It was based on QNX and Qt, and both of those are doing fine. We reported last year that QNX 8 is free to use again. Palm's WebOS ended up with HP and now runs in LG smart TVs – but it's Linux underneath.
The most radical, though, was probably Symbian. The Register covered it at length back in the day, notably the epic Psion: the Last Computer feature, followed by the two-part Symbian, The Secret History, and Symbian UI Wars features.
Built from scratch in the late 1990s in the then-relatively new C++, it evolved into a real-time microkernel OS for handhelds, with the radical EKA2 microkernel designed by Dennis May and documented in detail in the book Symbian OS Internals. There's also The Symbian OS Architecture Sourcebook [PDF]. An official version of the source code is on GitHub, and other copies are out there.
We liked this description from CHERI Project boffin David Chisnall:
The original Symbian kernel was nothing special, but EKA2 (which is the one described in the amazing Symbian Internals book) was a thing of beauty. It had a realtime nano-kernel (does not allocate memory) that could run both an RTOS and a richer application stack.
It was a victim of poor timing: the big advantage was the ability to run both the apps and the phone stack on the same core, but it came along as Arm cores became cheap enough that just sticking two in the SoC was cheap enough.
Before Nokia was assimilated and digested by Microsoft, it open sourced the OS, and despite some licensing concerns, it's still there.
It strikes this vulture as odd that while work continues on some ground-up FOSS OS projects in C++, such as the Genode OS or Serenity OS, which we looked at in 2022, the more complete Symbian, which shipped on millions of devices and for a while had a thriving third-party application market, languishes ignored.
(Incidentally, the Serenity OS project lead has moved on to the independent Ladybird browser, which we looked at in 2023. Work on the OS continues, now community-led.)

Symbian's progenitor, Psion EPOC32, predates much of the standardization of C++ – much as BeOS did. We've seen comments that it was not easy to program, but tools such as P.I.P.S. made it easier. Nokia wasted vast effort on multiple incompatible UIs, which have been blamed for tearing Symbian apart, but none of that matters now: adapt some existing FOSS stuff, and forget backwards compatibility. Relatively few of the apps were FOSS, and who needs touchscreen phone apps on a Raspberry Pi anyway? Qt would be ideal – it's a native C++ tool too.
Fans of all manner of 20th century proprietary OSes from AmigaOS to OS/2 bemoan that these never went open source. Some of BeOS made it into PalmOS Cobalt but that sank. Palm even mulled basing an Arm version of PalmOS on Symbian, but the deal fell through.
Some of those OSes have been rebuilt from scratch, including AmigaOS as AROS and BeOS as Haiku. But they run on Intel. Neither runs natively on Arm, and yet Symbian sits there ignored. Sometimes you can't even give the good stuff away.
by Liam Proven // Thu 17 Jul 2025 // 07:27 UTC
Arthur T Knackerbracket has processed the following story:
The Information Technology Organization of Iran (ITOI), the government body that develops and implements IT services for the country, is looking for suppliers of cloud computing.
The org[anisation] recently posted a notification of its desire to evaluate, grade, and rank cloud players to assess their suitability to host government services.
At the end of the exercise, the organization hopes to have a panel of at least three cloud operators capable of handling government services.
The government agency will base its assessments on compliance with standards such as ISO 27017 and ISO 27018, which define controls for secure cloud computing and protection of personally identifiable information.
ITOI also expects companies that participate in its evaluation to be compliant with the NIST SP 800-145 definition of cloud computing.
Yes, Iran recognizes NIST – the USA's National Institute of Standards and Technology – despite regarding America as a trenchant enemy.
ITOI has cast the net wide, by seeking cloud operators with the capacity to deliver IaaS, PaaS, or SaaS. Service providers that deliver private, public, hybrid or community clouds are also welcome, as are service providers who specialize in security, monitoring, support services, or cloud migration.
Organizations that pass ITOI’s tests will earn a “cloud service rating certificate” that makes them eligible for inclusion on a list of authorized cloud services providers.
https://lists.archlinux.org/archives/list/aur-general@lists.archlinux.org/thread/7EZTJXLIAQLARQNTMEW2HBWZYE626IFJ/
https://archive.ph/jwPRg
On the 16th of July, at around 8pm UTC+2, a malicious AUR package was uploaded to the AUR. Two other malicious packages were uploaded by the same user a few hours later. These packages were installing a script coming from the same GitHub repository that was identified as a Remote Access Trojan (RAT).

The affected malicious packages are:

- librewolf-fix-bin
- firefox-patch-bin
- zen-browser-patched-bin

The Arch Linux team addressed the issue as soon as they became aware of the situation. As of today, 18th of July, at around 6pm UTC+2, the offending packages have been deleted from the AUR.

We strongly encourage users that may have installed one of these packages to remove them from their system and to take the necessary measures in order to ensure they were not compromised.
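For Arch users who want a quick check, here is a minimal sketch (our suggestion, not part of the advisory) that asks pacman whether any of the three packages named above is installed:

```python
# Minimal sketch (not part of the advisory): check whether any of the three
# malicious packages named above are installed, using pacman's query interface.
import subprocess

SUSPECT_PACKAGES = ["librewolf-fix-bin", "firefox-patch-bin", "zen-browser-patched-bin"]

def installed(pkg: str) -> bool:
    # `pacman -Q <pkg>` exits 0 only if the package is installed
    return subprocess.run(["pacman", "-Q", pkg],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

for pkg in SUSPECT_PACKAGES:
    if installed(pkg):
        print(f"WARNING: {pkg} is installed -- remove it (e.g. pacman -Rns {pkg}) "
              "and audit the system for compromise.")
    else:
        print(f"{pkg}: not installed")
```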
/r/linux Discussion: http://old.reddit.com/r/linux/comments/1m3wodv/malware_found_in_the_aur/
/r/archlinux Discussion: https://old.reddit.com/r/archlinux/comments/1m387c5/aurgeneral_security_firefoxpatchbin/
https://distrowatch.com/dwres.php?resource=showheadline&story=20030
Clear Linux is a rolling release, highly optimized distribution developed by Intel. Or, it is now more accurate to say it "was", since Intel has decided to abruptly discontinue the project. Just one day after the project's latest snapshot, the following announcement was published on the distribution's forum: "Effective immediately, Intel will no longer provide security patches, updates, or maintenance for Clear Linux OS, and the Clear Linux OS GitHub repository will be archived in read-only mode. So, if you're currently using Clear Linux OS, we strongly recommend planning your migration to another actively maintained Linux distribution as soon as possible to ensure ongoing security and stability."
An Anonymous Coward writes:
Microsoft's Copilot finally comes into its own with new AI features like Recall
Buy any new Windows PC and you might notice an unfamiliar key: the Copilot key. Launched in January, it promised quick access to Microsoft's AI Copilot. Yet features were limited, causing critics to wonder: Is this it?
Microsoft Build 2024, the company's annual developer conference, had a reply: No. On 20 May, the company revealed Copilot+ PCs, a new class of Windows computers that exclusively use Qualcomm chips (for now, at least) to power a host of AI features that run on-device. Copilot+ PCs can quickly recall tasks you've completed on the PC, refine simple sketches in Paint, and translate languages in a real-time video call. Microsoft's Surface Laptop and Surface Pro will showcase these features, but they're joined by Copilot+ PCs from multiple laptop partners including Acer, Asus, Dell, HP, Lenovo, and Samsung.
"We wanted to put the best foot forward," said Brett Ostrum, corporate vice president of Surface devices at Microsoft. "When we started this journey, the goal was that Surface was going to ship relevant volumes on [Qualcomm] silicon. And people need to love it."
Windows' Recall is a new way to search
Microsoft revealed several AI features at Build 2024, but the highlight was Recall. Similar to Rewind, an app for the Mac I tried in December 2023, Recall can help Windows users find anything they've seen, heard, or opened on their PC. This includes files, documents, and apps, but also images, videos, and audio. Recall defaults to a scrollable timeline, which is broken up into discrete events detected by Recall, but users can also browse with semantic text search.
It's a simple feature to use, but its implications are vast. If Recall works as advertised, it could fundamentally change how people interact with Windows PCs. There's arguably little need to organize photos from a vacation or carefully file away notes if Recall can find anything, and everything, you've opened on your PC.
"It used to be if you interacted with your PC, you used a command line. Then we came up with the graphical user interface," said Ostrum. "Now, how do you find the things that you are looking for? Recall is a much more natural and richer way to interact with your files."
There's one unavoidable caveat: It's too early to know if Recall will do what Microsoft says. I tried the feature firsthand, and found that it could recall a fictional recipe I asked Microsoft Copilot to create. It did so immediately, and also after several hours had passed. Whether it can do the same next month, or next year, remains to be seen.
While Recall was the star, it was joined by several additional AI features. These include Cocreator, a new feature for Microsoft Paint that uses AI to convert simple sketches into more elaborate digital art, and Live Captions, which captions and translates video in real time. Like Recall, both features lean on a Copilot+ PC's neural processing unit (NPU). That means these features, again like Recall, won't be available on older PCs.
These features are intriguing, but they're shadowed by a concern: privacy. Recall could help you find lost documents, and live translation could lower language barriers, but they only work if Microsoft's AI captures what's happening on your PC. The company hopes to ease these concerns by running AI models on-device and encrypting any data that's stored.
Qualcomm partnership leaves Intel, AMD in the cold
Of course, running an AI model on-device isn't easy. CPUs can handle some AI models, but performance often isn't ideal, and many AI models aren't optimized for the hardware. GPUs are a better fit for AI workloads but can draw a lot of power, which shortens battery life.
That's where Qualcomm comes into the picture. Its latest laptop chip, the Snapdragon X Elite, was designed by many of the same engineers responsible for Apple's M1 chip and includes an NPU.
Microsoft's two Copilot+ PCs, the Surface Laptop and Surface Pro, both have Snapdragon X Elite processors, and both quote AI performance of up to 45 trillion operations per second. Intel's current Intel Core Ultra processors are a step behind, with quoted AI performance up to 34 trillion operations per second.
That's apparently not enough for Microsoft: All Copilot+ PCs available at launch on 18 June will have Qualcomm chips inside. And many new AI features, including Windows' Recall, only work on Copilot+ PCs. Put simply: If you want to use Recall, you must buy Qualcomm.
Intel and AMD chips will appear in Copilot+ PCs eventually, but Ostrum said that may not happen until the end of 2024 or early 2025.
"We will continue to partner with [Intel and AMD] when it makes sense," said Ostrum. "There is both an element of how much performance there is, but there's also an element of how efficient that performance is [...] we don't want [AI] to be taxing multiple hours of battery life at a given time." Ostrum says activating AI features like Windows' Recall on a Copilot+ PC shaves no more than 30 to 40 minutes off a laptop's battery life, and all of Microsoft's battery-life quotes for Surface devices (which promise up to 15 hours of Web browsing and 22 hours of video playback) assume Copilot+ AI features are turned on.
It's unusual to see a major Windows product launch without Intel at the forefront of it, but that underscores Microsoft's belief that features like Recall only work on hardware that prioritizes AI performance and efficiency. If Microsoft has its way, the Copilot key won't be a fad. It'll be the most important key on every Windows PC.
So, are you getting one, or staying as far away as you can?