

posted by janrinok on Wednesday July 23, @07:54PM
from the too-late! dept.

Conversations Between LLMs Could Automate the Creation of Exploits, Study Shows


As computers and software become increasingly sophisticated, hackers need to rapidly adapt to the latest developments and devise new strategies to plan and execute cyberattacks. One common strategy to maliciously infiltrate computer systems is known as software exploitation.

As suggested by its name, this strategy involves the exploitation of bugs, vulnerabilities or flaws in software to execute unauthorized actions. These actions include gaining access to a user's personal accounts or computer, remotely executing malware or specific commands, stealing or modifying a user's data or crashing a program or system.

Understanding how hackers devise potential exploits and plan their attacks is of the utmost importance, as it can ultimately help to develop effective security measures against their attacks. Until now, creating exploits has been primarily possible for individuals with extensive knowledge of programming, the protocols governing the exchange of data between devices or systems, and operating systems.

A recent paper published in Computer Networks, however, shows that this might no longer be the case. Exploits could also be automatically generated by leveraging large language models (LLMs), such as the model underlying the well-known conversational platform ChatGPT. In fact, the authors of the paper were able to automate the generation of exploits via a carefully prompted conversation between ChatGPT and Llama 2, the open-source LLM developed by Meta.

"We work in the field of cybersecurity, with an offensive approach," Simon Pietro Romano, co-senior author of the paper, told Tech Xplore. "We were interested in understanding how far we could go with leveraging LLMs to facilitate penetration testing activities."

As part of their recent study, Romano and his colleagues initiated a conversation aimed at generating software exploits between ChatGPT and Llama 2. By carefully engineering the prompts they fed to the two models, they ensured that the models took on different roles and completed five different steps known to support the creation of exploits.

These steps included analyzing a vulnerable program, identifying possible exploits, planning an attack based on those exploits, understanding the behavior of targeted hardware systems and, ultimately, generating the actual exploit code.

"We let two different LLMs interoperate in order to get through all of the steps involved in the process of crafting a valid exploit for a vulnerable program," explained Romano. "One of the two LLMs gathers 'contextual' information about the vulnerable program and its run-time configuration. It then asks the other LLM to craft a working exploit. In a nutshell, the former LLM is good at asking questions. The latter is good at writing (exploit) code."

So far, the researchers have only tested their LLM-based exploit generation method in an initial experiment. Nonetheless, they found that it ultimately produced fully functional code for a buffer overflow exploit, an attack that entails overwriting data stored by a system to alter the behavior of specific programs.

"This is a preliminary study, yet it clearly proves the feasibility of the approach," said Romano. "The implications concern the possibility of arriving at fully automated Penetration Testing and Vulnerability Assessment (VAPT)."

The recent study by Romano and his colleagues raises important questions about the risks of LLMs, as it shows how hackers could use them to automate the generation of exploits. In their next studies, the researchers plan to continue investigating the effectiveness of the exploit generation strategy they devised to inform the future development of LLMs, as well as the advancement of cybersecurity measures.

"We are now exploring further avenues of research in the same field of application," added Romano. "Namely, we feel like the natural prosecution of our research falls in the field of the so-called 'agentic' approach, with minimal human supervision."

More information: A chit-chat between Llama 2 and ChatGPT for the automated creation of exploits. Computer Networks (2025). DOI: 10.1016/j.comnet.2025.111501.

First-ever AI malware 'LameHug' hides in ZIP files to hack Windows PCs

A new malware named LameHug is using an Alibaba large language model (LLM), the same tech that powers AI chatbots like ChatGPT, to generate and run commands and steal information from Windows machines.

A new family of malware called LameHug is infecting systems around the world using the very same tech that powers AI chatbots like ChatGPT, Gemini, Perplexity and Claude. Discovered by the Ukrainian national cyber incident response team (CERT-UA), the malware uses large language models to generate and run commands to infect and steal information from Windows PCs.

CERT-UA says that the attacks are from the Russian threat group APT28. Written in the popular coding language Python, LameHug uses APIs from Hugging Face and is powered by Qwen-2.5-Coder-32B-Instruct, an open-source large language model developed by Alibaba Cloud, to generate and send commands.

As is the case with AI chatbots like Gemini, ChatGPT and Perplexity, the large language model can convert instructions given in natural language into executable code or shell commands. In emails sent to Ukrainian government authorities while impersonating ministry officials, the group hid the payload delivering the LameHug malware in a ZIP archive that contained files named "AI_generator_uncensored_Canvas_PRO_0.9.exe" and "image.py".
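The core capability CERT-UA describes, natural language in and an executable command out, takes only a few lines against a hosted model. Below is a deliberately benign sketch of that generate-then-run pattern using the `huggingface_hub` client; the model ID comes from the article, while the prompt and everything else is illustrative and emphatically not LameHug's code.

```python
# Benign sketch of the capability described above: asking a hosted LLM to turn
# a natural-language instruction into a shell command at run time. The model
# name is from the article; the prompt is harmless and purely illustrative.
from huggingface_hub import InferenceClient

client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct")

instruction = "Print the current user's name on Windows."
resp = client.chat_completion(
    messages=[{"role": "user",
               "content": "Reply with a single Windows cmd.exe command, "
                          f"no explanation: {instruction}"}],
    max_tokens=50,
)
command = resp.choices[0].message.content.strip()
print(command)   # e.g. "whoami"
```

This generate-then-run loop is exactly what makes the technique hard to catch with static analysis: the malicious commands never appear in the payload itself, only in network traffic to the model API.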

The malware used commands that allowed APT28, the threat group that sent these emails, to extract information about the infected Windows PC and search for text and PDF documents stored in the Documents, Downloads and Desktop folders. This information was then sent to a remotely controlled server, but as of now it is unclear how the LLM-powered attack was carried out.

According to a recently issued advisory by the threat intelligence sharing platform IBM X-Force Exchange, this is the first documented case of malware using LLMs to write executable commands, which "allows threat actors to adapt their practice during a compromise without needing new payloads, potentially making the malware harder to detect by security software or static analysis tools." The news comes after security analysis firm Check Point said that it discovered new malware called Skynet that evades detection by AI tools.

https://indianexpress.com/article/technology/tech-news-technology/lamehug-virus-zip-file-ai-powered-alibaba-llm-malware-10136327/
Alternative link: https://newsinterpretation.com/ai-powered-malware-lazyhug-secretly-steals-files-from-windows-pcs/


Original Submission #1 | Original Submission #2

posted by janrinok on Wednesday July 23, @03:10PM

https://www.osnews.com/story/142855/microsoft-wants-to-find-out-why-windows-11-is-so-slow/

Microsoft wants to know why, exactly, Windows 11 is slow, so it's adding a feature in the latest Insider Preview to collect data when a Windows 11 machine is experiencing slowness or sluggishness.

As part of our commitment to improving Windows performance, logs are now collected when your PC has experienced any slow or sluggish performance. Windows Insiders are encouraged to provide feedback when experiencing PC issues related to slow or sluggish performance, allowing Feedback Hub to automatically collect these logs, which will help us root cause issues faster. Use the Desktop > System Sluggishness category when filing feedback to allow Feedback Hub to automatically pick up these logs. These logs are stored locally (%systemRoot%\Temp\DiagOutputDir\Whesvc folder) and only sent to Microsoft via Feedback Hub when feedback is submitted.

The replies are interesting, even if you disregard the expected but unwelcome "replace it with Linux" suggestions. Some existing Windows users are complaining about the excessive telemetry and what they describe as spyware. This is an "Insider Preview", so it is more of an alpha release.

One comment requests:

  • allow the option to fully disable telemetry
  • allow the option to fully disable a feature the user doesn't need (bonus: reduced attack surface)
  • allow the option to create offline accounts without having to rely on shenanigans
  • remove ads from the system
  • stop reinventing the wheel and replacing working components with much heavier ones (cough, Notepad, cough)

Original Submission

posted by janrinok on Wednesday July 23, @10:24AM

The atomic bomb marker inside your body:

It is 80 years since the first nuclear weapon test – codenamed Trinity – detonated above the desert in New Mexico. Today the hidden legacy of nuclear bomb tests can still be found in our cells – and is proving surprisingly useful to scientists.

It's in your teeth. Your eyes and your brain too. Scientists call it the "bomb spike" (or "bomb pulse") – and for more than half a century its signature has been present inside the human body.

On 16 July 1945, scientists of the Manhattan Project detonated the first nuclear weapon, known as the Trinity test, in New Mexico. The 18.6kt explosion lit up the sky and sent a blast of searing heat across the desert as a fireball lofted high into the sky. In the days that followed, white flakes and dust rained down on areas downwind. A now de-classified report from the time warned that radioactive particles spread over an area of more than 2,700sq miles (6,993sq km). And this test was just the start of the atomic era.

In the 1950s, there were so many nuclear bomb explosions above ground that they transformed the chemical make-up of the atmosphere – altering the carbon composition of life on Earth ever since, along with oceans, sediments, stalactites and more.

Unlike the direct radioactive fallout from the explosions, the bomb spike is not harmful. In fact, it's proven surprisingly helpful for scientists in recent years. Some have even gone so far as to describe it as the "mushroom cloud's silver lining".

Why? Evidence of the pulse is so ubiquitous that it can, among many other insights, tell forensic scientists when a person was born (or died), provide discoveries about the age of neurons in our brains, reveal the origin of poached wildlife, determine red wine vintage and even unlock the true age of centuries-old sharks.

And now it may also help to define a new geological era. In July 2023, a group of earth scientists recommended that its presence in a Canadian lake – along with other human-made markers from the mid-20th Century – should represent the official start of the Anthropocene.

So, what exactly is the bomb spike, and what can it reveal about us and the world?

Before the 1963 Nuclear Test Ban Treaty obligated signatory nations to test nuclear bombs underground, governments exploded hundreds of atomic weapons out in the open air. More than 500 of these blasts, mainly conducted by the US and the Soviet Union, spewed their contents into the atmosphere.

It's well-established that these tests spread radioactive material far and wide, harming humans and wildlife and rendering whole regions uninhabitable. Perhaps lesser known outside the scientific laboratory is that the bombs also reacted with natural nitrogen to form new isotopes, particularly carbon-14.

By the 1960s, overground bomb testing had produced almost twice the amount of carbon-14 in the atmosphere compared with previous levels. First the isotope entered water, sediments and vegetation, and then it passed along the food chain to humans. It has even reached organisms in the deepest ocean trench.

"In essence, every carbon pool on Earth which was in exchange with atmospheric CO2 since the late 1950s has been labelled by bomb carbon-14," writes Walter Kutschera of the University of Vienna, who published a review of the scientific applications of the spike in the journal Radiocarbon in 2022.

Back in the mid-20th Century, scientists noted the carbon-14 spike when atmospheric testing stopped, but it took decades for them to realise that the elevated levels might be useful. From the 1950s onwards, they had been using carbon-14 to date paleolithic remains or ancient texts, but that was based on its radioactive decay – known as radiocarbon dating. The isotope is unstable: it decays slowly into nitrogen with a half-life of 5,730 years. So, when a Neanderthal died, for instance, the quantity of carbon-14 in their bones and teeth would have started to gradually decline. Measure the extent of the decline, and you have a Neanderthal date of death.
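The arithmetic behind that dating is compact enough to show. Here is a minimal sketch using carbon-14's published half-life; the measured fractions fed in are made-up inputs for illustration.

```python
# Worked example of conventional radiocarbon dating: solve N = N0 * 2^(-t/T)
# for t, where T is carbon-14's half-life. Input fractions are illustrative.
import math

HALF_LIFE = 5730.0   # carbon-14 half-life, in years

def radiocarbon_age(remaining_fraction: float) -> float:
    """Years since death, given the fraction of the original C-14 remaining."""
    return HALF_LIFE * math.log(1.0 / remaining_fraction, 2)

print(f"{radiocarbon_age(0.5):.0f} years")    # half left -> one half-life, 5730
print(f"{radiocarbon_age(0.01):.0f} years")   # 1% left -> roughly 38,000 years
```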

Radiocarbon dating, however, tends to be limited to samples that are more than 300 years old, because of the isotope's slow decay rate. Any younger, and it hasn't decayed enough for an accurate date. Muddying recent dating further is humanity's introduction of additional carbon dioxide into the atmosphere since the Industrial Revolution – the so-called Suess effect.

Around the turn of the century, however, researchers realised that the bomb spike could help them use carbon-14 in a different way – and crucially it allows for dating within the past 70-80 years.

Ever since the peak in the 1950s, levels of the isotope in nature (and human beings) have gradually declined. Scientists can therefore analyse the proportions of carbon-14 in any organic substance that has exchanged atmospheric carbon since the tests, and specify the window in which it formed, down to a resolution of one to two years.

And that includes you and me. If you were born in the 1950s, your tissues will have accumulated more carbon-14 than a 1980s child, but levels are only now approaching the pre-atomic state.
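Bomb-pulse dating inverts that logic: rather than waiting for decay, it matches a measured carbon-14 level against the known atmospheric record since the 1950s. The toy sketch below shows only the matching step; the curve values are illustrative placeholders, not real calibration data, which in practice comes from published atmospheric C-14 datasets.

```python
# Toy sketch of bomb-pulse dating: find the year on the atmospheric curve
# whose C-14 level best matches a measurement. CURVE values are placeholders,
# NOT real calibration data.
CURVE = {1955: 1.05, 1964: 1.90, 1975: 1.35, 1990: 1.15, 2010: 1.04}

def bomb_pulse_year(measured: float) -> int:
    """Return the curve year whose level is closest to the measured value."""
    return min(CURVE, key=lambda year: abs(CURVE[year] - measured))

# Note: the real curve rises then falls, so one level can match a year on
# either side of the 1963-64 peak; actual analyses resolve that ambiguity
# with contextual evidence about when the tissue formed.
print(bomb_pulse_year(1.33))   # -> 1975 on this toy curve
```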

One of the earliest uses of the bomb spike was to assist crime investigators seeking to identify the age of unidentified human remains. Forensic scientists have found that they can measure bomb carbon-14 in teeth, bones, hair or even the lens of the eye to help them estimate how old a person was, or when they died, according to Eden Centaine Johnstone-Belford of Monash University and Soren Blau of the Victorian Institute of Forensic Medicine in Australia.

In a 2019 review, Centaine Johnstone-Belford and Blau cite multiple examples where the bomb spike has informed police enquiries. For example, in 2010 investigators used it to confirm a body found in a northern Italian lake had been dumped there by the killer the previous year.

The pair also point out that knowing the time since death can be "a vital determination in human rights abuse cases such as war crimes, genocide and extrajudicial killings". In 2004, for example, bomb spike dating of hair samples from a mass grave in Ukraine allowed investigators to identify a Nazi war crime that occurred between 1941 and 1952.

The bomb spike has also unlocked new scientific discoveries, revealing new insights about the cells in our bodies and brains. In 2005, the biologist Kirsty Spalding of the Karolinska Institute in Sweden and colleagues showed that it was possible to date the relative ages of our cells by analysing bomb carbon-14 within their DNA. Across several subsequent studies, she has used the technique to answer whether certain cells in our bodies have been around since birth, or whether they are continually replaced.

For example, in 2008 Spalding and colleagues showed that the body continually replaces fat cells called adipocytes as the cells die. The number of these fat cells, she found, stays constant across adulthood – which promises new ways to tackle obesity. "Understanding that this is a dynamic process opens up new avenues of therapy, which may include manipulating the birth or death rate of fat cells, in combination with exercise and diet, to help reduce the number of fat cells in obesity," she says.

In 2013, Spalding and colleagues also used the bomb spike to look at the turnover of brain cells. For many years, researchers assumed that the number of neurons was fixed in childhood, and indeed her earlier research had suggested that was the case in regions like the cortex. However, by using carbon-14 to date neurons within the hippocampus, she and her team confirmed that new neurons may be produced there throughout adult life.

Corroborated by other research, the possible existence of "adult neurogenesis" has proven to be one of the most important neuroscience discoveries of the past 20 years. While the science is far from settled, it has suggested new avenues for medical strategies that might prevent neuron loss via disease, or even increase the generation of new neurons.

This article was originally published on 9 August 2023. It was updated on 16 July 2025 for the 80th anniversary of the Trinity test.

Journal Reference:
The Mushroom Cloud's Silver Lining, Science (DOI: 10.1126/science.321.5895.1434)
Eden Centaine Johnstone-Belford, Soren Blau. A Review of Bomb Pulse Dating and its Use in the Investigation of Unidentified Human Remains, Journal of Forensic Sciences (DOI: 10.1111/1556-4029.14227)
Progress in Authentication of Food and Wine, ACS Symposium Series (DOI: 10.1021/bk-2011-1081.ch006)
Eye lens radiocarbon reveals centuries of longevity in the Greenland shark (Somniosus microcephalus), Science (DOI: 10.1126/science.aaf1703)
Ning Wang, Chengde Shen, Weidong Sun, et al. Penetration of Bomb 14C Into the Deepest Ocean Trench [open], Geophysical Research Letters (DOI: 10.1029/2018GL081514)
Radiocarbon dating of seized ivory confirms rapid decline in African elephant populations and provides insight into illegal trade, Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.1614938113)
Early mining and smelting lead anomalies in geological archives as potential stratigraphic markers for the base of an early Anthropocene, The Anthropocene Review (DOI: 10.1177/2053019618756682)
The trajectory of the Anthropocene: The Great Acceleration, The Anthropocene Review (DOI: 10.1177/2053019614564785)
Defining the onset of the Anthropocene, Science (DOI: 10.1126/science.ade2310)
The varved succession of Crawford Lake, Milton, Ontario, Canada as a candidate Global boundary Stratotype Section and Point for the Anthropocene series, The Anthropocene Review (DOI: 10.1177/20530196221149281)


Original Submission

posted by jelizondo on Wednesday July 23, @05:35AM
from the drowning-in-fire dept.

Matson surprised customers this week with an announcement that, effective immediately, it would suspend transporting battery-powered electric or plug-in hybrid electric vehicles due to the hazardous material classification of their lithium-ion batteries. The ability to ship cars between the mainland of the United States, Hawaii, Guam, and Alaska was an important service both for individuals and car dealers:

[Editor's Note: Matson, Inc. is a U.S. owned and operated transportation services company headquartered in Honolulu, Hawaii. --JE]

In a letter sent to customers, the company writes, "Due to increasing concern for the safety of transporting vehicles powered by large lithium-ion batteries, Matson is suspending acceptance of used or new electric vehicles (EVs) and plug-in hybrid vehicles for transport aboard its vessels. Effective immediately, we have ceased accepting new bookings for these shipments to/from all trades."

The Hawaii Electric Vehicle Association reports there are currently more than 37,000 electric vehicles registered in the state. No figures were reported for Guam, but dealers who spoke with the local media said they regretted the decision, highlighting that EVs are well-suited for driving on the island.

Matson had reported in the past that it had developed a collaborative team approach to tackle the complexities of carrying lithium batteries. It established an Electric Vehicle Safe Carriage Working Group, and said it was participating in external working groups on electric vehicles and lithium batteries.

[...] Matson continues to transport conventional cars. It offers the service both trans-ocean and also moves the containers interisland in Hawaii as part of its barge service.

Previously: Blaze Sends Ship Carrying Hundreds Of Chinese EVs To Bottom Of Pacific


Original Submission

posted by jelizondo on Wednesday July 23, @12:54AM

11,000-year-old feast uncovered: Why hunters hauled wild boars across mountains:

According to new research, communities that lived in western Iran about 11,000 years ago, during the Early Neolithic period, took an approach to gift-giving much like ours today.

They invested significant effort to bring wild boars hunted in dispersed parts of the landscape as gifts to be eaten at a communal celebration that took place at what is now the archaeological site of Asiab in the Zagros Mountains.

The study, conducted by an international team of researchers including scientists from The Australian National University (ANU), suggests this practice of offering gifts that carry geographical symbolism can be traced back to prehistory.

"Food and long-standing culinary traditions form an integral component of cultures all over the globe. It is for this reason holidays, festivals, and other socially meaningful events commonly involve food. For example, we cannot imagine Christmas without the Christmas meal, Eid without the food gifts, or Passover without matzo ball soup," Dr Petra Vaiglova from ANU said.

The scientists unearthed the skulls of 19 wild boars that were neatly packed and sealed inside a pit within a round building at the Asiab site. Butchery marks on the animals' skulls suggest they were used for feasting, but until now scientists were unsure where these boars came from.

Dr Vaiglova and the international research team examined the tooth enamel of five of these wild boars. The researchers analysed microscopic growth patterns and chemical signatures inside the enamel that offered "tell-tale" signs indicating that at least some of the boars used for the feast were not from the area where the gathering took place.

"Just like trees and their annual growth rings, teeth deposit visible layers of enamel and dentine during growth that we can count under the microscope. This is the first time these growth layers have been used to guide geochemical analysis of animal teeth to answer questions about human-animal interactions," Dr Vaiglova said.

"Rainfall and bedrock have distinct isotopic values in different geographical locations. These isotopic values get incorporated into animal tissues through drinking water and food. Measuring the isotopic values of tooth enamel allowed us to assess whether all the animals came from the same part of the region or whether they originated from more dispersed locations.

"Because the values we measured across the five teeth showed a high amount of variability, it is unlikely that all the animals originated from the same location. It is possible that some of them originated roughly 70 kilometers (~43 miles) away from the site where the feast took place."

The researchers said it is surprising that these hunters went through such effort to kill and transport boars from their local region over difficult mountainous terrain during a journey that likely would have taken several days, especially considering boars were not the most hunted animal during the Early Neolithic period.

Dr Vaiglova said communities living in the Zagros Mountains at this time had a "very diverse hunting strategy" and were hunting lots of different animal species.

"Boars are especially aggressive and so displaying them as hunting trophies or presenting them at a feast carries with it a certain element of significance. Bringing these animals from distant locations would have undoubtedly helped celebrate the importance of the social event that took place at Asiab," she said.

"What is special about the feast at Asiab is not only its early date and that it brought together people from across the wider region, but also the fact that people who participated in this feast invested substantial amounts of effort to ensure that their contributions involved an element of geographic symbolism. This feast also took place at a time that pre-dates agriculture and farming practices.

"This was clearly a very meaningful event and the fact that people put in so much effort to transport the boars over such challenging terrain provides us with a glimpse of how old the tradition of bringing geographically meaningful gifts to social events really is.

"These people were clearly the ultimate dinner party guests."

The research is published in Nature Communications Earth and Environment and involved scientists from Australia, Germany, Denmark and Iran.

Journal Reference:
Vaiglova, Petra, Kierdorf, Horst, Witzel, Carsten, et al. Transport of animals underpinned ritual feasting at the onset of the Neolithic in southwestern Asia [open], Communications Earth & Environment (DOI: 10.1038/s43247-025-02501-z)


Original Submission

posted by jelizondo on Tuesday July 22, @08:09PM
from the There-Ain't-No-Such-Thing-as-a-(Sugar)-Free-Lunch dept.

Popular sugar substitute linked to brain cell damage and stroke risk:

From low-carb ice cream to keto protein bars to "sugar-free" soda, the decades-old sweetener erythritol is everywhere.

But new University of Colorado Boulder research shows the popular sugar substitute and specialty food additive comes with serious downsides, impacting brain cells in numerous ways that can boost risk of stroke.

"Our study adds to the evidence suggesting that non-nutritive sweeteners that have generally been purported to be safe, may not come without negative health consequences," said senior author Christopher DeSouza, professor of integrative physiology and director of the Integrative Vascular Biology Lab.

First approved by the Food and Drug Administration in 2001, erythritol is a sugar alcohol, often produced by fermenting corn and found in hundreds of products. It has almost no calories, is about 80% as sweet as table sugar, and has negligible impact on insulin levels, making it a favorite for people trying to lose weight, keep their blood sugar in check or avoid carbohydrates.

Recent research has begun to shed light on its risks.

One recent study involving 4,000 people in the U.S. and Europe found that men and women with higher circulating levels of erythritol were significantly more likely to have a heart attack or stroke within the next three years.

DeSouza and first author Auburn Berry, a graduate student in his lab, set out to understand what might be driving that increased risk.

Researchers in the lab treated human cells that line blood vessels in the brain for three hours with about the same amount of erythritol contained in a typical sugar-free beverage.

They observed that the treated cells were altered in numerous ways: They expressed significantly less nitric oxide, a molecule that relaxes and widens blood vessels, and more endothelin-1, a protein that constricts blood vessels. Meanwhile, when challenged with a clot-forming compound called thrombin, cellular production of the natural clot-busting compound t-PA was "markedly blunted." The erythritol-treated cells also produced more reactive oxygen species (ROS), a.k.a. "free radicals," metabolic byproducts which can age and damage cells and inflame tissue.

"Big picture, if your vessels are more constricted and your ability to break down blood clots is lowered, your risk of stroke goes up," said Berry. "Our research demonstrates not only that, but how erythritol has the potential to increase stroke risk."

DeSouza notes that their study used only a serving-size worth of the sugar substitute. For those who consume multiple servings per day, the impact, presumably, could be worse.

The authors caution that their study was a laboratory study, conducted on cells, and larger studies in people are needed.

That said, DeSouza encourages consumers to read product labels, looking for erythritol or "sugar alcohol" in the ingredients.

"Given the epidemiological study that inspired our work, and now our cellular findings, we believe it would be prudent for people to monitor their consumption of non-nutrient-sweeteners such as this one," he said.

Journal Reference:
Auburn R. Berry, Samuel T. Ruzzene, Emily I. Ostrander, et al. The non-nutritive sweetener erythritol adversely affects brain microvascular endothelial cell function, Journal of Applied Physiology (DOI: JAPPL-00276-2025)


Original Submission

posted by janrinok on Tuesday July 22, @06:48PM

The BBC has announced that Ozzy Osbourne died today.

From the Guardian:

Ozzy Osbourne, whose gleeful "Prince of Darkness" image made him one of the most iconic rock frontmen of all time, has died aged 76.

A statement from the Osbourne family reads: "It is with more sadness than mere words can convey that we have to report that our beloved Ozzy Osbourne has passed away this morning. He was with his family and surrounded by love. We ask everyone to respect our family privacy at this time." No cause of death was given, though Osbourne had experienced various forms of ill health in recent years.

posted by jelizondo on Tuesday July 22, @03:22PM

A strange fossil at the edge of the solar system just shook up Planet Nine theories:

The object was found as part of the survey project FOSSIL (Formation of the Outer Solar System: An Icy Legacy), which takes advantage of the Subaru Telescope's wide field of view. The object was discovered through observations taken in March, May, and August 2023 using the Subaru Telescope. The object is currently designated 2023 KQ14; a more classical name will be assigned later by the International Astronomical Union. After that, follow-up observations in July 2024 with the Canada-France-Hawaii Telescope and a search for unrecognized sightings of the object in old data from other observatories allowed astronomers to track the object's orbit over 19 years. Due to its peculiar distant orbit, 2023 KQ14 has been classified as a "sednoid", making it only the fourth known example of this rare type of object.

[Editor's Note: A sednoid is a trans-Neptunian object with a large semi-major axis, a distant perihelion and a highly eccentric orbit, similar to that of the dwarf planet Sedna --JE]
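The orbital terms in that note relate through one small formula: perihelion distance q equals a(1 - e), so eccentricity can be read off as e = 1 - q/a. A quick sketch, using approximate published values for Sedna itself for illustration:

```python
# Eccentricity from perihelion distance q and semi-major axis a, via
# q = a * (1 - e). The inputs are approximate published values for Sedna.
def eccentricity(q_au: float, a_au: float) -> float:
    return 1.0 - q_au / a_au

print(f"Sedna: e ~ {eccentricity(76.0, 506.0):.2f}")   # ~0.85, highly eccentric
```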

Numerical simulations conducted by the FOSSIL team, some of which used the PC cluster operated by the National Astronomical Observatory of Japan, indicate that 2023 KQ14 has maintained a stable orbit for at least 4.5 billion years. Although its current orbit differs from those of the other sednoids, the simulations suggest that their orbits were remarkably similar around 4.2 billion years ago.

The fact that 2023 KQ14 now follows an orbit different from the other sednoids indicates that the outer Solar System is more diverse and complex than previously thought. This discovery also places new constraints on the hypothetical Planet Nine. If Planet Nine exists, its orbit must lie farther out than typically predicted.

Dr. Yukun Huang of the National Astronomical Observatory of Japan who conducted simulations of the orbit comments, "The fact that 2023 KQ14's current orbit does not align with those of the other three sednoids lowers the likelihood of the Planet Nine hypothesis. It is possible that a planet once existed in the Solar System but was later ejected, causing the unusual orbits we see today."

Regarding the significance of this discovery, Dr. Fumi Yoshida states, "2023 KQ14 was found in a region far away where Neptune's gravity has little influence. The presence of objects with elongated orbits and large perihelion distances in this area implies that something extraordinary occurred during the ancient era when 2023 KQ14 formed. Understanding the orbital evolution and physical properties of these unique, distant objects is crucial for comprehending the full history of the Solar System. At present, the Subaru Telescope is among the few telescopes on Earth capable of making such discoveries. I would be happy if the FOSSIL team could make many more discoveries like this one and help draw a complete picture of the history of the Solar System."

Journal Reference:
Chen, Ying-Tung, Lykawka, Patryk Sofia, Huang, Yukun, et al. Discovery and dynamics of a Sedna-like object with a perihelion of 66 au [open], Nature Astronomy (DOI: 10.1038/s41550-025-02595-7)


Original Submission

posted by jelizondo on Tuesday July 22, @10:33AM

Microsoft says it will no longer use engineers in China for Department of Defense work:

Following a ProPublica report that Microsoft was using engineers in China to help maintain cloud computing systems for the U.S. Department of Defense, the company said it has made changes to ensure this will no longer happen.

The existing system reportedly relied on "digital escorts" to supervise the China-based engineers. But according to ProPublica, those escorts — U.S. citizens with security clearances — sometimes lacked the technical expertise to properly monitor the engineers.

In response to the report, Secretary of Defense Pete Hegseth wrote on X, "Foreign engineers — from any country, including of course China — should NEVER be allowed to maintain or access DoD systems."

On Friday, Microsoft's chief communications officer Frank X. Shaw responded: "In response to concerns raised earlier this week about US-supervised foreign engineers, Microsoft has made changes to our support for US Government customers to assure that no China-based engineering teams are providing technical assistance for DoD Government cloud and related services."


Original Submission

posted by jelizondo on Tuesday July 22, @05:47AM

Rolling Stone has an article about a concert tape with an interesting backstory. The album, Thelonious Monk: Live at Palo Alto, eventually came out in September 2020. It is a recording of the jazz legend playing at a high school back in 1968; the school custodian recorded the show on reel-to-reel tape. When the tape resurfaced not too many years ago, it drew the ire of, and some dirty tricks from, a former record label.

The greatest lost concert in American history almost never happened at all. It was Oct. 27, 1968, in Palo Alto, California. Outside of his high school, Danny Scher, a 16-year-old, bushy-haired, jazz-obsessed, self-described “weirdo,” was pacing the parking lot waiting for his hero, and music’s most elusive and enigmatic genius, to show up: composer and pianist Thelonious Monk.

To the disbelief of most everyone — including his mother and girlfriend waiting alongside him — Scher claimed to have booked the jazz legend for an afternoon gig, the modern equivalent of securing Kendrick Lamar for prom. Pulling this off at a nearly all-white school during his racially divided town's explosive Civil Rights battle — when the predominantly Black community of East Palo Alto was fighting to rename itself "Nairobi" — made it even more unlikely. But the mixed crowd in the parking lot proved how music could bring them together. "It was really the only time I ever remember seeing that many Black people," Scher recalls. "Everyone was just there to see Monk."

Monk was playing a residency at the Jazz Workshop, a club in San Francisco. The city was only 35 miles away. Maybe, Scher thought, Monk would be willing to come down for a Sunday-afternoon show. After tracking down the number of Monk's manager, Harry Colomby, and calling him with the outrageous offer, he got an even more surprising response: Monk was in. Scher was duly mind-blown. But now he faced a new challenge, pulling off the show.

The kid promoting the Monk show, nonetheless, was having an unexpectedly hard time selling tickets. Despite Scher's booking, few people believed that the world's greatest jazz artist was really coming to town. To get the word out, Scher stuffed his newspaper-boy bag with rolled-up posters, and pedaled across Highway 101 to where he knew there were plenty of Monk fans like him: East Palo Alto.

It was a busy week for postering. After the killing of Martin Luther King Jr. that spring,[...] Tensions were high. Scher recalls a neighborhood cop seeing him taping up a poster. The cop warned him: "Hey, white boy, this isn't a safe place for you. You're going to get in trouble putting up posters." Scher told him, "I'm going to be in bigger trouble if the show doesn't do well."

SCHER'S MOVE PAID OFF. With Black and white kids buying up the tickets, the show sold out. Two days before the gig, Scher called the jazz club where Monk was playing to go over details with his manager — only to hear Monk himself pick up the phone instead. There was just one thing more shocking than talking to his hero for the first time — realizing Monk didn't know about the gig at all. As he told this kid on the phone, "What are you talking about?"

Scher's heart raced. He did his best to coolly fill in Monk, who'd either not been told about the gig by his manager or lost track. "How am I going to get there?" the piano great replied. Scher didn't have the budget for a limo, but he had something better: his older brother Les, who not only turned him on to Monk in the first place but also had a license. "My brother will pick you up!" Scher assured him. Yet without having received a fully executed contract back from Monk's manager, he didn't know if Monk would really show up at all.

Scher checked the school's piano. One of the custodians, a Black man in his thirties, knew how to tune it and offered to set it up. A fan himself, he just wanted one thing in return. "If I tune the piano," he said, "can I record the concert?" In all of Scher's meticulous planning, he hadn't thought about recording the show. But the custodian had access to a reel-to-reel tape recorder, and knew how to operate it, too. "Yeah, OK," Scher told him.

But he'd never heard the custodian's tape. The old reel-to-reel had been sitting in a box packed away until friends urged him to burn it onto a CD. When Scher popped it into his stereo, it was the first time he'd heard it since he was that bushy-haired 16-year-old listening from backstage. The custodian's raw tape captured Monk's performance in all its wonderful imperfections: the squeak of the piano bench as he shifted in his seat, the scratchy tap of his shoes swiping the piano pedals below. "It was really good," Scher says. It had to come out.

With Impulse Records on board, Thelonious Monk: Live at Palo Alto was slated to come out in July 2020. Scher, T.S. Monk, and the label prepared a lavish package for the vinyl release, including copies of the original program and poster. Impulse submitted it for six Grammy nominations.

But just as the advance raves were peaking two weeks before the release, they got a message from Monk's old sparring partner: his label. Sony, owners of Columbia, claimed the tape was contractually theirs. "They were saying that this recording was made during the period that Thelonious was on the contract to Columbia, and therefore they owned it," T.S. Monk says.

This wasn't the first time the Monk estate had battled with Sony. In 2002, the estate conducted a forensic accounting of Monk's catalog and discovered it was owed hundreds of thousands of dollars from the label. A settlement was reached in 2023. But now Sony was threatening to sue if the Palo Alto concert got released. Faced with a legal battle, Impulse pulled the LP. The momentum crashed. And with no way of knowing when or if the record would get released, the hypothetical Grammy nominations went away, too.

After searching through Monk's old paperwork, T.S. and the estate confirmed what they had known to be true: Monk's contract with Columbia had expired in 1967, a year before the Palo Alto High School show. Sony responded with another salvo: a contract extension through 1968 signed by Monk himself. But when his son eyed it 52 years later, he called bullshit. "That's not my father's signature," he said. Scher, who had one of Monk's rare autographs on his Palo Alto program, agreed. A forensic handwriting analyst confirmed their assessment. Sony seems to have decided this was a losing battle. According to T.S., the company soon settled the matter. Thelonious Monk: Live at Palo Alto eventually came out in September 2020.

Despite getting robbed of the momentum and the Grammy nominations, T.S. and Scher are happy the long-lost recording could finally be heard. "I know you think there's a bias because he's my father," T.S. says with a smile, "but it's not because he's my father. It's because he's Monk. His music does the same thing to me as it does to everybody else." For Scher, the legacy of the concert lives on, and so does his hero. He says, "I hear Monk every day."

Fortunately, Monk's contract with the label had expired in 1967, a year before the Palo Alto High School show, but the rip-off attempts by the label almost derailed the release.

Previously:
(2024) Gershwin's "Rhapsody in Blue" at 100
(2019) The Internet Saved the Record Labels


Original Submission

posted by jelizondo on Tuesday July 22, @01:04AM
from the resistance-is-futile-you-will-be-assimilated dept.

Engadget reports that Meta is Building "Several" Multi-Gigawatt Compute Clusters

Meta is building several gigawatt-sized data centers to power AI, as reported by Bloomberg. CEO Mark Zuckerberg says the company will spend "hundreds of billions of dollars" to accomplish this feat, with an aim of creating "superintelligence."

The first center is called Prometheus and it comes online next year. It's being built in Ohio. Next up, there's a data center called Hyperion that's almost the size of Manhattan. This one should "be able to scale up to 5GW over several years." Some of these campuses will be among the largest in the world, as most data centers draw only hundreds of megawatts of capacity.

Meta has also been staffing up its Superintelligence Labs team, recruiting folks from OpenAI, Google's DeepMind and others. Scale AI's co-founder Alexandr Wang is heading up this effort.

However, these giant data centers do not exist in a vacuum. The complexes typically brush up against local communities. The centers are not only power hogs, but also water hogs. The New York Times just published a report on how Meta data centers impact local water supplies.

There's a data center east of Atlanta that has damaged local wells and caused municipal water prices to soar, which could lead to a shortage and rationing by 2030. The price of water in the region is set to increase by 33 percent in the next two years.

Typical data centers guzzle around 500,000 gallons of water each day, but these forthcoming AI-centric complexes will likely be even thirstier. The new centers could require millions of gallons per day, according to water permit applications reviewed by The New York Times. Mike Hopkins, the executive director of the Newton County Water and Sewerage Authority, says that applications are coming in with requests for up to six million gallons of water per day, which is more than the county's entire daily usage.

"What the data centers don't understand is that they're taking up the community wealth," he said. "We just don't have the water."

We're going to have to decide soon how to regulate the growing data center industry which pose several issues for desert communities. "They consume large amounts of electricity and water 24 hours per day, seven days a week."
— Arizona Green Party 🌻 (@AZGreenParty) July 10, 2025

This same worrying story is playing out across the country. Data center hot spots in Texas, Arizona, Louisiana and Colorado are also taxing local water reserves. For instance, some Phoenix homebuilders have been forced to pause new construction due to droughts exacerbated by these data centers.

See also Meta Superintelligence – Leadership Compute, Talent, and Data for a detailed analysis of Meta AI.


Original Submission #1 | Original Submission #2

posted by mrpg on Monday July 21, @08:19PM
from the not-enough dept.

Phys.org reports on how weird space weather seems to have influenced human behavior on Earth 41,000 years ago

[...] This near-collapse is known as the Laschamps Excursion, a brief but extreme geomagnetic event named for the volcanic fields in France where it was first identified. At the time of the Laschamps Excursion, near the end of the Pleistocene epoch, Earth's magnetic poles didn't reverse as they do every few hundred thousand years. Instead, they wandered, erratically and rapidly, over thousands of miles. At the same time, the strength of the magnetic field dropped to less than 10% of its modern-day intensity.

The magnetosphere normally deflects much of the solar wind and harmful ultraviolet radiation that would otherwise reach Earth's surface.

The skies 41,000 years ago may have been both spectacular and threatening. When we realized this, we two geophysicists wanted to know whether this could have affected people living at the time.

[...] In response, people may have adopted practical measures: spending more time in caves, producing tailored clothing for better coverage, or applying mineral pigment "sunscreen" made of ochre to their skin.

At this time, both Neanderthals and members of our species, Homo sapiens, were living in Europe, though their geographic distributions likely overlapped only in certain regions. The archaeological record suggests that different populations exhibited distinct approaches to environmental challenges, with some groups perhaps more reliant on shelter or material culture for protection.

Importantly, we're not suggesting that space weather alone caused an increase in these behaviors or, certainly, that the Laschamps caused Neanderthals to go extinct, which is one misinterpretation of our research. But it could have been a contributing factor—an invisible but powerful force that influenced innovation and adaptability.


Original Submission

posted by jelizondo on Monday July 21, @03:39PM

A CarFax for Used PCs

The United Nations' Global E-waste Monitor estimates that the world generates over 60 million tonnes of e-waste annually. Furthermore, this number is rising five times as fast as e-waste recycling. Much of this waste comes from prematurely discarded electronic devices.

Many enterprises follow a standard three-year replacement cycle, assuming older computers are inefficient. However, many of these devices are still functional and could perform well with minor upgrades or maintenance. The issue is, no one knows what the weak points are for a particular machine, or what the needed maintenance is, and the diagnostics would be too costly and time-consuming. It's easier to just buy brand new laptops.

When buying a used car, dealerships and individual buyers can access each car's particular CarFax report, detailing the vehicle's usage and maintenance history. Armed with this information, dealerships can perform the necessary fixes or upgrades before reselling the car. And individuals can decide whether to trust that vehicle's performance. We at HP realized that, to prevent unnecessary e-waste, we need to collect and make available usage and maintenance data for each laptop, like a CarFax for used PCs.

There is a particular challenge to collecting usage data for a PC, however. We need to make sure to protect the user's privacy and security. So, we set out to design a data-collection protocol for PCs that manages to remain secure.

Luckily, the sensors that can collect the necessary data are already installed in each PC. There are thermal sensors that monitor CPU temperature, power-consumption monitors that track energy efficiency, storage health indicators that assess solid state drive (SSD) wear levels, performance counters that measure system utilization, fan-rotation-speed sensors that detect cooling efficiency, and more. The key is to collect and store all that data in a secure yet useful way.

We decided that the best way to do this is to integrate the life-cycle records into the firmware layer. By embedding telemetry capabilities directly within the firmware, we ensure that device health and usage data is captured the moment it is collected. This data is stored securely on HP SSD drives, leveraging hardware-based security measures to protect against unauthorized access or manipulation.

The secure telemetry protocol we've developed at HP works as follows. We gather the critical hardware and sensor data and store it in a designated area of the SSD. This area is write-locked, meaning only authorized firmware components can write to it, preventing accidental modification or tampering. That authorized firmware component we use is the Endpoint Security Controller, a dedicated piece of hardware embedded in business-class HP PCs. It plays a critical role in strengthening platform-level security and works independently from the main CPU to provide foundational protection.

The endpoint security controller establishes a secure session by retaining the secret key within the controller itself. This mechanism enables read data protection on the SSD—where telemetry and sensitive data are stored—by preventing unauthorized access, even if the operating system is reinstalled or the system environment is otherwise altered.

Then, the collected data is recorded in a time-stamped file, stored within a dedicated telemetry log on the SSD. Storing these records on the SSD has the benefit of ensuring the data is persistent even if the operating system is reinstalled or some other drastic change in software environment occurs.

The telemetry log employs a cyclic buffer design, automatically overwriting older entries when the log reaches full capacity. Then, the telemetry log can be accessed by authorized applications at the operating system level.
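Taken together, the design described here amounts to timestamped, tamper-evident entries in a fixed-size log that wraps around when full. The toy model below captures those two properties with Python's standard library; the HMAC key stands in for the secret held by the endpoint security controller, and none of this is HP's actual firmware code.

```python
# Toy model of the log design described above: timestamped entries, a keyed
# HMAC standing in for the controller-held secret (so tampering is detectable),
# and a cyclic buffer that overwrites the oldest entry when full.
import hashlib
import hmac
import json
import time
from collections import deque

SECRET = b"controller-held-key"   # in the real design, never leaves the controller
LOG_CAPACITY = 1024               # entries before the buffer wraps

telemetry_log = deque(maxlen=LOG_CAPACITY)   # deque drops the oldest entry when full

def record(sensor: str, value: float) -> None:
    """Append one signed, timestamped telemetry entry."""
    entry = {"ts": time.time(), "sensor": sensor, "value": value}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    telemetry_log.append(entry)

def verify(entry: dict) -> bool:
    """Recompute the MAC; any altered field makes verification fail."""
    payload = json.dumps({k: entry[k] for k in ("ts", "sensor", "value")},
                         sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["mac"], expected)

record("cpu_temp_c", 71.5)
record("ssd_wear_pct", 3.2)
print(all(verify(e) for e in telemetry_log))   # True unless an entry was altered
```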

The telemetry log serves as the foundation for a comprehensive device history report. Much like a CarFax report for used cars, this report, which we call PCFax, will provide both current users and potential buyers with crucial information.

The PCFax report aggregates data from multiple sources beyond just the on-device telemetry logs. It combines the secure firmware-level usage data with information from HP's factory and supply-chain records, digital-services platforms, customer-support service records, diagnostic logs, and more. Additionally, the system can integrate data from external sources including partner sales and service records, refurbishment partner databases, third-party component manufacturers like Intel, and other original equipment manufacturers. This multisource approach creates a complete picture of the device's entire life cycle, from manufacturing through all subsequent ownership and service events.

For IT teams within organizations, we hope the PCFax will bring simplicity and give opportunities for optimization. Having access to fine-grained usage and health information for each device in their fleet can help IT managers decide which devices are sent to which users, as well as when maintenance is scheduled. This data can also help device managers decide which specific devices to replace rather than issuing new computers automatically, enhancing sustainability. And this can help with security: With real-time monitoring and firmware-level protection, IT teams can mitigate risks and respond swiftly to emerging threats. All of this can facilitate more efficient use of PC resources, cutting down on unnecessary waste.

We also hope that, much as the CarFax gives people confidence in buying used cars, the PCFax can encourage resale of used PCs. For enterprises and consumers purchasing second-life PCs, it provides detailed visibility into the complete service and support history of each system, including any repairs, upgrades, or performance issues encountered during its initial deployment. By making this comprehensive device history readily available, PCFax enables more PCs to find productive second lives rather than being prematurely discarded, directly addressing the e-waste challenge while providing economic benefits to both sellers and buyers in the secondary PC market.

While HP's solutions represent a significant step forward, challenges remain. Standardizing telemetry frameworks across diverse ecosystems is critical for broader adoption. Additionally, educating organizations about the benefits of life-cycle records will be essential to driving uptake.

We are also working on integrating AI into our dashboards. We hope to use AI models to analyze historical telemetry data and predict failures before they happen, such as detecting increasing SSD write cycles to forecast impending failure and alert IT teams for proactive replacement, or predicting battery degradation and automatically generating a service ticket to ensure a replacement battery is ready before failure, minimizing downtime.

We plan to start rolling out these features at the beginning of 2026.


Original Submission

posted by jelizondo on Monday July 21, @10:55AM
from the resistance-is-futile-you-will-be-assimilated dept.

upstart writes:

Delta Air Lines is using AI to set the maximum price you're willing to pay:

Delta's president says the quiet part out loud.

Delta Air Lines is leaning into dynamic ticket pricing that uses artificial intelligence to individually determine the highest fee you'd willingly pay for flights, according to comments Fortune spotted in the company's latest earnings call. Following a limited test of the technology last year, Delta is planning to shift away from static ticket prices entirely after seeing "amazingly favorable" results.

"We will have a price that's available on that flight, on that time, to you, the individual," Delta president Glen Hauenstein told investors in November, having started to test the technology on one percent of its ticket prices. Delta currently uses AI to influence three percent of its ticket prices, according to last week's earnings call, and is aiming to increase that to 20 percent by the end of this year. "We're in a heavy testing phase," said Hauenstein. "We like what we see. We like it a lot, and we're continuing to roll it out."

While personalized pricing isn't unique to Delta, the airline has been particularly candid about embracing it. During that November call, Hauenstein said the AI ticketing system is "a full reengineering of how we price and how we will be pricing in the future," and described the rollout as "a multiyear, multi-step process." Hauenstein acknowledged that Delta was excited about the initial revenue results it saw in testing, but noted the shift to AI-determined pricing could "be very dangerous, if it's not controlled and it's not done correctly."

Delta's personalized AI pricing tech is provided by travel firm Fetcherr, which also partners with Virgin Atlantic, Azul, WestJet, and VivaAerobus. In Delta's case, the AI will act as a "super analyst" that operates 24/7 to determine custom ticket prices that should be offered to individual customers in real-time, per specific flights and times.

Airlines have varied their ticket prices for customers on the same routes for many years, depending on a range of factors, including how far in advance the booking is made, what website or service it's being booked with, and even the web browser the customer is using. Delta is no exception, but AI pricing looks set to supercharge the approach.
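
For illustration only, the long-standing rule-based version of that approach might look like the sketch below; the factors and weights are invented for this example and bear no relation to Fetcherr's or Delta's undisclosed models, which replace hand-tuned rules like these with per-customer inference.

    # Invented rule-based dynamic pricing, purely for illustration; real
    # airline revenue-management systems (and Fetcherr's AI) are undisclosed.
    def quote_fare(base_fare, days_until_departure, seats_remaining, capacity):
        """Return a fare adjusted for booking lead time and seat scarcity."""
        fare = base_fare
        if days_until_departure < 7:
            fare *= 1.40  # last-minute bookings pay a premium
        elif days_until_departure < 21:
            fare *= 1.15
        load_factor = 1 - seats_remaining / capacity
        fare *= 1 + 0.5 * load_factor  # fuller flights cost more
        return round(fare, 2)

    print(quote_fare(base_fare=250.0, days_until_departure=5,
                     seats_remaining=12, capacity=180))  # 513.33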

Delta has taken heat for charging customers different prices for flights; in May it rolled back a decision to price tickets higher for solo travelers than for groups. It's not entirely clear how invasive Delta's AI ticketing will be when it analyzes customers to set prices, but Fortune notes that it has privacy advocates concerned.

"They are trying to see into people's heads to see how much they're willing to pay," Justin Kloczko of Consumer Watchdog told the publication. "They are basically hacking our brains." Arizona Senator Ruben Gallego described it as "predatory pricing" that's designed to "squeeze you for every penny."


Original Submission

posted by jelizondo on Monday July 21, @06:09AM   Printer-friendly

upstart writes:

For Algorithms, a Little Memory Outweighs a Lot of Time:

One of the most important classes goes by the humble name "P." Roughly speaking, it encompasses all problems that can be solved in a reasonable amount of time. An analogous complexity class for space is dubbed "PSPACE."

The relationship between these two classes is one of the central questions of complexity theory. Every problem in P is also in PSPACE, because fast algorithms just don't have enough time to fill up much space in a computer's memory. If the reverse statement were also true, the two classes would be equivalent: Space and time would have comparable computational power. But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren't in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn't as forgiving — once it passes, you can't get it back.

"The intuition is just so simple," Williams said. "You can reuse space, but you can't reuse time."

But intuition isn't good enough for complexity theorists: They want rigorous proof. To prove that PSPACE is larger than P, researchers would have to show that for some problems in PSPACE, fast algorithms are categorically impossible. Where would they even start?

The definitions of time and space complexity emerged from the work of Juris Hartmanis, a pioneering computer scientist who had firsthand experience making do with limited resources. He was born in 1928 into a prominent Latvian family, but his childhood was disrupted by the outbreak of World War II. Occupying Soviet forces arrested and executed his father, and after the war Hartmanis finished high school in a refugee camp. He went on to university, where he excelled even though he couldn't afford textbooks.

In 1960, while working at the General Electric research laboratory in Schenectady, New York, Hartmanis met Richard Stearns, a graduate student doing a summer internship. In a pair of groundbreaking papers they established precise mathematical definitions for time and space. These definitions gave researchers the language they needed to compare the two resources and sort problems into complexity classes.
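
In the standard modern formulation (today's textbook notation, which may differ from the original papers' own), those definitions read:

    % Standard textbook formulation of the time and space complexity classes;
    % the notation is modern, not necessarily that of the 1960s papers.
    \mathsf{TIME}(t(n))  = \{\, L : L \text{ is decided by some Turing machine in } O(t(n)) \text{ steps} \,\}
    \mathsf{SPACE}(s(n)) = \{\, L : L \text{ is decided by some Turing machine using } O(s(n)) \text{ tape cells} \,\}
    \mathsf{P} = \bigcup_{k \ge 1} \mathsf{TIME}(n^k), \qquad \mathsf{PSPACE} = \bigcup_{k \ge 1} \mathsf{SPACE}(n^k)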

As it happened, the next chapter unfolded at Cornell University, where Hartmanis moved in 1965 to head the newly established computer science department. Under his leadership it quickly became a center of research in complexity theory, and in the early 1970s a pair of researchers there, John Hopcroft and Wolfgang Paul, set out to establish a precise link between time and space.

Hopcroft and Paul knew that to resolve the P versus PSPACE problem, they'd have to prove that you can't do certain computations in a limited amount of time. But it's hard to prove a negative. Instead, they decided to flip the problem on its head and explore what you can do with limited space. They hoped to prove that algorithms given a certain space budget can solve all the same problems as algorithms with a slightly larger time budget. That would indicate that space is at least slightly more powerful than time — a small but necessary step toward showing that PSPACE is larger than P.

With that goal in mind, they turned to a method that complexity theorists call simulation, which involves transforming existing algorithms into new ones that solve the same problems, but with different amounts of space and time. To understand the basic idea, imagine you're given a fast algorithm for alphabetizing your bookshelf, but it requires you to lay out your books in dozens of small piles. You might prefer an approach that takes up less space in your apartment, even if it takes longer. A simulation is a mathematical procedure you could use to get a more suitable algorithm: Feed it the original, and it'll give you a new algorithm that saves space at the expense of time.

Algorithm designers have long studied these space-time trade-offs for specific tasks like sorting. But to establish a general relationship between time and space, Hopcroft and Paul needed something more comprehensive: a space-saving simulation procedure that works for every algorithm, no matter what problem it solves. They expected this generality to come at a cost. A universal simulation can't exploit the details of any specific problem, so it probably won't save as much space as a specialized simulation. But when Hopcroft and Paul started their work, there were no known universal methods for saving space at all. Even saving a small amount would be progress.
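
Sorting is the classic specific-task example: selection sort reuses the input list's own cells, needing only constant extra space but quadratic time, while merge sort runs in O(n log n) time at the cost of linear extra space for its merge buffers. A minimal sketch of the contrast:

    # A concrete space-time trade-off for sorting: selection sort is
    # in place (O(1) extra space, O(n^2) time); merge sort is faster
    # (O(n log n) time) but allocates O(n) extra space as it recurses.
    def selection_sort(a):
        for i in range(len(a)):
            j = min(range(i, len(a)), key=a.__getitem__)  # index of smallest remaining
            a[i], a[j] = a[j], a[i]
        return a

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
    print(merge_sort([5, 2, 4, 1, 3]))      # [1, 2, 3, 4, 5]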

The breakthrough came in 1975, after Hopcroft and Paul teamed up with a young researcher named Leslie Valiant. The trio devised a universal simulation procedure that always saves a bit of space. No matter what algorithm you give it, you'll get an equivalent one whose space budget is slightly smaller than the original algorithm's time budget.

"Anything you can do in so much time, you can also do in slightly less space," Valiant said. It was the first major step in the quest to connect space and time.

But then progress stalled, and complexity theorists began to suspect that they'd hit a fundamental barrier. The problem was precisely the universal character of Hopcroft, Paul and Valiant's simulation. While many problems can be solved with much less space than time, some intuitively seemed like they'd need nearly as much space as time. If so, more space-efficient universal simulations would be impossible. Paul and two other researchers soon proved that they are indeed impossible, provided you make one seemingly obvious assumption: Different chunks of data can't occupy the same space in memory at the same time.

"Everybody took it for granted that you cannot do better," Wigderson said.

Paul's result suggested that resolving the P versus PSPACE problem would require abandoning simulation altogether in favor of a different approach, but nobody had any good ideas. That was where the problem stood for 50 years — until Williams finally broke the logjam. First, he had to get through college.

In 1996, the time came for Williams to apply to colleges. He knew that pursuing complexity theory would take him far from home, but his parents made it clear that the West Coast and Canada were out of the question. Among his remaining options, Cornell stood out to him for its prominent place in the history of his favorite discipline.

"This guy Hartmanis defined the time and space complexity classes," he recalled thinking. "That was important for me."

Williams was admitted to Cornell with generous financial aid and arrived in the fall of 1997, planning to do whatever it took to become a complexity theorist himself. His single-mindedness stuck out to his fellow students.

"He was just super-duper into complexity theory," said Scott Aaronson, a computer scientist at the University of Texas, Austin, who overlapped with Williams at Cornell.

For 50 years, researchers had assumed it was impossible to improve on Hopcroft, Paul and Valiant's universal simulation. Williams' idea, if it worked, wouldn't just beat their record — it would demolish it.

"I thought about it, and I was like, 'Well, that just simply can't be true,'" Williams said. He set it aside and didn't come back to it until that fateful day in July, when he tried to find the flaw in the argument and failed. After he realized that there was no flaw, he spent months writing and rewriting the proof to make it as clear as possible.

Valiant got a sneak preview of Williams' improvement on his decades-old result during his morning commute. For years, he's taught at Harvard University, just down the road from Williams' office at MIT. They'd met before, but they didn't know they lived in the same neighborhood until they bumped into each other on the bus on a snowy February day, a few weeks before the result was public. Williams described his proof to the startled Valiant and promised to send along his paper.

"I was very, very impressed," Valiant said. "If you get any mathematical result which is the best thing in 50 years, you must be doing something right."

With his new simulation, Williams had proved a positive result about the computational power of space: Algorithms that use relatively little space can solve all problems that require a somewhat larger amount of time.
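
Quantitatively, the bound reported for Williams' simulation is roughly square-root space:

    % The reported form of Williams' 2025 result: any time-t(n) computation
    % can be simulated in about the square root of that much space.
    \mathsf{TIME}\bigl(t(n)\bigr) \subseteq \mathsf{SPACE}\!\left(O\!\left(\sqrt{t(n)\,\log t(n)}\right)\right)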

The difference is a matter of scale. P and PSPACE are very broad complexity classes, while Williams' results work at a finer level. He established a quantitative gap between the power of space and the power of time, and to prove that PSPACE is larger than P, researchers will have to make that gap much, much wider.

That's a daunting challenge, akin to prying apart a sidewalk crack with a crowbar until it's as wide as the Grand Canyon. But it might be possible to get there by using a modified version of Williams' simulation procedure that repeats the key step many times, saving a bit of space each time. It's like a way to repeatedly ratchet up the length of your crowbar — make it big enough, and you can pry open anything.

"It could be an ultimate bottleneck, or it could be a 50-year bottleneck," Valiant said. "Or it could be something which maybe someone can solve next week."

"I can never prove precisely the things that I want to prove," Williams said. "But often, the thing I prove is way better than what I wanted."

Journal References:
Dr. Juris Hartmanis Interview: July 26, 2009, Cornell University, Ithaca, New York
On Time Versus Space, Journal of the ACM (JACM)
Space Bounds for a Game on Graphs, Journal of the ACM (JACM)
Tree Evaluation Is in Space O(log n · log log n), Journal of the ACM (JACM)


Original Submission