Arthur T Knackerbracket has processed the following story:
Intel and AMD engineers have stepped in at the eleventh hour to deal with a code contribution from a Microsoft developer that could have broken Linux 6.13 on some systems.
The change, made in the autumn, was a useful improvement at face value. It was a modification to Linux x86_64 to use large read-only execute (ROX) pages for caching executable pages. The theory was that the alteration would result in increased performance.
However, the code caused problems on some setups, and an urgent patch from Intel's Peter Zijlstra was committed yesterday to disable it. The stable release of the 6.13 kernel is due this coming weekend.
Zijlstra wrote: "The whole module_writable_address() nonsense made a giant mess of alternative.c, not to mention it still contains bugs -- notable (sic) some of the CFI variants crash and burn."
Control Flow Integrity (CFI) is an anti-malware technology aimed at preventing attackers from redirecting the control flow of a program. The change can cause issues on some CFI-enabled setups and reports have included Intel Alder Lake-powered machines failing to resume from hibernation.
Zijlstra said the Microsoft engineer "has been working on patches to clean all this up again, but given the current state of things, this stuff just isn't ready. Disable for now, let's try again next cycle."
The offending source is still present, but won't be included in the upcoming stable kernel build.
AMD engineer Borislav Petkov noted that the Linux x86_64 maintainers had not signed off on the change, commenting: "I just love it how this went in without a single x86 maintainer Ack, it broke a bunch of things and then it is still there instead of getting reverted. Let's not do this again please."
Microsoft is notable for dubious quality control standards regarding releases of its flagship operating system, Windows. That one of its engineers should drop some dodgy code into the Linux kernel is not hugely surprising, and the unfortunate individual is not the first and will not be the last to do so, regardless of their employer.
However, the processes that allowed it to remain in the build this close to public release will be a concern. It is amusing that engineers from both Intel and AMD were drawn into cleaning up after a Microsoft engineer's contribution, and reassuring that the problem never reached the stable release, but the episode is still concerning. Petkov will not be the only one wondering how the change made it in without a review by the Linux x86/x86_64 maintainers.
https://spectrum.ieee.org/isaac-asimov-robotics
In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story "Runaround." The laws were later popularized in his seminal story collection I, Robot.
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov's framework useful for considering the potential safeguards needed for AI that interacts with humans.
But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov's original concerns about physical harm and obedience.
The proliferation of AI-enabled deception is particularly concerning. According to the FBI's 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity's 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.
Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic, and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are just as persuasive as traditional propaganda, or even more so, and using AI to create convincing content requires very little effort.
Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I'm a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.
...
In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems' ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union's AI Act, which includes provisions requiring transparency in AI interactions and clear disclosure of AI-generated content. In Asimov's time, people couldn't have imagined how artificial agents could use online communication tools and avatars to deceive humans. Therefore, we must make an addition to Asimov's laws.
Fourth Law: A robot or AI must not deceive a human by impersonating a human being.
The order comes after GM was caught selling customer data to third-party data brokers and insurance companies — without consent:
General Motors and its subsidiary OnStar are banned from selling customer geolocation and driving behavior data for five years, the Federal Trade Commission announced Thursday.
The settlement comes after a New York Times investigation found that GM had been collecting micro-details about its customers' driving habits, including acceleration, braking, and trip length — and then selling it to insurance companies and third-party data brokers like LexisNexis and Verisk. Clueless vehicle owners were then left wondering why their insurance premiums were going up.
[...] FTC accused GM of using a "misleading enrollment process" to get vehicle owners to sign up for its OnStar connected vehicle service and Smart Driver feature. The automaker failed to disclose to customers that it was collecting their data, nor did GM seek out their consent to sell it to third parties. After the Times exposed the practice, GM said it was discontinuing its OnStar Smart Driver program.
Also at AP, Detroit Free Press and Engadget.
Previously:
Arthur T Knackerbracket has processed the following story:
The White House just released restrictions on the global sale of AI chips and GPUs, which not only limit who can buy these chips but also dictate where they can be used. This new rule has got Nvidia and the Semiconductor Industry Association (SIA) up in arms, and now the European Commission (EC) is also protesting it. However, the rule will come into force 60 days after its announcement—well into Trump's second presidency—giving the EU and other concerned parties time to negotiate with his administration to defer or cancel it.
Ten EU members—Belgium, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, and Sweden—would have Tier 1 status, meaning they have 'near-unrestricted access' to advanced American AI chips. However, they must still abide by U.S. security requirements and keep at least 75% of their processing capabilities within Tier 1 countries. Although they could install the rest of their AI chips in Tier 2 countries, they cannot put over 7% of these chips in any single nation, meaning they would have to spread operations across at least four countries if they want to do so.
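As a back-of-the-envelope check of the arithmetic above, here is a minimal sketch; the percentages come from the article, while the code itself is purely illustrative:

```python
import math

# Tier 1 countries must keep at least 75% of AI-chip capacity within
# Tier 1, and may place at most 7% of it in any single Tier 2 country.
TIER1_MIN = 0.75       # fraction that must stay in Tier 1 countries
PER_NATION_CAP = 0.07  # max fraction allowed in any one Tier 2 country

tier2_share = 1.0 - TIER1_MIN  # up to 25% may be placed abroad
min_countries = math.ceil(tier2_share / PER_NATION_CAP)

# Placing the full 25% abroad requires spreading it across at least
# four Tier 2 countries, since 3 x 7% = 21% < 25%.
print(min_countries)  # -> 4
```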
https://phys.org/news/2025-01-chainmail-material-future-armor.html
In a remarkable feat of chemistry, a Northwestern University-led research team has developed the first two-dimensional (2D) mechanically interlocked material.
Resembling the interlocking links in chainmail, the nanoscale material exhibits exceptional flexibility and strength. With further work, it holds promise for use in high-performance body armor and other applications that demand lightweight, flexible, and tough materials.
Publishing on Jan. 17 in the journal Science, the study marks several firsts for the field. Not only is it the first 2D mechanically interlocked polymer, but the novel material also contains 100 trillion mechanical bonds per 1 square centimeter—the highest density of mechanical bonds ever achieved.
The researchers produced this material using a new, highly efficient and scalable polymerization process.
"We made a completely new polymer structure," said Northwestern's William Dichtel, the study's corresponding author.
"It's similar to chainmail in that it cannot easily rip because each of the mechanical bonds has a bit of freedom to slide around. If you pull it, it can dissipate the applied force in multiple directions. And if you want to rip it apart, you would have to break it in many, many different places. We are continuing to explore its properties and will probably be studying it for years."
Samsung could make the Galaxy S26 extra thin with new battery tech:
The batteries that power today's electronics are a lot more incredible than we often give them credit for. We've come a long way from the days of nickel-cadmium and even nickel-metal hydride cells, with lithium-based chemistry offering superior capacity and discharge characteristics (if only they didn't have that annoying tendency to burst into flame). But for as far as we've come, it always feels like the next big thing could be right around the corner, as advocates hype next-gen battery tech. We've only just started to see silicon-carbon batteries emerge, capable of storing even more energy in smaller spaces, and we've been hugely curious to see who might take advantage of them next.
[...] asserting that Samsung is planning to use a silicon-carbon battery in the Galaxy S26.
That could be a major step forward, and may not even be the only new battery technology Samsung could be thinking of using; this rumor arrives just a day after a report in South Korea's TheElec that discussed the company's interest in novel battery construction, and specifically, increasing capacity through stacked electrodes. The approach is already used in larger cells such as car batteries, and Samsung may be considering something similar for smartphones.
See also:
Digital archivist David Rosenthal reviews an old, obscure, but prescient document from computer scientist Clifford Lynch on various aspects of the then-nascent WWW. Lynch's 1993 document, Accessibility and Integrity of Networked Information Collections. Background Paper, covered topics ranging from the First Sale Doctrine to what is now called surveillance capitalism, paywalls, and disinformation:
While doing the research for a future talk, I came across an obscure but impressively prophetic report entitled Accessibility and Integrity of Networked Information Collections that Cliff Lynch wrote for the federal Office of Technology Assessment in 1993, 32 years ago. I say "obscure" because it doesn't appear in Lynch's pre-1997 bibliography.
To give you some idea of the context in which it was written, unless you are over 70, it was more than half your life ago when in November 1989 Tim Berners-Lee's browser first accessed a page from his Web server. It was only about the same time that the first commercial, as opposed to research, Internet Service Providers started, with the ARPANET being decommissioned the next year. Two years later, in December of 1991, the Stanford Linear Accelerator Center put up the first US Web page. In 1992 Tim Berners-Lee codified and extended the HTTP protocol he had earlier implemented. It would be another two years before Netscape became the first browser to support HTTPS. It would be two years after that before the IETF approved HTTP/1.0 in RFC 1945. As you can see, Lynch was writing among the birth-pangs of the Web.
Although Lynch was insufficiently pessimistic, he got a lot of things exactly right. Below the fold I provide four out of many examples.
Rosenthal's summary includes a link to a digital copy at the Education Resources Information Center.
New Ohio Law Allows Cops To Charge $75/Hr. To Process Body Cam Footage:
Ohio residents pay for the cops. They pay for the cameras. Now, they're expected to pay for the footage generated by cops and their cameras. Governor Mike DeWine, serving no one but cops and their desire for opacity, recently signed a bill into law that will make it much more expensive for residents to exercise their public records rights.
And it was done in possibly the shadiest way possible — at the last minute and with zero transparency.
[...] Reporter Morgan Trau had questions following the passage of this measure. Gov. DeWine had answers. But they're completely unsatisfactory.
"These requests certainly should be honored, and we want them to be honored. We want them to be honored in a swift way that's very, very important," DeWine responded. "We also, though — if you have, for example, a small police department — very small police department — and they get a request like that, that could take one person a significant period of time."
Sure, that's part of the equation. Someone has to take time to review information requested via a public records request. But that's part of the government's job. It's not an excuse to charge a premium just to fulfill the government's obligations to the public.
DeWine had more of the same in his official statement on this line item — a statement he was presumably compelled to issue due to many people having these exact same questions about charging people a third time for something they'd already paid for twice.
No law enforcement agency should ever have to choose between diverting resources for officers on the street to move them to administrative tasks like lengthy video redaction reviews for which agencies receive no compensation–and this is especially so for when the requestor of the video is a private company seeking to make money off of these videos. The language in House Bill 315 is a workable compromise to balance the modern realities of preparing these public records and the cost it takes to prepare them.
Well, the biggest problem with this assertion is that no law enforcement agency ever has to choose between reviewing footage for release and keeping an eye on the streets. I realize some smaller agencies may not have a person dedicated to public records responses, but for the most part, I would prefer someone other than Officer Johnny Trafficstop handle public records releases. First, they're not specifically trained to handle this job. Second, doing this makes it a fox-in-the-hen-house situation, where officers might be handling information involving themselves, which is a clear conflict of interest.
[...] This argument isn't much better:
Marion Police Chief Jay McDonald, also the president of the Ohio FOP, showed me that he receives requests from people asking for drunk and disorderly conduct videos. Oftentimes, these people monetize the records on YouTube, he added.
Moving past the conflict of interest that is a police chief also being the head of a police union, the specific problem with this argument is that it suggests it's ok to financially punish everyone just because a small minority of requesters are abusing the system for personal financial gain. Again, while it sounds like a plausible argument for charging processing fees, the real benefit isn't in deterring YouTube opportunists, but in placing a tax on transparency most legitimate requesters simply won't be able to pay. And that's the obvious goal here. If it wasn't, this proposal would have gone up for discussion, rather than tacked onto the end of a 315-page omnibus bill at the last minute. This is nothing but what it looks like: people in the legislature doing a favor for cops... and screwing over their own constituents.
By 2005, computer chips were running a billion times faster than the Z3, in the region of 5GHz. But then progress stalled. Today, state-of-the-art chips still operate at around 5GHz, a bottleneck that has significantly restricted progress in fields requiring ultrafast data processing.
Now that looks set to change thanks to the work of Gordon Li and Midya Parto at the California Institute of Technology in Pasadena, and colleagues, who have designed and tested an all-optical computer capable of clock speeds exceeding 100 GHz. "The all-optical computer realizes linear operations, nonlinear functions, and memory entirely in the optical domain with > 100 GHz clock rates," they say. Their work paves the way for a new era of ultrafast computing with applications in fields ranging from signal processing to pattern recognition and beyond.
Ref: All-Optical Computing With Beyond 100-GHz Clock Rates: https://arxiv.org/abs/2501.05756
https://arstechnica.com/gaming/2025/01/this-pdf-contains-a-playable-copy-of-doom/
Here at Ars, we're suckers for stories about hackers getting Doom running on everything from CAPTCHA robot checks and Windows' notepad.exe to AI hallucinations and fluorescing gut bacteria. Despite all that experience, we were still thrown for a loop by a recent demonstration of Doom running in the usually static confines of a PDF file.
On the GitHub page for the quixotic project, coder ading2210 discusses how Adobe Acrobat included some robust support for JavaScript in the PDF file format.
[...] the Doom PDF can take inputs via the user typing in a designated text field and generate "video" output in the form of converted ASCII text fed into 200 individual text fields, each representing a horizontal line of the Doom display. The text in those fields is enough to simulate a six-color monochrome display at a "pretty poor but playable" 13 frames per second (about 80 ms per frame).
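The rendering trick described above, one line of text per horizontal display row, can be sketched in a few lines; the character ramp and frame data here are invented for illustration and are not what the Doom PDF project actually uses:

```python
# Map each row of pixel brightness values (0.0-1.0) to one line of
# text, the way the Doom PDF feeds one line per PDF text field.
# The six-level ramp below is our own choice, darkest to brightest.
RAMP = " .:-=#"

def row_to_text(row):
    """Convert one row of floats in [0, 1] to a string of ramp chars."""
    return "".join(RAMP[min(int(v * len(RAMP)), len(RAMP) - 1)] for v in row)

def render(frame):
    """frame: list of rows of floats -> list of strings, one per row."""
    return [row_to_text(row) for row in frame]

frame = [
    [0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0],
]
print(render(frame))  # -> [' -#', '#- ']
```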
[...] have to dock at least a few coolness points because the port doesn't actually work on generic desktop versions of Adobe Acrobat—you need to load it through a Chromium-based web browser. But the project gains those coolness points back with a web front-end that lets users load generic WAD files into a playable PDF.
Related stories on SoylentNews:
Hitting the Books: The Programming Trick That Gave Us DOOM Multiplayer - 20230912
Can Doom Run It? An Adding Machine in Doom - 20221224
Def Con Hacker Shows John Deere's Tractors Can Run Doom - 20220817
Even DOOM Can Now Run DOOM! - 20220715
You Can Play 'Doom' Inside 'Minecraft' Using a Virtual PC - 20200726
Explore This 3D World Rendered In ASCII Art - 20200102
'Doomba' Turns Your Roomba's Cleaning Maps Into Doom Levels - 20181227
Modder Gets Half-Life Running on an Android Smartwatch - 20150724
Run Doom on your Printer - 20140917
Archive Link: https://archive.is/LxGQ6
The refrigerated section at the flagship Walgreens on Chicago's Magnificent Mile was glowing with frozen food and bottled drinks, but not for long. Where the fridge cases were previously lined with simple glass doors, there were door-size computer screens instead. These "smart doors" obscured shoppers' view of the fridges' actual contents, replacing them with virtual rows of the Gatorades, Bagel Bites and other goods it promised were inside. The digital displays had a distinct advantage over regular glass, at least for the retailer: ads. When proximity sensors detected passersby, the fridge doors started playing short videos hawking Doritos or urging customers to check out with Apple Pay. If this sounds disruptive—in the ordinary sense of the word, not Silicon Valley's—that might have seemed a generous description in December 2023, when all the screens went blank.
Most people here probably came from slashdot originally, so it won't need much introduction.
The owners of the site have decided to go all in on advertising enshittification, and anyone visiting any page with an adblocker installed will be greeted by several seconds of JS bloat trying to inject ads past your adblocker, followed by a message box that demands you disable your adblocker, and forces a page reload.
[Editor's Comment: DotDalek has been in email contact with "whipslash" on Slashdot and he has asked me to give you a summary of part of the email exchange:
Quote: I posted some information about this in my recent comments, with links to comments by whipslash (Logan Abbott, the guy who owns Slashdot) and my own personal communication with him. Ad blockers aren't banned on Slashdot, whipslash apologized and removed the advertiser who caused this, and he at least seems open to allowing users to subscribe again. As I said, a subscription-model is a much better way to raise revenue than inserting more ads. We'll see if Slashdot actually offers subscriptions to raise revenue, but whipslash seemed open to it. If you're going to run the story that was in your queue, I ask you to please make sure that it includes accurate information. Incidentally, this is why SN needs people to subscribe, and it would be a good opportunity to further remind people of this. End Quote ]
https://phys.org/news/2025-01-ancient-genomes-reveal-iron-age.html
An international team of geneticists, led by those from Trinity College Dublin, has joined forces with archaeologists from Bournemouth University to decipher the structure of British Iron Age society, finding evidence of female political and social empowerment.
The researchers seized upon a rare opportunity to sequence DNA from many members of a single community. They retrieved over 50 ancient genomes from a set of burial grounds in Dorset, southern England, in use before and after the Roman Conquest of AD 43. The results revealed that this community was centered around bonds of female-line descent.
Dr. Lara Cassidy, Assistant Professor in Trinity's Department of Genetics, led the study that has been published in Nature.
She said, "This was the cemetery of a large kin group. We reconstructed a family tree with many different branches and found most members traced their maternal lineage back to a single woman, who would have lived centuries before. In contrast, relationships through the father's line were almost absent.
"This tells us that husbands moved to join their wives' communities upon marriage, with land potentially passed down through the female line. This is the first time this type of system has been documented in European prehistory and it predicts female social and political empowerment.
"It's relatively rare in modern societies, but this might not always have been the case."
Incredibly, the team found that this type of social organization, termed "matrilocality," was not just restricted to Dorset. They sifted through data from prior genetic surveys of Iron Age Britain and, although sample numbers from other cemeteries were smaller, they saw the same pattern emerge again and again.
Journal Reference: Lara Cassidy et al., Continental influx and pervasive matrilocality in Iron Age Britain, Nature (2025). DOI: 10.1038/s41586-024-08409-6. www.nature.com/articles/s41586-024-08409-6
In 2023, AI researchers at Meta interviewed 34 native Spanish and Mandarin speakers who lived in the US but didn't speak English. The goal was to find out what people who constantly rely on translation in their day-to-day activities expect from an AI translation tool. What those participants wanted was basically a Star Trek universal translator or the Babel Fish from the Hitchhiker's Guide to the Galaxy: an AI that could not only translate speech to speech in real time across multiple languages, but also preserve their voice, tone, mannerisms, and emotions. So, Meta assembled a team of over 50 people and got busy building it.
[...] AI translation systems today are mostly focused on text, because huge amounts of text are available in a wide range of languages thanks to digitization and the Internet.
[...] AI translators we have today support an impressive number of languages in text, but things are complicated when it comes to translating speech.
[...] A few systems that can translate speech-to-speech directly do exist, but in most cases they only translate into English and not in the opposite direction.
[...] to pull off the Star Trek universal translator thing Meta's interviewees dreamt about, the Seamless team started with sorting out the data scarcity problem.
[...] Warren Weaver, a mathematician and pioneer of machine translation, argued in 1949 that there might be a yet undiscovered universal language working as a common base of human communication.
[...] Machines do not understand words as humans do. To make sense of them, they need to first turn them into sequences of numbers that represent their meaning.
[...] When you vectorize aligned text in two languages like those European Parliament proceedings, you end up with two separate vector spaces, and then you can run a neural net to learn how those two spaces map onto each other.
But the Meta team didn't have those nicely aligned texts for all the languages they wanted to cover. So, they vectorized all texts in all languages as if they were just a single language and dumped them into one embedding space called SONAR (Sentence-level Multimodal and Language-Agnostic Representations).
[...] The team just used huge amounts of raw data—no fancy human labeling, no human-aligned translations. And then, the data mining magic happened.
SONAR embeddings represented entire sentences instead of single words. Part of the reason behind that was to control for differences between morphologically rich languages, where a single word may correspond to multiple words in morphologically simple languages. But the most important thing was that it ensured that sentences with similar meaning in multiple languages ended up close to each other in the vector space.
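The idea of mining translations from a shared embedding space can be sketched briefly; the toy 3-D vectors below merely stand in for real SONAR sentence embeddings, which come from a trained encoder and have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up embeddings: sentences with similar meaning sit close
# together regardless of language.
english = {
    "The cat sleeps.": (0.90, 0.10, 0.00),
    "Stocks fell today.": (0.00, 0.80, 0.60),
}
spanish = {
    "El gato duerme.": (0.88, 0.12, 0.05),
    "Las acciones cayeron hoy.": (0.05, 0.75, 0.65),
}

# Pair each English sentence with its nearest Spanish neighbor;
# at scale, this is how aligned training pairs get mined.
pairs = {
    en: max(spanish, key=lambda es: cosine(v, spanish[es]))
    for en, v in english.items()
}
```

Run over billions of web sentences instead of four, this nearest-neighbor matching is what turns raw unaligned text into translation training data.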
[...] The Seamless team suddenly got access to millions of aligned texts, even in low-resource languages, along with thousands of hours of transcribed audio. And they used all this data to train their next-gen translator.
[...] The Nature paper published by Meta's Seamless ends at the SEAMLESSM4T models, but Nature has a long editorial process to ensure scientific accuracy. The paper published on January 15, 2025, was submitted in late November 2023. But in a quick search of arXiv.org, a repository of not-yet-peer-reviewed papers, you can find the details of two other models that the Seamless team has already integrated on top of the SEAMLESSM4T: SeamlessStreaming and SeamlessExpressive, which take this AI even closer to making a Star Trek universal translator a reality.
SeamlessStreaming is meant to solve the translation latency problem.
[...] SeamlessStreaming was designed to take this experience a bit closer to what human simultaneous translators do—it translates what you're saying as you speak, in a streaming fashion. SeamlessExpressive, on the other hand, is aimed at preserving the way you express yourself in translations.
[...] Sadly, it still can't do both at the same time; you can only choose to go for either streaming or expressivity, at least at the moment. Also, the expressivity variant is very limited in supported languages—it only works in English, Spanish, French, and German. But at least it's online so you can go ahead and give it a spin.
Related stories on SoylentNews:
"AI Took My Job, Literally"—Gizmodo Fires Spanish Staff Amid Switch to AI Translator - 20230906
Tokyo Tests Automated, Simultaneous Translation at Railway Station - 20230805
AI Localization Tool Claims to Translate Your Words in Your Voice - 20201017
The Shallowness of Google Translate - 20180202
Survey Says AI Will Exceed Human Performance in Many Occupations Within Decades - 20170701
Google Upgrades Chinese-English Translation with "Neural Machine Translation" - 20160929
Android Marshmallow Has a Hidden Feature: Universal Translation - 20151012
https://spectrum.ieee.org/reversible-computing
Michael Frank has spent his career as an academic researcher working over three decades in a very peculiar niche of computer engineering. According to Frank, that peculiar niche's time has finally come. "I decided earlier this year that it was the right time to try to commercialize this stuff," Frank says. In July 2024, he left his position as a senior engineering scientist at Sandia National Laboratories to join a startup, U.S. and U.K.-based Vaire Computing.
Frank argues that it's the right time to bring his life's work—called reversible computing—out of academia and into the real world because the computing industry is running out of energy. "We keep getting closer and closer to the end of scaling energy efficiency in conventional chips," Frank says. According to an IEEE semiconductor industry road map report Frank helped edit, by late in this decade the fundamental energy efficiency of conventional digital logic is going to plateau, and "it's going to require more unconventional approaches like what we're pursuing," he says.
As Moore's Law stumbles and its energy-themed cousin Koomey's Law slows, a new paradigm might be necessary to meet the increasing computing demands of today's world. According to Frank's research at Sandia, in Albuquerque, reversible computing may offer up to a 4,000x energy-efficiency gain compared to traditional approaches.
"Moore's Law has kind of collapsed, or it's really slowed down," says Erik DeBenedictis, founder of Zettaflops, who isn't affiliated with Vaire. "Reversible computing is one of just a small number of options for reinvigorating Moore's Law, or getting some additional improvements in energy efficiency."
Vaire's first prototype, expected to be fabricated in the first quarter of 2025, is less ambitious: the company is producing a chip that, for the first time, recovers energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire's road map but probably 10 or 15 years out.
...
Intuitively, information may seem like an ephemeral, abstract concept. But in 1961, Rolf Landauer at IBM discovered a surprising fact: Erasing a bit of information in a computer necessarily costs energy, which is lost as heat. It occurred to Landauer that if you were to do computation without erasing any information, or "reversibly," you could, at least theoretically, compute without using any energy at all.

Landauer himself considered the idea impractical. If you were to store every input and intermediate computation result, you would quickly fill up memory with unnecessary data. But Landauer's successor, IBM's Charles Bennett, discovered a workaround for this issue. Instead of just storing intermediate results in memory, you could reverse the computation, or "decompute," once that result was no longer needed. This way, only the original inputs and final result need to be stored.
Take a simple example, such as the exclusive-OR, or XOR gate. Normally, the gate is not reversible—there are two inputs and only one output, and knowing the output doesn't give you complete information about what the inputs were. The same computation can be done reversibly by adding an extra output, a copy of one of the original inputs. Then, using the two outputs, the original inputs can be recovered in a decomputation step.
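The compute/decompute step above can be sketched in a few lines of Python; this illustrates the principle only, not Vaire's actual circuit design:

```python
# A reversible XOR gate (the "controlled-NOT"): two inputs map to two
# outputs, so no information is erased and no Landauer heat is required
# in principle. The extra output is a copy of the first input.

def reversible_xor(a: int, b: int) -> tuple[int, int]:
    """Map (a, b) -> (a, a XOR b). The map is bijective, hence reversible."""
    return a, a ^ b

def decompute(a: int, y: int) -> tuple[int, int]:
    """Invert the gate: applying XOR again recovers (a, b) from (a, a XOR b)."""
    return a, a ^ y

# Every input pair can be recovered exactly from the output pair.
for a in (0, 1):
    for b in (0, 1):
        assert decompute(*reversible_xor(a, b)) == (a, b)
```

Note that an ordinary XOR, with its single output, cannot be inverted this way: knowing only `a ^ b` leaves two possible input pairs, and discarding that distinction is precisely the bit erasure Landauer showed must dissipate heat.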