Phone Metadata Suddenly Not So 'Harmless' When It's The FBI's Data Being Harvested:
The government's next-best argument (after "Third Party Doctrine yo!") in support of its bulk collection of US persons' phone metadata via the (now partly-dead) Section 215 surveillance program was this: hey, it's just metadata. How harmful could it be? (And if it's of so little use to the NSA/FBI/others, how is it possible we're using it to literally kill people?)
While trying to fend off attacks on Section 215 collections (most of which are governed [in the loosest sense of the word] by the Third Party Doctrine), the NSA and its domestic-facing remora, the FBI, insisted collecting and storing massive amounts of phone metadata was no more a constitutional violation than it was a privacy violation.
Suddenly — thanks to the ongoing, massive compromising of major US telecom firms by Chinese state-sanctioned hackers — the FBI is getting hot and bothered about the bulk collection of its own phone metadata by (gasp!) a government agency. (h/t Kevin Collier on Bluesky)
[...] The agency (quite correctly!) believes the metadata could be used to identify agents, as well as their contacts and confidential sources. Of course it can. That's why the NSA liked gathering it. And that's why the FBI liked collections it didn't need a warrant to access. (But let's not pretend this data was "stolen." It was duplicated and exfiltrated, but AT&T isn't suddenly missing thousands of records generated by FBI agents and their contacts.)
The issue, of course, is that the Intelligence Community consistently downplayed this exact aspect of the bulk collection, claiming it was no more intrusive than scanning every piece of domestic mail (!) or harvesting millions of credit card records just because the Fourth Amendment (as interpreted by the Supreme Court) doesn't say the government can't.
[...] The takeaway isn't the inherent irony. It's that the FBI and NSA spent years pretending the fears expressed by activists and legislators were overblown. Officials repeatedly claimed the information was of almost zero utility, despite mounting several efforts to protect this collection from being shut down by the federal government. In the end, the phone metadata program (at least as it applies to landlines) was terminated. But there's more than a hint of egregious hypocrisy in the FBI's sudden concern about how much can be revealed by "just" metadata.
https://devblogs.microsoft.com/oldnewthing/20040409-00/?p=39873
A friend of mine used to work on the development of the USB specification and its subsequent implementation. One of the things that happened at these meetings was that hardware companies would show off the great USB hardware they were working on. It also gave them a chance to try out their hardware with various USB host manufacturers and operating systems to make sure everything worked properly together.
One of the earlier demonstrations was a company that was making USB floppy drives. The company representative talked about how well the drives were doing and mentioned that they make two versions, one for PCs and one for Macs.
"That's strange," the committee members thought to themselves. "Why are there separate PC and Mac versions? The specification is very careful to make sure that the same floppy drive works on both systems. You shouldn't need to make two versions."
Arthur T Knackerbracket has processed the following story:
The stunning panorama features over 600 overlapping Hubble images that have been painstakingly stitched together. Spread across 2.5 billion pixels, you'll find some 200 million stars – all of which are brighter than our own Sun. That is a huge number, yet only a fraction of the estimated one trillion stars in the Andromeda galaxy. Many of Andromeda's less massive stars are beyond Hubble's sensitivity limit and thus are not represented in the image.
Data from two surveys – the Panchromatic Hubble Andromeda Treasury (PHAT) program and the Panchromatic Hubble Andromeda Southern Treasury (PHAST) program – was used to construct the mosaic.
With it, astronomers will be able to learn more about the age of Andromeda as well as its heavy-element abundance and the stellar masses inside of it. The surveys will also help astronomers understand how Andromeda might have merged with other galaxies in its past.
"Andromeda's a train wreck. It looks like it has been through some kind of event that caused it to form a lot of stars and then just shut down," said Daniel Weisz at the University of California, Berkeley.
"This was probably due to a collision with another galaxy in the neighborhood."
NASA has multiple sizes of the panorama available for download, including the full-size 203 MB image (42,208 x 9,870) and a more user-friendly 9 MB variant (10,552 x 2,468).
Hubble has been in orbit for more than three decades, and continues to provide astronomers with meaningful science data. That said, NASA already has its successor waiting in the wings.
The Nancy Grace Roman Space Telescope, scheduled to launch by May 2027, will feature a mirror roughly the same size as the one Hubble uses but will cover a far wider field of view. A single Roman exposure will capture the equivalent of at least 100 high-resolution Hubble snaps, according to NASA.
Brendan Carr dumps plan to ban bulk billing deals that lock renters into one ISP:
Federal Communications Commission Chairman Brendan Carr has dropped the previous administration's proposal to ban bulk billing deals that require tenants to pay for a specific provider's Internet service.
In March 2024, then-Chairwoman Jessica Rosenworcel proposed a ban on arrangements in which "tenants are required to pay for broadband, cable, and satellite service provided by a specific communications provider, even if they do not wish to take the service or would prefer to use another provider."
Rosenworcel's Notice of Proposed Rulemaking was opposed by Internet providers and sat on the FCC's list of items on circulation throughout 2024 without any final vote, despite the commission having a 3-2 Democratic majority at the time. Carr, who was elevated to the chairmanship by President Trump, emptied the list of items under consideration by commissioners on Friday.
With bulk billing deals, in which a company agrees to provide service to every tenant of a building, residents are billed a prorated share of the total cost. Tenants may be billed by either the landlord or the telecom provider.
Technically, Rosenworcel's plan would have allowed bulk billing arrangements to continue as long as tenants are given the ability to opt out of them. In March, Rosenworcel's office said her plan would "increase competition for communications service in these buildings by making it more profitable for competitive providers to deploy service in buildings where it is currently too expensive to serve consumers because tenants are required to take a certain provider's service."
"Too often, tenants living in these households are forced to pay high prices with limited choices for Internet or other services," Rosenworcel's office said in March, arguing that her plan would "lower costs and address the lack of choice for broadband services" in apartments, condos, and public housing.
Housing industry lobby groups praised Carr in a press release issued by the National Multifamily Housing Council (NMHC), National Apartment Association (NAA), and Real Estate Technology and Transformation Center (RETTC). "His decision to withdraw the proposal will ensure that millions of consumers—renters, homeowners and condominium owners—will continue to reap the benefits of bulk billing," the press release said.
The industry press release claims that bulk billing agreements negotiated between property owners and Internet service providers "typically secur[e] high-speed Internet for renters at rates up to 50 percent lower than standard retail pricing" and remove "barriers to broadband adoption like credit checks, security deposits, equipment rentals, or installation fees."
"Bulk billing arrangements have made high-speed internet more accessible and affordable for millions of Americans, especially for low-income renters and seniors living in affordable housing," NMHC President Sharon Wilson Géno said.
[...] Consumer advocacy group Public Knowledge and 30 other groups urged the FCC to approve Rosenworcel's proposal in July. "As they exist now, bulk billing arrangements sacrifice consumer choice to preserve in-building monopolies at the expense of tenants," the groups said in a letter to the commission. "For the many tenants trapped with high-cost or less-capable Internet that does not meet their needs, an opt-out option provides a vital escape. This is especially true for those eligible for low-income plans or Lifeline subsidy, which by definition are not available in a bulk billing arrangement."
[...] John Bergmayer, legal director at Public Knowledge, told Ars today that a bulk billing ban would have made the commission's rules more effective "by eliminating one of the ways that landlords, HOAs, and telecom and cable companies collaborate to bypass the intended effect of those rules, and require people to pay for Internet service they don't want or need. It's a shame. The arguments on the pro-bulk-billing side were spurious or overblown, and the MTE [multiple tenant environments] access rules have broad, bipartisan support, as well as a lot of industry support."
The Pebble was a 2012 smartwatch built around an e-paper display. It had great battery life, a UI that could be hacked using C or JavaScript, and a very loyal fanbase. Unlike current incumbent smartwatches like the Google Pixel Watch or Apple Watch, Pebble kept its feature set under control and aimed to supplement a smartphone rather than step on its toes. The end result was a compact product that gadgeteers genuinely liked.
Unfortunately, after a number of strategic missteps, the maker of the Pebble was bought out in 2016 by a competitor, FitBit. The product was discontinued almost immediately, leaving the world of nifty wrist-mounted doodads noticeably poorer.
Ever since, there has been a grassroots campaign to support the Pebble, called Rebble, started more or less immediately after FitBit shuttered the Pebble and helmed by one of Pebble's founders. For the longest time the project wasn't making much headway on delivering software updates, as it essentially had to start over from square one without the startup resources Pebble had the first time around. Mostly it served as a home for applications and widgets.
This all changed on Monday, when the Rebble lead was able to get hold of some folks at Google—who bought out FitBit in 2021—and convinced them to open-source the Pebble OS. (It's not quite complete—like many open-sourcings of closed projects, there are some patent-encumbered bits missing.) There's now the (similarly-named yet distinct) Repebble project, which aims to begin a new production run of Pebble smartwatches.
Is this the beginning of a renaissance for resurrecting beloved Google-owned products? Probably not. But it's one less corpse in the ground, that's for sure.
Chevron, one of the world's largest oil companies, has announced plans to enter the rapidly growing field of artificial intelligence by building natural gas power plants directly connected to data centers:
These facilities will supply electricity to technology companies leveraging AI and other high-powered computing services, reported The New York Times. The move highlights the increasing energy demands of AI technologies and Chevron's strategic shift to diversify its operations beyond traditional oil and gas.
The company's CEO, Mike Wirth, revealed the initiative during a recent industry conference, emphasizing the role Chevron could play in bridging energy production and digital innovation. As data centers consume enormous amounts of electricity to support AI-driven computations, Chevron's natural gas plants are positioned to offer a reliable and efficient energy source. This strategy allows Chevron to capitalize on its core expertise in energy production while contributing to a sector that's reshaping industries globally.
[...] The company plans to integrate carbon capture technologies into its power plants to offset their environmental impact. Additionally, Chevron has committed to exploring renewable energy options alongside its natural gas operations, suggesting a balanced approach to meeting current energy demands while investing in a low-carbon future.
Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.
And it wasn't the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit's CEO called out all AI companies whose crawlers he said were "a pain in the ass to block," despite the tech industry otherwise agreeing to respect "no scraping" robots.txt rules.
[...]
Shortly after he noticed Facebook's crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers "clobbering" websites that he told Ars he hoped would give "teeth" to robots.txt. Building on an anti-spam cybersecurity tactic known as [tarpitting], he created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."
Aaron clearly warns users that Nepenthes is aggressive malware.
[...]
Tarpits were originally designed to waste spammers' time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon.
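The article doesn't reproduce Nepenthes' code, but the basic tarpit idea is simple enough to sketch. Below is a minimal, illustrative Python sketch (the word list, delay, and port are arbitrary assumptions, not anything from Nepenthes): it answers every request with a page that never finishes, drip-feeding a few bytes of links at a time so a crawler that ignores robots.txt ties up its connection and crawl budget.

```python
# Minimal tarpit sketch -- illustrative only, not Nepenthes' actual code.
# Every request gets a page that never ends, drip-fed a few bytes at a time.
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

WORDS = ["lotus", "quartz", "ember", "fjord", "nimbus", "trellis"]  # arbitrary

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        try:
            while True:  # never finish the page
                word = random.choice(WORDS)
                link = f'<a href="/{word}/{random.randint(0, 10**6)}">{word}</a> '
                self.wfile.write(link.encode())
                self.wfile.flush()
                time.sleep(2)  # keep the crawler's connection busy, cheaply
        except (BrokenPipeError, ConnectionResetError):
            pass  # the crawler gave up; release the socket

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), TarpitHandler).serve_forever()
```

The real tools are considerably more elaborate, but the sketch shows the cost asymmetry Aaron describes later: a sleeping socket and a few bytes per trapped crawler.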
[...]
It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed.
[...]
The only AI company that responded to Ars' request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.
"We're aware of efforts to disrupt AI web crawlers," OpenAI's spokesperson said. "We design our systems to be resilient while respecting robots.txt and standard web practices."
[...]
By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies' AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.
"Ultimately, it's like the Internet that I grew up on and loved is long gone," Aaron told Ars. "I'm just fed up, and you know what? Let's fight back, even if it's not successful. Be indigestible. Grow spikes."
[...]
Nepenthes was released in mid-January but was instantly popularized beyond Aaron's expectations after tech journalist Cory Doctorow boosted a tech commentator, Jürgen Geuter, praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket. "That's when I realized, 'oh this is going to be something,'" Aaron told Ars. "I'm kind of shocked by how much it's blown up."
[...]
When software developer and hacker Gergely Nagy, who goes by the handle "algernon" online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server's bandwidth was being "eaten" by AI crawlers. Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers.
[...]
Iocaine takes ideas (not code) from Nepenthes, but it's more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an "infinite maze of garbage" in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.
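Iocaine's actual implementation isn't shown in the article, but the "infinite maze of garbage" concept can be illustrated with a toy generator (the word list and page layout below are invented for this example): every URL deterministically maps to a page of nonsense text plus links to further URLs, so a crawler that keeps following links wanders indefinitely and collects only noise.

```python
# Toy illustration of an "infinite maze of garbage" -- not Iocaine's actual code.
# Each URL deterministically generates a page of junk text plus links to more
# URLs, so a crawler that ignores robots.txt wanders forever and scrapes noise.
import hashlib
import random

WORDS = ["gravel", "sonnet", "peat", "lantern", "mica", "thrush", "vellum"]

def garbage_page(path: str, links: int = 5, sentences: int = 20) -> str:
    # Seed from the path so the same URL always yields the same page.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    body = []
    for _ in range(sentences):
        body.append(" ".join(rng.choice(WORDS) for _ in range(12)).capitalize() + ".")
    for _ in range(links):
        nxt = "/".join(rng.choice(WORDS) for _ in range(3))
        body.append(f'<a href="/{nxt}">{nxt}</a>')
    return "<html><body><p>" + "</p><p>".join(body) + "</p></body></html>"

# Example: every request path maps to a stable page of junk and more links.
print(garbage_page("/peat/lantern/mica")[:200])
```

A reverse proxy can route suspected crawler traffic to a generator like this while serving humans the real site, which matches the setup the article attributes to Nagy.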
[...]
Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.
[...]
Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: "Let's make AI poisoning the norm. If we all do it, they won't have anything to crawl."
Related stories on SoylentNews:
Endlessh: an SSH Tarpit - 20190325
https://tails.net/news/version_6.11/index.en.html
https://gitlab.tails.boum.org/tails/tails/-/blob/master/debian/changelog
The vulnerabilities described below were identified during an external security audit by Radically Open Security and disclosed responsibly to our team. We are not aware of these attacks having been used against Tails users so far.
These vulnerabilities can only be exploited by a powerful attacker who has already exploited another vulnerability to take control of an application in Tails.
If you want to be extra careful and have used Tails a lot since January 9 without upgrading, we recommend that you do a manual upgrade instead of an automatic upgrade.
Prevent an attacker from installing malicious software permanently. (#20701)
In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit a vulnerability in Tails Upgrader to install a malicious upgrade and permanently take control of your Tails.
Doing a manual upgrade would erase such malicious software.
Prevent an attacker from monitoring online activity. (#20709 and #20702)
In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit vulnerabilities in other applications that might lead to deanonymization or the monitoring of browsing activity:
In Onion Circuits, to get information about Tor circuits and close them.
In Unsafe Browser, to connect to the Internet without going through Tor.
In Tor Browser, to monitor your browsing activity.
In Tor Connection, to reconfigure or block your connection to the Tor network.
Prevent an attacker from changing the Persistent Storage settings. (#20710)
Also, Tails still doesn't FULLY randomize the MAC address; so much for anonymity.
Scale AI and CAIS Unveil Results of Humanity's Last Exam, a Groundbreaking New Benchmark
Scale AI and the Center for AI Safety (CAIS) are proud to publish the results of Humanity's Last Exam, a groundbreaking new AI benchmark designed to test the limits of AI knowledge at the frontiers of human expertise. The results demonstrated a significant improvement over the reasoning capabilities of earlier models, but current models still answered fewer than 10 percent of the expert questions correctly. The paper can be read here.
The new benchmark, called "Humanity's Last Exam," evaluated whether AI systems have achieved world-class expert-level reasoning and knowledge capabilities across a wide range of fields, including math, humanities, and the natural sciences. Throughout the fall, CAIS and Scale AI crowdsourced questions from experts to assemble the hardest and broadest problems to stump the AI models. The exam was developed to address the challenge of "benchmark saturation": models that regularly achieve near-perfect scores on existing tests, but may not be able to answer questions outside of those tests. Saturation reduces the utility of a benchmark as a precise measurement of future model progress.
[Source]: Scale AI
OpenAI's new "Computer-Using Agent" AI model can perform multi-step tasks through a web browser:
On Thursday, OpenAI released a research preview of "Operator," a web automation tool that uses a new AI model called Computer-Using Agent (CUA) to control computers through a visual interface. The system performs tasks by viewing and interacting with on-screen elements like buttons and text fields similar to how a human would.
Operator is available today for subscribers of the $200-per-month ChatGPT Pro plan at operator.chatgpt.com. The company plans to expand to Plus, Team, and Enterprise users later. OpenAI intends to integrate these capabilities directly into ChatGPT and later release CUA through its API for developers.
Operator watches on-screen content while you use your computer and executes tasks through simulated keyboard and mouse inputs. The Computer-Using Agent processes screenshots to understand the computer's state and then makes decisions about clicking, typing, and scrolling based on its observations.
OpenAI's release follows other tech companies as they push into what are often called "agentic" AI systems, which can take actions on a user's behalf. Google announced Project Mariner in December 2024, which performs automated tasks through the Chrome browser, and two months earlier, in October 2024, Anthropic launched a web automation tool called "Computer Use" focused on developers that can control a user's mouse cursor and take actions on a computer.
"The Operator interface looks very similar to Anthropic's Claude Computer Use demo from October," wrote AI researcher Simon Willison on his blog, "even down to the interface with a chat panel on the left and a visible interface being interacted with on the right."
To use your PC like you would, the Computer-Using Agent works in multiple steps. First, it captures screenshots to monitor your screen, then analyzes those images (using GPT-4o's vision capabilities with additional reinforcement learning) to process raw pixel data. Next, it determines what actions to take and then performs virtual inputs to control the computer. This iterative loop design reportedly lets the system recover from errors and handle complex tasks across different applications.
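OpenAI hasn't published Operator's internals, but the observe-decide-act loop described above can be sketched in outline. The following Python skeleton is purely illustrative: the function names, the Action structure, and the step limit are assumptions, and the stubs stand in for real screenshot capture, model calls, and input injection.

```python
# Illustrative sketch of the perceive-decide-act loop described above.
# Names and the action format are hypothetical, not OpenAI's actual API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", "scroll", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def capture_screenshot() -> bytes:
    """Stub: grab the current screen as raw pixels."""
    return b""

def ask_model(screenshot: bytes, goal: str, history: list[Action]) -> Action:
    """Stub: send the screenshot and goal to a vision model, get back one action."""
    return Action(kind="done")

def perform(action: Action) -> None:
    """Stub: issue the simulated mouse/keyboard input."""
    print(f"performing {action.kind}")

def run_agent(goal: str, max_steps: int = 50) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        shot = capture_screenshot()              # 1. observe the screen
        action = ask_model(shot, goal, history)  # 2. decide the next step
        if action.kind == "done":
            break
        perform(action)                          # 3. act via simulated input
        history.append(action)                   # 4. loop, allowing error recovery

run_agent("add oat milk to my shopping list")
```

The loop structure, rather than any single model call, is what reportedly lets the system notice mistakes in the next screenshot and correct course.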
While it's working, Operator shows a miniature browser window of its actions.
However, the technology behind Operator is still relatively new and far from perfect. The model reportedly performs best at repetitive web tasks like creating shopping lists or playlists. It struggles more with unfamiliar interfaces like tables and calendars, and does poorly with complex text editing (with a 40 percent success rate), according to OpenAI's internal testing data.
[...] For any AI model that can see how you operate your computer and even control some aspects of it, privacy and safety are very important. OpenAI says it built multiple safety controls into Operator, requiring user confirmation before completing sensitive actions like sending emails or making purchases. Operator also has limits on what it can browse, set by OpenAI. It cannot access certain website categories, including gambling and adult content.
Traditionally, AI models based on large language model-style Transformer technology like Operator have been relatively easy to fool with jailbreaks and prompt injections.
To catch attempts at subverting Operator, which might hypothetically be embedded in websites that the AI model browses, OpenAI says it has implemented real-time moderation and detection systems. OpenAI reports the system recognized all but one case of prompt injection attempts during an early internal red-teaming session.
However, Willison, who frequently covers AI security issues, isn't convinced Operator can stay secure, especially as new threats emerge. "Color me skeptical," he wrote in his blog post. "I imagine we'll see all kinds of novel successful prompt injection style attacks against this model once the rest of the world starts to explore it."
What could possibly go wrong?
Arthur T Knackerbracket has processed the following story:
Two NASA astronauts are set to venture outside the International Space Station (ISS) in search of signs of life.
Butch Wilmore and Suni Williams should have been back on Earth months ago, but, thanks to issues with Boeing's CST-100 Starliner capsule, are spending some additional time on the ISS before a planned return to Earth in a SpaceX Crew Dragon.
The plan is for the spacewalkers to collect samples from sites near life support system vents on the exterior of the ISS. Scientists will be able to determine if the ISS releases microorganisms and assess whether any can survive in the harsh environment outside the outpost.
These days, spacecraft and spacesuits are thoroughly sterilized before missions. However, humans carry plenty of microorganisms, and looking at what is collected outside the ISS will inform designs for crewed vehicles and missions to limit the spread of human contamination.
NASA said: "The data could help determine whether changes are needed to crewed spacecraft, including spacesuits, that are used to explore destinations where life may exist now or in the past."
With Mars now a priority for crewed expeditions, minimizing human contamination on the surface is crucial to avoid misidentifying it as traces of life on the red planet.
Many space agencies take the challenge of planetary protection very seriously. As an example, the European Space Agency (ESA) cites Article IX of the Outer Space Treaty, which requires care to be taken during exploration of the Moon and beyond "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extra-terrestrial matter and, where necessary, [to] adopt appropriate measures for this purpose."
This also involves considering missions launched under less stringent standards than those in place today. Older spacecraft, for example, were not always subject to the same sterilization.
Memorably, a camera on NASA's Surveyor 3 lander, which the Apollo 12 astronauts retrieved, was found to have been contaminated [PDF] prior to launch. Despite vacuum testing, exposure to temperatures below -100° Celsius, and a stint on the lunar surface, scientists found that the microorganisms on the camera had survived.
Arthur T Knackerbracket has processed the following story:
Google will take firmer action against British businesses that use fake reviews to boost their star ratings on the search giant’s reviews platform. The UK’s Competition and Markets Authority (CMA) announced on Friday that Google has agreed to improve its processes for detecting and removing fake reviews, and will take action against the businesses and reviewers that post them.
This includes deactivating the ability to add new reviews for businesses found to be using fake reviews, and deleting all existing reviews for at least six months if they repeatedly engage in suspicious review activity. Google will also place prominent “warning alerts” on the Google profiles of businesses using fake reviews to help consumers be more aware of potentially misleading feedback. Individuals who repeatedly post fake or misleading reviews on UK business pages will be banned and have their review history deleted, even if they’re located in another country.
Google is required to report to the CMA over the next three years to ensure it’s complying with the agreement.
“The changes we’ve secured from Google ensure robust processes are in place, so people can have confidence in reviews and make the best possible choices,” CMA chief executive Sarah Cardell said in a statement. “This is a matter of fairness – for both business and consumers – and we encourage the entire sector to take note.”
Google made similar changes to reviews in Maps last year, saying that contributions “should reflect a genuine experience at a place or business.” However, those changes apply globally while Google’s commitment to improving reviews across all its properties appears to just apply to the UK for now.
The changes to reviews follow a CMA [*] investigation launched against Google and Amazon in 2021 over concerns the companies had violated consumer protection laws by not doing enough to tackle fake reviews on their platforms. The CMA says its probe into Amazon is still ongoing and that an update will be announced “in due course.”
[CMA: Competition and Markets Authority]
An undersea fiber optic cable between Latvia and Sweden was damaged on Sunday, likely as a result of external influence, Latvia said, triggering an investigation by local and NATO maritime forces in the Baltic Sea:
"We have determined that there is most likely external damage and that it is significant," Latvian Prime Minister Evika Silina told reporters following an extraordinary government meeting.
Latvia is coordinating with NATO and the countries of the Baltic Sea region to clarify the circumstances, she said separately in a post on X.
Latvia's navy earlier on Sunday said it had dispatched a patrol boat to inspect a ship and that two other vessels were also subject to investigation.
From Zerohedge's coverage:
Over the past 18 months, three alarming incidents have been reported in which commercial ships traveling to or from Russian ports are suspected of severing undersea cables in the Baltic region.
The Washington Post recently cited Western officials who said these cable incidents are likely maritime accidents - not sabotage by Russia and/or China.
Due to all the cable severing risks, intentional and unintentional, a report from late November via TechCrunch [linked by submitter] said Meta planned a new "W" formation undersea cable route around the world to "avoid areas of geopolitical tension."
https://www.technologyreview.com/2025/01/28/1110613/mice-with-two-dads-crispr/
Mice with two dads have been created using CRISPR
It's a new way to create "bi-paternal" mice that can survive to adulthood—but human applications are still a long way off.
Mice with two fathers have been born—and have survived to adulthood—following a complex set of experiments by a team in China.
Zhi-Kun Li at the Chinese Academy of Sciences in Beijing and his colleagues used CRISPR to create the mice, using a novel approach to target genes that normally need to be inherited from both male and female parents. They hope to use the same approach to create primates with two dads.
Humans are off limits for now, but the work does help us better understand a strange biological phenomenon known as imprinting, which causes certain genes to be expressed differently depending on which parent they came from. For these genes, animals inherit part of a "dose" from each parent, and the two must work in harmony to create a healthy embryo. Without both doses, gene expression can go awry, and the resulting embryos can end up with abnormalities.
This is what researchers have found in previous attempts to create mice with two dads. In the 1980s, scientists in the UK tried injecting the DNA-containing nucleus of a sperm cell into a fertilized egg cell. The resulting embryos had DNA from two males (as well as a small amount of DNA from a female, in the cytoplasm of the egg).
But when these embryos were transferred to the uteruses of surrogate mouse mothers, none of them resulted in a healthy birth, seemingly because imprinted genes from both paternal and maternal genomes are needed for development.
Li and his colleagues took a different approach. The team used gene editing to knock out imprinted genes altogether.
Around 200 of a mouse's genes are imprinted, but Li's team focused on 20 that are known to be important for the development of the embryo.
In an attempt to create healthy mice with DNA from two male "dads," the team undertook a complicated set of experiments. To start, the team cultured cells with sperm DNA to collect stem cells in the lab. Then they used CRISPR to disrupt the 20 imprinted genes they were targeting.
These gene-edited cells were then injected, along with other sperm cells, into egg cells that had had their own nuclei removed. The result was embryonic cells with DNA from two male mice. These cells were then injected into a type of "embryo shell" used in research, which provides the cells required to make a placenta. The resulting embryos were transferred to the uteruses of female mice.
It worked—to some degree. Some of the embryos developed into live pups, and they even survived to adulthood. The findings were published in the journal Cell Stem Cell.
"It's exciting," says Kotaro Sasaki, a developmental biologist at the University of Pennsylvania, who was not involved in the work. Not only have Li and his team been able to avoid a set of imprinting defects, but their approach is the second way scientists have found to create mice using DNA from two males.
The finding builds on research by Katsuhiko Hayashi, now at Osaka University in Japan, and his colleagues. A couple of years ago, that team presented evidence that they had found a way to take cells from the tails of adult male mice and turn them into immature egg cells. These could be fertilized with sperm to create bi-paternal embryos. The mice born from those embryos can reach adulthood and have their own offspring, Hayashi has said.
Li's team's more complicated approach was less successful. Only a small fraction of the mice survived, for a start. The team transferred 164 gene-edited embryos, but only seven live pups were born. And those that were born weren't entirely normal, either. They grew to be bigger than untreated mice, and their organs appeared enlarged. They didn't live as long as normal mice, and they were infertile.
It would be unethical to do such risky research with human cells and embryos. "Editing 20 imprinted genes in humans would not be acceptable, and producing individuals who could not be healthy or viable is simply not an option," says Li.
"There are numerous issues," says Sasaki. For a start, a lot of the technical lab procedures the team used have not been established for human cells. But even if we had those, this approach would be dangerous—knocking out human genes could have untold health consequences.
"There's lots and lots of hurdles," he says. "Human applications [are] still quite far."
Despite that, the work might shed a little more light on the mysterious phenomenon of imprinting. Previous research has shown that mice with two moms appear smaller, and live longer than expected, while the current study shows that mice with two dads are overgrown and die more quickly. Perhaps paternal imprinted genes support growth and maternal ones limit it, and animals need both to reach a healthy size, says Sasaki.
Physicist used interaction graphs to show how pieces attack and defend to analyze 20,000 top matches:
The game of chess has long been central to computer science and AI-related research, most notably in IBM's Deep Blue in the 1990s and, more recently, AlphaZero. But the game is about more than algorithms, according to Marc Barthelemy, a physicist at the Paris-Saclay University in France, with layers of depth arising from the psychological complexity conferred by player strategies.
Now, Barthelemy has taken things one step further by publishing a new paper in the journal Physical Review E that treats chess as a complex system, producing a handy metric that can help predict the proverbial "tipping points" in chess matches.
In his paper, Barthelemy cites Richard Reti, an early 20th-century chess master who gave a series of lectures in the 1920s on developing a scientific understanding of chess. It was an ambitious program involving collecting empirical data, constructing typologies, and devising laws based on those typologies, but Reti's insights fell by the wayside as advances in computer science came to dominate the field. That's understandable. "With its simple rules yet vast strategic depth, chess provides an ideal platform for developing and testing algorithms in AI, machine learning, and decision theory," Barthelemy writes.
Barthelemy's own expertise is in the application of statistical physics to complex systems, as well as the emerging science of cities. He realized that the history of the scientific study of chess had overlooked certain key features, most notably how certain moves at key moments can drastically alter the game; the matches effectively undergo a kind of phase transition. The rise of online chess platforms means there are now very large datasets ripe for statistical analysis, and researchers have made use of that, studying power-law distributions, for example, as well as response time distribution in rapid chess and long-range memory effects in game sequences.
For his analysis, Barthelemy chose to represent chess as a decision tree in which each "branch" leads to a win, loss, or draw. Players face the challenge of finding the best move amid all this complexity, particularly midgame, in order to steer gameplay into favorable branches. That's where those crucial tipping points come into play. Such positions are inherently unstable, which is why even a small mistake can have a dramatic influence on a match's trajectory.
A case of combinatorial complexity
Barthelemy has re-imagined a chess match as a network of forces in which pieces act as the network's nodes, and the ways they interact represent the edges, using an interaction graph to capture how different pieces attack and defend one another. The most important chess pieces are those that interact with many other pieces in a given match, which he calculated by measuring how frequently a node lies on the shortest path between all the node pairs in the network (its "betweenness centrality").
He also calculated so-called "fragility scores," which indicate how easy it is to remove those critical chess pieces from the board. And he was able to apply this analysis to more than 20,000 actual chess matches played by the world's top players over the last 200 years.
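The paper's exact fragility-score formula isn't reproduced in the article, but the interaction-graph half of the method is easy to illustrate. In the toy Python sketch below (using the networkx library; the position and its attack/defend edges are invented), pieces are nodes, edges mark attack or defense relations, and betweenness centrality highlights which pieces lie on the most shortest paths.

```python
# Toy version of the interaction-graph idea: pieces are nodes, attack/defend
# relations are edges, and betweenness centrality flags the pieces that sit on
# the most shortest paths. The position below is made up for illustration; the
# paper's exact fragility-score definition is not reproduced here.
import networkx as nx

G = nx.Graph()
# Hypothetical mid-game snippet: an edge means "attacks or defends".
G.add_edges_from([
    ("white_Qd1", "black_Nf6"),
    ("white_Qd1", "white_Bc4"),
    ("white_Bc4", "black_Pf7"),
    ("black_Nf6", "black_Pf7"),
    ("black_Rf8", "black_Pf7"),
    ("white_Ng5", "black_Pf7"),
    ("white_Ng5", "black_Nf6"),
])

centrality = nx.betweenness_centrality(G)
for piece, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{piece:12s} {score:.3f}")
# Pieces with high centrality (here, the contested f7 pawn and its attackers)
# are the ones whose removal would most reshape the network.
```

Per the article, the fragility score then weighs how easily such high-centrality pieces can be removed from the board.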
Barthelemy found that his metric could indeed identify tipping points in specific matches. Furthermore, when he averaged his analysis over a large number of games, an unexpected universal pattern emerged. "We observe a surprising universality: the average fragility score is the same for all players and for all openings," Barthelemy writes. And in famous chess matches, "the maximum fragility often coincides with pivotal moments, characterized by brilliant moves that decisively shift the balance of the game."
Specifically, fragility scores start to increase about eight moves before the critical tipping point position occurs and stay high for some 15 moves after that. "These results suggest that positional fragility follows a common trajectory, with tension peaking in the middle game and dissipating toward the endgame," he writes. "This analysis highlights the complex dynamics of chess, where the interaction between attack and defense shapes the game's overall structure."
Physical Review E, 2025. DOI: 10.1103/PhysRevE.00.004300 (About DOIs).