posted by Fnord666 on Friday January 31, @10:57PM
from the gas-powered-electricity dept.

Chevron, one of the world's largest oil companies, has announced plans to enter the rapidly growing field of artificial intelligence by building natural gas power plants directly connected to data centers:

These facilities will supply electricity to technology companies leveraging AI and other high-powered computing services, reported The New York Times. The move highlights the increasing energy demands of AI technologies and Chevron's strategic shift to diversify its operations beyond traditional oil and gas.

The company's CEO, Mike Wirth, revealed the initiative during a recent industry conference, emphasizing the role Chevron could play in bridging energy production and digital innovation. As data centers consume enormous amounts of electricity to support AI-driven computations, Chevron's natural gas plants are positioned to offer a reliable and efficient energy source. This strategy allows Chevron to capitalize on its core expertise in energy production while contributing to a sector that's reshaping industries globally.

[...] The company plans to integrate carbon capture technologies into its power plants to offset their environmental impact. Additionally, Chevron has committed to exploring renewable energy options alongside its natural gas operations, suggesting a balanced approach to meeting current energy demands while investing in a low-carbon future.


Original Submission

posted by Fnord666 on Friday January 31, @06:12PM
from the rotator dept.

https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.

And it wasn't the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit's CEO called out all AI companies whose crawlers he said were "a pain in the ass to block," despite the tech industry otherwise agreeing to respect "no scraping" robots.txt rules.
[...]
Shortly after he noticed Facebook's crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers "clobbering" websites that he told Ars he hoped would give "teeth" to robots.txt.

Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."

Aaron clearly warns users that Nepenthes is aggressive malware.
[...]
Tarpits were originally designed to waste spammers' time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon.
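
Nepenthes' own code isn't shown in the article, but the technique is simple enough to sketch. Below is a minimal, hypothetical Python tarpit: every path resolves to a page of deterministic garbage links leading deeper into the maze, and the response is dripped out slowly to pin the crawler's connection. All names and parameters here are invented for illustration, not Nepenthes' actual implementation.

    import hashlib
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Deterministic "maze": derive link names from the path so the
            # site looks stable to a crawler but never bottoms out.
            seed = hashlib.sha256(self.path.encode()).hexdigest()
            links = " ".join(
                f'<a href="/{seed[i:i + 8]}">{seed[i:i + 8]}</a>'
                for i in range(0, 40, 8)
            )
            body = f"<html><body><p>{seed}</p>{links}</body></html>".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            # Drip the page out a few bytes at a time to hold the
            # crawler's connection open as long as possible.
            for i in range(0, len(body), 16):
                self.wfile.write(body[i:i + 16])
                self.wfile.flush()
                time.sleep(0.5)

        def log_message(self, *args):
            pass  # keep the console quiet

    HTTPServer(("", 8080), TarpitHandler).serve_forever()
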
[...]
It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed.
[...]
The only AI company that responded to Ars' request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.
"We're aware of efforts to disrupt AI web crawlers," OpenAI's spokesperson said. "We design our systems to be resilient while respecting robots.txt and standard web practices."
[...]
By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies' AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

"Ultimately, it's like the Internet that I grew up on and loved is long gone," Aaron told Ars. "I'm just fed up, and you know what? Let's fight back, even if it's not successful. Be indigestible. Grow spikes."
[...]
Nepenthes was released in mid-January but was instantly popularized beyond Aaron's expectations after tech journalist Cory Doctorow boosted a post by tech commentator Jürgen Geuter praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

"That's when I realized, 'oh this is going to be something,'" Aaron told Ars. "I'm kind of shocked by how much it's blown up."
[...]
When software developer and hacker Gergely Nagy, who goes by the handle "algernon" online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server's bandwidth was being "eaten" by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers.
[...]
Iocaine takes ideas (not code) from Nepenthes, but it's more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an "infinite maze of garbage" in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.
[...]
Running malware like Nepenthes can burden servers, too. Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.
[...]
Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site—which, yes, is protected from scraping by Iocaine—he posted this call to action: "Let's make AI poisoning the norm. If we all do it, they won't have anything to crawl."

Related stories on SoylentNews:
Endlessh: an SSH Tarpit - 20190325


Original Submission

posted by janrinok on Friday January 31, @01:31PM

https://tails.net/news/version_6.11/index.en.html
https://gitlab.tails.boum.org/tails/tails/-/blob/master/debian/changelog

The vulnerabilities described below were identified during an external security audit by Radically Open Security and disclosed responsibly to our team. We are not aware of these attacks being used against Tails users until now. [Editor's Comment: I believe they mean 'up to now' or 'so far'.]

These vulnerabilities can only be exploited by a powerful attacker who has already exploited another vulnerability to take control of an application in Tails.

If you want to be extra careful and have used Tails a lot since January 9 without upgrading, we recommend that you do a manual upgrade instead of an automatic upgrade.

        Prevent an attacker from installing malicious software permanently. (#20701)

        In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit a vulnerability in Tails Upgrader to install a malicious upgrade and permanently take control of your Tails.

        Doing a manual upgrade would erase such malicious software.

        Prevent an attacker from monitoring online activity. (#20709 and #20702)

        In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit vulnerabilities in other applications that might lead to deanonymization or the monitoring of browsing activity:
                In Onion Circuits, to get information about Tor circuits and close them.
                In Unsafe Browser, to connect to the Internet without going through Tor.
                In Tor Browser, to monitor your browsing activity.
                In Tor Connection, to reconfigure or block your connection to the Tor network.

        Prevent an attacker from changing the Persistent Storage settings. (#20710)

Also, Tails still doesn't FULLY randomize the MAC address; so much for anonymity.
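
For context, Tails' MAC spoofing deliberately keeps the first half of the address (the vendor OUI) so the hardware doesn't stand out, and randomizes only the NIC-specific second half; that is presumably what the submitter means by not FULLY random. A minimal sketch of that approach (the function name is ours, not Tails code):

    import random

    def randomize_mac(mac: str) -> str:
        """Keep the vendor (OUI) half, randomize the device half."""
        octets = mac.split(":")
        oui = octets[:3]  # first three bytes identify the vendor
        nic = [f"{random.randint(0, 255):02x}" for _ in range(3)]
        return ":".join(oui + nic)

    print(randomize_mac("00:1a:2b:3c:4d:5e"))  # e.g. 00:1a:2b:9f:01:c7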


Original Submission

posted by janrinok on Friday January 31, @08:42AM

Scale AI and CAIS Unveil Results of Humanity's Last Exam, a Groundbreaking New Benchmark

Scale AI and the Center for AI Safety (CAIS) are proud to publish the results of Humanity's Last Exam, a groundbreaking new AI benchmark that was designed to test the limits of AI knowledge at the frontiers of human expertise. The results demonstrated a significant improvement over the reasoning capabilities of earlier models, but current models still answered fewer than 10 percent of the expert questions correctly. The paper can be read here.

The new benchmark, called "Humanity's Last Exam," evaluated whether AI systems have achieved world-class expert-level reasoning and knowledge capabilities across a wide range of fields, including math, humanities, and the natural sciences. Throughout the fall, CAIS and Scale AI crowdsourced questions from experts to assemble the hardest and broadest problems to stump the AI models. The exam was developed to address the challenge of "benchmark saturation": models that regularly achieve near-perfect scores on existing tests, but may not be able to answer questions outside of those tests. Saturation reduces the utility of a benchmark as a precise measurement of future model progress.

[Source]: Scale AI


Original Submission

posted by hubie on Friday January 31, @03:55AM
from the I'm-sorry-I-can't-do-that-Dave dept.

New "Computer-Using Agent" AI model can perform multi-step tasks through a web browser:

On Thursday, OpenAI released a research preview of "Operator," a web automation tool that uses a new AI model called Computer-Using Agent (CUA) to control computers through a visual interface. The system performs tasks by viewing and interacting with on-screen elements like buttons and text fields similar to how a human would.

Operator is available today for subscribers of the $200-per-month ChatGPT Pro plan at operator.chatgpt.com. The company plans to expand to Plus, Team, and Enterprise users later. OpenAI intends to integrate these capabilities directly into ChatGPT and later release CUA through its API for developers.

Operator watches on-screen content while you use your computer and executes tasks through simulated keyboard and mouse inputs. The Computer-Using Agent processes screenshots to understand the computer's state and then makes decisions about clicking, typing, and scrolling based on its observations.

OpenAI's release follows other tech companies as they push into what are often called "agentic" AI systems, which can take actions on a user's behalf. Google announced Project Mariner in December 2024, which performs automated tasks through the Chrome browser, and two months earlier, in October 2024, Anthropic launched a web automation tool called "Computer Use" focused on developers that can control a user's mouse cursor and take actions on a computer.

"The Operator interface looks very similar to Anthropic's Claude Computer Use demo from October," wrote AI researcher Simon Willison on his blog, "even down to the interface with a chat panel on the left and a visible interface being interacted with on the right."

To use your PC like you would, the Computer-Using Agent works in multiple steps. First, it captures screenshots to monitor your screen, then analyzes those images (using GPT-4o's vision capabilities with additional reinforcement learning) to process raw pixel data. Next, it determines what actions to take and then performs virtual inputs to control the computer. This iterative loop design reportedly lets the system recover from errors and handle complex tasks across different applications.
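
OpenAI hasn't published Operator's internals, but the loop described above can be sketched in a few lines. Everything here—the function names, the action schema, the confirmation gate—is a hypothetical stand-in for illustration, not OpenAI's API:

    import time

    def capture_screenshot() -> bytes:
        return b""  # stand-in: grab the current screen's raw pixels

    def vision_model(image: bytes, goal: str) -> dict:
        return {"type": "done"}  # stand-in: model picks the next action

    def perform(action: dict) -> None:
        pass  # stand-in: simulate a click, keystroke, or scroll

    def run_agent(goal: str, max_steps: int = 50) -> None:
        for _ in range(max_steps):
            screen = capture_screenshot()        # 1. perceive: screenshot
            action = vision_model(screen, goal)  # 2. decide: choose action
            if action["type"] == "done":
                break
            if action.get("sensitive"):          # e.g. purchases, email
                if input(f"Allow {action}? [y/N] ").lower() != "y":
                    break                        # user confirmation gate
            perform(action)                      # 3. act: virtual input
            time.sleep(0.5)                      # let the UI settle, loop

    run_agent("add milk to my shopping list")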

While it's working, Operator shows a miniature browser window of its actions.

However, the technology behind Operator is still relatively new and far from perfect. The model reportedly performs best at repetitive web tasks like creating shopping lists or playlists. It struggles more with unfamiliar interfaces like tables and calendars, and does poorly with complex text editing (with a 40 percent success rate), according to OpenAI's internal testing data.

[...] For any AI model that can see how you operate your computer and even control some aspects of it, privacy and safety are very important. OpenAI says it built multiple safety controls into Operator, requiring user confirmation before completing sensitive actions like sending emails or making purchases. Operator also has limits on what it can browse, set by OpenAI. It cannot access certain website categories, including gambling and adult content.

Traditionally, AI models based on large language model-style Transformer technology like Operator have been relatively easy to fool with jailbreaks and prompt injections.

To catch attempts at subverting Operator, which might hypothetically be embedded in websites that the AI model browses, OpenAI says it has implemented real-time moderation and detection systems. OpenAI reports the system recognized all but one case of prompt injection attempts during an early internal red-teaming session.

However, Willison, who frequently covers AI security issues, isn't convinced Operator can stay secure, especially as new threats emerge. "Color me skeptical," he wrote in his blog post. "I imagine we'll see all kinds of novel successful prompt injection style attacks against this model once the rest of the world starts to explore it."

What could possibly go wrong?

Also at: https://gizmodo.com/openai-reportedly-launching-operator-that-can-control-your-computer-this-week-2000553513


Original Submission

posted by hubie on Thursday January 30, @11:11PM
from the always-keep-your-hands-clean dept.

Arthur T Knackerbracket has processed the following story:

Two NASA astronauts are set to venture outside the International Space Station (ISS) in search of signs of life.

Butch Wilmore and Suni Williams should have been back on Earth months ago, but, thanks to issues with Boeing's CST-100 Starliner capsule, are spending some additional time on the ISS before a planned return to Earth in a SpaceX Crew Dragon.

The plan is for the spacewalkers to collect samples from sites near life support system vents on the exterior of the ISS. Scientists will be able to determine if the ISS releases microorganisms and assess whether any can survive in the harsh environment outside the outpost.

These days, spacecraft and spacesuits are thoroughly sterilized before missions. However, humans carry plenty of microorganisms, and looking at what is collected outside the ISS will inform designs for crewed vehicles and missions to limit the spread of human contamination.

NASA said: "The data could help determine whether changes are needed to crewed spacecraft, including spacesuits, that are used to explore destinations where life may exist now or in the past."

With Mars now a priority for crewed expeditions, minimizing human contamination on the surface is crucial so that microbes carried from Earth are not misidentified as traces of life on the red planet.

Many space agencies take the challenge of planetary protection very seriously. As an example, the European Space Agency (ESA) cites Article IX of the Outer Space Treaty, which requires care to be taken during exploration of the Moon and beyond "so as to avoid their harmful contamination and also adverse changes in the environment of the Earth resulting from the introduction of extra-terrestrial matter and, where necessary, [to] adopt appropriate measures for this purpose."

This also involves considering missions launched under less stringent standards than those in place today. Older spacecraft, for example, were not always subject to the same sterilization.

Memorably, a camera on NASA's Surveyor 3 lander, which the Apollo 12 astronauts retrieved, was found to have been contaminated [PDF] prior to launch. Despite vacuum testing, exposure to temperatures below -100° Celsius, and a stint on the lunar surface, scientists found that the microorganisms on the camera had survived.


Original Submission

posted by hubie on Thursday January 30, @06:22PM
from the in-my-experience-hubie-is-the-best-editor-there-is-on-the-Internet dept.

Arthur T Knackerbracket has processed the following story:

Google will take firmer action against British businesses that use fake reviews to boost their star ratings on the search giant’s reviews platform. The UK’s Competition and Markets Authority (CMA) announced on Friday that Google has agreed to improve its processes for detecting and removing fake reviews, and will take action against the businesses and reviewers that post them.

This includes deactivating the ability to add new reviews for businesses found to be using fake reviews, and deleting all existing reviews for at least six months if they repeatedly engage in suspicious review activity. Google will also place prominent “warning alerts” on the Google profiles of businesses using fake reviews to help consumers be more aware of potentially misleading feedback. Individuals who repeatedly post fake or misleading reviews on UK business pages will be banned and have their review history deleted, even if they’re located in another country.

Google is required to report to the CMA over the next three years to ensure it’s complying with the agreement.

“The changes we’ve secured from Google ensure robust processes are in place, so people can have confidence in reviews and make the best possible choices,” CMA chief executive Sarah Cardell said in a statement. “This is a matter of fairness – for both business and consumers – and we encourage the entire sector to take note.”

Google made similar changes to reviews in Maps last year, saying that contributions “should reflect a genuine experience at a place or business.” However, those changes apply globally while Google’s commitment to improving reviews across all its properties appears to just apply to the UK for now.

The changes to reviews follow a CMA investigation launched against Google and Amazon in 2021 over concerns the companies had violated consumer protection laws by not doing enough to tackle fake reviews on their platforms. The CMA says its probe into Amazon is still ongoing and that an update will be announced "in due course."



Original Submission

posted by hubie on Thursday January 30, @01:34PM
from the can-you-hear-me-n[kshht] dept.

An undersea fiber optic cable between Latvia and Sweden was damaged on Sunday, likely as a result of external influence, Latvia said, triggering an investigation by local and NATO maritime forces in the Baltic Sea:

"We have determined that there is most likely external damage and that it is significant," Latvian Prime Minister Evika Silina told reporters following an extraordinary government meeting.

Latvia is coordinating with NATO and the countries of the Baltic Sea region to clarify the circumstances, she said separately in a post on X.

Latvia's navy earlier on Sunday said it had dispatched a patrol boat to inspect a ship and that two other vessels were also subject to investigation.

From Zerohedge's coverage:

Over the past 18 months, three alarming incidents have been reported in which commercial ships traveling to or from Russian ports are suspected of severing undersea cables in the Baltic region.

Washington Post recently cited Western officials who said these cable incidents are likely maritime accidents - not sabotage by Russia and/or China.

Due to all the cable severing risks, intentional and unintentional, a report from late November via TechCrunch [linked by submitter] said Meta planned a new "W" formation undersea cable route around the world to "avoid areas of geopolitical tension."


Original Submission

posted by martyb on Thursday January 30, @08:29AM
from the CRISPR-critters dept.

https://www.technologyreview.com/2025/01/28/1110613/mice-with-two-dads-crispr/

Mice with two dads have been created using CRISPR

It's a new way to create "bi-paternal" mice that can survive to adulthood—but human applications are still a long way off.

Mice with two fathers have been born—and have survived to adulthood—following a complex set of experiments by a team in China.

Zhi-Kun Li at the Chinese Academy of Sciences in Beijing and his colleagues used CRISPR to create the mice, using a novel approach to target genes that normally need to be inherited from both male and female parents. They hope to use the same approach to create primates with two dads.

Humans are off limits for now, but the work does help us better understand a strange biological phenomenon known as imprinting, which causes certain genes to be expressed differently depending on which parent they came from. For these genes, animals inherit part of a "dose" from each parent, and the two must work in harmony to create a healthy embryo. Without both doses, gene expression can go awry, and the resulting embryos can end up with abnormalities.

This is what researchers have found in previous attempts to create mice with two dads. In the 1980s, scientists in the UK tried injecting the DNA-containing nucleus of a sperm cell into a fertilized egg cell. The resulting embryos had DNA from two males (as well as a small amount of DNA from a female, in the cytoplasm of the egg).

But when these embryos were transferred to the uteruses of surrogate mouse mothers, none of them resulted in a healthy birth, seemingly because imprinted genes from both paternal and maternal genomes are needed for development.

Li and his colleagues took a different approach. The team used gene editing to knock out imprinted genes altogether.

Around 200 of a mouse's genes are imprinted, but Li's team focused on 20 that are known to be important for the development of the embryo.

In an attempt to create healthy mice with DNA from two male "dads," the team undertook a complicated set of experiments. To start, the team cultured cells with sperm DNA to collect stem cells in the lab. Then they used CRISPR to disrupt the 20 imprinted genes they were targeting.

These gene-edited cells were then injected, along with other sperm cells, into egg cells that had had their own nuclei removed. The result was embryonic cells with DNA from two male mice. These cells were then injected into a type of "embryo shell" used in research, which provides the cells required to make a placenta. The resulting embryos were transferred to the uteruses of female mice.

It worked—to some degree. Some of the embryos developed into live pups, and they even survived to adulthood. The findings were published in the journal Cell Stem Cell.

"It's exciting," says Kotaro Sasaki, a developmental biologist at the University of Pennsylvania, who was not involved in the work. Not only have Li and his team been able to avoid a set of imprinting defects, but their approach is the second way scientists have found to create mice using DNA from two males.

The finding builds on research by Katsuhiko Hayashi, now at Osaka University in Japan, and his colleagues. A couple of years ago, that team presented evidence that they had found a way to take cells from the tails of adult male mice and turn them into immature egg cells. These could be fertilized with sperm to create bi-paternal embryos. The mice born from those embryos can reach adulthood and have their own offspring, Hayashi has said.

Li's team's more complicated approach was less successful. Only a small fraction of the mice survived, for a start. The team transferred 164 gene-edited embryos, but only seven live pups were born. And those that were born weren't entirely normal, either. They grew to be bigger than untreated mice, and their organs appeared enlarged. They didn't live as long as normal mice, and they were infertile.

It would be unethical to do such risky research with human cells and embryos. "Editing 20 imprinted genes in humans would not be acceptable, and producing individuals who could not be healthy or viable is simply not an option," says Li.

"There are numerous issues," says Sasaki. For a start, a lot of the technical lab procedures the team used have not been established for human cells. But even if we had those, this approach would be dangerous—knocking out human genes could have untold health consequences.

"There's lots and lots of hurdles," he says. "Human applications [are] still quite far."

Despite that, the work might shed a little more light on the mysterious phenomenon of imprinting. Previous research has shown that mice with two moms appear smaller, and live longer than expected, while the current study shows that mice with two dads are overgrown and die more quickly. Perhaps paternal imprinted genes support growth and maternal ones limit it, and animals need both to reach a healthy size, says Sasaki.

posted by martyb on Thursday January 30, @03:45AM
from the How-about-a-nice-game-of-chess? dept.

Physicist used interaction graphs to show how pieces attack and defend to analyze 20,000 top matches:

The game of chess has long been central to computer science and AI-related research, most notably in IBM's Deep Blue in the 1990s and, more recently, AlphaZero. But the game is about more than algorithms, according to Marc Barthelemy, a physicist at the Paris-Saclay University in France, with layers of depth arising from the psychological complexity conferred by player strategies.

Now, Barthelemy has taken things one step further by publishing a new paper in the journal Physical Review E that treats chess as a complex system, producing a handy metric that can help predict the proverbial "tipping points" in chess matches.

In his paper, Barthelemy cites Richard Reti, an early 20th-century chess master who gave a series of lectures in the 1920s on developing a scientific understanding of chess. It was an ambitious program involving collecting empirical data, constructing typologies, and devising laws based on those typologies, but Reti's insights fell by the wayside as advances in computer science came to dominate the field. That's understandable. "With its simple rules yet vast strategic depth, chess provides an ideal platform for developing and testing algorithms in AI, machine learning, and decision theory," Barthelemy writes.

Barthelemy's own expertise is in the application of statistical physics to complex systems, as well as the emerging science of cities. He realized that the history of the scientific study of chess had overlooked certain key features, most notably how certain moves at key moments can drastically alter the game; the matches effectively undergo a kind of phase transition. The rise of online chess platforms means there are now very large datasets ripe for statistical analysis, and researchers have made use of that, studying power-law distributions, for example, as well as response time distribution in rapid chess and long-range memory effects in game sequences.

For his analysis, Barthelemy chose to represent chess as a decision tree in which each "branch" leads to a win, loss, or draw. Players face the challenge of finding the best move amid all this complexity, particularly midgame, in order to steer gameplay into favorable branches. That's where those crucial tipping points come into play. Such positions are inherently unstable, which is why even a small mistake can have a dramatic influence on a match's trajectory.

A case of combinatorial complexity

Barthelemy has re-imagined a chess match as a network of forces in which pieces act as the network's nodes, and the ways they interact represent the edges, using an interaction graph to capture how different pieces attack and defend one another. The most important chess pieces are those that interact with many other pieces in a given match, which he calculated by measuring how frequently a node lies on the shortest path between all the node pairs in the network (its "betweenness centrality").

He also calculated so-called "fragility scores," which indicate how easy it is to remove those critical chess pieces from the board. And he was able to apply this analysis to more than 20,000 actual chess matches played by the world's top players over the last 200 years.
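
Ars doesn't reproduce the paper's calculations, but the centrality half of the idea is easy to demonstrate. A toy sketch using the networkx library, with an invented mini-position rather than data from the paper:

    import networkx as nx

    # Nodes are pieces; an edge means "attacks or defends".
    # This mini-position is invented for illustration.
    G = nx.Graph()
    G.add_edges_from([
        ("wQ", "bN"), ("bN", "bK"), ("wQ", "wR"), ("wR", "bP"),
        ("bP", "bK"), ("bB", "wR"), ("bB", "bK"),
    ])

    # Betweenness centrality: how often a piece sits on the shortest
    # path between other pieces. High scorers are the load-bearing
    # pieces whose capture or exchange most reshapes the position --
    # the raw ingredient of Barthelemy's fragility score.
    for piece, score in sorted(nx.betweenness_centrality(G).items(),
                               key=lambda kv: -kv[1]):
        print(f"{piece}: {score:.3f}")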

Barthelemy found that his metric could indeed identify tipping points in specific matches. Furthermore, when he averaged his analysis over a large number of games, an unexpected universal pattern emerged. "We observe a surprising universality: the average fragility score is the same for all players and for all openings," Barthelemy writes. And in famous chess matches, "the maximum fragility often coincides with pivotal moments, characterized by brilliant moves that decisively shift the balance of the game."

Specifically, fragility scores start to increase about eight moves before the critical tipping point position occurs and stay high for some 15 moves after that. "These results suggest that positional fragility follows a common trajectory, with tension peaking in the middle game and dissipating toward the endgame," he writes. "This analysis highlights the complex dynamics of chess, where the interaction between attack and defense shapes the game's overall structure."

Physical Review E, 2025. DOI: 10.1103/PhysRevE.00.004300  (About DOIs).


Original Submission

posted by hubie on Wednesday January 29, @11:02PM

Now-fixed web bugs allowed hackers to remotely unlock and start any of millions of Subarus. More disturbingly, they could also access at least a year of cars' location histories—and Subaru employees still can:

About a year ago, security researcher Sam Curry bought his mother a Subaru, on the condition that, at some point in the near future, she let him hack it.

It took Curry until last November, when he was home for Thanksgiving, to begin examining the 2023 Impreza's Internet-connected features and start looking for ways to exploit them. Sure enough, he and a researcher working with him online, Shubham Shah, soon discovered vulnerabilities in a Subaru web portal that let them hijack the ability to unlock the car, honk its horn, and start its ignition, reassigning control of those features to any phone or computer they chose.

Most disturbing for Curry, though, was that they found they could also track the Subaru's location—not merely where it was at the moment but also where it had been for the entire year that his mother had owned it. The map of the car's whereabouts was so accurate and detailed, Curry says, that he was able to see her doctor visits, the homes of the friends she visited, even which exact parking space his mother parked in every time she went to church.

"You can retrieve at least a year's worth of location history for the car, where it's pinged precisely, sometimes multiple times a day," Curry says. "Whether somebody's cheating on their wife or getting an abortion or part of some political group, there are a million scenarios where you could weaponize this against someone."

Curry and Shah today revealed in a blog post their method for hacking and tracking millions of Subarus, which they believe would have allowed hackers to target any of the company's vehicles equipped with its digital features known as Starlink in the US, Canada, or Japan. Vulnerabilities they found in a Subaru website intended for the company's staff allowed them to hijack an employee's account to both reassign control of cars' Starlink features and access all the vehicle location data available to employees, including the car's location every time its engine started, as shown in a video in their post.

[...] Shah and Curry's research that led them to the discovery of Subaru's vulnerabilities began when they found that Curry's mother's Starlink app connected to the domain SubaruCS.com, which they realized was an administrative domain for employees. Scouring that site for security flaws, they found that they could reset employees' passwords simply by guessing their email address, which gave them the ability to take over any employee's account whose email they could find. The password reset functionality did ask for answers to two security questions, but they found that those answers were checked with code that ran locally in a user's browser, not on Subaru's server, allowing the safeguard to be easily bypassed. "There were really multiple systemic failures that led to this," Shah says.
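
The password-reset flaw is a classic class of bug: trusting the browser to perform a security check. A hypothetical sketch (ours, not Subaru's code) of where the comparison has to live, using Flask:

    from flask import Flask, request, abort

    app = Flask(__name__)
    SECURITY_ANSWERS = {"alice@example.com": "blue"}  # demo data only

    @app.post("/reset")
    def reset_password():
        email = request.form["email"]
        answer = request.form["answer"]
        # The comparison happens here, on the server. Shipping the
        # expected answer (or the check itself) to the browser, as
        # described above, lets an attacker skip it entirely.
        if SECURITY_ANSWERS.get(email) != answer:
            abort(403)
        return "reset link sent"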

[...] More unusual in Subaru's case, Curry and Shah say, is that they were able to access fine-grained, historical location data for Subarus going back at least a year. Subaru may in fact collect multiple years of location data, but Curry and Shah tested their technique only on Curry's mother, who had owned her Subaru for about a year.

Curry argues that Subaru's extensive location tracking is a particularly disturbing demonstration of the car industry's lack of privacy safeguards around its growing collection of personal data on drivers. "It's kind of bonkers," he says. "There's an expectation that a Google employee isn't going to be able to just go through your emails in Gmail, but there's literally a button on Subaru's admin panel that lets an employee view location history."

[...] "While we worried that our doorbells and watches that connect to the Internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines," Mozilla's report reads.

Curry and Shah's discovery of Subaru's security vulnerabilities in its tracking demonstrates a particularly egregious exposure of that data—but also a privacy problem that's hardly less disturbing now that the vulnerabilities are patched, says Robert Herrell, the executive director of the Consumer Federation of California, which has sought to create legislation for limiting a car's data tracking.

"It seems like there are a bunch of employees at Subaru that have a scary amount of detailed information," Herrell says. "People are being tracked in ways that they have no idea are happening."


Original Submission

posted by hubie on Wednesday January 29, @06:17PM
from the hopefully-reversed-by-the-time-this-story-posts dept.

Facebook's Internal Policy Makers Decided That Linux is Malware

https://distrowatch.com/weekly.php?issue=20250127#sitenews

Facebook ban

Starting on January 19, 2025 Facebook's internal policy makers decided that Linux is malware and labelled groups associated with Linux as being "cybersecurity threats". Any posts mentioning DistroWatch and multiple groups associated with Linux and Linux discussions have either been shut down or had many of their posts removed.

We've been hearing all week from readers who say they can no longer post about Linux on Facebook or share links to DistroWatch. Some people have reported their accounts have been locked or limited for posting about Linux.

The sad irony here is that Facebook runs much of its infrastructure on Linux and often posts job ads looking for Linux developers.

Unfortunately, there isn't anything we can do about this, apart from advising people to get their Linux-related information from sources other than Facebook. I've tried to appeal the ban and was told the next day that Linux-related material is staying on the cybersecurity filter. My Facebook account was also locked for my efforts.

We went through a similar experience when Twitter changed its name to X - suddenly accounts which had been re-posting news from our RSS feeds were no longer able to share links. This sort of censorship is an unpleasant side-effect of centralized communication platforms such as X, Facebook, Google+, and so on.

In an effort to continue to make it possible for people to talk about Linux (and DistroWatch), as well as share their views and links, we are providing two options. We have RSS news feeds which get updates whenever we post new announcements, stories, and our weekly newsletter. We also now have a Mastodon account where I will start to post updates - at least for new distributions and notice of our weekly newsletter. Over time we may also add news stories and updates about releases. Links for the feeds and the Mastodon account can be found on our contact page.

Meta (Facebook) Begins Blocking Posts Linking to DistroWatch

Apparently Meta, aka Facebook, is now blocking links to DistroWatch. The DistroWatch team made the announcement quoted above in issue 1106, dated 27 January 2025, of its newsletter, DistroWatch Weekly.

This is unfortunate. There are fewer and fewer sites remaining where GNU/Linux can be discussed without restrictions, especially in regard to the F-word. Already on YouTube, Bytedance's TikTok, the orange site, the red site, and to a certain extent the green site, mention of either will have negative repercussions. YouTube consistently demonetizes videos with the string "Linux" in the title. The orange site, owned by anti-FOSS Condé Nast, has long since fired many of its "subredditors" and has been deleting comments and closing accounts if one strays too far off the reservation and mentions either Linux or other topics too close to the F-word.

<sarcasm>It's almost like the subject is taboo or something</sarcasm>.

Thus the discussion is steered back to the allowed subjects and viewpoints and away from Linux, GNU, or above all the F-word. Linux is not the only topic those sites censor, but since too many pretend that open discourse is possible in social control media, the necessary discussions about mass manipulation of public opinion and censorship can't even begin to happen.


Original Submission

Original Submission

posted by hubie on Wednesday January 29, @01:33PM

Backdoor infecting VPNs used "magic packets" for stealth and security:

When threat actors use backdoor malware to gain access to a network, they want to make sure all their hard work can't be leveraged by competing groups or detected by defenders. One countermeasure is to equip the backdoor with a passive agent that remains dormant until it receives what's known in the business as a "magic packet." On Thursday, researchers revealed that a never-before-seen backdoor that quietly took hold of dozens of enterprise VPNs running Juniper Network's Junos OS has been doing just that.

J-Magic, the tracking name for the backdoor, goes one step further to prevent unauthorized access. After receiving a magic packet hidden in the normal flow of TCP traffic, it relays a challenge to the device that sent it. The challenge comes in the form of a string of text that's encrypted using the public portion of an RSA key. The initiating party must then respond with the corresponding plaintext, proving it has access to the secret key.

The lightweight backdoor is also notable because it resided only in memory, a trait that makes detection harder for defenders. The combination prompted researchers at Lumen Technologies' Black Lotus Labs to sit up and take notice.

[...] The researchers found J-Magic on VirusTotal and determined that it had run inside the networks of 36 organizations. They still don't know how the backdoor got installed. Here's how the magic packet worked:

The passive agent is deployed to quietly observe all TCP traffic sent to the device. It discreetly analyzes the incoming packets and watches for one of five specific sets of data contained in them. The conditions are obscure enough to blend in with the normal flow of traffic, so that network defense products won't detect a threat. At the same time, they're unusual enough that they're not likely to be found in normal traffic.

Those conditions are (a sketch of checking one of them in code follows the list):

Condition 1:

  • at offset 0x02 from the start of the TCP options shows the following two-byte sequence: "1366"
  • the TCP options must be at least 4 bytes in size
  • the attacker IP address will be in the "Sequence Number" field of the TCP header
  • the destination port number equals 443

Condition 2:

  • the source port of the TCP header must contain the following two-byte sequence "36429"
  • the attacker IP address will be in the Sequence Number field of the TCP header
  • the destination port number equals 443

Condition 3:

  • the payload data following the IP and TCP headers starts with the four-byte string: Z4vE
  • the attacker IP address will immediately follow the four-byte string: 0x04
  • the attacker port number will immediately follow the IP address at offset 0x08

Condition 4:

  • at offset 0x08 within the TCP header, the option field starts the following two-byte sequence "59020"
  • at offset 0xA within the TCP options starts the attacker IP address
  • the destination port number equals 443

Condition 5:

  • offset 0x08 within the TCP options starts the following two-byte sequence "59022"
  • offset 0xA within the TCP options starts the attacker IP address
  • the attacker port number will follow the attacker IP at offset 0x0E from the start of the TCP option
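
None of Black Lotus Labs' code is reproduced here, but condition 2 is compact enough to sketch. A hypothetical Python matcher against a raw TCP header, assuming only the standard header layout:

    import struct

    def check_condition_2(tcp_header: bytes):
        # Standard TCP layout: source port (2 bytes), destination
        # port (2 bytes), sequence number (4 bytes).
        src_port, dst_port, seq = struct.unpack("!HHI", tcp_header[:8])
        if src_port != 36429 or dst_port != 443:
            return None
        # Condition 2: the callback IP rides in the sequence number field.
        return ".".join(str(b) for b in seq.to_bytes(4, "big"))

    # Toy header: src=36429, dst=443, sequence number encoding 203.0.113.7.
    hdr = struct.pack("!HHI", 36429, 443,
                      int.from_bytes(bytes([203, 0, 113, 7]), "big"))
    print(check_condition_2(hdr))  # -> 203.0.113.7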

Black Lotus Labs wrote:

If any of the remote IP addresses match on one of the five predefined conditions above, it moves to spawn a reverse shell. The reverse_shell function forks, creating a child process and renames it to [nfsiod 1]. Next it enters a loop that will connect back to the IP and port retrieved from the packet filter, using SSL. It creates a random alphanumeric string that is five characters long. This random string is encrypted using a hardcoded public RSA key.

It sends the encrypted five-character string as a challenge to the supplied IP/port combo. The response from the IP is compared to the previously created random string. If they are not equal, the connection is closed. If the strings are equal, then a shell is created with the command prompt "" until it receives the exit command. This would allow them to run arbitrary commands on the impacted device.
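
That challenge/response flow can be reconstructed as a sketch (ours, not J-Magic's actual code) using the Python cryptography package. The real backdoor ships only the hardcoded public key; a fresh pair is generated here to keep the example self-contained:

    import secrets
    import string

    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # J-Magic hardcodes a public key; generating a pair here keeps the
    # example self-contained.
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    # Backdoor side: random five-character alphanumeric challenge,
    # sent encrypted with the public key.
    alphabet = string.ascii_letters + string.digits
    challenge = "".join(secrets.choice(alphabet) for _ in range(5))
    ciphertext = public_key.encrypt(challenge.encode(), padding.PKCS1v15())

    # Operator side: only the private-key holder can recover the string.
    response = private_key.decrypt(ciphertext, padding.PKCS1v15()).decode()

    # Backdoor side: spawn the reverse shell only on an exact match.
    print("match" if response == challenge else "close connection")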

The reason for the RSA challenge in J-Magic is likely to prevent other attackers from spraying magic packets all over the Internet to enumerate infected networks and then use the backdoor for their own competing purposes. Black Lotus Labs said a backdoor used in 2014 by Russian-state threat group Turla also used such a challenge.

Magic packets give backdoors more stealth because the malware doesn't need to open a specific port to listen for incoming connections. Defenders routinely scan their networks for such ports. If they spot an open port they don't recognize, it's likely the infection will be detected. Backdoors like J-Magic listen to all incoming data and search for tiny specks of it that meet certain conditions.

[...] Black Lotus has determined that J-Magic was active from mid-2023 until at least mid-2024. Targets came from a wide array of industries, including semiconductor, energy, manufacturing, and IT verticals.


Original Submission

posted by Fnord666 on Wednesday January 29, @08:49AM
from the You’re-a-Bad-Driver-if-Tesla-Full-Self-Driving-Is-Better-Than-You dept.

Motor Trend reports on FSD in their long-term test 2023 Model Y: https://www.motortrend.com/reviews/2024-tesla-model-y-long-range-yearlong-review-update-9-full-self-driving-fsd-version-13/ They weren't too impressed with the first software version that came with the car:

I'm not using FSD because it's bringing me much utility or, much less, enjoyment. I'm using it because we paid $15,000 for this software (it costs $8,000 today), and I'm going to do my job and report on how it works, dammit.

That's despite FSD giving me many, many reasons to forsake it, to decide that my safety and sanity are worth more than what it cost. Yet the Full Self Driving note I created in my phone to log the system's transgressions has an ever-increasing abundance of entries as we pile the miles onto our Model Y.

Dumb and Dangerous Decisions
For example, there was the time it failed to recognize an increased speed limit sign and continued bumbling along at a 15-mph deficit. In stark contrast, later in that same drive, it detected a 55-mph speed limit sign specific to vehicles towing (which it wasn't), decelerating from 75 mph so rapidly that traffic behind had to swerve around the Model Y. Moments later, FSD decided to change lanes to follow the navigation route toward its next turn—still some 10 miles away—cutting someone off in the process.
[...]
On a different day, FSD deviated from my navigation route because it neglected to recognize that the lane it was occupying became right turn only. After making that turn, it wanted to correct its error and resume the route by making a U-turn at the next intersection, where a "No U-Turn" sign was clearly posted. It tried to make its illegal U-turn from the right side of that double-protected left turn, such that if I hadn't intervened it would've overlapped with the vehicle turning from the left-side lane.
[...]
FSD's errors aren't always dangerous. More often, they're just asinine. Like when our Model Y didn't react to a green arrow for a protected right turn, inconveniencing me and drivers behind. Or entered a packed intersection as a yellow light expired, coming to a stop inside a crosswalk. Or braked hard after it accelerated up a freeway on-ramp because it detected an inactive traffic control signal. Or when it encountered an unexpected road closure and drove around the same block three times because each time it arrived back at the closure it failed to recalculate its route. Who knows how long FSD might've kept circling had I not turned it off.

Then the car updated itself to FSD v13, and while the general experience was smoother, there were still problems.

Twice in a week, FSD 13 completely missed freeway exits because it shifted over too late to negotiate fitting in with other traffic. Those bungles are bizarre given FSD 13's tendency to otherwise move toward exits literal miles early, often abdicating the fast-moving left lane to fall behind slow vehicles to the right—vehicles it could've passed had it stayed put for longer. When merging onto a freeway, it will stubbornly attempt to fit into a gap regardless of whether those other closest drivers seem willing to allow it, ignoring suitable openings immediately ahead or behind. Generally, its lane shifting strategy is poor; I've counted as many as eight back-and-forth lane changes within a minute as FSD 13 tries to figure out what to do. The day this article was due, it veered toward a wall as two lanes converged into one.

If simple cruise control was the genesis for any self-driving tech, then FSD 13 represents an ignominious legacy as it struggles to maintain speeds. Once, it gradually relaxed its cruise speed to 64 mph in a 65 mph zone, when the maximum speed I allowed for it was 75 mph. There was absolutely no traffic, faulty input, or other detectable reason for this slowing. At other times, it doesn't keep up with leading traffic accelerating to the speed limit, driving slower than necessary and letting a gap ahead grow even when the Hurry logic is selected.

One would also hope that software ostensibly aware of the physical dimensions and dynamic abilities of the Tesla it controls would know how to avoid hitting markers along freeway curves and keep itself centered in the lane. Not FSD 13.

I believe this long-term test car is being used in LA and surrounding areas. YMMV if you live elsewhere.


Original Submission

posted by Fnord666 on Wednesday January 29, @04:03AM
from the playing-with-things-we-don't-understand dept.

Technology is advancing at an exponential rate, but we have very little ability to control it if something goes horribly wrong. Many experts are warning that some of the new technologies that are being developed right now represent very serious existential threats to humanity. In other words, they believe that we could literally be creating technology that could wipe us out someday. Unfortunately, the scientific community is not showing any restraint at all. If something is possible, they want to try to do it. All over the globe, hordes of mad scientists are feverishly rushing into the unknown, and it is quite likely that the consequences will be horrific. The following are 5 super creepy new technologies that should chill all of us to the core:

#1 Scientists in China have been able to get AI models to create "functioning replicas of themselves"...

[...] #2 Do you remember Operation Warp Speed? That was a public-private partnership that was initiated during the first Trump administration, and we all know how that turned out.

Now another public-private partnership that has been dubbed "Stargate" is supposed to greatly accelerate the development of AI in the United States...

[...] #3 Does creating an "artificial sun" sound like a good idea? Unfortunately, the Chinese have actually created such a thing, and they just set a new record by running it for 1,066 seconds...

[...] #4 Anyone that has watched Jurassic Park knows that bringing back ancient species that have gone extinct is a really bad idea. But now a company called Colossal BioSciences plans to do exactly that...

[...] #5 A whistleblower has told Joe Rogan that the U.S. military has mastered anti-gravity propulsion that is based on recovered alien technology...


Original Submission