
SoylentNews is people





posted by hubie on Thursday May 29, @08:30PM

Arthur T Knackerbracket has processed the following story:

Russia's official customs data suggests the country's once-thriving market for US-made processors has nearly disappeared. Figures from the Federal Customs Service (FCS), reported by Russian publication Kommersant, show Intel CPU imports fell by 95 percent last year compared to the previous year. By comparison, AMD shipments dropped by 81 percent. That amounts to just 37,000 CPUs total – a steep decline from 537,000 units in 2023.

Executives in Russia's tech manufacturing sector paint a different picture. Leaders at major domestic assemblers like Lotos Group and Rikor told Kommersant that processor deliveries are not only continuing but increasing. Rikor reports purchasing over 120,000 processors last year – about 30 percent more than the year before. Many Russian tech firms also say chip supplies have improved for the third consecutive year.

Sanctions enforcement is struggling to keep up with a growing number of workarounds. Hong Kong remains a key hub in this network, with one address reportedly managing billions of dollars in smuggled semiconductors. Meanwhile, other chips enter Russia through countries like Malaysia and India, often relabeled or bundled within broader product categories that conceal their true nature from customs officials.

Industry insiders say many processors arrive without being labeled as such. A Russian tech executive told Kommersant that the word "processor" often doesn't appear on delivery sheets. This practice helps explain why the Federal Customs Service's import numbers look so anemic, even though factory shelves remain well-stocked.

It's not all smooth sailing, however. Suppliers warn Russian buyers to expect a 10 to 12 percent price increase in 2025, citing inflation and ongoing tensions in US-China trade relations as key factors. Still, prices for mainstream processors have remained relatively stable for the time being.


Original Submission

posted by janrinok on Thursday May 29, @03:48PM

Arthur T Knackerbracket has processed the following story:

A team of scientists has unveiled a breakthrough that could one day propel computers to operate at speeds millions of times faster than today's most advanced processors.

The discovery, led by researchers at the University of Arizona and their international collaborators, centers on harnessing ultrafast pulses of light to control the movement of electrons in graphene – a material just one atom thick.

The research, recently published in Nature Communications, demonstrates that electrons can be made to bypass barriers almost instantaneously by firing laser pulses lasting less than a trillionth of a second at graphene. This phenomenon, known as quantum tunneling, has long intrigued physicists, but the team's ability to observe and manipulate it in real time marks a significant milestone.

Mohammed Hassan, an associate professor of physics and optical sciences at the University of Arizona, explained that this advance could usher in processing speeds in the petahertz range – over a thousand times faster than the chips powering today's computers. Such a leap, he said, would transform the landscape of computing, enabling dramatic progress in fields ranging from artificial intelligence and space research to chemistry and health care.

Hassan, who previously led the development of the world's fastest electron microscope, worked alongside colleagues from the University of Arizona, the California Institute of Technology's Jet Propulsion Laboratory, and Ludwig Maximilian University of Munich. Their initial focus was studying how graphene conducts electricity when exposed to laser light. Typically, the symmetrical structure of graphene causes the currents generated on either side to cancel each other out, resulting in no net current.

However, the team made a surprising discovery after modifying the graphene samples. They observed that a single electron could "tunnel" through the material – and that this fleeting event could be captured in real time. This unexpected result prompted further investigation and ultimately led to the creation of what Hassan calls "the world's fastest petahertz quantum transistor."

To achieve this, the scientists used a commercially available graphene phototransistor, enhanced with a special silicon layer. They exposed it to a laser signal switching on and off within an astonishing 638 attoseconds – each attosecond being one quintillionth of a second. The result was a transistor capable of operating at petahertz speeds, a feat previously considered far beyond reach.

Unlike many scientific breakthroughs that require highly controlled laboratory environments, this new transistor functioned in everyday, ambient conditions. This opens the door for the technology to be adapted for commercial use and integrated into future generations of electronic devices.

Hassan and his team are now working with Tech Launch Arizona to patent and commercialize their invention. Their next goal is to develop a version of the transistor that operates using standard, commercially available lasers, making the technology more accessible to industry partners.

Journal Reference: Sennary, M., Shah, J., Yuan, M. et al. Light-induced quantum tunnelling current in graphene. Nat Commun 16, 4335 (2025). https://doi.org/10.1038/s41467-025-59675-5


Original Submission

posted by janrinok on Thursday May 29, @11:06AM

Prediction: General-purpose AI could start getting worse:

Opinion: I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.

Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I've noticed that AI-enabled search, too, has been getting crappier.

In particular, I'm finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission's (SEC) mandated annual financial reports for public companies, I get numbers from sites purporting to summarize business reports. These bear some resemblance to reality, but they're never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get... interesting.

This isn't just Perplexity. I've done the exact same searches on all the major AI search bots, and they all give me "questionable" results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? A Nature 2024 paper stated, "The model becomes poisoned with its own projection of reality."

Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. Next, there is the loss of tail data: In this, rare events are erased from training data, and eventually, entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.
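The "loss of tail data" factor is easy to demonstrate with a toy simulation (our sketch, not from the Nature paper): model each generation's training-on-its-own-output as resampling from the previous generation's corpus. Tokens that fall out of one generation can never reappear in the next, so diversity only shrinks.

```python
import random

def next_generation(corpus, n_samples, rng):
    """One train-on-own-output step, modeled as resampling from the
    empirical distribution of the previous generation's corpus."""
    return [rng.choice(corpus) for _ in range(n_samples)]

def simulate_collapse(vocab_size=50, n_samples=300, generations=40, seed=0):
    rng = random.Random(seed)
    # Generation 0: samples drawn uniformly over the full vocabulary.
    corpus = [rng.randrange(vocab_size) for _ in range(n_samples)]
    support = [len(set(corpus))]  # distinct tokens surviving each generation
    for _ in range(generations):
        corpus = next_generation(corpus, n_samples, rng)
        support.append(len(set(corpus)))
    return support

support = simulate_collapse()
# A token absent from one generation can never be redrawn later, so
# the support (diversity) is monotonically non-increasing -- the
# "loss of tail data" in miniature.
assert all(b <= a for a, b in zip(support, support[1:]))
print(f"distinct tokens: gen 0 = {support[0]}, gen 40 = {support[-1]}")
```

Real model collapse involves fitted models rather than raw resampling, but the same ratchet applies: rare events drop out first and never come back.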

I like how the AI company Aquant puts it: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."

I'm not the only one seeing AI results starting to go downhill. In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, produced bad results when fed over 5,000 harmful prompts.

[...] As Amanda Stent, Bloomberg's head of AI strategy & research in the office of the CTO, explained: "This counterintuitive finding has far-reaching implications given how ubiquitously RAG is used in gen AI applications such as customer support agents and question-answering systems. The average internet user interacts with RAG-based systems daily. AI practitioners need to be thoughtful about how to use RAG responsibly."

That sounds good, but a "responsible AI user" is an oxymoron. For all the crap about how AI will encourage us to spend more time doing better work, the truth is AI users write fake papers including bullshit results. This ranges from your kid's high school report to fake scientific research documents to the infamous Chicago Sun-Times best of summer feature, which included forthcoming novels that don't exist.

[...] Some researchers argue that collapse can be mitigated by mixing synthetic data with fresh human-generated content. What a cute idea. Where is that human-generated content going to come from?

Given a choice between good content that requires real work and study to produce and AI slop, I know what most people will do. It's not just some kid wanting a B on their book report of John Steinbeck's The Pearl; it's businesses eager, they claim, to gain operational efficiency, but really wanting to fire employees to increase profits.

Quality? Please. Get real.

We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it.

How long will it take? I think it's already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI's leader and cheerleader, Sam Altman, who tweeted in February 2024 that "OpenAI now generates about 100 billion words per day," and we presume many of those words end up online, it won't take long.


Original Submission

posted by kolie on Thursday May 29, @06:15AM
from the cosmic-lost-and-found dept.

Arthur T Knackerbracket has processed the following story:

Just when you thought you knew all the worlds in the solar system, astronomers go and discover a new object that could rewrite the space map. 

This icy world, temporarily named 2017 OF201, could be a distant cousin of Pluto — and scientists mean "distant" quite literally. At its farthest point, it's more than 1,600 times the distance of Earth from the sun. At its closest, it's still 44.5 times farther than Earth.

What makes 2017 OF201 stand out is its very stretched-out path around the sun, which takes an incredible 25,000 Earth-years to complete. For comparison, Pluto makes a lap around the sun every 248 Earth-years. 
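Those figures hang together, as a back-of-envelope check with Kepler's third law shows (our arithmetic, not from the article; the semi-major axis is estimated by averaging the quoted closest and farthest distances):

```python
# Kepler's third law for solar orbits: P [years] ~= a**1.5, with the
# semi-major axis a in AU (Earth-sun distances).
aphelion_au = 1600.0   # "more than 1,600 times the distance of Earth"
perihelion_au = 44.5   # closest approach

semi_major_au = (aphelion_au + perihelion_au) / 2
period_years = semi_major_au ** 1.5

print(f"a ~ {semi_major_au:.0f} AU, P ~ {period_years:,.0f} years")
# With the article's rounded distances this lands near the quoted
# ~25,000-year period; Pluto, at a ~ 39.5 AU, gives the familiar
# ~248 years the same way.
pluto_period = 39.5 ** 1.5
```

The slight shortfall versus 25,000 years is expected, since both quoted distances are rounded.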

How this world got to the edge of the solar system is a mystery — perhaps the result of close encounters with a giant planet like Jupiter or Neptune that tossed it out into a wide orbit. Or maybe when it was originally ejected, it ended up in the so-called Oort Cloud before returning. The Oort Cloud is thought to be a sphere of ancient, icy objects surrounding the solar system. NASA says the cloud remains a theory because the comets there have been too faint and distant to be directly observed.

The International Astronomical Union’s Minor Planet Center, which catalogs new moons and other small bodies in the solar system, announced the discovery on May 21. At roughly 435 miles wide, 2017 OF201 could qualify as a dwarf planet, the same designation Pluto has had since its demotion from ninth planet in 2006. 

"Even though advances in telescopes have enabled us to explore distant parts of the universe," said Sihao Cheng, the Institute for Advanced Study researcher who led the discovery, in a statement, "there is still a great deal to discover about our own solar system."

Cheng, along with Princeton University graduate students, found the possible dwarf planet while searching for a potential "Planet 9," a hypothetical hidden world whose gravitational effects could be responsible for a strange clustering of far-flung objects beyond Neptune.

The team used computer programs to look through years of space pictures taken by the Victor M. Blanco Telescope in Chile and the Canada France Hawaii Telescope. By connecting bright spots that moved slowly across the sky, they were able to identify the object.

But 2017 OF201 is a strange outlier because it doesn’t follow the clustering pattern of other trans-Neptunian objects.

"The existence of 2017 OF201 might suggest that Planet 9 or X doesn't exist," said Jiaxuan Li, one of the collaborators, on his personal website. Their research is available now on the arXiv pre-print server. 

The discovery also challenges many scientists' notion of the outer solar system. The area beyond the Kuiper Belt, where the object is located, has previously been thought of as fairly empty. NASA's New Horizons probe, which snapped pictures of Pluto and its moons in 2015, has since traveled to more than double that distance, though surprisingly, it still hasn't reached the edge of the belt.

That could mean the spacecraft will travel billions more miles before reaching interstellar space, a region that is no longer influenced by the sun's radiation and particles. In 2019, New Horizons snapped photos of an icy red dumbbell-shaped thing, named Arrokoth, the farthest object a spacecraft has ever encountered.

If 2017 OF201 only spends 1 percent of its orbit close enough for people to detect it, that may imply what lies outside the Kuiper Belt is not so empty after all. 

"The presence of this single object suggests that there could be another hundred or so other objects with similar orbit and size," Cheng said. "They are just too far away to be detectable now."


Original Submission

posted by kolie on Thursday May 29, @01:30AM
from the should-have-been-bare-knuckles dept.

Unitree director Wang Qixin says the robotics company used AI and motion capture to train the robots on real fight moves:

Four artificial intelligence-enhanced robots have been put through their paces in a Chinese robot fighting competition, duking it out in kickboxing matches until one was declared the champion.

The World Robot Competition Mecha Fighting Series had four human-controlled robots built by China-based firm Unitree compete in three two-minute rounds, with winners crowned through a points system, according to a May 26 report from the Chinese state-owned outlet the Global Times.

[...] The robots reportedly weighed 35 kilograms and stood 132 centimeters tall. Ahead of the boxing rounds, the pint-sized robots were put through tests to demonstrate a variety of kicks and punches and assist the organizers in refining the rules.

The team with the highest points across the three rounds moves on to fight another opponent. A punch to the head was worth one point, and a kick to the head was worth three. Teams lost five points if their robot fell and 10 points if their robot was down for over eight seconds.
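The scoring rules quoted above are simple enough to sketch as a function (a toy illustration of the reported point values; the event names are ours):

```python
# Point values per the Global Times report: punch to the head = 1,
# kick to the head = 3, a fall = -5, down for over eight seconds = -10.
POINTS = {
    "head_punch": 1,
    "head_kick": 3,
    "fall": -5,
    "down_over_8s": -10,
}

def round_score(events):
    """Total one robot's score for a round from a list of event names."""
    return sum(POINTS[event] for event in events)

# Hypothetical round: two punches, one kick, one fall nets zero points.
score = round_score(["head_punch", "head_punch", "head_kick", "fall"])
assert score == 0  # 1 + 1 + 3 - 5
```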

[...] Chen Xiyun, a Unitree team member, said the “robots fight in a human-machine collaborative way,” with the machines pre-taught moves, but ultimately, a person controls the bot’s movements.


Original Submission

posted by kolie on Wednesday May 28, @08:49PM
from the intel-inside-tm-job dept.

Arthur T Knackerbracket has processed the following story:

An insider and an outsider allegedly colluded to embezzle over $840,000 from Intel.

Israeli news source Calcalist has reported that Intel Israel has initiated legal action against Natalia Avtsin, a former employee, and Yafim Tsibolevsky, a previous component supplier, for their alleged conspiracy to embezzle over NIS 3 million, approximately $842,000. This embezzlement allegedly took place between October 2023 and November 2024, remaining undetected until Intel exposed the fraud.

Avtsin was employed in Intel Israel's hardware production department until her dismissal in November 2024. Intel stated that her termination was part of a strategy to reduce operations in Israel and was unrelated to her alleged crimes, which were still undiscovered at that time. In September 2023, Tsibolevsky registered as an authorized dealer under the name "Energy Electronics 2000" and subsequently became an official Intel supplier the following month.

Avtsin and Tsibolevsky's operation began with Avtsin asking Tsibolevsky for price quotes on hardware components. Avtsin then sent the quotes to her manager for approval, but allegedly altered the transaction classification afterward, changing it from "components" to "services" – a change that bypasses essential verification protocols. Logically, only an insider would know that such a reclassification sidesteps these security checks.

Intel Israel informed Calcalist that payments for services were less strict compared to payments for components. For service payments, a signed delivery note or confirmation receipt was not required. With no verification barriers, Tsibolevsky could submit invoices and receive payments at his convenience.

Classifying the purchases as "services" should not, in principle, have let Tsibolevsky escape scrutiny, since Energy Electronics 2000 was never registered with Intel as a service provider. To evade detection, Tsibolevsky issued invoices of $20,000 or less, matching Avtsin's transaction approval limit. Once more, Tsibolevsky would likely have been unaware of this limit without an insider's involvement.

Intel Israel's investigation suggests possible third-party involvement. Certain transactions appear to have been processed through Levanon Kogan, a company providing purchasing services to firms that are not registered with Intel. The chipmaker has not accused Levanon Kogan of any wrongdoing, but these activities appear to overlap with Avtsin and Tsibolevsky's scheme.

In certain operations, Avtsin obtained a quote from Tsibolevsky and asked Levanon Kogan to pay him. As with the other transactions, she allegedly reclassified these orders from components to services. Intel found that as many as 30 counterfeit orders were processed via Levanon Kogan, amounting to over NIS 2 million, or approximately $561,000.

Intel is suing Avtsin and Tsibolevsky in the Haifa District Court, asking the defendants to repay the stolen funds and any profits they made from them.


Original Submission

posted by kolie on Wednesday May 28, @04:04PM
from the hd-drizzle-upgrade dept.

Arthur T Knackerbracket has processed the following story:

The Sun’s outer atmosphere—the corona—is the piping hot outer limit of our star, and is usually hidden from view except during rare total eclipses. Now, scientists have gotten their clearest look ever at this mysterious region, thanks to a new adaptive optics system that scrubs away atmospheric blur, revealing fine views of the wispy plasma on the star’s surface.

Researchers from the National Solar Observatory and New Jersey Institute of Technology unveiled the system today, along with dazzling new images and videos of the Sun’s corona. The findings, published in Nature Astronomy, show fine-scale structures in solar prominences, short-lived plasma jets called spicules, and even coronal rain: cooling plasma that falls back to the solar surface along the star’s magnetic field lines.

The team’s imaging breakthrough hinges on a technology called coronal adaptive optics. Installed on the 5.25-foot (1.6-meter) Goode Solar Telescope in California, the new system—nicknamed “Cona”—adjusts a mirror 2,200 times per second to correct for distortions caused by the churn of Earth’s atmosphere. The remarkable technology counterbalances any would-be wobble in the telescope, thereby producing particularly sharp images of the corona.

“This technological advancement is a game-changer,” said Dirk Schmidt, an adaptive optics scientist at NSO and the study’s lead author, in an observatory release. “There is a lot to discover when you boost your resolution by a factor of 10.”

Until now, solar telescopes have used adaptive optics mainly to study the Sun’s surface, the release stated. Observing the fainter corona has remained a challenge, with coronal features blurred to scales of 621 miles (1,000 kilometers)—a limit that’s existed for 80 years. But Cona now resolves features down to just 39 miles (63 km), the theoretical limit of the Goode telescope.
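That 39-mile figure squares with simple diffraction arithmetic (our back-of-envelope check; the observing wavelength is an assumption, since the article doesn't give one):

```python
# Check that the quoted 39-mile (63 km) limit is roughly the
# diffraction limit of a 1.6 m telescope at visible wavelengths.
# Assumed inputs (not from the article): lambda = 550 nm, Sun at 1 AU.
wavelength_m = 550e-9
aperture_m = 1.6
sun_distance_m = 1.496e11  # one astronomical unit, in meters

# Rayleigh criterion: smallest resolvable angle, in radians.
theta_rad = 1.22 * wavelength_m / aperture_m
resolution_km = theta_rad * sun_distance_m / 1000

print(f"~{resolution_km:.0f} km resolved on the solar surface")
```

With these assumptions the result comes out close to the quoted 63 km, i.e. Cona is pushing the Goode telescope to its physical limit rather than being held back by the atmosphere.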

Among the new footage captured by the team are shots of a twisting solar prominence reshaping in real time, spicules flickering on the surface, and fine, hair-like strands of coronal rain narrower than 12.5 miles (20 km). When you consider how far the Sun is from Earth, how faint the corona is relative to the rest of the star, and how much of Earth’s turbulent atmosphere the team had to cut through and correct for, the sharpness of the images is a triumph.

“This transformative technology, which is likely to be adopted at observatories world-wide, is poised to reshape ground-based solar astronomy,” said study co-author Philip Goode, a physicist at NJIT-CSTR, in the same release. “With coronal adaptive optics now in operation, this marks the beginning of a new era in solar physics, promising many more discoveries in the years and decades to come.”

The observations offer crucial data for unraveling enduring solar mysteries—like why the corona is millions of degrees hotter than the solar surface.

The team plans to bring the coronal adaptive optics technology to the 13-foot (4-meter) Daniel K. Inouye Solar Telescope in Hawaiʻi—potentially revealing even smaller details of the Sun’s atmosphere.


Original Submission

posted by kolie on Wednesday May 28, @11:19AM
from the iCant-make-it-here dept.

Arthur T Knackerbracket has processed the following story:

US President Donald Trump can huff, puff, and threaten to blow Tim Cook's house down with a 25 percent iPhone import tariff, but analysts say even that threat is unlikely to bring Apple's manufacturing home.

In response to Trump's statement last week, analysts from Morgan Stanley published a research brief on Tuesday that concluded Apple is unlikely to respond to Trump's latest tariff threat in a way that will please him. 

The report, provided to The Register, concluded that the original 145 percent tariff imposed by Trump on certain imports from China last month might have made Apple budge on the matter, but since the President lost his international staredown and promised to reduce that rate, the economics no longer make sense for Cupertino.

According to the Morgan Stanley number crunchers, an iPhone manufactured in the United States would be at least 35 percent more expensive than one made overseas when accounting for tariffs on single-source components still made in China and higher US labor costs. That means a $999 iPhone would be $1,350 - at a minimum - if Apple wanted to retain a similar gross margin.

[A] 25 percent tariff will have no effect; it will need to be many times higher to compensate for the local production cost

With a 25 percent tariff on iPhone imports from China or India in place, on the other hand, Apple would need to increase prices on iPhones by only four to six percent globally to keep profits up. 
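The arithmetic behind that four-to-six percent figure is easy to sketch. The inputs below are illustrative assumptions (US share of unit sales, hardware cost as a share of retail price), not Morgan Stanley's actual model:

```python
# Why a 25% US import tariff may need only a single-digit global price
# rise: the tariff applies to the device's cost, only on US-bound
# units, and the hit can be spread across every unit sold worldwide.
# All three inputs are assumptions for illustration.
tariff = 0.25         # US import tariff rate on dutiable cost
us_unit_share = 0.35  # assumed share of iPhones sold in the US
cost_ratio = 0.55     # assumed hardware cost as fraction of retail price

global_increase = tariff * us_unit_share * cost_ratio
print(f"global price increase to offset tariff: {global_increase:.1%}")
```

With these stand-in numbers the answer lands inside Morgan Stanley's four-to-six percent band, which is why a 25 percent tariff barely moves the needle compared with a 35-percent-more-expensive US-built phone.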

Canalys smartphone and IoT analyst Runar Bjorhovde agreed with Morgan Stanley's analysis in an email to The Register. "[A] 25 percent tariff will have no effect; it will need to be many times higher to compensate for the local production cost," Bjorhovde told us. 

In other words, nice try, Mr. President, but those threats will need to be more serious. 

In further comments on LinkedIn looking at what it would take for Apple to onshore iPhone production for the US market, Bjorhovde also agreed that there's a lot more to the picture than tariffs.

If Apple decided to cave to Trump's demands, it would have to build new US factories, train a bunch of new workers to manufacture iPhones, and deal with "surging assembly and testing costs" in addition to the aforementioned labor and component costs, said Bjorhovde. 

Factor all that in, says Morgan Stanley, and we're looking at a minimum of two years before Apple could build, equip, staff, and start assembling iPhones at a new US-based greenfield plant. Apple would need more than one factory to meet US iPhone demand, Morgan Stanley predicted, and would have to find more than 100,000 people "skilled in highly precise tooling equipment" to meet peak-period demand. 

"Both of these facts present significant challenges in time to market," Morgan Stanley said. 

More realistically, the investment bank said, it would take four or more years to get production going if we consider the case of TSMC's new Arizona chip fab. That facility began construction in 2020 and only came online late last year, Morgan Stanley noted. 

If an Apple commitment followed the same timeline, the first US-built iPhone might not reach consumers until after President Trump leaves office

"If an Apple commitment followed the same timeline, the first US-built iPhone might not reach consumers until after President Trump leaves office," the bankers concluded - perfect timing for the next White House occupant to take credit.

Bjorhovde is even less optimistic. 

"I think we will look at a three-to-five-year investment minimum to get any production capacity to the US," the Canalys analyst told us, and even that timeline comes with a number of caveats. Most notably, Apple would have to find a way to import iPhone manufacturing experts from China to the United States "backed by an investment from, for example, Foxconn," Bjorhovde said. 

Let's not forget that Trump can't simply wave his hands and implement targeted tariffs, either. Morgan Stanley pointed out that smartphones are currently exempted from Trump's various on-again, off-again tariff decrees, giving the US just two options to enact Trump's weekend decree. One option is through the International Emergency Economic Powers Act, which gives the President the power to levy tariffs if a national emergency is declared. The second comes through a section 232 study, which seeks to establish national security risks of manufacturing stuff outside the US.

"Both options face legal headwinds," Morgan Stanley said. "A Section 232 investigation could have firmer standing given the administration is already evaluating semiconductor tariffs via this route." 

Steve Jobs said it, we've said it, and we'll say it again: iPhones just aren't ever going to be a made-in-the-USA product. That doesn't mean Apple won't try to do something else to appease Trump, though. 

If Apple ignores Trump's 25 percent tariff and declines its accompanying demand to bring iPhone manufacturing to the US, "Tim Cook's status with the current administration [will] deteriorate," Morgan Stanley predicted. Apple may also face further tariff threats, the bank predicted, which could further worry spooked investors.

Apple has already pledged to invest $500 billion in the US over four years, spanning areas like AI, chips, and workforce training, but not iPhone manufacturing, signaling it's still willing to play ball with Washington. Morgan Stanley predicted it might do more that doesn't involve messing with the margins on its top product.

Morgan Stanley believes it would make sense for Tim Cook to announce reshoring some "smaller products," suggesting Macs, HomePods, AirTags, and other products could be made in the US with much less investment. 

It's "not as symbolic as the iPhone," Morgan Stanley said, but it would be a win for both Trump and Apple. The former "gets the largest electronics company in the world to commit, publicly, to new US production," the bank noted, while Apple reduces "geopolitical threats at home." 


Original Submission

posted by hubie on Wednesday May 28, @06:34AM

Arthur T Knackerbracket has processed the following story:

The internet has seen its fair share of weird, but a Star Wars fan site secretly run by the CIA to communicate with overseas spies might top the list. StarWarsWeb.net looked like any other 2010-era fan page, complete with lightsabers, Yoda quotes ("Like these games you will"), LEGO ads, and hyped-up mentions of games like Battlefront 2 and The Force Unleashed II. But behind that nostalgic facade was a covert login system. If you entered the right password into the search bar, you'd unlock a secure line to CIA handlers. Or at least, that was the plan.

This bizarre piece of intel comes courtesy of Ciro Santilli, an independent researcher with a knack for rooting around the dusty corners of the web, who spoke to 404media.

Santilli took it upon himself to dig deeper after a Reuters investigation titled "America's Throwaway Spies" revealed a handful of suspicious domains back in 2022. Armed with little more than open-source tools, web-dev know-how, and apparently endless patience, he ended up uncovering hundreds of similar sites.

As it turned out, the Star Wars page was just one star in a galaxy of CIA-run covert communication sites.

There were comedy pages, extreme sports sites, and even a Brazilian music fan page. Some were clearly geared toward users in states like Iran and China, where their discovery led to devastating consequences, including the execution of CIA sources around 2011-2012. But others appeared to target France, Germany, Spain, and Brazil.

The fatal flaw, according to both Santilli and the Reuters report, was that many of the sites were sloppily coded, reusing sequential IP addresses or leaving other easily traceable breadcrumbs. Once one site was found, identifying others was often just a matter of basic detective work. This was something Iranian and Chinese counterintelligence teams apparently figured out over a decade ago. You can read about this in more detail in Santilli's writeup.
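The "basic detective work" can be sketched in a few lines: given one known covert-site address, adjacent IPs immediately become candidates. A minimal illustration using the standard library's ipaddress module and documentation-range addresses (not any real infrastructure):

```python
import ipaddress

def sequential_clusters(addresses, max_gap=1):
    """Group IPv4 addresses that sit within `max_gap` of each other --
    the sequential-allocation pattern that let one exposed site point
    investigators at its neighbors."""
    nums = sorted(int(ipaddress.IPv4Address(a)) for a in set(addresses))
    clusters, current = [], [nums[0]]
    for n in nums[1:]:
        if n - current[-1] <= max_gap:
            current.append(n)       # adjacent: same cluster
        else:
            clusters.append(current)
            current = [n]           # gap: start a new cluster
    clusters.append(current)
    return [[str(ipaddress.IPv4Address(n)) for n in c] for c in clusters]

# Hypothetical inputs (RFC 5737 documentation ranges): three adjacent
# addresses form one suspicious cluster; the fourth stands alone.
found = sequential_clusters(
    ["198.51.100.10", "198.51.100.11", "198.51.100.12", "203.0.113.5"])
assert found == [["198.51.100.10", "198.51.100.11", "198.51.100.12"],
                 ["203.0.113.5"]]
```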

Despite the deadly fallout, this digital forensics saga is now a cold case. Santilli described it as being "like a museum," saying that thanks to the Wayback Machine, people can still go back and view the site.

All said, fifteen years later, the CIA's attempt at geek-coded spycraft remains a cautionary tale that even intelligence agencies are only human. And that on the internet, your secrets have a shelf life.


Original Submission

posted by hubie on Wednesday May 28, @01:47AM

Arthur T Knackerbracket has processed the following story:

Evidence of an attack on administration officials appeared last week on leak site Distributed Denial of Secrets, which hosted an archive of messages that included details of over 60 government workers, a White House staffer, and members of the Secret Service.

The leak, first reported by Reuters, isn't as serious as Signalgate - no one was discussing air strikes and possible war crimes - but it's still suboptimal.

The White House said that it was "aware of the cyber security incident" but didn't comment further.

TeleMessage servers are reportedly closed while an investigation is carried out.

Europol had already detailed attempts to take down the Qakbot and Danabot malware groups, and last Friday it announced the disruption of five more malware crews.

Operation Endgame II, a combined operation involving police from the EU, UK, US, and Canada, has now led to 20 arrests, and 18 suspects have been added to the EU's most wanted list. In addition, a total of €21.2 million has been seized.

"This new phase demonstrates law enforcement’s ability to adapt and strike again, even as cybercriminals retool and reorganise," said Catherine De Bolle, Europol executive director. "By disrupting the services criminals rely on to deploy ransomware, we are breaking the kill chain at its source."

Two government boffins have proposed a method for predicting which security vulnerabilities criminals are likely to exploit, and think it could be used to improve patching choices.

In a recent paper [PDF], cybersecurity specialist Jono Spring of CISA and Peter Mell, a senior computer scientist who retired from Uncle Sam's NIST this month, suggest a new system that addresses a blind spot in current flaw-fixing methodologies.

Here's the current list of patches under active attack, courtesy of US government security guards at CISA.

CVSS 9.8 - CVE-2025-4632 is a path traversal vulnerability in Samsung MagicINFO 9 Server that would allow a suitably skilled attacker to write arbitrary files with system authority.

CVSS 7.2 - CVE-2025-4428 is a vulnerability in Ivanti Endpoint Manager Mobile 12.5.0.0 and earlier builds. It allows full remote code execution using a specially crafted API request.

One current tool to help users prioritize the fixes to deploy is the US Cybersecurity and Infrastructure Security Agency's (CISA's) known exploited vulnerabilities (KEV) database, which lists CVEs under active attack. Regulations require US federal government agencies to patch bugs on the list within six months. Private sector admins also use the list.
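CISA publishes the KEV catalog as a machine-readable JSON feed, so checking it against an inventory of deployed CVEs is straightforward to automate. A minimal sketch follows; the feed URL and field names ("vulnerabilities", "cveID", "dueDate") match CISA's published schema at the time of writing, but verify them before relying on this.

```python
# Sketch: cross-reference CVEs present in your environment against CISA's
# KEV catalog. Field names follow CISA's published JSON schema; check the
# schema document before depending on them.

# Live feed (fetch with urllib/requests, then json.load):
KEV_FEED_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def kev_matches(feed, deployed_cves):
    """Return KEV entries matching deployed CVEs, earliest due date first."""
    hits = [v for v in feed.get("vulnerabilities", [])
            if v.get("cveID") in deployed_cves]
    return sorted(hits, key=lambda v: v.get("dueDate", ""))

# Illustration with a hand-built feed fragment rather than a live fetch:
sample = {"vulnerabilities": [
    {"cveID": "CVE-2025-4428", "dueDate": "2025-06-09"},
    {"cveID": "CVE-2025-4632", "dueDate": "2025-06-05"},
]}
for entry in kev_matches(sample, {"CVE-2025-4632", "CVE-2025-4428"}):
    print(entry["cveID"], "remediate by", entry["dueDate"])
```

Sorting by `dueDate` mirrors how federal agencies triage the list: the nearest remediation deadline gets attention first.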

Further help comes from an industry group known as the Forum of Incident Response and Security Teams (FIRST) which feeds CVE data into a separate Exploit Prediction Scoring System (EPSS). This machine-learning system predicts which vulnerabilities criminals are likely to attack in the next 30 days.

Spring and Mell's proposed system, which they call a likely exploited vulnerabilities (LEV) list, combines KEV and EPSS data, and the pair assert it offers usefully accurate indicators for focusing patching priorities.
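The intuition behind an LEV-style metric can be sketched in a few lines: EPSS emits a fresh 30-day exploitation probability for each CVE, and composing the scores from consecutive windows gives the probability the flaw was exploited at some point in its life. This is a simplification of Spring and Mell's actual formula, not their method; the paper should be consulted for the real construction.

```python
# Sketch of the core composition idea behind a likely-exploited metric.
# Each EPSS score is a per-window exploitation probability; the chance a
# CVE was exploited in at least one window is the complement of it being
# exploited in none. This simplifies the paper's formula considerably.

def likely_exploited(epss_history):
    """Probability of at least one exploitation, given per-window EPSS scores."""
    p_never = 1.0
    for p in epss_history:
        p_never *= (1.0 - p)
    return 1.0 - p_never

# A CVE scoring a modest 0.10 every window for a year still ends up more
# likely exploited than not:
print(round(likely_exploited([0.10] * 12), 3))  # 1 - 0.9**12 ≈ 0.718
```

The point the sketch makes is the one the paper leans on: low instantaneous EPSS scores can still accumulate into a high lifetime exploitation likelihood, which is exactly the blind spot a point-in-time reading of EPSS misses.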

Hosting biz GoDaddy has agreed a settlement with the US FTC after the regulator took action over the lamentable state of its security.

In 2023 GoDaddy was forced to admit that it didn't notice its systems were under attack for three years. The biz hadn't bothered with multi-factor authentication for key accounts, was lax about patching its applications, kept poor logs of security events, and didn't secure its network connections.

As a result thousands of GoDaddy customers suffered outages and had their websites infected with malware. The furor caused the FTC to step in, but the settlement is so mild as to make the phrase "slap on the wrist" sound violent.

Under the settlement, GoDaddy has agreed to be "prohibited from making misrepresentations about its security," to revamp its security systems - something it should have been doing anyway - and to hire independent infosec consultants to check on its work.

A security researcher has found something really rather disturbing - an unsecured database containing 47.42GB of data.

Jeremiah Fowler, a security specialist at vpnMentor, found the database and claims it contained 184,162,718 unique logins and passwords. He tested 10,000 of the credentials and found 479 Facebook accounts, 475 Google accounts, 240 Instagram accounts, 227 Roblox accounts, 209 Discord accounts, and more than 100 Microsoft, Netflix, and PayPal accounts, Wired reports.

"To confirm the authenticity of the data, I messaged multiple email addresses listed in the database and explained that I was investigating a data exposure that may have involved their information," he said. "I was able to validate several records as these individuals confirmed that the records contained their accurate and valid passwords."

Fowler suspects the database was compiled by users of infostealer malware. He contacted the hosting company on whose services he found the trove, but it declined to identify the customer whose instance hosted the database.


Original Submission

posted by hubie on Tuesday May 27, @09:04PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Starfish aims to roll out a battery-free system which connects to multiple parts of the brain simultaneously.

Starfish Neuroscience, a startup co-founded by Valve CEO Gabe Newell, has published an article revealing the first details of its brain-computer interface (BCI) chip.

The firm proposes a “new class of minimally-invasive, distributed neural interfaces that enable simultaneous access to multiple brain regions.” Moreover, a Starfish BCI could be the first fully wireless, battery-free implant available, if all goes to plan. According to its blog, the startup’s first chips are expected to arrive “in late 2025.” Perhaps the relationship with Newell means related tech will eventually find its way into gaming headsets and controllers.

In its report on the Starfish BCI news, The Verge notes that Newell’s fascination with BCIs began over 10 years ago, and that Valve once considered adding earlobe monitors to its VR headset products. As recently as 2019, Valve also publicly explored BCIs for gaming. Later the same year, Newell incorporated Starfish Neuroscience, and we are now seeing the first fruits as it emerges from stealth.

In its new blog post, Starfish says its BCI has the opportunity to do well thanks to two key features: its minimal size and its lack of built-in battery power. In regular use, the Starfish processor will consume just 1.1mW, it says. That contrasts with the Neuralink N1, which uses around 6mW.

[...] The startup also thinks that its smaller, lower power BCI implant(s) may work best connected to multiple parts of the brain simultaneously. For use in medical therapy, this multi-zone methodology could address human brain issues which affect several areas of the brain, like Parkinson’s disease.

Starfish isn’t so bold as to think it can go it alone with its new processor and BCI system. Rather, its blog floats the idea of collaborators on wireless power delivery and communication, and on custom implanted neural interfaces. It also admits “there is tons of work yet to be done here,” and is looking for employees, as well as partners, to boost its fortunes.


Original Submission

posted by hubie on Tuesday May 27, @04:16PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Texas could become the next US state to lay down the law with social media platforms. A Texas bill that would ban social media use for anyone under 18 recently cleared the Senate committee stage and is due for a vote before the full Texas State Senate. The bill has until the state's legislative session ends on June 2, leaving roughly a week for it to be approved by both the Senate and the governor.

Earlier this year, the bill passed the House committee stage and was later approved by the state's House of Representatives. If made into law, the bill would force social media platforms to verify the age of anyone setting up an account, much as Texas' earlier legislation requires websites hosting porn to implement an age verification system. On top of that, Texas' social media ban proposes to let parents delete their child's social media account, giving the platforms 10 days to comply with the request or face a fine from the state's attorney general.

Texas isn't the only governing body interested in restricting social media access. Last year, Florida's governor, Ron DeSantis, signed into law a bill that outright bans anyone under 14 from using social media and requires 14- and 15-year-olds to get parental consent to make an account or use an existing account. Notably, Texas' proposed law is much stricter than that.

On a larger scale, the US Senate introduced a bill to ban social media platforms for anyone under 13 in April 2024. After being stuck in the committee stage, Senators Brian Schatz (D-Hawaii) and Ted Cruz (R-Texas) recently made comments that signal a potential second attempt at getting this passed.


Original Submission

posted by janrinok on Tuesday May 27, @11:31AM   Printer-friendly

Research Reveals 'Forever Chemicals' Present in Beer

Research reveals 'forever chemicals' present in beer:

Infamous for their environmental persistence and potential links to health conditions, per- and polyfluoroalkyl substances (PFAS), often called forever chemicals, are being discovered in unexpected places, including beer. Researchers publishing in ACS' Environmental Science & Technology tested beers brewed in different areas around the U.S. for these substances. They found that beers produced in parts of the country with known PFAS-contaminated water sources showed the highest levels of forever chemicals.

"As an occasional beer drinker myself, I wondered whether PFAS in water supplies was making its way into our pints," says research lead Jennifer Hoponick Redmon. "I hope these findings inspire water treatment strategies and policies that help reduce the likelihood of PFAS in future pours."

PFAS are human-made chemicals produced for their water-, oil- and stain-repellent properties. They have been found in surface water, groundwater and municipal water supplies across the U.S. and the world. Although breweries typically have water filtration and treatment systems, they are not designed to remove PFAS. By modifying a U.S. Environmental Protection Agency (EPA) testing method for analyzing levels of PFAS in drinking water, Hoponick Redmon and colleagues tested 23 beers. The test subjects were produced by U.S. brewers in areas with documented water system contamination, plus popular domestic and international beers from larger companies with unknown water sources.

The researchers found a strong correlation between PFAS concentrations in municipal drinking water and levels in locally brewed beer — a phenomenon that Hoponick Redmon and colleagues say has not yet been studied in U.S. retail beer. They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water. Notably, the team found that beers brewed near the Cape Fear River Basin in North Carolina, an area with known PFAS pollution, had the highest levels and most diverse mix of forever chemicals, including PFOS and PFOA.

This work shows that PFAS contamination at one source can spread into other products, and the researchers call for greater awareness among brewers, consumers and regulators to limit overall PFAS exposure. These results also highlight the possible need for water treatment upgrades at brewing facilities as PFAS regulations in drinking water change or updates to municipal water system treatment are implemented.

Journal Reference: Hold My Beer: The Linkage between Municipal Water and Brewing Location on PFAS in Popular Beverages, Jennifer Hoponick Redmon, Nicole M. DeLuca, Evan Thorp, et al., Environmental Science & Technology 2025 59 (17), 8368-8379 DOI: 10.1021/acs.est.4c11265 [open access]

95% of a sample of cans of USA beer contaminated with PFAS

"We purchased 23 canned beer types in North Carolina stores in August 2021, with most of the beer purchases having at least 5 different cans of the same beer. Some beers are brewed in multiple locations; thus we confirmed brewing location for the purchased cans based on the brewery can code."

"They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water."

"The most detected PFAS in beer aliquots were PFSAs–PFOS, PFBS, and PFHxS [84% (n = 63), 53% (n = 40), and 47% (n = 35), respectively]"

So if you literally drink beer like water, "While there are currently no standards for PFAS levels in beer, these drinking water standards can provide insight, as beers are intended for direct consumption similar to drinking water. We found that some of the beers exceeded the health standards."

pop sci coverage: https://phys.org/news/2025-05-pfas-beers-highest-contaminated.html
journal article: https://pubs.acs.org/doi/10.1021/acs.est.4c11265


Original Submission #1 | Original Submission #2

posted by janrinok on Tuesday May 27, @06:43AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

By combining information from many large datasets, MIT researchers have identified several new potential targets for treating or preventing Alzheimer’s disease.

The study revealed genes and cellular pathways that haven’t been linked to Alzheimer’s before, including one involved in DNA repair. Identifying new drug targets is critical because many of the Alzheimer’s drugs that have been developed to this point haven’t been as successful as hoped.

Working with researchers at Harvard Medical School, the team used data from humans and fruit flies to identify cellular pathways linked to neurodegeneration. This allowed them to identify additional pathways that may be contributing to the development of Alzheimer’s.

“All the evidence that we have indicates that there are many different pathways involved in the progression of Alzheimer’s. It is multifactorial, and that may be why it’s been so hard to develop effective drugs,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We will need some kind of combination of treatments that hit different parts of this disease.”

Matthew Leventhal PhD ’25 is the lead author of the paper, which appears today in Nature Communications.

Over the past few decades, many studies have suggested that Alzheimer’s disease is caused by the buildup of amyloid plaques in the brain, which triggers a cascade of events that leads to neurodegeneration.

A handful of drugs have been developed to block or break down these plaques, but these drugs usually do not have a dramatic effect on disease progression. In hopes of identifying new drug targets, many scientists are now working on uncovering other mechanisms that might contribute to the development of Alzheimer’s.

“One possibility is that maybe there’s more than one cause of Alzheimer’s, and that even in a single person, there could be multiple contributing factors,” Fraenkel says. “So, even if the amyloid hypothesis is correct — and there are some people who don’t think it is — you need to know what those other factors are. And then if you can hit all the causes of the disease, you have a better chance of blocking and maybe even reversing some losses.”

To try to identify some of those other factors, Fraenkel’s lab teamed up with Mel Feany, a professor of pathology at Harvard Medical School and a geneticist specializing in fruit fly genetics.

Using fruit flies as a model, Feany and others in her lab did a screen in which they knocked out nearly every conserved gene expressed in fly neurons. Then, they measured whether each of these gene knockdowns had any effect on the age at which the flies develop neurodegeneration. This allowed them to identify about 200 genes that accelerate neurodegeneration.

Some of these were already linked to neurodegeneration, including genes for the amyloid precursor protein and for proteins called presenilins, which play a role in the formation of amyloid proteins.

The researchers then analyzed this data using network algorithms that Fraenkel’s lab has been developing over the past several years. These are algorithms that can identify connections between genes that may be involved in the same cellular pathways and functions.

In this case, the aim was to try to link the genes identified in the fruit fly screen with specific processes and cellular pathways that might contribute to neurodegeneration. To do that, the researchers combined the fruit fly data with several other datasets, including genomic data from postmortem tissue of Alzheimer’s patients.
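The flavor of such network analysis can be illustrated with a toy example: given "hit" genes from a screen and a protein-interaction graph, look for the intermediate nodes that connect hits to one another, since those connectors suggest shared pathways. Fraenkel-style tools use far more sophisticated formulations (prize-collecting Steiner forests over weighted interactomes); the BFS sketch and the tiny hypothetical network below are purely illustrative.

```python
# Toy illustration of pathway linking: connect two screen "hits" through
# intermediate nodes in an interaction graph. Real network-optimization
# tools solve a weighted Steiner-forest problem; this is a bare-bones BFS.
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path in an undirected interaction graph, or None."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                prev[nb] = node
                q.append(nb)
    return None

# Hypothetical mini-network: two screen hits joined via shared hub proteins.
interactions = {
    "HIT_A": ["HUB1"], "HUB1": ["HIT_A", "HUB2"],
    "HUB2": ["HUB1", "HIT_B"], "HIT_B": ["HUB2"],
}
print(shortest_path(interactions, "HIT_A", "HIT_B"))
```

In this toy, the hubs on the recovered path are the interesting output: genes that never appeared in the screen but sit between multiple hits, hinting at a shared cellular process.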

The first stage of their analysis revealed that many of the genes identified in the fruit fly study also decline as humans age, suggesting that they may be involved in neurodegeneration in humans.

In the next phase of their study, the researchers incorporated additional data relevant to Alzheimer’s disease, including eQTL (expression quantitative trait locus) data — ­a measure of how different gene variants affect the expression levels of certain proteins.

Using their network optimization algorithms on this data, the researchers identified pathways that link genes to their potential role in Alzheimer’s development. The team chose two of those pathways to focus on in the new study.

The first is a pathway, not previously linked to Alzheimer's disease, related to RNA modification. The network suggested that when either of two genes in this pathway, MEPCE and HNRNPA2B1, is missing, neurons become more vulnerable to the Tau tangles that form in the brains of Alzheimer's patients. The researchers confirmed this effect by knocking down those genes in studies of fruit flies and in human neurons derived from induced pluripotent stem cells (IPSCs).

The second pathway reported in this study is involved in DNA damage repair. This network includes two genes called NOTCH1 and CSNK2A1, which have been linked to Alzheimer’s before, but not in the context of DNA repair. Both genes are most well-known for their roles in regulating cell growth.

In this study, the researchers found evidence that when these genes are missing, DNA damage builds up in cells, through two different DNA-damaging pathways. Buildup of unrepaired DNA has previously been shown to lead to neurodegeneration.

Now that these targets have been identified, the researchers hope to collaborate with other labs to help explore whether drugs that target them could improve neuron health. Fraenkel and other researchers are working on using IPSCs from Alzheimer’s patients to generate neurons that could be used to evaluate such drugs.

“The search for Alzheimer’s drugs will get dramatically accelerated when there are very good, robust experimental systems,” he says. “We’re coming to a point where a couple of really innovative systems are coming together. One is better experimental models based on IPSCs, and the other one is computational models that allow us to integrate huge amounts of data. When those two mature at the same time, which is what we’re about to see, then I think we’ll have some breakthroughs.”


Original Submission

posted by hubie on Tuesday May 27, @01:53AM   Printer-friendly
from the of-(mis)direction dept.

https://techxplore.com/news/2025-05-google-ads-ai-chatgpt.html

Google said Wednesday it is beginning to weave advertisements into its new AI Mode for online search, a strategic move to counter the challenge posed by ChatGPT as the primary source for online answers.

[...] "The future of advertising fueled by AI isn't coming—it's already here," stated Vidhya Srinivasan, Google's vice president of Ads & Commerce.

"We're reimagining the future of ads and shopping: Ads that don't interrupt, but help customers discover a product or service."

Will this make Google's so-called AI summaries better?
Will you start or continue ignoring them?
Are Google searches your preferred destination when you want to buy something?


Original Submission
