
posted by kolie on Wednesday May 28, @08:49PM   Printer-friendly
from the intel-inside-tm-job dept.

Arthur T Knackerbracket has processed the following story:

An insider and an outsider allegedly colluded to embezzle over $840,000 from Intel.

Israeli news source Calcalist has reported that Intel Israel has initiated legal action against Natalia Avtsin, a former employee, and Yafim Tsibolevsky, a previous component supplier, for their alleged conspiracy to embezzle over NIS 3 million, approximately $842,000. This embezzlement allegedly took place between October 2023 and November 2024, remaining undetected until Intel exposed the fraud.

Avtsin was employed in Intel Israel's hardware production department until her dismissal in November 2024. Intel stated that her termination was part of a strategy to reduce operations in Israel and was unrelated to her alleged crimes, which were still undiscovered at that time. In September 2023, Tsibolevsky registered as an authorized dealer under the name "Energy Electronics 2000" and subsequently became an official Intel supplier the following month.

Avtsin and Tsibolevsky's operation began with Avtsin asking Tsibolevsky for price quotes on hardware components. Avtsin then sent the quotes to her manager for approval, but supposedly altered the transaction classification afterward. She is said to have changed the classification from "components" to "services," bypassing essential verification protocols. Logically, only an insider could know that such a reclassification would easily bypass many security checks.

Intel Israel informed Calcalist that payments for services were less strict compared to payments for components. For service payments, a signed delivery note or confirmation receipt was not required. With no verification barriers, Tsibolevsky could submit invoices and receive payments at his convenience.
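The control gap described above, where payments classified as services skip the delivery-note check that component payments require, can be sketched as a toy approval rule. The function name and the rules below are purely illustrative, not Intel's actual payment workflow; the $20,000 figure anticipates the approval limit mentioned later in the story.

```python
# Hypothetical sketch of the control gap described in the article: payments
# classified as "components" require a signed delivery note, while payments
# classified as "services" do not. All names and rules here are illustrative.

def payment_allowed(classification: str, has_delivery_note: bool,
                    amount: float, approval_limit: float = 20_000) -> bool:
    """Return True if a payment passes the (simplified) checks."""
    if amount > approval_limit:
        return False  # over the requester's limit -> escalated review
    if classification == "components":
        return has_delivery_note  # goods require proof of delivery
    if classification == "services":
        return True  # no delivery confirmation required
    return False

# A components invoice with no delivery note is blocked...
assert payment_allowed("components", has_delivery_note=False, amount=19_000) is False
# ...but reclassifying the same invoice as "services" lets it through.
assert payment_allowed("services", has_delivery_note=False, amount=19_000) is True
```

The sketch makes the insider-knowledge point concrete: the fraud only works if you know which classification skips which check.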

Classification as "services" alone should not have let Tsibolevsky escape scrutiny, since Energy Electronics 2000 was never registered with Intel as a service provider. To evade detection, Tsibolevsky issued invoices of $20,000 or less, aligning with Avtsin's transaction approval limit. Once more, Tsibolevsky would likely have been unaware of this limit without an insider's involvement.

Intel Israel's investigation suggests possible third-party involvement. It appears that certain transactions were processed through Levanon Kogan, a company providing purchasing services to firms that are not registered with Intel. The chipmaker has not accused Levanon Kogan of any wrongdoing, but these activities appear to correlate with Avtsin and Tsibolevsky's scheme.

In certain operations, Avtsin obtained a quote from Tsibolevsky and requested Levanon Kogan to make payments to him. Like with other fraudulent schemes, she allegedly reclassified these transactions from components to services. Intel found that as many as 30 counterfeit orders were processed via Levanon Kogan, amounting to over NIS 2 million, or approximately $561,000.

Intel is suing Avtsin and Tsibolevsky in the Haifa District Court, asking the defendants to repay the stolen funds and any profits they made from them.


Original Submission

posted by kolie on Wednesday May 28, @04:04PM   Printer-friendly
from the hd-drizzle-upgrade dept.

Arthur T Knackerbracket has processed the following story:

The Sun’s outer atmosphere—the corona—is the piping hot outer limit of our star, and is usually hidden from view except during rare total eclipses. Now, scientists have gotten their clearest look ever at this mysterious region, thanks to a new adaptive optics system that scrubs away atmospheric blur, revealing fine views of the wispy plasma on the star’s surface.

Researchers from the National Solar Observatory and New Jersey Institute of Technology unveiled the system today, along with dazzling new images and videos of the Sun’s corona. The findings, published in Nature Astronomy, show fine-scale structures in solar prominences, short-lived plasma jets called spicules, and even coronal rain: cooling plasma that falls back to the solar surface along the star’s magnetic field lines.

The team’s imaging breakthrough hinges on a technology called coronal adaptive optics. Installed on the 5.25-foot (1.6-meter) Goode Solar Telescope in California, the new system—nicknamed “Cona”—adjusts a mirror 2,200 times per second to correct for distortions caused by the churn of Earth’s atmosphere. The remarkable technology counterbalances any would-be wobble in the telescope, thereby producing particularly sharp images of the corona.

“This technological advancement is a game-changer,” said Dirk Schmidt, an adaptive optics scientist at NSO and the study’s lead author, in an observatory release. “There is a lot to discover when you boost your resolution by a factor of 10.”

Until now, solar telescopes have used adaptive optics mainly to study the Sun’s surface, the release stated. Observing the fainter corona has remained a challenge, with coronal features blurred to scales of 621 miles (1,000 kilometers)—a limit that’s existed for 80 years. But Cona now resolves features down to just 39 miles (63 km), the theoretical limit of the Goode telescope.
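The 39-mile figure is close to the telescope's diffraction limit, which follows from the aperture alone. Below is a back-of-envelope check using the Rayleigh criterion; the observing wavelength (500 nm, visible light) is an assumption on my part, since the article doesn't state it, and the exact criterion and wavelength used by the team may differ slightly.

```python
import math

# Back-of-envelope check of the Goode telescope's theoretical resolution,
# using the Rayleigh criterion: theta = 1.22 * lambda / D.
# The 500 nm observing wavelength is an assumption, not from the article.
wavelength_m = 500e-9      # assumed visible-light wavelength
aperture_m = 1.6           # Goode Solar Telescope primary mirror
sun_distance_m = 1.496e11  # 1 astronomical unit

theta_rad = 1.22 * wavelength_m / aperture_m
resolution_km = theta_rad * sun_distance_m / 1000

print(f"{resolution_km:.0f} km")  # ~57 km with these assumptions,
                                  # the same order as the quoted 63 km
```

The small discrepancy from 63 km comes down to the wavelength and resolution criterion actually used, but the scaling shows why a 4-meter aperture (see below) would resolve even finer detail.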

Among the new footage captured by the team are shots of a twisting solar prominence reshaping in real time, spicules flickering on the surface, and fine, hair-like strands of coronal rain narrower than 12.5 miles (20 km). When you consider how far the Sun is from Earth, how faint the corona is relative to the rest of the star, and how much of Earth’s turbulent atmosphere the team had to cut through and correct for, the sharpness of the images is a triumph.

“This transformative technology, which is likely to be adopted at observatories world-wide, is poised to reshape ground-based solar astronomy,” said study co-author Philip Goode, a physicist at NJIT-CSTR, in the same release. “With coronal adaptive optics now in operation, this marks the beginning of a new era in solar physics, promising many more discoveries in the years and decades to come.”

The observations offer crucial data for unraveling enduring solar mysteries—like why the corona is millions of degrees hotter than the solar surface.

The team plans to bring the coronal adaptive optics technology to the 13-foot (4-meter) Daniel K. Inouye Solar Telescope in Hawaiʻi—potentially revealing even smaller details of the Sun’s atmosphere.


Original Submission

posted by kolie on Wednesday May 28, @11:19AM   Printer-friendly
from the iCant-make-it-here dept.

Arthur T Knackerbracket has processed the following story:

US President Donald Trump can huff, puff, and threaten to blow Tim Cook's house down with a 25 percent iPhone import tariff, but analysts say even that threat is unlikely to bring Apple's manufacturing home.

In response to Trump's statement last week, analysts from Morgan Stanley published a research brief on Tuesday that concluded Apple is unlikely to respond to Trump's latest tariff threat in a way that will please him. 

The report, provided to The Register, concluded that the original 145 percent tariff imposed by Trump on certain imports from China last month might have made Apple budge on the matter, but since the President lost his international staredown and promised to reduce that rate, the economics no longer make sense for Cupertino.

According to the Morgan Stanley number crunchers, an iPhone manufactured in the United States would be at least 35 percent more expensive than one made overseas when accounting for tariffs on single-source components still made in China and higher US labor costs. That means a $999 iPhone would be $1,350 - at a minimum - if Apple wanted to retain a similar gross margin.


With a 25 percent tariff on iPhone imports from China or India in place, on the other hand, Apple would need to increase prices on iPhones by only four to six percent globally to keep profits up. 
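The two headline numbers can be sanity-checked with simple arithmetic. The US share of iPhone sales used below (roughly one third) is my assumption, not a Morgan Stanley figure, and the real margin model is theirs; this just reproduces the orders of magnitude.

```python
# Quick check of the Morgan Stanley arithmetic quoted above.

# A US-made iPhone at least 35% more expensive:
base_price = 999.0
us_price = base_price * 1.35
print(f"${us_price:.0f}")  # $1349, matching the "$1,350 at a minimum" claim

# Versus eating a 25% tariff on US-bound units only: if roughly a third of
# iPhones sell in the US (an assumption), the tariff cost spread across ALL
# units worldwide is about 25% * 1/3.
tariff = 0.25
us_share = 1 / 3  # hypothetical US share of global iPhone sales
global_increase_needed = tariff * us_share
print(f"{global_increase_needed:.0%}")  # ~8%, so with supply-chain offsets a
                                        # 4-6% global rise is plausible
```

The gap between a 35 percent cost premium and a single-digit global price bump is the whole argument: the tariff as threatened is far too small to change the calculus.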

Canalys smartphone and IoT analyst Runar Bjorhovde agreed with Morgan Stanley's analysis in an email to The Register. "[A] 25 percent tariff will have no effect; it will need to be many times higher to compensate for the local production cost," Bjorhovde told us. 

In other words, nice try, Mr. President, but those threats will need to be more serious. 

In further comments on LinkedIn looking at what it would take for Apple to onshore iPhone production for the US market, Bjorhovde also agreed that there's a lot more to the picture than tariffs.

If Apple decided to cave to Trump's demands, it would have to build new US factories, train a bunch of new workers to manufacture iPhones, and deal with "surging assembly and testing costs" in addition to the aforementioned labor and component costs, said Bjorhovde. 

Factor all that in, says Morgan Stanley, and we're looking at a minimum of two years before Apple could build, equip, staff, and start assembling iPhones at a new US-based greenfield plant. Apple would need more than one factory to meet US iPhone demand, Morgan Stanley predicted, and would have to find more than 100,000 people "skilled in highly precise tooling equipment" to meet peak-period demand. 

"Both of these facts present significant challenges in time to market," Morgan Stanley said. 

More realistically, the investment bank said, it would take four or more years to get production going if we consider the case of TSMC's new Arizona chip fab. That facility began construction in 2020 and only came online late last year, Morgan Stanley noted. 


"If an Apple commitment followed the same timeline, the first US-built iPhone might not reach consumers until after President Trump leaves office," the bankers concluded - perfect timing for the next White House occupant to take credit.

Bjorhovde is even less optimistic. 

"I think we will look at a three-to-five-year investment minimum to get any production capacity to the US," the Canalys analyst told us, and even that timeline comes with a number of caveats. Most notably, Apple would have to find a way to import iPhone manufacturing experts from China to the United States "backed by an investment from, for example, Foxconn," Bjorhovde said. 

Let's not forget that Trump can't simply wave his hands and implement targeted tariffs, either. Morgan Stanley pointed out that smartphones are currently exempted from Trump's various on-again, off-again tariff decrees, giving the US just two options to enact Trump's weekend decree. One option is through the International Emergency Economic Powers Act, which gives the President the power to levy tariffs if a national emergency is declared. The second comes through a section 232 study, which seeks to establish national security risks of manufacturing stuff outside the US.

"Both options face legal headwinds," Morgan Stanley said. "A Section 232 investigation could have firmer standing given the administration is already evaluating semiconductor tariffs via this route." 

Steve Jobs said it, we've said it, and we'll say it again: iPhones just aren't ever going to be a made-in-the-USA product. That doesn't mean Apple won't try to do something else to appease Trump, though. 

If Apple ignores Trump's 25 percent tariff and declines its accompanying demand to bring iPhone manufacturing to the US, "Tim Cook's status with the current administration [will] deteriorate," Morgan Stanley predicted. Apple may also face further tariff threats, the bank predicted, which could further worry spooked investors.

Apple has already pledged to invest $500 billion in the US over four years, spanning areas like AI, chips, and workforce training, but not iPhone manufacturing, signaling it's still willing to play ball with Washington. Morgan Stanley predicted it might do more that doesn't involve messing with the margins on its top product.

Morgan Stanley believes it would make sense for Tim Cook to announce reshoring some "smaller products," suggesting Macs, HomePods, AirTags, and other products could be made in the US with much less investment. 

It's "not as symbolic as the iPhone," Morgan Stanley said, but it would be a win for both Trump and Apple. The former "gets the largest electronics company in the world to commit, publicly, to new US production," the bank noted, while Apple reduces "geopolitical threats at home." 


Original Submission

posted by hubie on Wednesday May 28, @06:34AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The internet has seen its fair share of weird, but a Star Wars fan site secretly run by the CIA to communicate with overseas spies might top the list. StarWarsWeb.net looked like any other 2010-era fan page, complete with lightsabers, Yoda quotes ("Like these games you will"), LEGO ads, and hyped-up mentions of games like Battlefront 2 and The Force Unleashed II. But behind that nostalgic facade was a covert login system. If you entered the right password into the search bar, you'd unlock a secure line to CIA handlers. Or at least, that was the plan.

This bizarre piece of intel comes courtesy of Ciro Santilli, an independent researcher with a knack for rooting around the dusty corners of the web, who spoke to 404media.

Santilli took it upon himself to dig deeper after a Reuters investigative piece titled "America's Throwaway Spies" revealed a handful of suspicious domains back in 2022. Armed with little more than open-source tools, web dev know-how, and apparently endless patience, he ended up uncovering hundreds of similar sites.

As it turned out, the Star Wars page was just one star in a galaxy of CIA-run covert communication sites.

There were comedy pages, extreme sports sites, and even a Brazilian music fan page. Some were clearly geared toward users in states like Iran and China, where their discovery led to devastating consequences, including the execution of CIA sources around 2011-2012. But others appeared to target France, Germany, Spain, and Brazil.

The fatal flaw, according to both Santilli and the Reuters report, was that many of the sites were sloppily coded, reusing sequential IP addresses or other easily traceable breadcrumbs. Once one site was found, identifying others was often just a matter of basic detective work. This was something Iranian and Chinese counterintelligence teams apparently figured out over a decade ago. You can read about this in more detail in Santilli's writeup.
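To see why sequential IP allocation is so damaging, consider how little work enumeration takes once a single covert site is identified. The sketch below is a minimal illustration of the idea; the addresses are from the RFC 5737 documentation range, not real infrastructure, and real counterintelligence work would combine this with registrar data, hosting records, and similar breadcrumbs.

```python
# Minimal sketch of the enumeration flaw: if covert sites sat on sequential
# IP addresses, one seed address makes its neighbours obvious candidates.
# Addresses are RFC 5737 documentation examples, not real hosts.
import ipaddress

def neighbour_candidates(seed_ip: str, radius: int = 5) -> list[str]:
    """Return the IP addresses within +/- radius of a seed address."""
    seed = int(ipaddress.IPv4Address(seed_ip))
    return [str(ipaddress.IPv4Address(seed + delta))
            for delta in range(-radius, radius + 1) if delta != 0]

candidates = neighbour_candidates("198.51.100.37", radius=2)
print(candidates)
# ['198.51.100.35', '198.51.100.36', '198.51.100.38', '198.51.100.39']
```

One seed plus a small scan radius yields a short, checkable candidate list, which is exactly the "basic detective work" the article describes.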

Despite the deadly fallout, this digital forensics saga is now a cold case. Santilli described it as being "like a museum," saying that thanks to the Wayback Machine, people can still go back and view the site.

All said, fifteen years later, the CIA's attempt at geek-coded spycraft remains a cautionary tale that even intelligence agencies are only human. And that on the internet, your secrets have a shelf life.


Original Submission

posted by hubie on Wednesday May 28, @01:47AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Evidence of an attack on administration officials appeared last week on leak site Distributed Denial of Secrets, which hosted an archive of messages that included details on over 60 government workers, a White House staffer, and members of the Secret Service.

The leak, first reported by Reuters, isn't as serious as Signalgate - no one was discussing air strikes and possible war crimes - but it's still suboptimal.

The White House said that it was "aware of the cyber security incident" but didn't comment further.

TeleMessage servers are reportedly closed while an investigation is carried out.

Europol had already detailed attempts to take down the Qakbot and Danabot malware groups, and last Friday it announced the disruption of five more malware crews.

Operation Endgame II, a combined operation involving police from the EU, UK, US, and Canada, has now led to 20 arrests, and 18 suspects have been added to the EU's most wanted list. In addition, a total of €21.2 million has been seized.

"This new phase demonstrates law enforcement’s ability to adapt and strike again, even as cybercriminals retool and reorganise," said Catherine De Bolle, Europol executive director. "By disrupting the services criminals rely on to deploy ransomware, we are breaking the kill chain at its source."

Two government boffins have proposed a method for predicting which security vulnerabilities criminals are likely to exploit, and think it could be used to improve patching choices.

In a recent paper [PDF], cybersecurity specialist Jono Spring of CISA and Peter Mell, a senior computer scientist who retired from Uncle Sam's NIST this month, suggest a new system that addresses a blind spot in current flaw-fixing methodologies.

Here's the current list of vulnerabilities under active attack, courtesy of US government security guards at CISA.

CVSS 9.8 - CVE-2025-4632 is a path traversal vulnerability in Samsung MagicINFO 9 Server that would allow anyone with the requisite skill to write arbitrary files with system authority.

CVSS 7.2 - CVE-2025-4428 is a vulnerability in Ivanti Endpoint Manager Mobile 12.5.0.0 and earlier builds. It allows full remote code execution using a specially crafted API request.

One current tool to help users prioritize the fixes to deploy is the US Cybersecurity and Infrastructure Security Agency's (CISA's) known exploited vulnerabilities (KEV) database, which lists the CVEs under active attack. Regulations require US federal government agencies to patch bugs on the list within six months. Private sector admins also use the list.

Further help comes from an industry group known as the Forum of Incident Response and Security Teams (FIRST) which feeds CVE data into a separate Exploit Prediction Scoring System (EPSS). This machine-learning system predicts which vulnerabilities criminals are likely to attack in the next 30 days.

Spring and Mell's suggested system, which they call a likely exploited vulnerabilities (LEV) list, combines KEV and EPSS data, and the pair assert it offers helpfully accurate indicators for focusing patching priorities.
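One plausible way to combine the two feeds, purely as an illustration and not Spring and Mell's actual LEV computation, is to rank confirmed-exploited KEV entries first and sort everything else by its EPSS probability. The CVE scores below are invented placeholders.

```python
# Illustrative sketch (NOT the paper's actual LEV method) of combining KEV
# membership with EPSS scores into one patch-priority ranking.

kev = {"CVE-2025-4632", "CVE-2025-4428"}  # known exploited (on the KEV list)
epss = {                                   # hypothetical EPSS scores (0..1)
    "CVE-2025-4632": 0.94,
    "CVE-2025-4428": 0.88,
    "CVE-2025-1111": 0.61,
    "CVE-2025-2222": 0.03,
}

def patch_priority(cve: str) -> tuple[int, float]:
    # KEV entries sort first (0 before 1), then by descending EPSS score.
    return (0 if cve in kev else 1, -epss.get(cve, 0.0))

ranked = sorted(epss, key=patch_priority)
print(ranked)
# ['CVE-2025-4632', 'CVE-2025-4428', 'CVE-2025-1111', 'CVE-2025-2222']
```

The appeal of a combined list is visible even in this toy: confirmed exploitation trumps prediction, but prediction still orders the long tail that KEV says nothing about.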

Hosting biz GoDaddy has agreed a settlement with the US FTC after the regulator took action over the lamentable state of its security.

In 2023 GoDaddy was forced to admit that it didn't notice its systems were under attack for three years. The biz hadn't bothered with multi-factor authentication for key accounts, was lax about patching its applications, didn't keep useful logs of security events, and didn't secure its network connections.

As a result thousands of GoDaddy customers suffered outages and had their websites infected with malware. The furor caused the FTC to step in, but the settlement is so mild as to make the phrase "slap on the wrist" sound violent.

As a result GoDaddy has agreed to be "prohibited from making misrepresentations about its security," revamp its security systems - something it should have been doing anyway - and to hire independent infosec consultants to check on GoDaddy's work.

A security researcher has found something really rather disturbing - an unsecured database containing 47.42GB of data.

Jeremiah Fowler, a security specialist at vpnMentor, found the database and claims it contained 184,162,718 unique logins and passwords. He tested 10,000 of the credentials and found 479 Facebook accounts, 475 Google accounts, 240 Instagram accounts, 227 Roblox accounts, 209 Discord accounts, and more than 100 Microsoft, Netflix, and PayPal accounts, Wired reports.
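The per-service counts Fowler reports are the sort of tally you can produce by bucketing leaked records by domain. The sketch below shows one crude way to do it; the sample records are invented placeholders, not data from the actual leak, and real credential dumps need far more careful URL normalization.

```python
# Sketch of a per-service tally over leaked credential records.
# The sample records are invented placeholders, not real leak data.
from collections import Counter

records = [
    {"url": "https://www.facebook.com/login", "user": "a@example.com"},
    {"url": "https://accounts.google.com",    "user": "b@example.com"},
    {"url": "https://www.facebook.com/login", "user": "c@example.com"},
    {"url": "https://www.roblox.com/login",   "user": "d@example.com"},
]

def service_of(url: str) -> str:
    # Use the second-level domain as a crude service label.
    host = url.split("/")[2]
    return host.split(".")[-2]

tally = Counter(service_of(r["url"]) for r in records)
print(tally.most_common())
# [('facebook', 2), ('google', 1), ('roblox', 1)]
```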

"To confirm the authenticity of the data, I messaged multiple email addresses listed in the database and explained that I was investigating a data exposure that may have involved their information," he said. "I was able to validate several records as these individuals confirmed that the records contained their accurate and valid passwords."

Fowler suspects the database was compiled by users of infostealer malware. He contacted the hosting company on whose services he found the trove, but it declined to identify the customer whose instance hosted the database.


Original Submission

posted by hubie on Tuesday May 27, @09:04PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Starfish aims to roll out a battery-free system which connects to multiple parts of the brain simultaneously.

Starfish Neuroscience, a startup co-founded by Valve CEO Gabe Newell, has published an article revealing the first details of its brain-computer interface (BCI) chip.

The firm proposes a “new class of minimally-invasive, distributed neural interfaces that enable simultaneous access to multiple brain regions.” Moreover, a Starfish BCI could be the first fully wireless, battery-free implant available, if all goes to plan. According to its blog, the startup’s first chips are expected to arrive “in late 2025.” Perhaps the relationship with Newell means related tech will eventually find its way into gaming headsets and controllers.

In its report on the Starfish BCI news, The Verge notes that Newell’s fascination with BCIs began over 10 years ago, and that Valve once considered adding earlobe monitors to its VR headset products. As recently as 2019, Valve also publicly explored BCIs for gaming. Later the same year, Newell incorporated Starfish Neuroscience, and we are now seeing the first fruits as it emerges from stealth.

In its new blog post, Starfish says its BCI has two key advantages: its minimal size and its lack of built-in battery power. In regular use, the Starfish processor will consume just 1.1 mW, it says. That contrasts with the Neuralink N1, which uses around 6 mW.

[...] The startup also thinks that its smaller, lower power BCI implant(s) may work best connected to multiple parts of the brain simultaneously. For use in medical therapy, this multi-zone methodology could address human brain issues which affect several areas of the brain, like Parkinson’s disease.

Starfish isn’t so bold as to think it can go it alone with its new processor and BCI system. Rather, its blog floats the idea of collaborators on wireless power delivery and communication, and on custom implanted neural interfaces. It also admits “there is tons of work yet to be done here,” and is looking for employees, as well as partners, to boost its fortunes.


Original Submission

posted by hubie on Tuesday May 27, @04:16PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Texas could become the next US state to lay down the law with social media platforms. A Texas bill that would ban social media use for anyone under 18 recently moved past the Senate committee and is due for a vote in front of the Texas State Senate. The bill has until the state's legislative session comes to an end on June 2, leaving roughly a week for it to be approved by both the Senate and the governor.

Earlier this year, the bill passed the House committee stage and later won approval from the state's House of Representatives. If made into law, the bill would force social media platforms to verify the age of anyone setting up an account, much like how Texas passed legislation requiring websites hosting porn to implement an age verification system. On top of that, Texas' social media ban proposes to let parents delete their child's social media account, allowing the platforms 10 days to comply with the request or face a fine from the state's attorney general.

Texas isn't the only governing body interested in restricting social media access. Last year, Florida's governor, Ron DeSantis, signed into law a bill that outright bans anyone under 14 from using social media and requires 14- and 15-year-olds to get parental consent to make an account or use an existing account. Notably, Texas' proposed law is much stricter than that.

On a larger scale, the US Senate introduced a bill to ban social media platforms for anyone under 13 in April 2024. After being stuck in the committee stage, Senators Brian Schatz (D-Hawaii) and Ted Cruz (R-Texas) recently made comments that signal a potential second attempt at getting this passed.


Original Submission

posted by janrinok on Tuesday May 27, @11:31AM   Printer-friendly

Research Reveals 'Forever Chemicals' Present in Beer

Research reveals 'forever chemicals' present in beer:

Infamous for their environmental persistence and potential links to health conditions, per- and polyfluoroalkyl substances (PFAS), often called forever chemicals, are being discovered in unexpected places, including beer. Researchers publishing in ACS' Environmental Science & Technology tested beers brewed in different areas around the U.S. for these substances. They found that beers produced in parts of the country with known PFAS-contaminated water sources showed the highest levels of forever chemicals.

"As an occasional beer drinker myself, I wondered whether PFAS in water supplies was making its way into our pints," says research lead Jennifer Hoponick Redmon. "I hope these findings inspire water treatment strategies and policies that help reduce the likelihood of PFAS in future pours."

PFAS are human-made chemicals produced for their water-, oil- and stain-repellent properties. They have been found in surface water, groundwater and municipal water supplies across the U.S. and the world. Although breweries typically have water filtration and treatment systems, they are not designed to remove PFAS. By modifying a U.S. Environmental Protection Agency (EPA) testing method for analyzing levels of PFAS in drinking water, Hoponick Redmon and colleagues tested 23 beers. The test subjects were produced by U.S. brewers in areas with documented water system contamination, plus popular domestic and international beers from larger companies with unknown water sources.

The researchers found a strong correlation between PFAS concentrations in municipal drinking water and levels in locally brewed beer — a phenomenon that Hoponick Redmon and colleagues say has not yet been studied in U.S. retail beer. They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water. Notably, the team found that beers brewed near the Cape Fear River Basin in North Carolina, an area with known PFAS pollution, had the highest levels and most diverse mix of forever chemicals, including PFOS and PFOA.

This work shows that PFAS contamination at one source can spread into other products, and the researchers call for greater awareness among brewers, consumers and regulators to limit overall PFAS exposure. These results also highlight the possible need for water treatment upgrades at brewing facilities as PFAS regulations in drinking water change or updates to municipal water system treatment are implemented.

Journal Reference: Hold My Beer: The Linkage between Municipal Water and Brewing Location on PFAS in Popular Beverages, Jennifer Hoponick Redmon, Nicole M. DeLuca, Evan Thorp, et al., Environmental Science & Technology 2025 59 (17), 8368-8379 DOI: 10.1021/acs.est.4c11265 [open access]

95% of a sample of cans of USA beer contaminated with PFAS

"We purchased 23 canned beer types in North Carolina stores in August 2021, with most of the beer purchases having at least 5 different cans of the same beer. Some beers are brewed in multiple locations; thus we confirmed brewing location for the purchased cans based on the brewery can code."

"They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water."

"The most detected PFAS in beer aliquots were PFSAs–PFOS, PFBS, and PFHxS [84% (n = 63), 53% (n = 40), and 47% (n = 35), respectively]"

So if you literally drink beer like water, "While there are currently no standards for PFAS levels in beer, these drinking water standards can provide insight, as beers are intended for direct consumption similar to drinking water. We found that some of the beers exceeded the health standards."

pop sci coverage: https://phys.org/news/2025-05-pfas-beers-highest-contaminated.html
journal article: https://pubs.acs.org/doi/10.1021/acs.est.4c11265


Original Submission #1 | Original Submission #2

posted by janrinok on Tuesday May 27, @06:43AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

By combining information from many large datasets, MIT researchers have identified several new potential targets for treating or preventing Alzheimer’s disease.

The study revealed genes and cellular pathways that haven’t been linked to Alzheimer’s before, including one involved in DNA repair. Identifying new drug targets is critical because many of the Alzheimer’s drugs that have been developed to this point haven’t been as successful as hoped.

Working with researchers at Harvard Medical School, the team used data from humans and fruit flies to identify cellular pathways linked to neurodegeneration. This allowed them to identify additional pathways that may be contributing to the development of Alzheimer’s.

“All the evidence that we have indicates that there are many different pathways involved in the progression of Alzheimer’s. It is multifactorial, and that may be why it’s been so hard to develop effective drugs,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We will need some kind of combination of treatments that hit different parts of this disease.”

Matthew Leventhal PhD ’25 is the lead author of the paper, which appears today in Nature Communications.

Over the past few decades, many studies have suggested that Alzheimer’s disease is caused by the buildup of amyloid plaques in the brain, which triggers a cascade of events that leads to neurodegeneration.

A handful of drugs have been developed to block or break down these plaques, but these drugs usually do not have a dramatic effect on disease progression. In hopes of identifying new drug targets, many scientists are now working on uncovering other mechanisms that might contribute to the development of Alzheimer’s.

“One possibility is that maybe there’s more than one cause of Alzheimer’s, and that even in a single person, there could be multiple contributing factors,” Fraenkel says. “So, even if the amyloid hypothesis is correct — and there are some people who don’t think it is — you need to know what those other factors are. And then if you can hit all the causes of the disease, you have a better chance of blocking and maybe even reversing some losses.”

To try to identify some of those other factors, Fraenkel’s lab teamed up with Mel Feany, a professor of pathology at Harvard Medical School and a geneticist specializing in fruit fly genetics.

Using fruit flies as a model, Feany and others in her lab did a screen in which they knocked out nearly every conserved gene expressed in fly neurons. Then, they measured whether each of these gene knockdowns had any effect on the age at which the flies develop neurodegeneration. This allowed them to identify about 200 genes that accelerate neurodegeneration.

Some of these were already linked to neurodegeneration, including genes for the amyloid precursor protein and for proteins called presenilins, which play a role in the formation of amyloid proteins.

The researchers then analyzed this data using network algorithms that Fraenkel’s lab has been developing over the past several years. These are algorithms that can identify connections between genes that may be involved in the same cellular pathways and functions.

In this case, the aim was to try to link the genes identified in the fruit fly screen with specific processes and cellular pathways that might contribute to neurodegeneration. To do that, the researchers combined the fruit fly data with several other datasets, including genomic data from postmortem tissue of Alzheimer’s patients.
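The network step can be pictured with a toy sketch. The actual analysis uses more sophisticated algorithms and real interactome data, but the core idea of tying screen hits together through shared interaction partners looks roughly like this; all gene and protein names below are invented for illustration:

```python
from collections import deque

# Toy protein-interaction network (hypothetical edges, for illustration only).
EDGES = [
    ("hitA", "p1"), ("p1", "p2"), ("p2", "hitB"),
    ("hitB", "p3"), ("p3", "hitC"), ("p1", "p4"),
]

def build_adjacency(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def shortest_path(adj, src, dst):
    """Breadth-first search; returns one shortest src->dst path, or None."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)
    return None

def connecting_subnetwork(edges, hits):
    """Union of shortest paths between all pairs of screen hits."""
    adj = build_adjacency(edges)
    nodes = set()
    for i, a in enumerate(hits):
        for b in hits[i + 1:]:
            path = shortest_path(adj, a, b)
            if path:
                nodes.update(path)
    return nodes

# Intermediate nodes pulled in this way are candidate pathway members.
sub = connecting_subnetwork(EDGES, ["hitA", "hitB", "hitC"])
```

The proteins that repeatedly appear on paths between hits, even if they were not hits themselves, become candidates for membership in a shared pathway.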

The first stage of their analysis revealed that expression of many of the genes identified in the fruit fly study also declines as humans age, suggesting that they may be involved in neurodegeneration in humans.

In the next phase of their study, the researchers incorporated additional data relevant to Alzheimer’s disease, including eQTL (expression quantitative trait locus) data — ­a measure of how different gene variants affect the expression levels of certain proteins.

Using their network optimization algorithms on this data, the researchers identified pathways that link genes to their potential role in Alzheimer’s development. The team chose two of those pathways to focus on in the new study.

The first is a pathway related to RNA modification that had not previously been linked to Alzheimer’s disease. The network suggested that when either of two genes in this pathway, MEPCE or HNRNPA2B1, is missing, neurons become more vulnerable to the Tau tangles that form in the brains of Alzheimer’s patients. The researchers confirmed this effect by knocking down those genes in studies of fruit flies and in human neurons derived from induced pluripotent stem cells (IPSCs).

The second pathway reported in this study is involved in DNA damage repair. This network includes two genes called NOTCH1 and CSNK2A1, which have been linked to Alzheimer’s before, but not in the context of DNA repair. Both genes are most well-known for their roles in regulating cell growth.

In this study, the researchers found evidence that when these genes are missing, DNA damage builds up in cells, through two different DNA-damaging pathways. Buildup of unrepaired DNA has previously been shown to lead to neurodegeneration.

Now that these targets have been identified, the researchers hope to collaborate with other labs to help explore whether drugs that target them could improve neuron health. Fraenkel and other researchers are working on using IPSCs from Alzheimer’s patients to generate neurons that could be used to evaluate such drugs.

“The search for Alzheimer’s drugs will get dramatically accelerated when there are very good, robust experimental systems,” he says. “We’re coming to a point where a couple of really innovative systems are coming together. One is better experimental models based on IPSCs, and the other one is computational models that allow us to integrate huge amounts of data. When those two mature at the same time, which is what we’re about to see, then I think we’ll have some breakthroughs.”


Original Submission

posted by hubie on Tuesday May 27, @01:53AM   Printer-friendly
from the of-(mis)direction dept.

https://techxplore.com/news/2025-05-google-ads-ai-chatgpt.html

Google said Wednesday it is beginning to weave advertisements into its new AI Mode for online search, a strategic move to counter the challenge posed by ChatGPT as the primary source for online answers.

[...] "The future of advertising fueled by AI isn't coming—it's already here," stated Vidhya Srinivasan, Google's vice president of Ads & Commerce.

"We're reimagining the future of ads and shopping: Ads that don't interrupt, but help customers discover a product or service."

Will this make Google's so-called AI summaries better?
Will you start or continue ignoring them?
Are Google searches your preferred destination when you want to buy something?


Original Submission

posted by hubie on Monday May 26, @09:07PM   Printer-friendly
from the fire-up-that-amateur-radio-license-for-those-HF-QSOs dept.

The Sun is Producing Strong Solar Flares, Creating Blackouts. What to Know

A recent period of strong solar flares is expected to gradually decline over the coming weeks and months, scientists say, along with the potential for brief communication blackouts as the sun's solar cycle begins to fade.

The most powerful eruption of 2025 so far was observed last week by NASA's Solar Dynamics Observatory and the U.S. National Oceanic and Atmospheric Administration (NOAA).

The flare, classified as an X2.7, caused a 10-minute period of "degraded communications" for high-frequency radio systems in the Middle East, according to NOAA's Space Weather Prediction Center.

"We are at solar maximum, so there can be periods of more activity," a spokesperson for the Space Weather Prediction Center told Global News in an email.

The spokesperson added that the active region from which last week's flare emanated "has weakened magnetically, and even though it remains capable of producing a notable event, it seems less likely at this time."

[...] The 10-minute blackout in the Middle East occurred because that part of the Earth was facing the sun at the time.

However, because the active region was still somewhat off to the side, a related coronal mass ejection — which produces plasma and magnetic energy from the sun's corona — did not impact Earth.

Taylor Cameron, a space weather forecaster at the Canadian Hazards Information Service, told Global News it's difficult to predict specifically when a solar flare can erupt and which part of Earth it can affect.

The sun is currently at the peak of its 11-year solar cycle, known as solar maximum.

Although activity is generally declining, the Space Weather Prediction Center spokesperson told Global News that "sunspot activity and solar event expectations remain elevated this year and perhaps even into 2026."

[...] Cameron said solar flares only impact high-frequency radio communications, which can include ham radios, shortwave broadcasting, aviation air-to-ground communications and over-the-horizon radar systems. Other communication networks, like internet, 5G and cellular service, aren't affected.

The stronger a flare is, Cameron added, the more severe and longer a blackout or disruption can be.

To date, the most powerful flare of the current solar cycle was an X9.0 observed last October. That was strong enough to produce faint northern lights across parts of North America, which can occur during solar storms.

Another solar storm last spring produced stronger northern lights over much of Canada.

The Space Weather Prediction Center has reported brief radio blackouts due to multiple X-class solar flares recorded over the past several months.

See also:
    • R3 flare activity from Region 4087
    • Two X Class Solar Flares - The Sun Awakens
    • M Class Solar Flare, Filament Eruption, US Alert

Are There More Solar Flares Than Expected During This Solar Cycle?

Solar Cycle 25 is approaching its peak, but how does it measure up to the previous Solar Cycle 24?:

Like the number of sunspots, the occurrence of solar flares follows the approximately 11-year solar cycle.

But as the current Solar Cycle 25 approaches its peak, how are the number of solar flares stacking up against the previous, smaller Solar Cycle 24?

Due to a change in flare calibration levels from 2020, you'll find two answers to this question online — but only one is correct.

The sun follows an 11-year solar cycle of increasing and decreasing activity. The solar cycle is typically measured by the number of sunspots visible on the sun, with records dating back over 270 years. Most solar flares originate from sunspots, so with more sunspots, you'll get more flares.

Solar flares are sorted into classes according to the magnitude of soft X-rays observed in a narrow wavelength range of 0.1-0.8 nm. The flare classes are C-class, M-class and X-class, each 10 times stronger than the previous. (Flare levels are then sub-divided by a number, e.g. M2, X1, etc.) Flares of these categories (except the very largest of the X-class events) tend to follow the solar cycle closely.
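The classification rule described above is simple enough to sketch in code. Thresholds of 10⁻⁶, 10⁻⁵ and 10⁻⁴ W/m² for C, M and X are the standard GOES values; the weaker A and B classes are included for completeness:

```python
# GOES soft X-ray (0.1-0.8 nm) flare classification.
# Each class threshold is 10x the previous, in watts per square metre.
THRESHOLDS = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]

def classify_flare(peak_flux_wm2):
    """Map a peak X-ray flux to a flare class string, e.g. 2.7e-4 -> 'X2.7'."""
    for letter, threshold in THRESHOLDS:
        if peak_flux_wm2 >= threshold:
            # The sub-level is simply the flux divided by the class threshold.
            return f"{letter}{peak_flux_wm2 / threshold:.1f}"
    return "sub-A"

print(classify_flare(2.7e-4))  # the X2.7 flare mentioned above
```

Note that the X class has no upper bound, which is why designations like X9.0 (or higher) are possible.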

In terms of sunspot numbers, Solar Cycle 25 (our current cycle) has exceeded the sunspot levels of Solar Cycle 24 (which peaked in 2014). With higher sunspot numbers, we'd also expect higher flare counts. This is the case, but the difference is far from what some would have you believe.

How do solar flares compare between Solar Cycles 24 and 25? This seems like a simple enough question, but is muddied by a recalibration of solar flare levels in 2020 from the National Oceanic and Atmospheric Administration (NOAA).

Solar flare X-ray levels have been measured since 1974. X-rays do not penetrate Earth's atmosphere, and thus can only be measured by detectors on satellites in Earth orbit. For 50 years, these solar flare detectors have been placed on NOAA's GOES satellites. As technology improves, and old technology decays, newer detectors are launched on newer GOES satellites, to keep the continuous observation of solar flares going. GOES-18 (the 18th satellite in the sequence) is the current satellite responsible for primary X-ray observations, having launched in 2022.

Because flare levels have been measured (and their classes defined) by detectors across multiple satellites/instruments, corrections are sometimes needed to account for slight differences in calibration from one detector to the next.

From 2010-2020, flare levels were defined by measurements from GOES-14 and GOES-15. This period covered the solar maximum period of Solar Cycle 24, up to the end of that cycle. However, upon the launch of these two satellites, a calibration discrepancy was discovered between GOES-14/15 and all prior GOES X-ray detectors. To fix this, science data from 1974-2010 (from GOES-1 to GOES-13 satellites) were all readjusted to match the new calibration, which was believed to be correct at the time. A result of this was that the threshold for each flare class increased by 42%, meaning an individual solar flare in 2010 needed to be 42% larger than a flare from 2009, to be given the same X-class level.

However, and here comes the twist: following the switch to GOES-16 data on a new detector, it was discovered that the original calibration (from 1974-2010) had been correct all along, and the 2010-2020 calibration was the incorrect one. This meant that in 2020, all prior data (from 1974-2020) were again recalibrated to their previous correct levels, lowering the flare-class thresholds back down. With a lower flare threshold, strong C-class flares (C7+) became M-class events, and strong M-class flares (M7+) became X-class flares. An X-class solar flare was therefore far easier to achieve in 2021 than it was in 2019. This 2020 recalibration therefore increased the number of higher-class flares in Solar Cycle 24 relative to what was initially reported.
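Treating the correction as a uniform 1.42× scaling, per the figure quoted above, a short sketch shows how the 2020 recalibration promotes strong flares across class boundaries. This is an illustrative simplification, not NOAA's actual reprocessing:

```python
# Effect of the 2020 recalibration on a flare's reported class, assuming
# the correction acts as a uniform 1.42x scaling of the measured level.
CLASSES = ["A", "B", "C", "M", "X"]

def recalibrate(label):
    """'M7.1' as reported 2010-2020 -> its class after the 1.42x correction."""
    letter, level = label[0], float(label[1:]) * 1.42
    idx = CLASSES.index(letter)
    while level >= 10 and idx < len(CLASSES) - 1:  # promote across class boundaries
        level /= 10
        idx += 1
    return f"{CLASSES[idx]}{level:.1f}"

print(recalibrate("M7.1"))  # a strong M flare becomes a borderline X flare
```

This is why comparing un-recalibrated operations data against post-2020 measurements systematically undercounts the stronger flares of earlier cycles.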

Following the 2020 recalibration of solar flare levels, NOAA re-released their historic scientific flare datasets with the correct levels. However, the archived operations data, which lists solar flare levels as they were initially reported at the time, was not recalibrated. A consequence is that flare lists compiled and analyzed by third parties can use either the recalibrated science data or the un-recalibrated operations data when comparing solar flare levels between solar cycles. The former comparison yields correct results, while the latter compares current flare levels from Cycle 25 against severely underestimated flare levels from previous cycles, producing scientifically incorrect comparisons. Let's compare some data!

[...] This graph shows the correct comparison of solar flares between Cycles 24 and 25. As you can see, although the number of Cycle 25 flares is still ahead of Cycle 24 at each flare level, the discrepancy is far less than that shown in the previous graph. The operations data undercounts the number of Cycle 24 flares by nearly half, a significant difference. In reality, Cycle 25's X-class count exceeds Cycle 24's by a far smaller margin than the operations data implies, and Cycle 25 even trailed Cycle 24 in X-class flares until the recent solar activity from famous active regions AR 13663 and AR 13664. This graph also shows that although May 2024 saw a lot of X-class activity from these active regions, this level of activity is not unprecedented; Solar Cycle 24 experienced a similar leap in flares towards the end of 2015.

So remember, if you see the comparison of Solar Cycle flare levels online, be sure to check if they're using the historic operations data (incorrect), or recalibrated science data (correct).

See also:
    • Solar Cycle 25 - NASA Science
    • Solar cycle - Wikipedia


Original Submission #1 | Original Submission #2

posted by hubie on Monday May 26, @04:21PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The call came into the help desk at a large US retailer. An employee had been locked out of their corporate accounts.

But the caller wasn't actually a company employee. He was a Scattered Spider criminal trying to break into the retailer's systems - and he was really good, according to Jon DiMaggio, a former NSA analyst who now works as a chief security strategist at Analyst1.

Scattered Spider is a cyber gang linked to SIM swapping, fake IT calls, and ransomware crews like ALPHV. They've breached big names like MGM and Caesars, and despite arrests, keep evolving. They're tracked by Mandiant as UNC3944, also known as Octo Tempest.

DiMaggio listened in on this call, which was one of the group's recent attempts to infiltrate American retail organizations after hitting multiple UK-based shops. He won't name the company, other than to say it's a "big US retail organization." This attempt did not end with a successful ransomware infection or stolen data.

"But I got to listen to the phone calls, and those guys are good," DiMaggio told The Register. "It sounded legit, and they had information to make them sound like real employees."

Scattered Spider gave the help desk the employee's ID and email address. DiMaggio said he suspected the caller first social-engineered the employee to obtain this data, "but that is an assumption."

"The caller had all of their information: employee ID numbers, when they started working there, where they worked and resided," DiMaggio said. "They were calling from a number that was in the right demographic, they were well-spoken in English, they looked and felt real. They knew a lot about the company, so it's very difficult to flag these things. When these guys do it, they're good at what they do."

Luckily, the target was a big company with a big security budget, and it employs several former government and law enforcement infosec officials, including criminal-behavior experts, on its team. But not every organization has this type of staffing or resources to ward off these types of attacks where the would-be intruders are trying to break in from every access point.

"They are resourceful, they're smart, they're fast," Mandiant CTO Charles Carmakal told The Register.

"One of the challenges that defenders have is: it's not the shortage of network alerts," he added. "You know when Scattered Spider is targeting a company because people are calling the help desk and trying to reset passwords. They are running tools across an enterprise that will fire off on antivirus signatures and EDR alerts, tons and tons and tons of alerts. They operate at a speed that can be hard to defend against."

In this case, sometimes the best option — albeit a painful one — is for the organization to break its own IT systems before the criminals do.

This appears to have been the case with British retailer Co-op, which pulled its systems offline before Scattered Spider could encrypt its files and move throughout its networks.


Original Submission

posted by janrinok on Monday May 26, @11:36AM   Printer-friendly

Agent mode arrives, for better or worse:

Microsoft's GitHub Copilot can now act as a coding agent, capable of implementing tasks or addressing posted issues within the code hosting site.

What distinguishes a coding agent from an AI assistant is that it can iterate over its own output, potentially correcting its own errors, and can infer subtasks that were not explicitly specified in order to complete a prompted task.

But wait, further clarification is required. Having evidently inherited Microsoft's penchant for confusing names, the GitHub Copilot coding agent is not the same thing as the GitHub Copilot agent mode, which debuted in February.

Agent mode refers to synchronous (real-time) collaboration. You set a goal and the AI helps you get there. The coding agent is for asynchronous work – you delegate tasks, the coding agent then sets off on its own to do them while you do other things.

"Embedded directly into GitHub, the agent starts its work when you assign a GitHub issue to Copilot," said Thomas Dohmke, GitHub CEO, in a blog post provided to The Register ahead of the feature launch, to coincide with this year's Microsoft Build conference.

"The agent spins up a secure and fully customizable development environment powered by GitHub Actions. As the agent works, it pushes commits to a draft pull request, and you can track it every step of the way through the agent session logs."

Basically, once given a command, the agent uses GitHub Actions to boot a virtual machine. It then clones the relevant repository, sets up the development environment, scours the codebase, and pushes changes to a draft pull request. And this process can be traced in session log records.

The agent is available to Copilot Enterprise and Copilot Pro+ users. Dohmke insists that it does not weaken organizational security posture, because existing policies still apply and agent-authored pull requests still require human approval before they're merged.

By default, the agent can only push code to branches it has created. As a further backstop, the developer who asked the agent to open a pull request is not allowed to approve it. The agent's internet access is limited to predefined trusted destinations and GitHub Actions workflows require approval before they will run.

Because it operates within GitHub, the agent can be invoked to automate various development-related tasks via github.com, in GitHub Mobile, or through the GitHub CLI.

But the agent can also be configured to work with MCP (model context protocol) servers in order to connect to external resources. And it can respond to input beyond text, thanks to vision capabilities in the underlying AI models. So it can interpret screenshots of desired design patterns, for example.
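As a rough illustration, an MCP configuration for the agent might look like the JSON below. This follows the common `mcpServers` convention used by MCP clients; the server name, command, arguments, and environment variable here are all invented, and the exact schema GitHub expects may differ:

```json
{
  "mcpServers": {
    "docs-search": {
      "command": "npx",
      "args": ["-y", "docs-search-mcp-server"],
      "env": {
        "DOCS_API_KEY": "COPILOT_MCP_DOCS_API_KEY"
      }
    }
  }
}
```

The general pattern is that each named server entry tells the agent how to launch an external tool process it can then call during a task.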

"With its autonomous coding agent, GitHub is looking to shift Copilot from an in-editor assistant to a genuine collaborator in the development process," said Kate Holterhoff, senior analyst at RedMonk, in a statement provided by GitHub. "This evolution aims to enable teams to delegate implementation tasks and thereby achieve a more efficient allocation of developer resources across the software lifecycle."

GitHub claims it has used the Copilot code agent in its own operations to handle maintenance tasks, freeing its billing team to pursue features that add value. The biz also says the Copilot agent reduced the amount of time required to get engineers up to speed with its AI models.

GitHub found various people to say nice things about the Copilot agent. We'll leave it at that.


Original Submission

posted by janrinok on Monday May 26, @06:48AM   Printer-friendly

Positive proof-of-concept experiments may lead to the world's first treatment for celiac disease:

An investigational treatment for celiac disease effectively controls the condition—at least in an animal model—in a first-of-its-kind therapeutic for a condition that affects approximately 70 million people worldwide.

Currently, there is no treatment for celiac disease, which is caused by dietary exposure to gluten, a protein found in wheat, barley and rye. In people with the disease, these grains trigger intestinal inflammation, producing severe symptoms such as bloating.

Indeed, celiac disease is the bane of bread and pasta lovers around the world, and despite fastidiously maintaining a gluten-free eating plan, the disease can still lead to social isolation and poor nutrition, gastroenterologists say. It is a serious autoimmune disorder that, when left unaddressed, can cause malnutrition, bone loss, anemia, and elevated cancer risk, primarily intestinal lymphoma.

Now, an international team of scientists led by researchers in Switzerland hope to change the fate of celiac patients for the better. A series of innovative experiments has produced "a cell soothing" technique that targets regulatory T cells, the immune system components commonly known as Tregs.

The cell-based technique borrows from a form of cancer therapy and underlies a unique discovery that may eventually lead to a new treatment strategy, data in the study suggests.

"Celiac disease is a chronic inflammatory disorder of the small intestine with a global prevalence of about 1%," writes Dr. Raphaël Porret, lead author of the research published in Science Translational Medicine.

"The condition is caused by a maladapted immune response to cereal gluten proteins, which causes tissue damage in the gut and the formation of autoantibodies to the enzyme transglutaminase," continued Porret, a researcher in the department of Immunology and Allergy at the University of Lausanne.

Working with colleagues from the University of California, San Francisco, as well as at the Norwegian Celiac Disease Research Center at the University of Oslo, Porret and colleagues have advanced a novel concept. They theorize that a form of cell therapy, based on a breakthrough form of cancer treatment, might also work against celiac disease.

In an animal model, Porret and his global team of researchers have tested the equivalent of CAR T cell therapy against celiac disease. The team acknowledged that the "Treg contribution to the natural history of celiac disease is still controversial," but the researchers also demonstrated that at least in their animal model of human celiac disease, the treatment worked.

CAR T cell therapy is a type of cancer immunotherapy in which a patient's T cells are genetically modified in the laboratory to recognize and kill cancer cells. The cells are then infused back into the patient to provide a round-the-clock form of cancer treatment. In the case of celiac disease, regulatory T cells are modified to suppress the effector T cells that become hyperactive in the presence of gluten.

To make this work, the researchers had to know every aspect of the immune response against gluten. "Celiac disease, a gluten-sensitive enteropathy, demonstrates a strong human leukocyte antigen association, with more than 90% of patients carrying the HLA-DQ2.5 allotype," Porret wrote, describing the human leukocyte antigen profile of most patients with celiac disease.

As a novel treatment against the condition, the team engineered effector T cells and regulatory T cells and successfully tested them in their animal model. Scientists infused these cells together into mice and evaluated the regulatory T cells' ability to quiet the effector T cells response to gluten. They observed that oral exposure to gluten caused the effector cells to flock to the intestines when they were infused without the engineered Tregs.

However, the engineered regulatory T cells prevented this gut migration and suppressed the effector T cells' proliferation in response to gluten. Although this is a first step, the promising early results indicate that cell therapy approaches could one day lead to a long-sought treatment for this debilitating intestinal disorder.

"Our study paves the way for a better understanding of key antigen-activating steps after dietary antigen [gluten] uptake," Porret concluded. "Although further work is needed to assess Treg efficacy in the setting of an active disease, our study provides proof-of-concept evidence that engineered Tregs hold therapeutic potential for restoring gluten tolerance in patients with celiac disease."

Journal Reference: Raphaël Porret et al, T cell receptor precision editing of regulatory T cells for celiac disease, Science Translational Medicine (2025). DOI: 10.1126/scitranslmed.adr8941


Original Submission

posted by mrpg on Monday May 26, @02:00AM   Printer-friendly
from the 50-is-more-than-1.21 dept.

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

AI's integration into our lives is the most significant shift in online life in more than a decade. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, coding, or to create images and videos. But what's powering all of that?

[...] Given the direction AI is headed—more personalized, able to reason and solve complex problems on our behalf, and everywhere we look—it's likely that our AI footprint today is the smallest it will ever be. According to new projections published by Lawrence Berkeley National Laboratory in December, by 2028 more than half of the electricity going to data centers will be used for AI. At that point, AI alone could consume as much electricity annually as 22% of all US households.

[...] Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it's estimated that training OpenAI's GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days. It's only after this training, when consumers or customers "inference" the AI models to get answers or generate outputs, that model makers hope to recoup their massive costs and eventually turn a profit.
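The San Francisco comparison is easy to sanity-check. Assuming an average citywide electric demand of roughly 0.7 GW (a hypothetical round figure; the article does not give one), 50 GWh works out to about three days of supply:

```python
# Back-of-envelope check of the training-energy comparison above.
# Assumption: San Francisco's average electric demand is roughly 0.7 GW
# (a hypothetical round figure, not taken from the article).
TRAINING_ENERGY_GWH = 50.0
SF_AVG_DEMAND_GW = 0.7

hours_of_city_power = TRAINING_ENERGY_GWH / SF_AVG_DEMAND_GW  # energy / power
days = hours_of_city_power / 24
print(f"{days:.1f} days")  # ~3 days, matching the claim
```

The arithmetic only confirms internal consistency: the "three days" figure is about what an average demand in the high hundreds of megawatts would imply.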

"For any company to make money out of a model—that only happens on inference," says Esha Choukse, a researcher at Microsoft Azure who has studied how to make AI inference more efficient.

As conversations with experts and AI companies made clear, inference, not training, represents an increasing majority of AI's energy demands and will continue to do so in the near future. It's now estimated that 80–90% of computing power for AI is used for inference.


Original Submission
