SoylentNews is people



posted by hubie on Tuesday May 27 2025, @09:04PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Starfish aims to roll out a battery-free system which connects to multiple parts of the brain simultaneously.

Starfish Neuroscience, a startup co-founded by Valve CEO Gabe Newell, has published an article revealing the first details of its brain-computer interface (BCI) chip.

The firm proposes a “new class of minimally-invasive, distributed neural interfaces that enable simultaneous access to multiple brain regions.” Moreover, a Starfish BCI could be the first fully wireless, battery-free implant available, if all goes to plan. According to its blog, the startup’s first chips are expected to arrive “in late 2025.” Perhaps the relationship with Newell means related tech will eventually find its way into gaming headsets and controllers.

In its report on the Starfish BCI news, The Verge notes that Newell’s fascination with BCIs began over 10 years ago, and that Valve once considered adding earlobe monitors to its VR headset products. As recently as 2019, Valve also publicly explored BCIs for gaming. Later the same year, Newell incorporated Starfish Neuroscience, and we are now seeing the first fruits as it emerges from stealth.

In its new blog post, Starfish says its BCI stands to do well thanks to two key features: its minimal size and its lack of built-in battery power. In regular use, the Starfish processor will consume just 1.1 mW, it says. That contrasts with the Neuralink N1, which uses around 6 mW.

[...] The startup also thinks that its smaller, lower power BCI implant(s) may work best connected to multiple parts of the brain simultaneously. For use in medical therapy, this multi-zone methodology could address human brain issues which affect several areas of the brain, like Parkinson’s disease.

Starfish isn’t so bold as to think it can go it alone with its new processor and BCI system. Rather, its blog floats the idea of collaborators on wireless power delivery and communication, and on custom implanted neural interfaces. It also admits “there is tons of work yet to be done here,” and is looking for employees, as well as partners, to boost its fortunes.


Original Submission

posted by hubie on Tuesday May 27 2025, @04:16PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Texas could become the next US state to lay down the law with social media platforms. A Texas bill that would ban social media use for anyone under 18 recently moved past the Senate committee and is due for a vote before the full Texas State Senate. The bill must clear both the Senate and the governor's desk before the state's legislative session ends on June 2, leaving roughly a week for approval.

Earlier this year, the bill passed the House committee stage and was later voted in favor of by the state's House of Representatives. If made into law, the bill would force social media platforms to verify the age of anyone setting up an account, much like how Texas passed legislation requiring websites hosting porn to implement an age verification system. On top of that, Texas' social media ban proposes to let parents delete their child's social media account, allowing the platforms 10 days to comply with the request or face a fine from the state's attorney general.

Texas isn't the only jurisdiction interested in restricting social media access. Last year, Florida's governor, Ron DeSantis, signed into law a bill that outright bans anyone under 14 from using social media and requires 14- and 15-year-olds to get parental consent to make an account or use an existing account. Notably, Texas' proposed law is much stricter than that.

On a larger scale, the US Senate introduced a bill to ban social media platforms for anyone under 13 in April 2024. After being stuck in the committee stage, Senators Brian Schatz (D-Hawaii) and Ted Cruz (R-Texas) recently made comments that signal a potential second attempt at getting this passed.


Original Submission

posted by janrinok on Tuesday May 27 2025, @11:31AM   Printer-friendly

Research Reveals 'Forever Chemicals' Present in Beer

Research reveals 'forever chemicals' present in beer:

Infamous for their environmental persistence and potential links to health conditions, per- and polyfluoroalkyl substances (PFAS), often called forever chemicals, are being discovered in unexpected places, including beer. Researchers publishing in ACS' Environmental Science & Technology tested beers brewed in different areas around the U.S. for these substances. They found that beers produced in parts of the country with known PFAS-contaminated water sources showed the highest levels of forever chemicals.

"As an occasional beer drinker myself, I wondered whether PFAS in water supplies was making its way into our pints," says research lead Jennifer Hoponick Redmon. "I hope these findings inspire water treatment strategies and policies that help reduce the likelihood of PFAS in future pours."

PFAS are human-made chemicals produced for their water-, oil- and stain-repellent properties. They have been found in surface water, groundwater and municipal water supplies across the U.S. and the world. Although breweries typically have water filtration and treatment systems, they are not designed to remove PFAS. By modifying a U.S. Environmental Protection Agency (EPA) testing method for analyzing levels of PFAS in drinking water, Hoponick Redmon and colleagues tested 23 beers. The test subjects were produced by U.S. brewers in areas with documented water system contamination, plus popular domestic and international beers from larger companies with unknown water sources.

The researchers found a strong correlation between PFAS concentrations in municipal drinking water and levels in locally brewed beer — a phenomenon that Hoponick Redmon and colleagues say has not yet been studied in U.S. retail beer. They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water. Notably, the team found that beers brewed near the Cape Fear River Basin in North Carolina, an area with known PFAS pollution, had the highest levels and most diverse mix of forever chemicals, including PFOS and PFOA.

This work shows that PFAS contamination at one source can spread into other products, and the researchers call for greater awareness among brewers, consumers and regulators to limit overall PFAS exposure. These results also highlight the possible need for water treatment upgrades at brewing facilities as PFAS regulations in drinking water change or updates to municipal water system treatment are implemented.

Journal Reference: Hold My Beer: The Linkage between Municipal Water and Brewing Location on PFAS in Popular Beverages, Jennifer Hoponick Redmon, Nicole M. DeLuca, Evan Thorp, et al., Environmental Science & Technology 2025 59 (17), 8368-8379 DOI: 10.1021/acs.est.4c11265 [open access]

95% of a sample of cans of USA beer contaminated with PFAS

"We purchased 23 canned beer types in North Carolina stores in August 2021, with most of the beer purchases having at least 5 different cans of the same beer. Some beers are brewed in multiple locations; thus we confirmed brewing location for the purchased cans based on the brewery can code."

"They found PFAS in 95% of the beers they tested. These include perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA), two forever chemicals with recently established EPA limits in drinking water."

"The most detected PFAS in beer aliquots were PFSAs–PFOS, PFBS, and PFHxS [84% (n = 63), 53% (n = 40), and 47% (n = 35), respectively]"

So if you literally drink beer like water, "While there are currently no standards for PFAS levels in beer, these drinking water standards can provide insight, as beers are intended for direct consumption similar to drinking water. We found that some of the beers exceeded the health standards."
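For scale, the EPA's recently finalized drinking-water limits for PFOS and PFOA are 4.0 ng/L each. A minimal sketch of the comparison the authors describe, using invented beer concentrations for illustration (not values from the paper):

```python
# Hedged sketch: compare hypothetical measured PFAS concentrations (ng/L)
# in a beer against the EPA's 2024 drinking-water MCLs for PFOS and PFOA.
# The "sample" values below are made up for illustration only.

EPA_MCL_NG_PER_L = {"PFOS": 4.0, "PFOA": 4.0}  # EPA maximum contaminant levels

def exceedances(measured: dict) -> dict:
    """Return each analyte's concentration as a multiple of its MCL."""
    return {a: measured[a] / mcl for a, mcl in EPA_MCL_NG_PER_L.items() if a in measured}

sample = {"PFOS": 6.0, "PFOA": 3.2}  # hypothetical beer, ng/L
for analyte, ratio in exceedances(sample).items():
    status = "exceeds" if ratio > 1 else "below"
    print(f"{analyte}: {ratio:.2f}x MCL ({status} the drinking-water standard)")
```

As the study notes, there is no PFAS standard for beer itself; the drinking-water MCLs are used here only as the reference point the researchers themselves invoke.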

pop sci coverage: https://phys.org/news/2025-05-pfas-beers-highest-contaminated.html
journal article: https://pubs.acs.org/doi/10.1021/acs.est.4c11265


Original Submission #1 | Original Submission #2

posted by janrinok on Tuesday May 27 2025, @06:43AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

By combining information from many large datasets, MIT researchers have identified several new potential targets for treating or preventing Alzheimer’s disease.

The study revealed genes and cellular pathways that haven’t been linked to Alzheimer’s before, including one involved in DNA repair. Identifying new drug targets is critical because many of the Alzheimer’s drugs that have been developed to this point haven’t been as successful as hoped.

Working with researchers at Harvard Medical School, the team used data from humans and fruit flies to identify cellular pathways linked to neurodegeneration. This allowed them to identify additional pathways that may be contributing to the development of Alzheimer’s.

“All the evidence that we have indicates that there are many different pathways involved in the progression of Alzheimer’s. It is multifactorial, and that may be why it’s been so hard to develop effective drugs,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We will need some kind of combination of treatments that hit different parts of this disease.”

Matthew Leventhal PhD ’25 is the lead author of the paper, which appears today in Nature Communications.

Over the past few decades, many studies have suggested that Alzheimer’s disease is caused by the buildup of amyloid plaques in the brain, which triggers a cascade of events that leads to neurodegeneration.

A handful of drugs have been developed to block or break down these plaques, but these drugs usually do not have a dramatic effect on disease progression. In hopes of identifying new drug targets, many scientists are now working on uncovering other mechanisms that might contribute to the development of Alzheimer’s.

“One possibility is that maybe there’s more than one cause of Alzheimer’s, and that even in a single person, there could be multiple contributing factors,” Fraenkel says. “So, even if the amyloid hypothesis is correct — and there are some people who don’t think it is — you need to know what those other factors are. And then if you can hit all the causes of the disease, you have a better chance of blocking and maybe even reversing some losses.”

To try to identify some of those other factors, Fraenkel’s lab teamed up with Mel Feany, a professor of pathology at Harvard Medical School and a geneticist specializing in fruit fly genetics.

Using fruit flies as a model, Feany and others in her lab did a screen in which they knocked out nearly every conserved gene expressed in fly neurons. Then, they measured whether each of these gene knockdowns had any effect on the age at which the flies develop neurodegeneration. This allowed them to identify about 200 genes that accelerate neurodegeneration.

Some of these were already linked to neurodegeneration, including genes for the amyloid precursor protein and for proteins called presenilins, which play a role in the formation of amyloid proteins.

The researchers then analyzed this data using network algorithms that Fraenkel’s lab has been developing over the past several years. These are algorithms that can identify connections between genes that may be involved in the same cellular pathways and functions.

In this case, the aim was to try to link the genes identified in the fruit fly screen with specific processes and cellular pathways that might contribute to neurodegeneration. To do that, the researchers combined the fruit fly data with several other datasets, including genomic data from postmortem tissue of Alzheimer’s patients.
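The linking idea can be illustrated with a toy example. This is not the study's actual method (the Fraenkel lab's network-optimization algorithms operate over large interactomes and multiple datasets); it is only a minimal sketch, with an invented protein-interaction graph, of how intermediate genes that connect screen hits become pathway candidates:

```python
# Toy sketch of network linking: connect genes flagged by a screen through a
# protein-interaction graph; nodes on the connecting path that were NOT hits
# become candidate pathway members. The interactome edges here are invented.
from collections import deque

edges = [("APP", "PSEN1"), ("PSEN1", "NOTCH1"), ("NOTCH1", "CSNK2A1"),
         ("CSNK2A1", "XRCC5"), ("MEPCE", "HNRNPA2B1"), ("HNRNPA2B1", "MAPT")]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_path(src, dst):
    """Breadth-first search; returns the node list from src to dst, or None."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [node]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nb in graph.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                prev[nb] = node
                queue.append(nb)
    return None

hits = ("APP", "CSNK2A1")  # pretend these were flagged by the fly screen
path = shortest_path(*hits)
intermediates = [g for g in path if g not in hits]
print("connecting path:", path)
print("candidate intermediates:", intermediates)
```

In the real analysis the "candidates" are then cross-checked against human datasets (aging expression, eQTL), which is what surfaced pathways like RNA modification and DNA repair.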

The first stage of their analysis revealed that many of the genes identified in the fruit fly study also decline as humans age, suggesting that they may be involved in neurodegeneration in humans.

In the next phase of their study, the researchers incorporated additional data relevant to Alzheimer’s disease, including eQTL (expression quantitative trait locus) data — a measure of how different gene variants affect the expression levels of certain proteins.

Using their network optimization algorithms on this data, the researchers identified pathways that link genes to their potential role in Alzheimer’s development. The team chose two of those pathways to focus on in the new study.

The first is a pathway, not previously linked to Alzheimer’s disease, related to RNA modification. The network suggested that when either of two genes in this pathway — MEPCE or HNRNPA2B1 — is missing, neurons become more vulnerable to the Tau tangles that form in the brains of Alzheimer’s patients. The researchers confirmed this effect by knocking down those genes in studies of fruit flies and in human neurons derived from induced pluripotent stem cells (IPSCs).

The second pathway reported in this study is involved in DNA damage repair. This network includes two genes called NOTCH1 and CSNK2A1, which have been linked to Alzheimer’s before, but not in the context of DNA repair. Both genes are best known for their roles in regulating cell growth.

In this study, the researchers found evidence that when these genes are missing, DNA damage builds up in cells, through two different DNA-damaging pathways. Buildup of unrepaired DNA has previously been shown to lead to neurodegeneration.

Now that these targets have been identified, the researchers hope to collaborate with other labs to help explore whether drugs that target them could improve neuron health. Fraenkel and other researchers are working on using IPSCs from Alzheimer’s patients to generate neurons that could be used to evaluate such drugs.

“The search for Alzheimer’s drugs will get dramatically accelerated when there are very good, robust experimental systems,” he says. “We’re coming to a point where a couple of really innovative systems are coming together. One is better experimental models based on IPSCs, and the other one is computational models that allow us to integrate huge amounts of data. When those two mature at the same time, which is what we’re about to see, then I think we’ll have some breakthroughs.”


Original Submission

posted by hubie on Tuesday May 27 2025, @01:53AM   Printer-friendly
from the of-(mis)direction dept.

https://techxplore.com/news/2025-05-google-ads-ai-chatgpt.html

Google said Wednesday it is beginning to weave advertisements into its new AI Mode for online search, a strategic move to counter the challenge posed by ChatGPT as the primary source for online answers.

[...] "The future of advertising fueled by AI isn't coming—it's already here," stated Vidhya Srinivasan, Google's vice president of Ads & Commerce.

"We're reimagining the future of ads and shopping: Ads that don't interrupt, but help customers discover a product or service."

Will this make Google's so-called AI summaries better?
Will you start or continue ignoring them?
Are Google searches your preferred destination when you want to buy something?


Original Submission

posted by hubie on Monday May 26 2025, @09:07PM   Printer-friendly
from the fire-up-that-amateur-radio-license-for-those-HF-QSOs dept.

The Sun is Producing Strong Solar Flares, Creating Blackouts. What to Know

The sun is producing strong solar flares, creating blackouts. What to know:

A recent period of strong solar flares is expected to gradually decline over the coming weeks and months, scientists say, along with the potential for brief communication blackouts as the sun's solar cycle begins to fade.

The most powerful eruption of 2025 so far was observed last week by NASA's Solar Dynamics Observatory and the U.S. National Oceanic and Atmospheric Administration (NOAA).

The flare, classified as an X2.7, caused a 10-minute period of "degraded communications" for high-frequency radio systems in the Middle East, according to NOAA's Space Weather Prediction Center.

"We are at solar maximum, so there can be periods of more activity," a spokesperson for the Space Weather Prediction Center told Global News in an email.

The spokesperson added that the active region from which last week's flare emanated, however, "has weakened magnetically, and even though it remains capable of producing a notable event, it seems less likely at this time."

[...] The 10-minute blackout in the Middle East occurred because that part of the Earth was facing the sun at the time.

However, because the active region was still somewhat off to the side, a related coronal mass ejection — an eruption of plasma and magnetic field from the sun's corona — did not impact Earth.

Taylor Cameron, a space weather forecaster at the Canadian Hazards Information Service, told Global News it's difficult to predict specifically when a solar flare can erupt and which part of Earth it can affect.

The sun is currently at the peak of its 11-year solar cycle, known as solar maximum.

Although activity is generally declining, the Space Weather Prediction Center spokesperson told Global News that "sunspot activity and solar event expectations remain elevated this year and perhaps even into 2026."

[...] Cameron said solar flares only impact high-frequency radio communications, which can include ham radios, shortwave broadcasting, aviation air-to-ground communications and over-the-horizon radar systems. Other communication networks, like internet, 5G and cellular service, aren't affected.

The stronger a flare is, Cameron added, the more severe and longer a blackout or disruption can be.

To date, the most powerful flare of the current solar cycle was an X9.0 observed last October. That was strong enough to produce faint northern lights across parts of North America, which can occur during solar storms.

Another solar storm last spring produced stronger northern lights over much of Canada.

The Space Weather Prediction Center has reported brief radio blackouts due to multiple X-class solar flares recorded over the past several months.

See also:
    • R3 flare activity from Region 4087
    • Two X Class Solar Flares - The Sun Awakens
    • M Class Solar Flare, Filament Eruption, US Alert

Are There More Solar Flares Than Expected During This Solar Cycle?

Solar Cycle 25 is approaching its peak, but how does it measure up to the previous Solar Cycle 24?:

Like the number of sunspots, the occurrence of solar flares follows the approximately 11-year solar cycle.

But as the current Solar Cycle 25 approaches its peak, how are the number of solar flares stacking up against the previous, smaller Solar Cycle 24?

Due to a change in flare calibration levels from 2020, you'll find two answers to this question online — but only one is correct.

The sun follows an 11-year solar cycle of increasing and decreasing activity. The solar cycle is typically measured by the number of sunspots visible on the sun, with records dating back over 270 years. Most solar flares originate from sunspots, so with more sunspots — you'll get more flares.

Solar flares are categorized into flare classes, classified by the magnitude of soft X-rays observed in a narrow wavelength range of 0.1-0.8 nm. The flare classes are C-class, M-class and X-class, each 10 times stronger than the previous. (Flare levels are then sub-divided by a number, e.g. M2, X1, etc). Flares of these categories (except the very largest of the X-class events), tend to follow the solar cycle closely.
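The scheme above maps directly from peak X-ray flux to a flare class. A minimal sketch of that mapping (including the A and B classes that sit below C on the same logarithmic scale, though the article lists only C, M and X):

```python
# Flare classification by peak soft X-ray flux (W/m^2, 0.1-0.8 nm band).
# Each class threshold is 10x the previous; the numeric sub-level is the
# flux divided by the class threshold (e.g. 2.7e-4 W/m^2 -> X2.7).
FLARE_CLASSES = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]

def classify(peak_flux: float) -> str:
    for letter, threshold in FLARE_CLASSES:
        if peak_flux >= threshold:
            return f"{letter}{peak_flux / threshold:.1f}"
    return "sub-A"

print(classify(2.7e-4))  # the X2.7 flare mentioned above
print(classify(9.0e-4))  # last October's X9.0
```

Note that the X class is open-ended: there is no class above X, so very large flares simply get large sub-levels (X9, X12, ...).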

In terms of sunspot numbers, Solar Cycle 25 (our current cycle) has exceeded the sunspot levels of Solar Cycle 24 (which peaked in 2014). With higher sunspot numbers, we'd also expect higher flare counts. This is the case, but the difference is far from what some would have you believe.

How do solar flares compare between Solar Cycles 24 and 25? This seems like a simple enough question, but is muddied by a recalibration of solar flare levels in 2020 from the National Oceanic and Atmospheric Administration (NOAA).

Solar flare X-ray levels have been measured since 1974. X-rays do not penetrate Earth's atmosphere, and thus can only be measured by detectors on satellites in Earth orbit. For 50 years, these solar flare detectors have been placed on NOAA's GOES satellites. As technology improves, and old technology decays, newer detectors are launched on newer GOES satellites, to keep the continuous observation of solar flares going. GOES-18 (the 18th satellite in the sequence) is the current satellite responsible for primary X-ray observations, having launched in 2022.

Because flare levels have been measured (and their classes defined) by detectors across multiple satellites/instruments, corrections are sometimes needed to account for slight differences in calibration from one detector to the next.

From 2010-2020, flare levels were defined by measurements from GOES-14 and GOES-15. This period covered the solar maximum period of Solar Cycle 24, up to the end of that cycle. However, upon the launch of these two satellites, a calibration discrepancy was discovered between GOES-14/15 and all prior GOES X-ray detectors. To fix this, science data from 1974-2010 (from GOES-1 to GOES-13 satellites) were all readjusted to match the new calibration, which was believed to be correct at the time. A result of this was that the threshold for each flare class increased by 42%, meaning an individual solar flare in 2010 needed to be 42% larger than a flare from 2009, to be given the same X-class level.

However, and here comes the twist: following the switch to GOES-16 data on a new detector, it was discovered that the original calibration (from 1974-2010) had been correct all along, and the 2010-2020 calibration was the incorrect one. This meant that in 2020, all prior data (from 1974-2020) were again recalibrated to their previous correct levels, lowering the thresholds of the different flare classes back down. With a lower flare threshold, strong C-class flares (C7+) became M-class events, and strong M-class flares (M7+) became X-class flares. An X-class solar flare was therefore far easier to achieve in 2021 than it was in 2019. This 2020 recalibration therefore increased the number of higher-class flares in Solar Cycle 24 compared with what was initially reported.
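The arithmetic here can be sketched from the article's own numbers: a 42% threshold change corresponds to a scale factor of roughly 0.7 (1/0.7 ≈ 1.43), which is consistent with C7+ flares becoming M-class and M7+ flares becoming X-class. The 0.7 factor below is inferred from those figures, not quoted from NOAA:

```python
# Hedged sketch of the 2020 recalibration arithmetic: operational fluxes
# from the 2010-2020 era were low by a scale factor of about 0.7 (inferred
# from the article's "42%"), so restoring the original calibration divides
# the reported flux by 0.7, pushing strong C and M flares up a class.
SCALE = 0.7

def recalibrated(old_flux: float) -> float:
    """Convert an old (2010-2020 era) operational flux to the restored scale."""
    return old_flux / SCALE

def flare_class(flux: float) -> str:
    for letter, threshold in [("X", 1e-4), ("M", 1e-5), ("C", 1e-6)]:
        if flux >= threshold:
            return f"{letter}{flux / threshold:.1f}"
    return "<C1.0"

for old in (7.7e-6, 8.4e-5):  # a strong C and a strong M under the old scale
    print(f"old {flare_class(old)} -> recalibrated {flare_class(recalibrated(old))}")
```

Under this factor a C7.7 becomes an M1.1 and an M8.4 becomes an X1.2, matching the article's statement that C7+ and M7+ events crossed into the next class.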

Following the 2020 recalibration of solar flare levels, NOAA re-released its historic scientific flare datasets with the correct levels. However, the archived operations data, which list solar flare levels as they were initially reported at the time, were not recalibrated. A consequence is that flare lists compiled and analyzed by third parties can use either the recalibrated science data or the un-recalibrated operations data when comparing solar flare levels between solar cycles. The former comparison yields correct results, while the latter compares current flare levels from Cycle 25 with severely underestimated flare levels from previous cycles, producing scientifically incorrect comparisons. Let's compare some data!

[...] This graph shows the correct comparison of solar flares between Cycles 24 and 25. As you can see, although the number of Cycle 25 flares is still ahead of Cycle 24 at each flare level, the discrepancy is far less than that shown in the previous graph. The operations data undercount the number of Cycle 24 flares by nearly half, a significant difference. In reality, Cycle 25's X-class count is only modestly ahead of Cycle 24's total, and Cycle 25 had actually produced fewer X-class flares until the recent solar activity from famous active regions AR 13663 and AR 13664. This graph also shows that although May 2024 saw a lot of X-class activity from these active regions, this level of activity is not unprecedented — Solar Cycle 24 experienced a similar leap in flares towards the end of 2015.

So remember, if you see the comparison of Solar Cycle flare levels online, be sure to check if they're using the historic operations data (incorrect), or recalibrated science data (correct).

See also:
    • Solar Cycle 25 - NASA Science
    • Solar cycle - Wikipedia


Original Submission #1 | Original Submission #2

posted by hubie on Monday May 26 2025, @04:21PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The call came into the help desk at a large US retailer. An employee had been locked out of their corporate accounts.

But the caller wasn't actually a company employee. He was a Scattered Spider criminal trying to break into the retailer's systems - and he was really good, according to Jon DiMaggio, a former NSA analyst who now works as a chief security strategist at Analyst1.

Scattered Spider is a cyber gang linked to SIM swapping, fake IT calls, and ransomware crews like ALPHV. They've breached big names like MGM and Caesars, and despite arrests, keep evolving. They're tracked by Mandiant as UNC3944, also known as Octo Tempest.

DiMaggio listened in on this call, which was one of the group's recent attempts to infiltrate American retail organizations after hitting multiple UK-based shops. He won't name the company, other than to say it's a "big US retail organization." This attempt did not end with a successful ransomware infection or stolen data.

"But I got to listen to the phone calls, and those guys are good," DiMaggio told The Register. "It sounded legit, and they had information to make them sound like real employees."

Scattered Spider gave the help desk the employee's ID and email address. DiMaggio said he suspected the caller first social-engineered the employee to obtain this data, "but that is an assumption."

"The caller had all of their information: employee ID numbers, when they started working there, where they worked and resided," DiMaggio said. "They were calling from a number that was in the right demographic, they were well-spoken in English, they looked and felt real. They knew a lot about the company, so it's very difficult to flag these things. When these guys do it, they're good at what they do."

Luckily, the target was a big company with a big security budget, and it employs several former government and law enforcement infosec officials, including criminal-behavior experts, on its team. But not every organization has this type of staffing or resources to ward off these types of attacks where the would-be intruders are trying to break in from every access point.

"They are resourceful, they're smart, they're fast," Mandiant CTO Charles Carmakal told The Register.

"One of the challenges that defenders have is: it's not the shortage of network alerts," he added. "You know when Scattered Spider is targeting a company because people are calling the help desk and trying to reset passwords. They are running tools across an enterprise that will fire off on antivirus signatures and EDR alerts, tons and tons and tons of alerts. They operate at a speed that can be hard to defend against."

In this case, sometimes the best option — albeit a painful one — is for the organization to break its own IT systems before the criminals do.

This appears to have been the case with British retailer Co-op, which pulled its systems offline before Scattered Spider could encrypt its files and move throughout its networks.


Original Submission

posted by janrinok on Monday May 26 2025, @11:36AM   Printer-friendly

Agent mode arrives, for better or worse:

Microsoft's GitHub Copilot can now act as a coding agent, capable of implementing tasks or addressing posted issues within the code hosting site.

What distinguishes a coding agent from an AI assistant is that it can iterate over its own output, possibly correcting errors, and can infer tasks that have not been specified to complete a prompted task.

But wait, further clarification is required. Having evidently inherited Microsoft's penchant for confusing names, the GitHub Copilot coding agent is not the same thing as the GitHub Copilot agent mode, which debuted in February.

Agent mode refers to synchronous (real-time) collaboration. You set a goal and the AI helps you get there. The coding agent is for asynchronous work – you delegate tasks, the coding agent then sets off on its own to do them while you do other things.

"Embedded directly into GitHub, the agent starts its work when you assign a GitHub issue to Copilot," said Thomas Dohmke, GitHub CEO, in a blog post provided to The Register ahead of the feature launch, to coincide with this year's Microsoft Build conference.

"The agent spins up a secure and fully customizable development environment powered by GitHub Actions. As the agent works, it pushes commits to a draft pull request, and you can track it every step of the way through the agent session logs."

Basically, once given a command, the agent uses GitHub Actions to boot a virtual machine. It then clones the relevant repository, sets up the development environment, scours the codebase, and pushes changes to a draft pull request. And this process can be traced in session log records.
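GitHub's documentation describes customizing that Actions-powered environment with a special setup workflow. As a hedged sketch (the file name and required job name follow GitHub's documentation at launch and may change; the Node steps are an assumed example project, not from the article), a repository might pre-install dependencies for the agent like so:

```yaml
# .github/workflows/copilot-setup-steps.yml
# Runs before the Copilot coding agent starts work, so the cloned repo
# already has its toolchain and dependencies in place.
name: "Copilot Setup Steps"

on: workflow_dispatch  # also triggered automatically for agent sessions

jobs:
  copilot-setup-steps:          # GitHub looks for this exact job name
    runs-on: ubuntu-latest
    permissions:
      contents: read            # setup only needs to read the repository
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm ci             # pre-install dependencies for the agent
```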

The feature is available to Copilot Enterprise and Copilot Pro+ users. Dohmke insists that agents do not weaken organizational security posture because existing policies still apply and agent-authored pull requests still require human approval before they're merged.

By default, the agent can only push code to branches it has created. As a further backstop, the developer who asked the agent to open a pull request is not allowed to approve it. The agent's internet access is limited to predefined trusted destinations and GitHub Actions workflows require approval before they will run.

With GitHub as its jurisdiction, the Copilot agent can automate various development-related tasks via github.com, in GitHub Mobile, or through the GitHub CLI.

But the agent can also be configured to work with MCP (model context protocol) servers in order to connect to external resources. And it can respond to input beyond text, thanks to vision capabilities in the underlying AI models. So it can interpret screenshots of desired design patterns, for example.

"With its autonomous coding agent, GitHub is looking to shift Copilot from an in-editor assistant to a genuine collaborator in the development process," said Kate Holterhoff, senior analyst at RedMonk, in a statement provided by GitHub. "This evolution aims to enable teams to delegate implementation tasks and thereby achieve a more efficient allocation of developer resources across the software lifecycle."

GitHub claims it has used the Copilot code agent in its own operations to handle maintenance tasks, freeing its billing team to pursue features that add value. The biz also says the Copilot agent reduced the amount of time required to get engineers up to speed with its AI models.

GitHub found various people to say nice things about the Copilot agent. We'll leave it at that.


Original Submission

posted by janrinok on Monday May 26 2025, @06:48AM

Positive proof-of-concept experiments may lead to the world's first treatment for celiac disease:

An investigational treatment for celiac disease effectively controls the condition, at least in an animal model, making it a first-of-its-kind therapeutic for a condition that affects approximately 70 million people worldwide.

Currently, there is no treatment for celiac disease, which is caused by dietary exposure to gluten, a protein found in wheat, barley and rye. In people with the disease, these grains can produce severe intestinal symptoms, including inflammation and bloating.

Indeed, celiac disease is the bane of bread and pasta lovers around the world; even for patients who fastidiously maintain a gluten-free eating plan, the disease can still lead to social isolation and poor nutrition, gastroenterologists say. It is a serious autoimmune disorder that, when left unaddressed, can cause malnutrition, bone loss, anemia, and elevated cancer risk, primarily intestinal lymphoma.

Now, an international team of scientists led by researchers in Switzerland hopes to change the fate of celiac patients for the better. A series of innovative experiments has produced a "cell-soothing" technique that targets regulatory T cells, the immune system components commonly known as Tregs.

The cell-based technique borrows from a form of cancer therapy and underlies a unique discovery that may eventually lead to a new treatment strategy, data in the study suggests.

"Celiac disease is a chronic inflammatory disorder of the small intestine with a global prevalence of about 1%," writes Dr. Raphaël Porret, lead author of the research published in Science Translational Medicine.

"The condition is caused by a maladapted immune response to cereal gluten proteins, which causes tissue damage in the gut and the formation of autoantibodies to the enzyme transglutaminase," continued Porret, a researcher in the department of Immunology and Allergy at the University of Lausanne.

Working with colleagues from the University of California, San Francisco, as well as the Norwegian Celiac Disease Research Center at the University of Oslo, Porret and colleagues have advanced a novel concept. They theorize that a form of cell therapy, based on a breakthrough form of cancer treatment, might also work against celiac disease.

In an animal model, Porret and his global team of researchers have tested the equivalent of CAR T cell therapy against celiac disease. The team acknowledged that the "Treg contribution to the natural history of celiac disease is still controversial," but the researchers also demonstrated that at least in their animal model of human celiac disease, the treatment worked.

CAR T cell therapy is a type of cancer immunotherapy in which a patient's T cells are genetically modified in the laboratory to recognize and kill cancer cells. The cells are then infused back into the patient to provide a round-the-clock form of cancer treatment. In the case of celiac disease, the cells are instead modified to rein in the T cells that become hyperactive in the presence of gluten.

To make this work, the researchers had to know every aspect of the immune response against gluten. "Celiac disease, a gluten-sensitive enteropathy, demonstrates a strong human leukocyte antigen association, with more than 90% of patients carrying the HLA-DQ2.5 allotype," Porret wrote, describing the human leukocyte antigen profile of most patients with celiac disease.

As a novel treatment against the condition, the team engineered effector T cells and regulatory T cells and successfully tested them in their animal model. Scientists infused these cells together into mice and evaluated the regulatory T cells' ability to quiet the effector T cells' response to gluten. They observed that oral exposure to gluten caused the effector cells to flock to the intestines when they were infused without the engineered Tregs.

However, the engineered regulatory T cells prevented this gut migration and suppressed the effector T cells' proliferation in response to gluten. Although this is a first step, the promising early results indicate that cell therapy approaches could one day lead to a long-sought treatment for this debilitating intestinal disorder.

"Our study paves the way for a better understanding of key antigen-activating steps after dietary antigen [gluten] uptake," Porret concluded. "Although further work is needed to assess Treg efficacy in the setting of an active disease, our study provides proof-of-concept evidence that engineered Tregs hold therapeutic potential for restoring gluten tolerance in patients with celiac disease."

Journal Reference: Raphaël Porret et al, T cell receptor precision editing of regulatory T cells for celiac disease, Science Translational Medicine (2025). DOI: 10.1126/scitranslmed.adr8941


Original Submission

posted by mrpg on Monday May 26 2025, @02:00AM
from the 50-is-more-than-1.21 dept.

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

AI's integration into our lives is the most significant shift in online life in more than a decade. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, coding, or to create images and videos. But what's powering all of that?

[...] Given the direction AI is headed—more personalized, able to reason and solve complex problems on our behalf, and everywhere we look—it's likely that our AI footprint today is the smallest it will ever be. According to new projections published by Lawrence Berkeley National Laboratory in December, by 2028 more than half of the electricity going to data centers will be used for AI. At that point, AI alone could consume as much electricity annually as 22% of all US households.
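As a rough sanity check on that household comparison, the following back-of-the-envelope arithmetic uses assumed figures that are not from the article: roughly 131 million US households averaging about 10,500 kWh of electricity per year.

```python
# Back-of-the-envelope check of the "22% of all US households" comparison.
# Assumptions (not from the article): ~131 million US households,
# ~10,500 kWh average consumption per household per year.
US_HOUSEHOLDS = 131_000_000
KWH_PER_HOUSEHOLD_YEAR = 10_500
SHARE = 0.22

# Convert kWh to TWh by dividing by 1e9.
implied_twh = US_HOUSEHOLDS * SHARE * KWH_PER_HOUSEHOLD_YEAR / 1e9
print(f"22% of US households is roughly {implied_twh:.0f} TWh per year")
```

That works out to roughly 300 TWh per year, which gives a sense of the scale of the Lawrence Berkeley projection.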

[...] Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it's estimated that training OpenAI's GPT-4 cost more than $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days. It's only after this training, when consumers or customers "inference" the AI models to get answers or generate outputs, that model makers hope to recoup their massive costs and eventually turn a profit.
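The San Francisco comparison is easy to sanity-check. The city's annual consumption figure below is an assumption, not a number from the article:

```python
# Does 50 GWh really equal about three days of San Francisco's electricity?
# Assumption (not from the article): SF uses roughly 6 TWh per year.
SF_TWH_PER_YEAR = 6.0

gwh_per_day = SF_TWH_PER_YEAR * 1000 / 365   # TWh/year -> GWh/day
days_covered = 50 / gwh_per_day
print(f"~{gwh_per_day:.1f} GWh/day, so 50 GWh covers ~{days_covered:.1f} days")
```

Under that assumption the quoted figure lands almost exactly on three days.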

"For any company to make money out of a model—that only happens on inference," says Esha Choukse, a researcher at Microsoft Azure who has studied how to make AI inference more efficient.

As conversations with experts and AI companies made clear, inference, not training, represents an increasing majority of AI's energy demands and will continue to do so in the near future. It's now estimated that 80–90% of computing power for AI is used for inference.


Original Submission

posted by mrpg on Sunday May 25 2025, @09:11PM
from the where-in-the-space-is-planet-nine dept.

Evidence for 'Planet Nine' lurking on the fringes of the Solar System is building. So why can't astronomers spot it? - ABC News:

A huge unknown lurks in the far reaches of our Solar System — something massive enough to pull distant space rocks into extraordinarily long, thin loops around the Sun.

At least, this is what US astronomer Michael Brown believes.

In 2016, he and a colleague at the California Institute of Technology (Caltech) proposed something almost unfathomable: a huge planet, up to 10 times heftier than Earth, way out on the edge of our Solar System.

[...] Those who are convinced Planet Nine is out there are waiting for the new Vera Rubin Observatory to come online in Chile early next year.

The telescope has an 8.4-metre mirror and houses the largest digital camera ever built for astronomy.

"It's going to be doing something called the Legacy Survey of Space and Time, which is a massive survey — taking images of the sky every single night," Swinburne University of Technology astrophysicist Sara Webb says.

[...] "If Vera Rubin doesn't find it by reflected sunlight, the next best thing is to find it not as reflected sunlight, but by using radio telescopes," he says.

"They're not designed to look at little planets; they're designed to look at the whole sky at once. It'll take a while for the telescopes to be able to see that this planet has moved from one place to the other, so it'll be a couple of years of those surveys before we know it's there."


Original Submission

posted by janrinok on Sunday May 25 2025, @04:25PM
from the and-did-those-feet-in-ancient-time dept.

The Roman massacre that never happened according to a new study of an iconic archaeological site:

A new study by archaeologists at Bournemouth University (BU) has revealed that the individuals buried in a 'war-cemetery' at Maiden Castle Iron Age hillfort in Dorset, previously attributed to the Roman Conquest of Britain, did not die in a single dramatic event.

A re-analysis of the burials, including a new programme of radiocarbon dating, has revealed that, rather than dying in a single, catastrophic event, individuals fell in periods of lethal violence spanning multiple generations, spread across the late first century BC to the early first century AD. This is suggestive of episodic periods of bloodshed, possibly the result of localised turmoil, executions or dynastic infighting during the decades leading up to the Roman Conquest of Britain.

BU's Dr Martin Smith, Associate Professor in Forensic and Biological Anthropology, who analysed the bodies said: "The find of dozens of human skeletons displaying lethal weapon injuries was never in doubt, however, by undertaking a systematic programme of radiocarbon dating we have been able to establish that these individuals died over a period of decades, rather than a single terrible event".

The 'war-cemetery' of Maiden Castle Iron Age hillfort in Dorset is one of Britain's most famous archaeological discoveries. When it was excavated in 1936, many of the skeletons unearthed showed clear evidence of trauma to the head and upper body. The dig director at the time, Sir Mortimer Wheeler, suggested these were "the marks of battle", caused during a furious but ultimately futile defence of the hillfort against an all-conquering Roman legion. Wheeler's colourful account of an attack on the native hillfort and the massacre of its defenders by invading Romans was accepted as fact, becoming an iconic event in popular narratives of Britain's 'Island Story'.

Principal Academic in Prehistoric and Roman Archaeology at BU, and the study's Dig Director, Dr Miles Russell said: "Since the 1930s, the story of Britons fighting Romans at one of the largest hillforts in the country has become a fixture in historical literature. With the Second World War fast approaching, no one was really prepared to question the results. The tale of innocent men and women of the local Durotriges tribe being slaughtered by Rome is powerful and poignant. It features in countless articles, books and TV documentaries. It has become a defining moment in British history, marking the sudden and violent end of the Iron Age."

Dr Russell added: "The trouble is it doesn't appear to have actually happened. Unfortunately, the archaeological evidence now points to it being untrue. This was a case of Britons killing Britons, the dead being buried in a long-abandoned fortification. The Roman army committed many atrocities, but this does not appear to be one of them."

[...] The study has also raised further questions as to what may still lie undiscovered at Maiden Castle. Paul Cheetham commented that "Whilst Wheeler's excavation was excellent in itself, he was only able to investigate a fraction of the site. It is likely that a larger number of burials still remains undiscovered around the immense ramparts."

Journal Reference: https://doi.org/10.1111/ojoa.12324 [open access]


Original Submission

posted by janrinok on Sunday May 25 2025, @11:43AM

https://archive.is/lhQuY

In November of 2021, Vladimir Dinets was driving his daughter to school when he first noticed a hawk using a pedestrian crosswalk.

The bird—a young Cooper's hawk, to be exact—wasn't using the crosswalk, in the sense of treading on the painted white stripes to reach the other side of the road in West Orange, New Jersey. But it was using the crosswalk—more specifically, the pedestrian-crossing signal that people activate to keep traffic out of said crosswalk—to ambush prey.

The crossing signal—a loud, rhythmic click audible from at least half a block away—was more of a pre-attack cue, or so the hawk had realized, Dinets, a zoologist now at the University of Tennessee at Knoxville, told me. On weekday mornings, when pedestrians would activate the signal during rush hour, roughly 10 cars would usually be backed up down a side street. This jam turned out to be the perfect cover for a stealth attack: Once the cars had assembled, the bird would swoop down from its perch in a nearby tree, fly low to the ground along the line of vehicles, then veer abruptly into a residential yard, where a small flock of sparrows, doves, and starlings would often gather to eat crumbs—blissfully unaware of their impending doom.

The hawk had masterminded a strategy, Dinets told me: To pull off the attacks, the bird had to create a mental map of the neighborhood—and, maybe even more important, understand that the rhythmic ticktock of the crossing signal would prompt a pileup of cars long enough to facilitate its assaults. The hawk, in other words, appears to have learned to interpret a traffic signal and take advantage of it, in its quest to hunt. Which is, with all due respect, more impressive than how most humans use a pedestrian crosswalk.

Cooper's hawks are known for their speedy sneak attacks in the wild, Janet Ng, a senior wildlife biologist with Environment and Climate Change Canada, told me. Zipping alongside bushes and branches for cover, they'll conceal themselves from prey until the very last moment of a planned ambush. "They're really fantastic hunters that way," Ng said. Those skills apparently translate fairly easily into urban environments, where Cooper's hawks flit amid trees and concrete landscapes, stalking city pigeons and doves.

[...] But maybe the most endearing part of this hawk's tale is the idea that it took advantage of a crosswalk signal at all—an environmental cue that, under most circumstances, is totally useless to birds and perhaps a nuisance. To see any animal blur the line between what we consider the human and non-human spheres is eerie, but also humbling: Most other creatures, Plotnik said, are simply more flexible than we'd ever think.


Original Submission

posted by mrpg on Sunday May 25 2025, @06:55AM
from the slimming-down-for-real-this-time dept.

A Caltech press release details research on the evolution of Jupiter.

From the release:

Understanding Jupiter's early evolution helps illuminate the broader story of how our solar system developed its distinct structure. Jupiter's gravity, often called the "architect" of our solar system, played a critical role in shaping the orbital paths of other planets and sculpting the disk of gas and dust from which they formed.

In a new study published in the journal Nature Astronomy, Konstantin Batygin (PhD '12), professor of planetary science at Caltech, and Fred C. Adams, professor of physics and astronomy at the University of Michigan, provide a detailed look into Jupiter's primordial state. Their calculations reveal that roughly 3.8 million years after the solar system's first solids formed—a key moment when the disk of material around the Sun, known as the protoplanetary nebula, was dissipating—Jupiter was significantly larger and had an even more powerful magnetic field.

"Our ultimate goal is to understand where we come from, and pinning down the early phases of planet formation is essential to solving the puzzle," Batygin says. "This brings us closer to understanding how not only Jupiter but the entire solar system took shape."

Batygin and Adams approached this question by studying Jupiter's tiny moons Amalthea and Thebe, which orbit even closer to Jupiter than Io, the smallest and nearest of the planet's four large Galilean moons. Because Amalthea and Thebe have slightly tilted orbits, Batygin and Adams analyzed these small orbital discrepancies to calculate Jupiter's original size: approximately twice its current radius, with a predicted volume that is the equivalent of over 2,000 Earths. The researchers also determined that Jupiter's magnetic field at that time was approximately 50 times stronger than it is today.
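Since volume scales with the cube of radius, the two figures in that paragraph can be cross-checked. The current-Jupiter volume used below is an assumed reference value, not a number from the release:

```python
# Cross-check: "approximately twice its current radius" vs "over 2,000 Earths".
# Assumption (not from the release): Jupiter currently holds ~1,321 Earth volumes.
CURRENT_VOLUME_EARTHS = 1321
RADIUS_FACTOR = 2.0

volume_factor = RADIUS_FACTOR ** 3            # volume scales as radius cubed
early_volume_earths = CURRENT_VOLUME_EARTHS * volume_factor
print(f"Early Jupiter: ~{early_volume_earths:.0f} Earth volumes")
```

Doubling the radius implies roughly eight times the volume, on the order of 10,000 Earth volumes, which is comfortably consistent with "over 2,000 Earths."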

[...] Importantly, these insights were achieved through independent constraints that bypass traditional uncertainties in planetary formation models—which often rely on assumptions about gas opacity, accretion rate, or the mass of the heavy element core. Instead, the team focused on the orbital dynamics of Jupiter's moons and the conservation of the planet's angular momentum—quantities that are directly measurable. Their analysis establishes a clear snapshot of Jupiter at the moment the surrounding solar nebula evaporated, a pivotal transition point when the building materials for planet formation disappeared and the primordial architecture of the solar system was locked in.

Cool research with a novel methodology.

Referenced paper (Abstract)
DOI: https://doi.org/10.1038/s41550-025-02512-y


Original Submission

posted by hubie on Sunday May 25 2025, @02:09AM

Arthur T Knackerbracket has processed the following story:

In an email from "The VPN Secure Team" to lifetime subscription holders, posted on Reddit, it's explained that VPNSecure was acquired in 2023. The deal included the technology, domain, and customer database, but not the liabilities.

"Unfortunately, the previous owner did not disclose that thousands of Lifetime Deals (LTDs) had been sold through platforms like StackSocial," reads the mail.

"We discovered this only months later – when a large portion of our resources were strained by these LTD accounts and high support volume from users who, though part of the database, provided no sustaining income to help us improve and maintain the service."

As a result, the new owners began deactivating lifetime accounts that had been dormant for six months. While it's claimed that this was "technically fair" – for some reason – the new owners seem shocked that it led to a wave of negative reviews.

[...] Ars Technica reports that a follow-up email from VPNSecure shed more light on the situation. It states that InfiniteQuant Ltd, which is a different company than InfiniteQuant Capital Ltd, acquired VPN Secure in an "asset only deal."

It goes on to say that while the buyers received the tech, brand, and infrastructure, they received none of the company, contracts, payments, or obligations from the previous owners.

It's also claimed the Lifetime Deals sold by the old team between 2015 and 2017 were not disclosed to InfiniteQuant Ltd, but it kept the accounts running for two extra years despite never receiving a "single cent from those subscriptions." So stop being ungrateful, basically.

The final part of the message claims that anyone who didn't see the original message explaining all this must have it in their spam folder or simply missed it completely.

The new owners said they didn't sue the seller over withholding the information on lifetime subs because "a corporate lawsuit would've cost more than the entire purchase of the business." The email also states that the buyers could have simply shut down VPNSecure but instead "chose the hard path."

While it's claimed the lifetime subscriptions were sold between 2015 and 2017, typing "VPNSecure lifetime subscriptions" into Google Search shows a 2021 ad on ZDNet for this $40 plan. An ad for a $28 lifetime subscription also ran on the site in 2022.

Lifetime subscriptions are rarely actual lifetimes. VPNSecure's plans lasted up to 20 years, according to online comments. There's always the chance new owners of companies won't honor the contracts either. Whether InfiniteQuant Ltd really didn't know about the subscriptions can't be confirmed, but it's led to a Trustpilot score of 1.2 for the VPN and pages of angry comments.


Original Submission
