Stories
Slash Boxes
Comments

SoylentNews is people

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)

[ Results | Polls ]
Comments:165 | Votes:267

posted by jelizondo on Wednesday October 01, @11:47PM   Printer-friendly

How the von Neumann bottleneck is impeding AI computing:

Most computers are based on the von Neumann architecture, which separates compute and memory. This arrangement has been perfect for conventional computing, but it creates a data traffic jam in AI computing.

AI computing has a reputation for consuming epic quantities of energy. This is partly because of the sheer volume of data being handled. Training often requires billions or trillions of pieces of information to create a model with billions of parameters. But that's not the whole reason — it also comes down to how most computer chips are built.

Modern computer processors are quite efficient at performing the discrete computations they're usually tasked with. Though their efficiency nosedives when they must wait for data to move back and forth between memory and compute, they're designed to quickly switch over to work on some unrelated task. But for AI computing, almost all the tasks are interrelated, so there often isn't much other work that can be done when the processor gets stuck waiting, said IBM Research scientist Geoffrey Burr.

In that scenario, processors hit what is called the von Neumann bottleneck, the lag that happens when data moves slower than computation. It's the result of von Neumann architecture, found in almost every processor over the last six decades, wherein a processor's memory and computing units are separate, connected by a bus. This setup has advantages, including flexibility, adaptability to varying workloads, and the ability to easily scale systems and upgrade components. That makes this architecture great for conventional computing, and it won't be going away any time soon.

But for AI computing, whose operations are simple, numerous, and highly predictable, a conventional processor ends up working below its full capacity while it waits for model weights to be shuttled back and forth from memory. Scientists and engineers at IBM Research are working on new processors, like the AIU family, which use various strategies to break down the von Neumann bottleneck and supercharge AI computing.

The von Neumann bottleneck is named for mathematician and physicist John von Neumann, who first circulated a draft of his idea for a stored-program computer in 1945. In that paper, he described a computer with a processing unit, a control unit, memory that stored data and instructions, external storage, and input/output mechanisms. His description didn't name any specific hardware — likely to avoid security clearance issues with the US Army, for whom he was consulting. Almost no scientific discovery is made by one individual, though, and von Neumann architecture is no exception. Von Neumann's work was based on the work of J. Presper Eckert and John Mauchly, who invented the Electronic Numerical Integrator and Computer (ENIAC), the world's first digital computer. In the time since that paper was written, von Neumann architecture has become the norm.

"The von Neumann architecture is quite flexible, that's the main benefit," said IBM Research scientist Manuel Le Gallo-Bourdeau. "That's why it was first adopted, and that's why it's still the prominent architecture today."

[...] For AI computing, the von Neumann bottleneck creates a twofold efficiency problem: the number of model parameters (or weights) to move, and how far they need to move. More model weights mean larger storage, which usually means more distant storage, said IBM Research scientist Hsinyu (Sidney) Tsai. "Because the quantity of model weights is very large, you can't afford to hold them for very long, so you need to keep discarding and reloading," she said.

The main energy expenditure during AI runtime is spent on data transfers — bringing model weights back and forth from memory to compute. By comparison, the energy spent doing computations is low. In deep learning models, for example, the operations are almost all relatively simple matrix vector multiplication problems. Compute energy is still around 10% of modern AI workloads, so it isn't negligible, said Tsai. "It is just found to be no longer dominating energy consumption and latency, unlike in conventional workloads," she added.

About a decade ago, the von Neumann bottleneck wasn't a significant issue because processors and memory weren't so efficient, at least compared to the energy that was spent to transfer data, said Le Gallo-Bourdeau. But data transfer efficiency hasn't improved as much as processing and memory have over the years, so now processors can complete their computations much more quickly, leaving them sitting idle while data moves across the von Neumann bottleneck.

[...] Aside from eliminating the von Neumann bottleneck, one solution includes closing that distance. "The entire industry is working to try to improve data localization," Tsai said. IBM Research scientists recently announced such an approach: a polymer optical waveguide for co-packaged optics. This module brings the speed and bandwidth density of fiber optics to the edge of chips, supercharging their connectivity and hugely reducing model training time and energy costs.

With currently available hardware, though, the result of all these data transfers is that training an LLM can easily take months, consuming more energy than a typical US home does in that time. And AI doesn't stop needing energy after model training. Inferencing has similar computational requirements, meaning that the von Neumann bottleneck slows it down in a similar fashion.

[...] While von Neumann architecture creates a bottleneck for AI computing, for other applications, it's perfectly suited. Sure, it causes issues in model training and inference, but von Neumann architecture is perfect for processing computer graphics or other compute-heavy processes. And when 32- or 64-bit floating point precision is called for, the low precision of in-memory computing isn't up to the task.

"For general purpose computing, there's really nothing more powerful than the von Neumann architecture," said Burr. Under these circumstances, bytes are either operations or operands that are moving on a bus from a memory to a processor. "Just like an all-purpose deli where somebody might order some salami or pepperoni or this or that, but you're able to switch between them because you have the right ingredients on hand, and you can easily make six sandwiches in a row." Special-purpose computing, on the other hand, may involve 5,000 tuna sandwiches for one order — like AI computing as it shuttles static model weights.


Original Submission

posted by jelizondo on Wednesday October 01, @07:02PM   Printer-friendly

This black hole flipped its magnetic field:

The magnetic field swirling around an enormous black hole, located about 55 million light-years from Earth, has unexpectedly switched directions. This dramatic reversal challenges theories of black hole physics and provides scientists with new clues about the dynamic nature of these shadowy giants.

The supermassive black hole, nestled in the heart of the M87 galaxy, was first imaged in 2017. Those images revealed, for the first time, a glowing ring of plasma­ — an accretion disk — encircling the black hole, dubbed M87*. At the time, the disk's properties, including those of the magnetic field embedded in the plasma, matched theoretical predictions.

But observations of the accretion disk in the years that followed show that its magnetic field is not as stable as it first seemed, researchers report in a paper to appear in Astronomy & Astrophysics. In 2018, the magnetic field shifted and nearly disappeared. By 2021, the field had completely flipped direction.

"No theoretical models we have today can explain this switch," says study coauthor Chi-kwan Chan, an astronomer at Steward Observatory in Tucson. The magnetic field configuration, he says, was expected to be stable due to the black hole's large mass — roughly 6 billion times as massive as the sun, making it over a thousand times as hefty as the supermassive black hole at the center of the Milky Way.

In the new study, astronomers analyzed images of the accretion disk around M87* compiled by the Event Horizon Telescope, a global network of radio telescopes. The scientists focused on a specific component that's sensitive to magnetic field orientation called polarized light, which consists of light waves all oscillating in a particular direction.

By comparing the polarization patterns over the years, the astronomers saw that the magnetic field reversed direction. Magnetic fields around black holes are thought to funnel in material from their surrounding disks. With the new findings, astronomers will have to rethink their understanding of this process.

While researchers don't yet know what caused the flip in this disk's magnetic field, they think it could have been a combination of dynamics within the black hole and external influences.

"I was very surprised to see evidence for such a significant change in M87's magnetic field over a few years," says astrophysicist Jess McIver of the University of British Columbia in Vancouver, who was not involved with the research. "This changes my thinking about the stability of supermassive black holes and their environments."


Original Submission

posted by jelizondo on Wednesday October 01, @02:15PM   Printer-friendly

Expert calls security advice "unfairly outsourcing the problem to Anthropic's users"

On Tuesday [September 9, 2025], Anthropic launched a new file creation feature for its Claude AI assistant that enables users to generate Excel spreadsheets, PowerPoint presentations, and other documents directly within conversations on the web interface and in the Claude desktop app. While the feature may be handy for Claude users, the company's support documentation also warns that it "may put your data at risk" and details how the AI assistant can be manipulated to transmit user data to external servers.

The feature, awkwardly named "Upgraded file creation and analysis," is basically Anthropic's version of ChatGPT's Code Interpreter and an upgraded version of Anthropic's "analysis" tool. It's currently available as a preview for Max, Team, and Enterprise plan users, with Pro users scheduled to receive access "in the coming weeks," according to the announcement.

The security issue comes from the fact that the new feature gives Claude access to a sandbox computing environment, which enables it to download packages and run code to create files. "This feature gives Claude Internet access to create and analyze files, which may put your data at risk," Anthropic writes in its blog announcement. "Monitor chats closely when using this feature."

According to Anthropic's documentation, "a bad actor" manipulating this feature could potentially "inconspicuously add instructions via external files or websites" that manipulate Claude into "reading sensitive data from a claude.ai connected knowledge source" and "using the sandbox environment to make an external network request to leak the data."

This describes a prompt injection attack, where hidden instructions embedded in seemingly innocent content can manipulate the AI model's behavior—a vulnerability that security researchers first documented in 2022. These attacks represent a pernicious, unsolved security flaw of AI language models, since both data and instructions in how to process it are fed through as part of the "context window" to the model in the same format, making it difficult for the AI to distinguish between legitimate instructions and malicious commands hidden in user-provided content.

[...] Anthropic is not completely ignoring the problem, however. The company has implemented several security measures for the file creation feature. For Pro and Max users, Anthropic disabled public sharing of conversations that use the file creation feature. For Enterprise users, the company implemented sandbox isolation so that environments are never shared between users. The company also limited task duration and container runtime "to avoid loops of malicious activity."

[...] Anthropic's documentation states the company has "a continuous process for ongoing security testing and red-teaming of this feature." The company encourages organizations to "evaluate these protections against their specific security requirements when deciding whether to enable this feature."

[...] That kind of "ship first, secure it later" philosophy has caused frustrations among some AI experts like Willison, who has extensively documented prompt injection vulnerabilities (and coined the term). He recently described the current state of AI security as "horrifying" on his blog, noting that these prompt injection vulnerabilities remain widespread "almost three years after we first started talking about them."

In a prescient warning from September 2022, Willison wrote that "there may be systems that should not be built at all until we have a robust solution." His recent assessment in the present? "It looks like we built them anyway!"


Original Submission

posted by hubie on Wednesday October 01, @09:32AM   Printer-friendly

https://joel.drapper.me/p/rubygems-takeover/

Ruby Central recently took over a collection of open source projects from their maintainers without their consent. News of the takeover was first broken by Ellen on 19 September.

I have spoken to about a dozen people directly involved in the events, and seen a recording of a key meeting between Ruby Gems maintainers and Ruby Central, to uncover what went on.

https://narrativ.es/@janl/115258495596221725

Okay so this was a hostile takeover. The Ruby community needs to get their house in order.

And one more note, I assume it is implied in the write up, but you might now know: DHH is on the board of directors of Shopify. He exerts tremendous organisational and financial power.

It's hilarious he's threatened by three devs with a hobby project and is willing to burn his community's reputation over it.


Original Submission

posted by hubie on Wednesday October 01, @04:49AM   Printer-friendly
from the slop-for-you-slop-for-me-slop-for-everyone dept.

They finally came up with a word for it. "Workslop". To much AI usage among (co-)workers is leading to "workslop". Where there is to much AI production that doesn't turn out to be very valuable or productive. It looks fine at a first glance but has produced nothing of value or solved any problems. It just looks fine. All shiny surface, but nothing actual.

workslop is "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task."

AI promised to revolutionize productivity. Instead, 'workslop' is a giant time suck and the scourge of the 21st century office, Stanford warns

A benefits manager said of one AI-sourced document a colleague sent her, "It was annoying and frustrating to waste time trying to sort out something that should have been very straightforward."

So while companies may be spending hundreds of millions on AI software to create efficiencies and boost productivity, and encouraging employees to use it liberally, they may also be injecting friction into their operations.

The researchers say that "lazy" AI-generated work is not only slowing people down, it's also leading to employees losing respect for each other. After receiving workslop, staffers said they saw the peers behind it as less creative and less trustworthy.

"The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work," they write.

So shit literally flows downwards then?

https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://techcrunch.com/2025/09/27/beware-coworkers-who-produce-ai-generated-workslop/
https://fortune.com/2025/09/23/ai-workslop-workshop-workplace-communication/
https://edition.cnn.com/2025/09/26/business/ai-workslop-nightcap


Original Submission

posted by hubie on Wednesday October 01, @12:05AM   Printer-friendly

https://phys.org/news/2025-09-inequality-agri-food-chains-global.html

In the global agri-food system, most agricultural goods are produced in the Global South but value is captured by countries of the Global North through growth and control of the post-farmgate sectors. This is shown by a study from the Institute of Environmental Science and Technology at the Universitat Autònoma de Barcelona (ICTA-UAB), which reveals that between 1995 and 2020, non-agricultural sectors absorbed much of the value added in global agri-food systems. These sectors are disproportionately dominated by countries of the Global North.

The research, published in the journal Global Food Security and led by ICTA-UAB researcher Meghna Goyal together with Jason Hickel, also from ICTA-UAB, and Praveen Jha from Jawaharlal Nehru University, India, analyzes for the first time on a global scale the distribution of economic value in agri-food chains over a 25-year period.

The results show that, although the Global South has increased its share of agricultural production, countries of the North continue to capture a disproportionate share of income from higher-value sectors such as processing, logistics, finance, and services.

The study also notes that a substantial portion of revenue is recorded in low-tax jurisdictions with little agricultural production, suggesting that value-addition is recorded according to profit-maximizing strategies, rather than according to actual production or employment.

This demonstrates that value chains in agri-food systems reinforce structural inequalities through the international division of labor. Countries such as Singapore and Hong Kong capture up to 60 and 27 times more from the global agri-food system than the value of their agricultural production.

Researchers warn of the urgent need for economic sovereignty for the Global South to address structural unequal exchange in the global agri-food system.

"Value capture strategies reshape supply chains. Our findings alert us to its potentially negative consequences for development and equity for farming and the Global South economies," says Meghna Goyal, main author of the study.

ICTA-UAB researcher and co-author Jason Hickel states that "this is the first study to measure the global distribution of value in the agri-food system, and the results are damning. The people who do most of the agricultural production which sustains global civilization do not get a fair share of food-system incomes."

More information: Meghna Goyal et al, Increasing inequality in agri-food value chains: global trends from 1995-2020, Global Food Security (2025). DOI: 10.1016/j.gfs.2025.100883


Original Submission

posted by hubie on Tuesday September 30, @07:24PM   Printer-friendly
from the i-can-has-github? dept.

Australia to require age verification using Google or Microsoft to access adult material

Get your VPNs ready! Australia, having already nominated an age limit for social media coming this December (after they work out how it will be implemented), will progress to requiring Australians to verify their identity by logging in to a Microsoft or Google account to access adult material starting with search engines. Stop laughing. No, really. They will. Soon. Ok, two minute laugh session. Moving on. While this change in law is for 'good intentions' and Australian politicians high five themselves for 'protecting children', Professor Lisa Given of the RMIS Information Sciences department was quoted as saying that the changes "will definitely create more headaches for the everyday consumer and how they log in and use search services." Meanwhile, in England where similar laws have been enacted, VPN use has skyrocketed.

As stated in the law passed late last year, platforms also cannot rely solely on using government-issued ID for age verification, even though the government-backed technology study found this to be the most effective screening method.

Instead, the guidelines will direct platforms to take a "layered" approach to assessing age with multiple methods and to "minimise friction" for their users — such as by using AI-driven models that assess age with facial scans or by tracking user behaviour.

Ms Wells has previously highlighted those models as examples of cutting-edge technology, although the experts have raised questions about their effectiveness.

Australia's Under 16s Social Media Ban Could Extend to Reddit, Twitch, Roblox and Even Dating Apps

Lego Play and Steam among the unexpected additions to the list that includes Facebook, Instagram, TikTok, YouTube and X:

Reddit and X are among the companies approached by the eSafety commissioner, Julie Inman Grant, right, about the requirement to prevent under 16s from holding social media accounts. Composite: Guardian AustraliaView image in fullscreenReddit and X are among the companies approached by the eSafety commissioner, Julie Inman Grant, right, about the requirement to prevent under 16s from holding social media accounts. Composite: Guardian AustraliaAustralia's under 16s social media ban could extend to Reddit, Twitch, Roblox and even dating apps

Lego Play and Steam among the unexpected additions to the list that includes Facebook, Instagram, TikTok, YouTube and X

Twitch, Roblox, Steam, Lego Play, X and Reddit are among the companies eSafety has approached about whether the under 16s social media ban applies to them from December.

Companies approached by the eSafety commissioner this month about the requirement to prevent under 16s from holding social media accounts from 10 December have conducted a self-assessment that the commissioner will use to decide if they need to comply with the ban.

eSafety will not be formally declaring which service meets the criteria but companies that eSafety believes meet the criteria will be expected to comply.

The eSafety commissioner's office initially declined to release the list of companies contacted earlier this month but on Wednesday named the companies.

The full list of companies initially approached by eSafety to ask to assess if they need to comply with the ban included:

  • Meta – Facebook, Instagram , WhatsApp

  • Snap

  • Tiktok

  • YouTube

  • X

  • Roblox

  • Pinterest

  • Discord

  • Lego Play

  • Reddit

  • Kick

  • GitHub

  • HubApp

  • Match

  • Steam

  • Twitch

Gaming platforms such as Roblox, Lego Play and Steam were unexpected additions to the list that was widely anticipated to include Facebook, Instagram, TikTok, YouTube and X. Platforms that have the sole or primary purpose of enabling users to play online games with other users are exempt from the ban.

"Any platform eSafety believes to be age-restricted will be expected to comply and eSafety will make this clear to the relevant platforms in due course," a spokesperson for the eSafety commissioner said.

[...] The eSafety commissioner, Julie Inman Grant, has previously expressed concerns about Roblox's communications features being used to groom children.

"We know that when it comes to platforms that are popular with children, they also become popular with adult predators seeking to prey on them," Inman Grant said earlier this month. "Roblox is no exception and has become a popular target for paedophiles seeking to groom children."

Earlier this month, Roblox committed to implementing age assurance by the end of this year, making accounts for users under 16 private by default and introducing tools to prevent adult users contacting under 16s without parental consent.

Direct chat will also be switched off by default until a user has gone through age estimation.


Original Submission #1Original Submission #2

posted by hubie on Tuesday September 30, @02:39PM   Printer-friendly
from the for-decades-for-freedoms-for-all-users dept.

The Free Software Foundation (FSF) turns forty on October 4, 2025. The Free Software Foundation will have then been defending the rights of all software users for the past 40 years. The long term goal is for all users have the freedom to run, edit, contribute to, and share software.

There will be an online event, with an in-person option for those that can get to Boston. In November there will also be a hackathon.


Original Submission

posted by hubie on Tuesday September 30, @09:57AM   Printer-friendly
from the OpenAI->$100B->Oracle->$100B->Nvidia->$100B->OpenAI dept.

"This is a giant project," Nvidia CEO said of new 10-gigawatt AI infrastructure deal:

On Monday, OpenAI and Nvidia jointly announced a letter of intent for a strategic partnership to deploy at least 10 gigawatts of Nvidia systems for OpenAI's AI infrastructure, with Nvidia planning to invest up to $100 billion as the systems roll out. The companies said the first gigawatt of Nvidia systems will come online in the second half of 2026 using Nvidia's Vera Rubin platform.

"Everything starts with compute," said Sam Altman, CEO of OpenAI, in the announcement. "Compute infrastructure will be the basis for the economy of the future, and we will utilize what we're building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale."

The 10-gigawatt project represents an astoundingly ambitious and as-yet-unproven scale for AI infrastructure. Nvidia CEO Jensen Huang told CNBC that the planned 10 gigawatts equals the power consumption of between 4 million and 5 million graphics processing units, which matches the company's total GPU shipments for this year and doubles last year's volume. "This is a giant project," Huang said in an interview alongside Altman and OpenAI President Greg Brockman.

To put that power demand in perspective, 10 gigawatts equals the output of roughly 10 nuclear reactors, which typically output about 1 gigawatt per facility. Current data center energy consumption ranges from 10 megawatts to 1 gigawatt, with most large facilities consuming between 50 and 100 megawatts. OpenAI's planned infrastructure would dwarf existing installations, requiring as much electricity as multiple major cities.

[...] Bryn Talkington, managing partner at Requisite Capital Management, noted the circular nature of the investment structure to CNBC. "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia," Talkington told CNBC. "I feel like this is going to be very virtuous for Jensen."

[...] Other massive AI infrastructure projects are emerging across the US. In July, officials in Cheyenne, Wyoming, announced plans for an AI data center that would eventually scale to 10 gigawatts—consuming more electricity than all homes in the state combined, even in its earliest 1.8 gigawatt phase. Whether it's connected to OpenAI's plans remains unclear.

[...] The planned infrastructure buildout would significantly increase global energy consumption, which also raises environmental concerns. The International Energy Agency estimates that global data centers already consumed roughly 1.5 percent of global electricity in 2024. OpenAI's project also faces practical constraints. Existing power grid connections represent bottlenecks in power-constrained markets, with utilities struggling to keep pace with rapid AI expansion that could push global data center electricity demand to 945 terawatt hours by 2030, according to the International Energy Agency.

The companies said they expect to finalize details in the coming weeks. Huang told CNBC the $100 billion investment comes on top of all Nvidia's existing commitments and was not included in the company's recent financial forecasts to investors.


Original Submission

posted by hubie on Tuesday September 30, @05:15AM   Printer-friendly

https://gist.github.com/probonopd/9feb7c20257af5dd915e3a9f2d1f2277

Wayland breaks everything! It is binary incompatible, provides no clear transition path with 1:1 replacements for everything in X11, and is even philosophically incompatible with X11. Hence, if you are interested in existing applications to "just work" without the need for adjustments, then you may be better off avoiding Wayland.

Wayland solves no issues I have but breaks almost everything I need. Even the most basic, most simple things (like xkill) - in this case with no obvious replacement. And usually it stays broken, because the Wayland folks mostly seem to care about Automotive, Gnome, maybe KDE - and alienating everyone else (e.g., people using just an X11 window manager or something like GNUstep) in the process.

What follows is a very well written "Feature comparison" between Xorg and Wayland.


Original Submission

posted by hubie on Tuesday September 30, @12:32AM   Printer-friendly

The deal could go through as early as next week:

As reported by the The Wall Street Journal, gaming giant EA is set to go private⁠—that is, no longer be traded on the stock market⁠—in a $50 billion deal with an investor group. This would be the largest such leveraged buyout ever recorded.

According to the WSJ's anonymous sources, EA could be sold for as much as $50 billion, though the final price has not yet been agreed on, and EA has an estimated market value of $43 billion. The group of investors reportedly includes the private equity firm Silver Lake and the government of Saudi Arabia's Public Investment Fund.

The deal could be announced as early as next week, and would be the largest leveraged buyout ever recorded. A leveraged buyout is when a private equity firm uses a significant amount of borrowed money to seal the deal, with the asset set to be acquired used as collateral in the debt.

This effectively leaves the acquired company liable for the debt⁠—if its income can't adequately service the debt, it will bear the consequences of a default, not the investors who made the purchase, and that usually means closures and layoffs. As reported by the Los Angeles Times, one such leveraged buyout eventually resulted in bankruptcy and closure for the once-ubiquitous toy retailer, Toys R Us.

The fact that the reported cost of the deal—up to $50 billion—is close to EA's estimated value (what's $7 billion between friends?) could give reason for optimism that EA's debt burden would be proportional to its means. Even aside from eventual bankruptcy, though, there's precedent for acquisitions like this causing massive disruptions to the company: Microsoft cut 1,900 jobs at Xbox in January 2024 shortly after its acquisition of Activision-Blizzard, and Blizzard Entertainment was heavily affected in particular.

[...] EA, much like its competitor Ubisoft, has struggled in recent years. Once formidable titans, both have been left behind as consolidation efforts have turned Microsoft and Sony into unassailable super heavyweights. At the same time, smaller publishers like DreadXP, Devolver, and Playstack have become ubiquitous at the other end of the budget spectrum.

EA lost the lucrative FIFA license, leading to its new, genericized EA FC series. Beloved RPG developer BioWare was sharply downsized after Dragon Age: The Veilguard proved a relative sales failure. The impending release of Battlefield 6, which has seen massive beta numbers and a positive critical reception, is looking like a much-needed win for the company.


Original Submission

posted by jelizondo on Monday September 29, @07:46PM   Printer-friendly

https://phys.org/news/2025-09-world-screwworm-parasite-northern-mexico.html

A dangerous parasite once eliminated in the United States has been detected in northern Mexico, close to the U.S. border.

Mexico's agriculture ministry confirmed Sunday that an 8-month-old cow in Nuevo León tested positive for New World screwworm. The animal was part of a shipment of 100 cattle from Veracruz, but only one showed signs of infestation.

The cow was treated, and all others received ivermectin, an antiparasitic medication, officials said.

The case was found in Sabinas Hidalgo, a small city less than 70 miles from Texas. It is the northernmost detection so far, moving much closer to the U.S. border than earlier outbreaks in other parts of Mexico.

Screwworm flies lay eggs in wounds and their larvae feed on living tissue, causing serious injury in livestock. The parasite was eradicated from the U.S. in the 1960s by mass-producing and releasing sterile flies to contain the flies' range, but recent outbreaks in Central America and Mexico have caused concerns again.

It is a "national security priority" U.S. Agriculture Secretary Brooke Rollins said in a statement.

The U.S. Department of Agriculture (USDA) and multiple other agencies are "executing a phased response strategy that includes early detection, rapid containment and long-term eradication efforts," the statement said.

Further, the USDA has invested nearly $30 million this year to expand sterile fly production in Panama and build a new facility in Texas, The New York Times reported.

Thousands of fly traps have also been placed along the border, with no infected flies detected so far.

Mexican President Claudia Sheinbaum said U.S. officials recently inspected local control measures and will issue a report soon. U.S. ports remain closed to livestock, bison and horse imports from Mexico until further notice, The Times said.


Original Submission

posted by jelizondo on Monday September 29, @03:02PM   Printer-friendly

8,000 years of human activities have caused wild animals to shrink and domestic animals to grow:

Humans have caused wild animals to shrink and domestic animals to grow, according to a new study out of the University of Montpellier in southern France. Researchers studied tens of thousands of animal bones from Mediterranean France covering the last 8,000 years to see how the size of both types of animals has changed over time.

Scientists already know that human choices, such as selective breeding, influence the size of domestic animals, and that environmental factors also impact the size of both. However, little is known about how these two forces have influenced the size of wild and domestic animals over such a prolonged period. This latest research, published in the Proceedings of the National Academy of Sciences , fills a major gap in our knowledge.

The scientists analyzed more than 225,000 bones from 311 archaeological sites in Mediterranean France. They took thousands of measurements of things like the length, width, and depth of bones and teeth from wild animals, such as foxes, rabbits and deer, as well as domestic ones, including goats, cattle, pigs, sheep and chickens.

But the researchers didn't just focus on the bones. They also collected data on the climate, the types of plants growing in the area, the number of people living there and what they used the land for. And then, with some sophisticated statistical modeling, they were able to track key trends and drivers behind the change in animal size.

The research team's findings reveal that for around 7,000 years, wild and domestic animals evolved along similar paths, growing and shrinking together in sync with their shared environment and human activity. However, all that changed around 1,000 years ago. Their body sizes began to diverge dramatically, especially during the Middle Ages.

Domestic animals started to get much bigger as they were being actively bred for more meat and milk. At the same time, wild animals began to shrink in size as a direct result of human pressures, such as hunting and habitat loss. In other words, human activities replaced environmental factors as the main force shaping animal evolution.

"Our results demonstrate that natural selection prevailed as an evolutionary force on domestic animal morphology until the last millennium," commented the researchers in their paper. "Body size is a sensitive indicator of systemic change, revealing both resilience and vulnerability within evolving human–animal–environment relationships."

This study is more than a look at ancient bones. By providing a long-term historical record of how our actions have affected the animal kingdom, the findings can also help with modern-day conservation efforts.


Original Submission

posted by jelizondo on Monday September 29, @10:17AM   Printer-friendly

Physicists nearly double speed of superconducting qubit readout in quantum computers

RIKEN physicists have found a way to speed up the readout of qubits in superconducting quantum computers, which should help to make them faster and more reliable.

After decades of being theoretical propositions, working quantum computers are just starting to emerge. For experimentalists such as Peter Spring of the RIKEN Center for Quantum Computing (RQC), it's an auspicious time to be working in the field.

"It's very exciting. It feels like this is a very fast-moving field that has a lot of momentum," says Spring. "And it really feels like experiments are catching up with theory."

When they come online, mature quantum computers promise to revolutionize computing, being able to perform calculations that are well beyond the capabilities of today's supercomputers. And it feels like that prospect is not so far off.

Currently, half a dozen technologies are jockeying to become the preferred platform for tomorrow's quantum computers. A leading contender is a technology based on superconducting electrical circuits. One of its advantages is the ability to perform calculations faster than other technologies.

Because of the very sensitive nature of quantum states, it is vital to regularly correct any errors that may have crept in. This necessitates repeatedly measuring a selection of qubits, the building blocks of quantum computers. But this operation is slower than quantum gate operations, making it a bit of a bottleneck.

"If qubit measurement is much slower than the other things you're doing, then basically it becomes a bottleneck on the clock speed," explains Spring. "So we wanted to see how fast we could perform qubit measurements in a superconducting circuit."

Now, Spring, Yasunobu Nakamura, also of RQC, and their co-workers have found a way to simultaneously measure four qubits in superconducting quantum computers in a little over 50 nanoseconds, which is about twice as fast as the previous record. The findings are published in the journal PRX Quantum.

A special filter ensures that the measurement line used to send the measurement signals doesn't interfere with the qubit itself. Spring and colleagues realized the filter by "coupling" a readout resonator with a filter resonator in such a way that energy from the qubits wasn't able to escape through the measurement line.

They were able to measure the qubits at very high accuracies, or "fidelities." "We were surprised at how high fidelity the readout turned out to be," says Spring. "On the best qubit, we achieved a fidelity of more than 99.9%. We hadn't expected that in such a short measurement time."

The team aims to achieve even faster qubit measurements by optimizing the shape of the microwave pulse used for the measurement.

More information: Peter A. Spring et al, Fast Multiplexed Superconducting-Qubit Readout with Intrinsic Purcell Filtering Using a Multiconductor Transmission Line, PRX Quantum (2025). DOI: 10.1103/prxquantum.6.020345
       


Original Submission

posted by jelizondo on Monday September 29, @05:35AM   Printer-friendly
from the data-goldmine dept.

The most alluring aspect of a CRM system, its centralized collection of customer data, is also its Achilles' heel:

Customer relationship management (CRM) systems sit at the heart of modern business. They store personal data, behavioral histories, purchase records, and every digital breadcrumb that shapes customer identity.

Yet while these platforms are marketed as engines of efficiency, they've become prime targets for cybercriminals.

The uncomfortable truth is that CRMs are often riddled with blind spots. Companies invest heavily in deployment, but treat cybersecurity as an afterthought. That oversight has left the door wide open to sophisticated attacks that exploit both technical gaps and human error. Let's take a look at how to fortify your defenses.

[...] More than anything, centralization multiplies risk. A breach doesn't just compromise one isolated dataset; it unlocks a holistic map of customer interactions. Sophisticated actors exploit these unified records to fuel identity theft and targeted phishing campaigns [PDF].

Worse still, because CRMs often integrate with marketing automation, billing, and support systems, a single compromise can cascade through multiple business-critical platforms.

The article goes on to discuss the human element of CRM insecurity, how integration fuels exploitation, and the costs of neglecting CRM security and convenience.


Original Submission