
SoylentNews is people



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period: 2022-07-01 to 2022-12-31 (all amounts are estimated)
Base Goal: $3500.00
Currently: $438.92 (12.5%)
Covers transactions: 2022-07-02 10:17:28 .. 2022-10-05 12:33:58 UTC (SPIDs: [1838..1866])
Last Update: 2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Roughly how much cash is in your pocket/wallet/purse right now?

  • None: why do I need cash anymore, grandpa?
  • Just enough for random small transactions
  • Enough for regular errands (grocery, fuel, etc.)
  • An unreasonably large amount
  • Normally none, but whatever amount my non-app-using acquaintance paid me back for dinner
  • I'm all-in on crypto, you insensitive fiat-currency-loving clod!

[ Results | Polls ]
Comments: 62 | Votes: 252

posted by janrinok on Tuesday May 12, @12:41PM   Printer-friendly
from the would-you-like-to-play-a-game? dept.

The game asks players to find the least worst options for a shipping chokepoint:

It's no fun living through the global energy shock and growing economic crisis that have ensued since the conflict choked off shipping through the Strait of Hormuz. But it can be enlightening to play through the new game Bottleneck, which forces players to choose among the 2,000 ships still stuck in and around the strait—all while actual news reports and real maritime transit data help tell the story of the unfolding events.

The free browser-based game challenges players to act as a fictional maritime coordinator by selecting a handful of ships that get to pass through the strait each day. Most decisions come with serious costs or trade-offs, whether that means paying the toll imposed by the Iranian government, which has claimed authority over the strait, or antagonizing Iran or the United States and pushing either side toward widening the war. Failing to push through enough of certain shipments can spark individual crises over the price of oil, food and water security, and a countdown to famine in many countries.

"The game does not ask whether you are smart enough to solve the crisis," said Jakub Gornicki, the journalist and artist who developed the game, in a post. "It asks what kind of damage you choose when every option has a cost."

Players must also manage relations with factions beyond Tehran and Washington, such as the Gulf States, the United Nations World Food Programme, and the shipping industry. Prioritizing shipments of crude oil and liquefied natural gas may satisfy the US's interest in keeping energy prices in check, but it will erode the trust of the United Nations, which would rather see more ships carrying fertilizer to stave off future famine.

That may sound like a lot to wrap your head around for a game that is playable in 15 to 20 minutes, but it's a surprisingly accessible experience for the most part. The game serves up plenty of explanations and news articles that you can click on to better understand the real-world context and in-game consequences.

However, each ship approved for transit tends to carry a greater cost or trade-off as the game progresses over 10 playable days between March 3 and April 13, 2026. You have the choice of not sending any ships through the strait on any given day, but that can quickly lead to dismal endgame results such as "empty shelves" and "desalination collapse" for Gulf States facing food insecurity and a lack of fresh water from energy-starved desalination plants.
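For a sense of the structure, here is a minimal sketch of the kind of daily pick-a-few-ships loop described above. Every name, cargo effect, and number below is invented for illustration; this is not Bottleneck's actual code or data.

    from dataclasses import dataclass

    # A toy version of the daily decision loop described above. All values are
    # illustrative, not taken from the game.
    @dataclass
    class Ship:
        name: str
        cargo: str           # e.g. "crude", "LNG", "fertilizer", "grain"
        us_trust: int        # change to US trust if this ship transits
        un_trust: int        # change to UN World Food Programme trust
        famine_days: int     # days bought back on the famine countdown

    def play_day(queue, picks_per_day=3):
        """Pick a handful of ships; every choice helps one faction and costs another."""
        chosen = sorted(queue, key=lambda s: -s.famine_days)[:picks_per_day]
        return (chosen,
                sum(s.us_trust for s in chosen),
                sum(s.un_trust for s in chosen),
                sum(s.famine_days for s in chosen))

    queue = [
        Ship("VLCC Aurora", "crude", us_trust=+2, un_trust=-1, famine_days=0),
        Ship("LNG Levant", "LNG", us_trust=+1, un_trust=-1, famine_days=0),
        Ship("Bulker Ceres", "fertilizer", us_trust=-1, un_trust=+2, famine_days=3),
        Ship("Bulker Demeter", "grain", us_trust=0, un_trust=+1, famine_days=2),
    ]
    chosen, us, un, relief = play_day(queue)
    print([s.name for s in chosen], "US trust:", us, "UN trust:", un, "famine relief:", relief)

Even in this toy form, any fixed policy (here, maximizing famine relief) leaves some faction worse off, which is the trade-off the game is built around.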

If you manage to muddle through and keep all the factions from spiraling, the endgame results still provide plenty of charts and numbers to remind you that the real-life Strait of Hormuz crisis is far from over. Even squeezing through several dozen ships over 10 days—the best-case shipping scenario in the game—remains a far cry from the pre-war average of 130 ships passing through the strait each day. The inadequacy of that shipping rate continues to have daily real-world consequences.

Gornicki designed and built the game by himself over 17 days, writing the game's underlying code with the help of an AI coding tool, which he described in a press kit as being "audited and corrected at every step." He also incorporated more than 125 verified and linked news articles, along with shipping data from sources such as Windward Maritime Intelligence and Lloyd's List.

"The chokepoint is not a story you read once and put down—it returns every week, in fuel prices, in fertilizer shortages, in food security in places far from any tanker," Gornicki said. "I wanted to give people a form of this reporting they could not skim past."


Original Submission

posted by janrinok on Tuesday May 12, @07:54AM   Printer-friendly

Link between pollinators and diverse landscapes is a two-way street:

Ecologists have long seen a strong connection between biodiversity and pollinators – the butterflies, birds, bats, bees, and other animals that help the flowers they snack on by transferring pollen from male anthers to female stigmas.

Previous research has shown diverse landscapes draw more pollinators, as a wider variety of pollen and nectar attracts attention from a wider variety of animals – some of which feed only on certain plants. Essentially, pollinators go where the food is, said Brian Wilsey, a professor of ecology, evolution and organismal biology at Iowa State University.

A recent study by Wilsey and doctoral graduate Nathan Soley showed the converse is also true: Pollinators support diversity in plant communities. In an article published this month in Ecology, Wilsey and Soley described a four-year experiment they conducted in plots of restored prairie that examined how plant diversity was affected by purposely protecting wildflowers from pollinators. Among animal-pollinated plants, viable seed production fell by 50% and the diversity of species fell by 27%, they found.

"Our study is the first we are aware of to show that plant biodiversity at the community level can be limited by a lack of pollinators," Wilsey said.

[...] The study's results suggest significant declines in pollinators could cause biodiversity losses that further reduce pollinator populations, causing a self-reinforcing downward trend in both that the researchers call a "plant-pollinator extinction vortex."

"Before this study, I would have never thought that pollinators were this important to maintaining biodiversity. It really opened my eyes," Wilsey said.

Pollinators are essential because of their role in food production. According to the U.S. Department of Agriculture, about 35% of global food crops depend on animal pollination to reproduce, making the seeds and fruits that humans harvest.

In addition to providing critical support for pollinators and other wildlife, diverse landscapes improve water and soil quality. In prairies, which used to cover most of Iowa, a variety of life makes ecosystems more resilient to droughts, floods and invasive species. Beyond pollinators, the known pro-biodiversity factors include low nutrient availability, proximity to other quality habitat and a lack of human degradation, Wilsey said.

One major implication of knowing pollinators help maintain plant biodiversity is the need to consider the presence of pollinator habitat when establishing prairie restoration areas. That's especially true for urban projects, Wilsey said. The human-enhanced pollination plots in the study showed no change in biodiversity when compared to the control plots, an indication that there were sufficient bees and other pollinators in the area. But that's less likely to be the case in more human-impacted environments.

Journal Reference: Nathan M. Soley, Brian J. Wilsey, Pollinators maintain biodiversity in assembling plant communities https://doi.org/10.1002/ecy.70369


Original Submission

posted by hubie on Tuesday May 12, @03:09AM   Printer-friendly

A new study links the universe's expansion to quantum topology, suggesting that hidden mathematical structures may stabilize the cosmological constant in ways previously unrecognized:

The cosmological constant is a term physicists use to describe the energy pushing the universe to expand faster over time. Despite its simple definition, it represents one of the deepest unsolved problems in physics.

Measurements show that this energy exists, but its strength is astonishingly small. That is where the trouble begins. Quantum field theory (QFT), the framework that successfully explains particles and forces, predicts that empty space should contain an enormous amount of energy.

In fact, the theoretical value is so large it would cause the universe to rip itself apart almost instantly. Instead, the real universe expands at a much calmer pace, allowing galaxies, stars, and planets to form. This gap between theory and observation is often described as one of the worst predictions in physics.

Researchers at Brown University have proposed a new explanation for this mismatch.

The team found that the mathematics behind a simple model of quantum gravity closely mirrors the equations used to describe the quantum Hall effect, an unusual state of matter where electrical flow behaves with remarkable precision.

In the quantum Hall effect, electrical conductance remains fixed even when the material contains defects. This stability comes from topology, which refers to the mathematical structure or “shape” of a quantum state. The researchers identified a similar topological feature in the Chern-Simons-Kodama state, a proposed ground state for quantum gravity.

“What we’ve shown is that if space-time has this non-trivial topology, then it resolves one of the deadliest problems of the cosmological constant,” said study co-author Stephon Alexander, a professor of physics at Brown. “All the quantum perturbations that should blow up the value of the cosmological constant are rendered inert by this topology, which keeps the constant’s value stable.”

[...] Alexander has spent years studying Chern-Simons-Kodama (CSK) theory, a proposed state of quantum gravity that grows out of quantum field theory. Scientists have yet to settle on a quantum theory of gravity — a theory that explains how gravity works at the tiniest scales — but the CSK state is one of the more straightforward candidates, according to Alexander.

“It’s a really conservative approach to quantizing gravity,” he said. “This is the approach used by people like Dirac, Schrödinger, and Wheeler. It’s just good, old-fashioned quantization.”

Alexander had been aware of some mathematical similarities between CSK and the math behind the quantum Hall effect, but he wasn’t entirely sure what to make of them. That’s when he turned to Aaron Hui, an assistant professor at Brown who specializes in topological systems like those that emerge in the quantum Hall effect.

[...] Together, the researchers were able to show that the cosmological constant has a similar “topological protection” in the CSK state as electrical conductivity has in the quantum Hall effect. The quantum Hall effect emerges when electricity flows through very thin materials in the presence of a magnetic field. Imagine a flat, two-dimensional piece of metal cut into a rectangular strip with an electric current running longways down the strip. Introducing a magnetic field produces a second voltage that runs perpendicular to the original current. This is known as a Hall voltage (named after Edwin Hall, who discovered it).
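For reference, the two standard textbook relations behind that description are the classical Hall voltage (for a strip of thickness t, carrier density n, current I, and magnetic field B) and the quantized Hall conductance, which topology pins to multiples of e²/h regardless of material defects. These are well-known results, not equations taken from the new paper:

    V_H = \frac{I B}{n e t}

    \sigma_{xy} = \nu \, \frac{e^2}{h}, \qquad \nu = 1, 2, 3, \ldots \quad \text{(integer quantum Hall effect)}

It is this rigid, defect-insensitive quantization of the transverse conductance that the researchers argue has a gravitational analog.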

[...] “What we find is that this quantization of the electrical conductance in quantum Hall has an analog with the cosmological constant,” Hui said. “It also ends up becoming quantized for topological reasons. There turn out to be constraints in the theory that force the cosmological constant to take certain allowed quantized values.”

There’s much more work to be done to fully flesh out a topological solution to the cosmological constant problem, Alexander says. But finding a potential solution to the gravitational aspect of the problem is a crucial start. At the very least, he says, the work bolsters the profile of the CSK state as a candidate for a long-sought theory of quantum gravity.

“We took something old, which is this conservative, canonical approach to quantum gravity, and discovered something new that had been there all along,” Alexander said. “Now we’re working on a bigger picture of how this phenomenon works.”

Reference: “Cosmological Constant from Quantum Gravitational 𝜃 Vacua and the Gravitational Hall Effect” by Stephon Alexander, Heliudson Bernardo and Aaron Hui, 17 April 2026, Physical Review Letters. DOI: https://doi.org/10.1103/rzz5-p4f4


Original Submission

posted by hubie on Monday May 11, @10:21PM   Printer-friendly

NASA is serious about taking more shots on goal, but some of them need to start landing:

NASA's goal of reaching the Moon's surface as many as 21 times over the next two and a half years will require an overhaul of the agency's approach to buying lunar landers and success in rectifying the myriad problems that have, so far, caused three of the last four US landing attempts to falter.

It will also require improved oversight of NASA's industrial base and better management of a supply chain that has often failed to deliver on time.

These landers are separate from NASA's Human Landing System program, which has contracts with SpaceX and Blue Origin to develop and deliver human-rated landers to ferry crews to and from the lunar surface for the agency's Artemis program. Alongside the crew landers, dozens of robotic and cargo landings will deliver payloads to scout for a future Moon base and demonstrate technologies for larger vehicles, mining and resource utilization, and sustained operations during the two-week-long lunar night.

The fundamentals for high-frequency missions to the lunar surface are in place. NASA's Commercial Lunar Payload Services (CLPS) program, announced eight years ago this week, has assembled a roster of commercial providers to design and build robotic Moon landers. Through CLPS, NASA has contracted with US companies for 13 missions since 2019. Four of them have launched, and just one has completed a fully successful landing. Four more commercial landers are under construction now for launches in the second half of this year, but as is common in the space industry, their schedules have a history of delays, and some are likely to move into 2027.

Eight years in, CLPS is still in its "infancy," said Brad Bailey, NASA's assistant deputy associate administrator for exploration, during a recent lunar science workshop. Now, NASA is asking its lander providers, still learning to crawl, to rapidly learn to walk and run over the next two years.

NASA has penciled in nine lunar landings for next year, followed by 10 in 2028. NASA and its commercial partners must pick up the pace to come anywhere close to that. Isaacman acknowledged this in a hearing last week before the Senate Appropriations Committee's Subcommittee on Commerce, Justice, and Science.

"We have to do more than talk," Isaacman said. "For a very long time across all of NASA, we've talked a really good game but then we kind of sit and wait for our vendors and partners to deliver outcomes, and as a result we tend to be late and it tends to cost more, so how do you change that?"

One way, Isaacman said, is for NASA to offer more aid to the companies it is paying to develop Moon landers. "You start to embed subject matter experts across the supply chain to drive outcomes," he said.

[...] Aware of the hazards, NASA leaders at the start of the CLPS program likened the approach to a soccer or hockey team taking "shots on goal." Their thinking was that numerous landing attempts would allow companies to wring out their technology and improve their chances of sticking the next landing. The program's progress has been slow.

Today, NASA is in a race. The agency is charged with landing astronauts on the Moon before China, perhaps as soon as 2028, and following that achievement with the build-out of a permanent base near the south pole. Future CLPS missions will carry more sophisticated payloads, such as expensive rovers, hopping drones, communications relay satellites, and other pioneering tech demos that will underpin the Moon base design. If they are to succeed, NASA and its commercial partners will have to turn the page from taking shots on goal to hitting the net almost every time.

Facing this new urgency, NASA officials are eager to transition from demonstrating reliable lunar landers to delivering tangible infrastructure to the Moon's surface. Today's reality is that none of the lander contractors are there yet. There's still a lot to learn about landing and operating on the Moon.

This means NASA will need to take risks. The agency is still in an "exploratory phase," Merancy continued. "How do we get these systems out there, test them, and learn from them? That means dissimilar systems because I don't know which one's going to work well."

Paradoxically, NASA must take more shots on goal in order to stop taking them. That means buying more CLPS missions—and doing so quickly. An update posted by NASA on a federal government procurement website last week signaled the agency's intent to raise the ceiling of the CLPS contract from $2.6 billion to $4.2 billion. There are 13 companies eligible to compete for CLPS missions, but three—Astrobotic, Firefly Aerospace, and Intuitive Machines—have won the lion's share of CLPS contracts to date.

[...] It's now up to NASA's other CLPS providers to show they can reach the Moon, and all of them—including Firefly—must prove they can do so repeatedly. NASA and its contractors must cut Firefly's four-year lead time in half to ramp up to a monthly cadence in the next two years.

NASA will take a more paternalistic approach with the next round of CLPS orders. "When you are building, we need to hear the things that are slowing you down, and we're going to try to help you with those things," Carlos Garcia-Galan, head of NASA's Moon base program, told representatives of the CLPS companies at last week's LSIC meeting.


Original Submission

posted by hubie on Monday May 11, @05:39PM   Printer-friendly

Good Job Dell and Lenovo! Hope Others Follow You

Only last week, we were talking about how LVFS, the firmware update service for Linux, had turned up the heat on vendors who didn't contribute their fair share.

To tackle that, the project has been going through a phased restrictions rollout that includes things like introducing fair-use download utilization graphs and removing detailed per-firmware analytics.

But that obviously wouldn't solve their lack of funding.

Luckily, two vendors have stepped up. Lenovo and Dell have both signed on as Premier sponsors for LVFS, each putting in $100,000 a year to help fund the project going forward.

They are also the first to reach this tier. Before now, only Framework Computer and the Open Source Firmware Foundation were on as Startup sponsors, contributing $10,000 a year.

Premier is the highest level of financial commitment any vendor can make to the project.

This was announced yesterday, and the LVFS homepage already reflects the change. Between the two of them, that's $200,000 a year going into a project that had been running almost entirely on the goodwill of the Linux Foundation and Red Hat.

[...] The vendors still treating LVFS like a free service they have no obligation to support should probably pay attention to what comes next. API access gets cut for non-Startup vendors in August. Automated upload limits follow in December.


Original Submission

posted by jelizondo on Monday May 11, @12:55PM   Printer-friendly

Survey polled 4,000 U.S. residents in late 2025:

This survey appears to highlight the growing opposition to data center construction in America. Around half of all previously announced data center projects have been delayed or cancelled entirely. Often this was for financial or component supply issues — such as Chinese power transformer shortages — but growing opposition from local lawmakers and communities about their impact on water and air quality, and electricity prices, has also been a factor.

In total, 47% of respondents said they opposed the construction of new AI data centers in their neighborhood, with just 38% saying they supported it. That support was spread differently throughout various age ranges, however.

Of those questioned, 50% of Millennial age respondents said they either somewhat or strongly supported the creation of new AI data centers in their neighborhood or local area. This was closely followed by 48% of Gen Z respondents. There was a large drop off after that, with only 38% of Gen X saying they supported their creation. Baby boomers were the least enthusiastic, with just 22% claiming they felt the same.

In what is perhaps an example of the current U.S. administration’s influence and its entanglement with top tech firms, 49% of surveyed Republicans claimed they would support new data center creation in their local area. This stood in stark contrast to just 36% of Democrats. A causal trend could also be drawn from the fact that Republican voting states and counties tend to be more rural, with less economic activity. Data center projects do require construction, and there is the potential for local job creation.

When it came to homeowners and renters, surprisingly, it was the homeowners who were more likely to support it, with 39% versus 36% of renters claiming they either somewhat or strongly supported new data center development in their neighborhood.

City councils that back data center projects are being voted out, other city councils are putting moratoriums on data center construction, and instances of more extreme violence towards AI companies and their employees are becoming more common.

Although this latest survey does show that there is some support for data center creation, the fact that the opposition is in the majority suggests that any data center projects that haven’t already been delayed or postponed are likely to face increasingly fierce pushback that could derail their eventual development entirely.

Considering this survey was from November 2025, too, there has been further evidence of pushback against hyperscalers in recent months.

That’s a core component of many people’s misgivings with AI in general. Executives aren’t seeing increased returns because of it, and companies are finding it hasn’t boosted productivity much either. It’s also becoming ever more expensive to run. Although there are outliers, the companies appearing to benefit the most from AI are the ones developing it, although they aren’t making anything close to a profit, aside from the chipmaking industry itself.

Even the hyperscalers like Oracle, which have received hundreds of billions of dollars worth of compute orders since the large-scale AI buildout began in 2025, are heavily reliant on AI developers like OpenAI paying their bills. Considering OpenAI specifically is struggling to make the kind of money that would allow it to make good on those orders, the list of beneficiaries of new data center developments could be small.


Original Submission

posted by jelizondo on Monday May 11, @08:04AM   Printer-friendly

https://scitechdaily.com/after-100-years-scientists-uncover-hidden-rule-governing-cosmic-rays/

More than 100 years after their discovery, cosmic rays continue to puzzle scientists. These extremely energetic particles travel across the universe from distant and powerful sources. The DAMPE (Dark Matter Particle Explorer) space telescope is working to better understand them, including whether dark matter plays a role in how they form.

This international project, which includes the University of Geneva (UNIGE), has now uncovered an important new clue. Researchers have identified a shared feature among these particles, and the findings were published in Nature.

Cosmic rays are the highest-energy particles ever detected, far exceeding anything produced by human-made accelerators on Earth. Their origins remain uncertain, though scientists suspect they are created in extreme environments such as supernova explosions, jets from black holes, or pulsars.

Launched in December 2015, the DAMPE space telescope was designed to investigate these questions. The mission includes major contributions from the astrophysics group at UNIGE’s Department of Nuclear and Particle Physics (DPNC). By analyzing highly precise data, researchers discovered a consistent pattern in the energy distribution of primary cosmic ray nuclei, from protons to iron.

“Cosmic rays are primarily composed of protons, but also of helium, carbon, oxygen, and iron nuclei,” explains Andrii Tykhonov, associate professor at the DPNC in the Faculty of Science at UNIGE, and co-author of the study. “These particles are also categorized according to their energy: low, up to a few billion electron-volts; intermediate, from a few billion to several hundred billion electron-volts; and high, from 1,000 billion electron-volts and beyond.”

The team found that the number of particles drops off more sharply after a certain energy level. This effect, known as “spectral softening,” reflects a steeper decline than the gradual decrease normally seen as energy increases.

This shift occurs at a rigidity of about 15 TV (teravolts), equivalent to roughly 15 trillion electron-volts of energy for a singly charged particle. Rigidity describes how strongly a particle's path is bent by magnetic fields.
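For reference, rigidity has a standard definition, and the "softening" described above is usually written as a broken power law. The parametrization below is illustrative, not the collaboration's fitted model:

    R = \frac{p c}{Z e} \qquad \text{(momentum } p\text{, charge number } Z\text{; measured in volts)}

    \frac{dN}{dR} \propto
    \begin{cases}
      R^{-\gamma_1}, & R \lesssim R_b \\
      R^{-\gamma_2}, & R \gtrsim R_b
    \end{cases}
    \qquad \gamma_2 > \gamma_1, \quad R_b \approx 15\ \text{TV}

In rigidity-dependent models the break R_b sits at the same rigidity for every nucleus; in energy-per-nucleon models it would not, which is the distinction the DAMPE data tests.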

Finding the same pattern at this rigidity across different types of nuclei supports models where both the acceleration and movement of cosmic rays depend on rigidity. Competing ideas that focus on energy per nucleon (energy divided by the number of nucleons in the particle) are strongly challenged by the data, with a confidence level of 99.999%.

Researchers at UNIGE played a key role in this work. They developed advanced artificial intelligence methods to reconstruct particle events and contributed to precise measurements of proton and helium fluxes, along with carbon analysis. The team also led the development of a major DAMPE instrument, the Silicon-Tungsten Tracker (STK), which allows scientists to accurately trace particle paths and measure their charge.

These findings bring scientists closer to understanding where cosmic rays come from and how they travel through the galaxy. The results place new limits on theories about particle acceleration in extreme astrophysical environments and improve models of how these particles move through interstellar space.

Reference: The DAMPE Collaboration. Charge-dependent spectral softenings of primary cosmic rays below the knee. Nature 653, 52–55 (2026). https://doi.org/10.1038/s41586-026-10472-0


Original Submission

posted by jelizondo on Monday May 11, @03:21AM   Printer-friendly
from the Big-Brother-is-Watching-You dept.

https://www.techspot.com/news/112309-google-chrome-has-silently-pushing-4gb-ai-model.html

Google started turning Chrome, the world's most popular web browser, into an AI browser last year in response to threats from popular AI-native rivals such as OpenAI. Recent reports have uncovered that this transition includes silently installing a large cache of AI weights on an unknown but potentially significant number of devices.

Google Chrome users who have noticed unusual disk activity or unexplained drops in available storage should look for a folder called "OptGuideOnDeviceModel" inside their Chrome directory. It holds roughly 4GB of weights for Google's Gemini Nano LLM, downloaded by the browser without user consent.

Deleting the folder offers no lasting relief – Chrome will simply redownload it. On Windows 11, the folder resides at %LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel. It has also been confirmed on Apple Silicon and Ubuntu machines.
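As a rough way to check whether your own installation has pulled the model down, a minimal sketch along these lines can report the folder's size. Only the Windows path is the one cited above; the macOS and Linux locations are assumptions based on Chrome's usual profile directories.

    import os
    import sys
    from pathlib import Path

    # Locate Chrome's on-device model cache. Only the Windows path is the one
    # reported above; the macOS and Linux locations are assumptions.
    if sys.platform == "win32":
        base = Path(os.environ.get("LOCALAPPDATA", "")) / "Google" / "Chrome" / "User Data"
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support" / "Google" / "Chrome"  # assumption
    else:
        base = Path.home() / ".config" / "google-chrome"  # assumption

    folder = base / "OptGuideOnDeviceModel"
    if folder.is_dir():
        size = sum(f.stat().st_size for f in folder.rglob("*") if f.is_file())
        print(f"{folder} uses {size / 2**30:.2f} GiB")
    else:
        print(f"No OptGuideOnDeviceModel folder found under {base}")

As the article notes, deleting the folder only helps until Chrome redownloads it.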

Uninstalling Chrome entirely is the most effective way to remove the weights. However, those who wish to continue using the browser might be able to disable the download by entering "chrome://flags" into the address bar, finding an item called "Enables optimization guide on device on Android," and selecting "Disabled" from the adjacent dropdown menu. This is also how users can determine whether their device is eligible for the feature.

Firefox has a single kill switch for all AI features: go to Settings > AI Controls and toggle on "Block AI enhancements."


Original Submission

posted by janrinok on Sunday May 10, @10:37PM   Printer-friendly

Cybersecurity expert Tom Rønning finds Microsoft Edge loads all saved passwords into computer memory as cleartext, making them easy for hackers to steal.

Microsoft has recently come under fire for how its Edge browser handles your saved passwords. A security expert named Tom Jøran Sønstebyseter Rønning has shared a worrying discovery about the Microsoft Edge web browser. It turns out that when you use Edge to save your passwords, the browser decrypts them all into plaintext in memory as soon as the app starts.

For context, plaintext means the passwords are not scrambled or hidden. They sit in the computer's memory as plain words that anyone with administrative privileges or SYSTEM-level access can read.

Rønning shared these findings at a tech event in Oslo called Big Bite of Tech 26, hosted by the cybersecurity firm Palo Alto Networks Norway. He explained that Edge is the only browser he tested that works this way, whereas other browsers like Google Chrome are safer because they use a method called App-Bound Encryption (ABE).

This feature locks the passwords to the specific browser app and only unscrambles them when you actually need to log in to a site. Once you are done, the browser hides them again.

The main worry is that these passwords stay in the computer memory even if you never visit the websites they belong to. To show how easy it is to see this data, Rønning created a tool called EdgeSavedPasswordsDumper and put it on GitHub.

This tool proves that if a hacker or an infostealer gets control of a computer, they can scan the process memory of the browser to find these saved passwords.

This is a big deal for offices that use terminal servers, Citrix, or Virtual Desktop Infrastructure (VDI), where many people share one machine. In these shared setups, an attacker with administrative rights can perform cross-process memory access to see the data of every user who is logged in and then steal passwords from people who aren't even using the browser at that moment.

When Rønning told Microsoft about this, the company said the setup was by design. Microsoft maintains that it has to balance how fast the browser works with how safe it is, and believes that if a hacker has already gained deep enough access to your computer to scan its memory, the device is already in big trouble.

Because Microsoft doesn't plan to change this soon, some experts suggest changing how you save your details. While Chrome uses better protection to stop other processes from stealing its keys, no browser is perfect. So it's better to use a separate password manager instead of saving passwords inside your web browser, as this keeps your data away from the browser's memory, where hackers can easily find it.

Experts shared their thoughts with Hackread.com, warning that this design choice creates a massive safety gap. Craig Lurey, from the Chicago-based firm Keeper Security, noted that while Windows tries to keep apps separate, one program can still often "pillage" the memory of another.

He added that since plaintext passwords exist in Edge's memory, other processes can read them "without restriction." To fight this, his firm created Keeper Forcefield, which uses kernel-level protection to block hackers from reading app memory even if the computer is already compromised.

Morey Haber, from the Atlanta-based firm BeyondTrust, also criticised the move. He explained that passwords should be "transient secrets" that are used and then quickly discarded. "The moment a password is retained in clear text memory... it stops being an authentication mechanism and becomes a liability," Haber warned. He added that if a password can be read in memory by a human or a malicious process, "it is already compromised."


Original Submission

posted by janrinok on Sunday May 10, @05:52PM   Printer-friendly

https://www.tomshardware.com/tech-industry/huawei-expects-12-billion-in-ai-chip-revenue-this-year-as-nvidias-china-market-share-hits-zero

These numbers describe a market that has bifurcated with unusual speed. Just 18 months ago, Nvidia supplied the vast majority of AI training and inference silicon used by Chinese cloud providers. Today, Huawei's Ascend 950PR is the primary procurement target for China's largest tech companies, and a training-focused successor named the 950DT is scheduled for Q4 this year.

The 950PR is currently the only Chinese-made AI processor that supports FP8, a compressed numerical format that allows more operations per second and lowers per-query costs. DeepSeek's V4 model uses a Mixture-of-Experts architecture with up to 1 trillion total parameters but activates only around 37 billion per inference pass. That favors inference-efficient hardware, which plays to the 950PR's strengths and away from its limitations in raw training throughput.
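To make the arithmetic behind "activates only around 37 billion per inference pass" concrete, here is a toy sketch of Mixture-of-Experts routing. The expert counts and sizes are hypothetical, not DeepSeek's actual configuration.

    import random

    # Toy Mixture-of-Experts routing: a router scores all experts for a token,
    # but only the top-k actually run, so active parameters are a small slice
    # of the total. All shapes below are hypothetical.
    NUM_EXPERTS = 256         # hypothetical
    TOP_K = 8                 # hypothetical
    PARAMS_PER_EXPERT = 4e9   # hypothetical

    def route(scores, k=TOP_K):
        """Return indices of the k highest-scoring experts for one token."""
        return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

    scores = [random.random() for _ in range(NUM_EXPERTS)]
    active = route(scores)
    active_params = TOP_K * PARAMS_PER_EXPERT
    total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
    print("experts used this token:", active)
    print(f"active params ~{active_params / 1e9:.0f}B of ~{total_params / 1e12:.2f}T total "
          f"({100 * active_params / total_params:.1f}%)")

Per-token compute scales with the active slice while memory footprint and training cost scale with the total, which is why such a model suits hardware tuned for inference throughput.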

DeepSeek gave Huawei early optimization access, but didn’t extend the same to Nvidia or AMD. While V4's open weights are released in standard formats compatible with CUDA-based frameworks, DeepSeek's own infrastructure runs on Huawei Ascend silicon. The collaboration has pulled forward procurement timelines across the Chinese cloud industry, and chip prices for the 950PR have reportedly risen by about 20% as a result of the demand.

Meanwhile, SMIC has been working on expanding its advanced-node capacity for more than a year. The goal is a five-fold increase over a period of two years that’ll lift 7nm and 5nm production to 100,000 wafers per month and half a million by 2030. In addition, the combined capacity for 22nm and below could rise from 30,000-50,000 wafer starts per month in 2025 to 50,000-60,000 or higher this year. Huawei is adding two dedicated fabrication plants, though ownership structures remain unclear. Once fully operational, those facilities could exceed the current output of comparable lines at SMIC.

Yields remain a thorn in China’s side, with SMIC’s 7nm-class process delivering substantially fewer good dies per wafer than TSMC’s equivalent nodes, and the 950PR is likely to be a much larger chip than a TSMC equivalent. SMIC’s cycle time from wafer start to finished and packaged as an Ascend processor is also a problem, currently sitting at around eight months, according to estimates from JP Morgan. For similar nodes at TSMC, it’s around three months.

Then there’s HBM — Huawei announced in September that it had developed its own HBM chips, HiBL 1.0 and HiZQ 2.0, with up to 1.6 TB/s of bandwidth, in partnership with CXMT, but how quickly CXMT can ramp production of competitive HBM remains an open question.

The H200, which Nvidia received U.S. licenses to sell to China earlier this year, hasn’t shipped a single unit despite receiving orders. Contradictory regulatory requirements from Washington and Beijing created a stalemate at customs: U.S. regulators require that H200 chips ordered by Chinese customers be used only inside China, while Beijing has instructed domestic technology companies to limit Nvidia hardware to overseas operations.

Nvidia confirmed in its FY2026 10-K filing that it’s "effectively foreclosed from competing in China's data center computing market" and is not assuming any data center compute revenue from the region in its current outlook. Bernstein analysts estimated earlier this year that Nvidia’s share of the China AI GPU market could fall to roughly 8% in the coming years, down from 66% in 2024, both due to U.S. restrictions and because domestic vendors are being pushed to cover up to 80% of demand from domestic sources. TrendForce projected in December that China's high-end AI chip market would grow by more than 60% in 2026, with domestic suppliers capturing about half of the total.

Huawei compensates by linking large numbers of processors via optical interconnects. Its CloudMatrix 384 system combines twelve racks of Ascend modules into a 384-processor fabric delivering roughly 300 PFLOPS, though at nearly four times the power draw of Nvidia's comparable GB200-based configurations.

The 950PR is primarily an inference chip, though; the training-focused 950DT, expected in Q4, is designed for deep learning workloads and could narrow the gap with Nvidia's Hopper generation for model training tasks. Until it ships, Chinese firms that need to train large foundation models domestically face constraints that inference silicon can’t fully solve.

As for Huawei's CANN software ecosystem, it’s now thought to have more than four million developers, but it remains far smaller than Nvidia's CUDA install base. Whether CANN can attract enough third-party development to become self-sustaining remains to be seen. For now, commercial momentum is running in Huawei's favor inside China, driven by the simple absence of alternatives.


Original Submission

posted by janrinok on Sunday May 10, @01:05PM   Printer-friendly
from the key-battle-is-won-but-the-war-is-not-over dept.

[Ed's Comment - From Wikipedia, the free encyclopedia:

The French HADOPI law (French: Haute Autorité pour la Diffusion des Œuvres et la Protection des droits d'auteur sur Internet, English: "Supreme Authority for the Distribution of Works and Protection of Copyright on the Internet") or Creation and Internet law (French: la loi Création et Internet) was introduced during 2009, providing what is known as a graduated response as a means to encourage compliance with copyright laws. HADOPI is the acronym of the government agency created to administer it.

Comment Ends --JR]

Today, the Conseil d’État (the French Administrative Supreme Court) ruled [PDF in French -Ed] in favor of La Quadrature du Net, French Data Network (FDN), Franciliens.net and Fédération FDN [sites in French -Ed]. It recognised that Hadopi's surveillance system (operated by Arcom since 2021) is a breach of fundamental rights protected by the European Union. As a result, it has ordered the government to repeal the core provisions of Hadopi's key decree, which organises the "graduated response" system. This fight against Hadopi, in which La Quadrature has been involved since the first legislative debates in the National Assembly in 2009, is emblematic of the archaic view held by successive governments, both left-wing and right-wing, of the question of sharing culture and knowledge online. It is now up to the government to acknowledge Hadopi's death and, instead of attempting to bring it back to life, to finally admit that online cultural sharing for non-commercial purposes must not be criminalised.

La Quadrature du Net started its court challenge back in 2009, questioning whether the law was actually compatible with European Union law and human rights. The law takes its name from the French copyright authority (HADOPI) created to administer it.

Previously:
(2026) France Keeps Breaking the Internet to Stop Piracy, Even Though It's Not Working
(2021) France Gets a New Anti-Piracy Agency in 2022


Original Submission

posted by janrinok on Sunday May 10, @08:21AM   Printer-friendly

Apple has agreed to pay $250 million to settle a class action lawsuit that accused it of misleading customers about the availability of its Apple Intelligence features. The proposed settlement would apply to people in the US who purchased any model of the iPhone 16 or the iPhone 15 Pro between June 10th, 2024 and March 29th, 2025.

People who submit qualifying claims can receive $25 for each eligible device, "which may decrease or increase up to $95 per device, depending on claim volume and other factors," according to Clarkson Law Firm, the legal team behind the class action lawsuit.

The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."

In a statement to The Verge, Apple spokesperson Marni Goldberg said the company "resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users." You can read Apple's full statement at the bottom of this article.

Apple previewed a series of AI-powered features coming to its iPhones during its June 2024 Worldwide Developers Conference, including a more personalized Siri. But when the iPhone 16 launched in September, Apple labeled it as "built for Apple Intelligence," even though it lacked many of the capabilities teased months earlier.

Instead, Apple gradually rolled out its new AI features, including Image Playground, Genmoji, and a ChatGPT integration in Siri. The company also delayed the launch of its more personalized Siri, which is now expected to arrive later this year.

Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for the Apple Intelligence page on its website. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.

Apple denied any wrongdoing. Here's the company's full statement:

Since the launch of Apple Intelligence, we have introduced dozens of features across many languages that are integrated across Apple's platforms, relevant to what users do every day, and built with privacy protections at every step. These include Visual Intelligence, Live Translation, Writing Tools, Genmoji, Clean Up and many more.

        Apple has reached a settlement to resolve claims related to the availability of two additional features. We resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users.


Original Submission

posted by janrinok on Sunday May 10, @03:33AM   Printer-friendly
from the death-before-inconvenience-and-all-that dept.

Ah, nostalgia. The taste of Mum's secret-sauce pasta, the endless summers, that one time Fat Nadya was going to show her boobs in the bushes behind Ms Wolowitz's house ... and soon, dear reader, the indescribable pleasure of wasting time selecting cars, fire hydrants, traffic lights and the like for the fourteenth time just to read or buy something online.

For Google has declared that the Olden Ways are over, as these are agentic times, and it is necessary to let your computer do the routine stuff for you, like booking a month-long cruise in the Caribbean or something. So, no more old captcha: it's ReCaptcha Version II now, and you, yes you, will from now on be obligated to prove you're not (another) machine by taking a picture with your smartphone (machine) which, of course, must be authenticated itself to the Google Machine, to prove you're not a, you guessed it, machine. (Oblig funny monkey clip here [Video not reviewed. -Ed])

Somehow I got the feeling that the only purpose of a human in the not so distant future will be to sign off (minute 21 and beyond) for a machine, and pay its bills.

I guess that's called winning -- by the machines.


Original Submission

posted by janrinok on Saturday May 09, @10:51PM   Printer-friendly

https://archive.ph/TCsXg (Actually a NYT article)

There is a moment when internet companies get the stink of death on them. For AOL, it was 2003, when it became clear that its users were abandoning its clunky dial-up internet service for far faster broadband. For Yahoo, it was 2015, when its last-ditch acquisition spree failed and it sold itself to Verizon.

For Meta, that time is now. I believe the company — one of the most powerful media organizations in the world and one of the most valuable members of the S&P 500 — is at the start of a long, slow decline that will trigger aftershocks to our economy and our society.

It may be named Meta, but the company's biggest asset is still Facebook. Started from a Harvard dorm, the original online social network has dominated our world for two decades. Its three billion users still outnumber the population of any single country. Its platforms can help sway an election, fuel an insurrection or spark a genocide.

But if you look carefully, you can see chinks in the armor. Meta's earnings are starting to show the strain from years of growing consumer disaffection and reckless spending. The latest earnings, released on April 29, revealed a dip in user numbers for the first time since it started reporting these figures. And the slumping stock confirms what we have all known in our guts for a while: This is a company entering its zombie era.


Original Submission

posted by janrinok on Saturday May 09, @06:03PM   Printer-friendly

https://www.techradar.com/vpn/vpn-privacy-security/russias-censor-body-roskomnadzor-wants-to-block-92-percent-of-vpn-apps-by-2030-and-its-investing-20-billion-rubles-a-year-to-build-a-permanent-vpn-censorship-system

This directive — first uncovered by Russian independent journalist Maria Kolomychenko, and reported by the Russian version of Radio Free Europe — [site in Russian- Ed] marks a major escalation in the Kremlin's long-running effort to control what its citizens see online and cut them off from the open internet.

The subsidy document allocates roughly 20 billion rubles annually for the operation of ASBI. This figure corroborates a September 2024 report that authorities intended to spend 60 billion rubles (around $650 million) over the next five years to update its internet-blocking system.

A critical detail is that the Russian government hasn’t defined what "92% effectiveness" actually means. Kolomychenko noted it could refer to the number of VPN applications removed from stores, the volume of traffic blocked, or the percentage of people unable to connect.

This marks a fundamental shift in how Russia governs the internet. Rather than chasing down individual services one by one, the state is now pouring money into the underlying network layer to build a permanent filter.

By placing these filters directly in the network path, Roskomnadzor aims to make bypassing blocks a constant uphill battle for users.

Since the invasion of Ukraine, censorship has expanded from specific news outlets to targeting major social media platforms and messaging tools.

Millions of websites have been blocked, and as of 2025, authorities have started cutting off mobile internet across entire regions. They’ve also officially blocked major platforms like WhatsApp and Telegram.

So far, more than 400 VPN services have been banned, with over 1,000 restricted, according to another Russian journalist, Aleksandar Djokic. This, even though it’s still legal to use a VPN in Russia.

Starting April 15, 2026, major Russian service providers are legally required to detect whether a user is connected via a VPN, raising concerns about data privacy and potential future profiling.

At the same time, the Ministry of Digital Development is also pushing a new "foreign traffic tax". It would charge mobile users 150 rubles per gigabyte for any data over a 15GB monthly limit. This fee, which has been facing technical delays, hits the international routes VPNs rely on, making it too costly for most people to bypass the blocks.


Original Submission