
posted by janrinok on Tuesday April 07, @08:28PM   Printer-friendly

https://techtoday.co/googles-new-compression-drastically-shrinks-ai-memory-use-while-quietly-speeding-up-performance-across-demanding-workloads-and-modern-hardware-environments/

As models scale, this memory demand becomes increasingly difficult to manage without compromising speed or accessibility in modern LLM deployments. Traditional approaches attempt to reduce this burden through quantization, a method that compresses numerical precision. However, these techniques often introduce trade-offs, particularly reduced output quality or additional memory overhead from stored constants.

This tension between efficiency and accuracy remains unresolved in many existing systems that rely on AI tools for large-scale processing.

Google’s TurboQuant introduces a two-stage process intended to address these long-standing limitations.

The first stage relies on PolarQuant, which transforms vectors from standard Cartesian coordinates into polar representations. Instead of storing multiple directional components, the system condenses information into radius and angle values, creating a compact shorthand that reduces the need for repeated normalization steps and limits the overhead that typically accompanies conventional quantization methods.
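As a rough illustration of the polar idea: the pairing of components, the 8-bit angle grid, and the function names below are all assumptions made for this sketch, not details of Google's PolarQuant.

```python
import numpy as np

def polar_compress(v, angle_bits=8):
    """Toy polar quantization: treat consecutive component pairs as 2D
    points and store each as a full-precision radius plus a coarsely
    quantized angle. Illustrative only, not PolarQuant itself."""
    x, y = v[0::2], v[1::2]
    r = np.hypot(x, y)                       # radius per pair
    theta = np.arctan2(y, x)                 # angle in [-pi, pi]
    levels = 2 ** angle_bits
    q = np.round((theta + np.pi) / (2 * np.pi) * (levels - 1)).astype(np.uint8)
    return r, q

def polar_decompress(r, q, angle_bits=8):
    """Invert the toy transform; error is bounded by the angle grid step."""
    levels = 2 ** angle_bits
    theta = q / (levels - 1) * 2 * np.pi - np.pi
    out = np.empty(2 * len(r))
    out[0::2] = r * np.cos(theta)
    out[1::2] = r * np.sin(theta)
    return out
```

With 8 angle bits the reconstruction error for each pair is bounded by roughly r·π/255, which is why the direction can be stored compactly while the radius stays at full precision.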

The second stage applies Quantized Johnson-Lindenstrauss, or QJL, which functions as a corrective layer. While PolarQuant handles most of the compression, it can leave small residual errors. QJL corrects for these by reducing each vector element to a single bit, either positive or negative, while preserving the essential relationships between data points.
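The one-bit step resembles a sign-based random projection (a SimHash-style sketch); the code below illustrates that generic idea under that assumption, not TurboQuant's actual QJL transform.

```python
import numpy as np

def qjl_sketch(v, m=4096, seed=0):
    """Toy sign-quantized Johnson-Lindenstrauss sketch: project v onto m
    random Gaussian directions and keep only the sign of each projection,
    i.e. one bit per direction. Illustrative, not TurboQuant's QJL."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, len(v)))
    return np.sign(S @ v)

def estimated_angle(bits_a, bits_b):
    """The fraction of disagreeing sign bits estimates the angle between
    the original vectors: P(disagree) = angle / pi."""
    return np.mean(bits_a != bits_b) * np.pi
```

Two orthogonal vectors, for example, disagree on about half of their sign bits, so the estimate comes out near π/2: relative relationships between data points survive even though each stored value is a single bit.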

This additional step refines attention scores, which determine how models prioritize information during processing.

According to reported testing, TurboQuant achieves efficiency gains across several long-context benchmarks using open models.

The system reportedly reduces key-value cache memory usage by a factor of six while maintaining consistent downstream results. It also enables quantization to as little as three bits without requiring retraining, which suggests compatibility with existing model architectures.
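For scale, the claimed six-fold reduction is close to what simply moving key-value storage from 16-bit to 3-bit values would predict. A back-of-the-envelope check, with entirely made-up model dimensions:

```python
# Hypothetical KV-cache sizing; the dimensions are illustrative, not from the article.
layers, kv_heads, head_dim, seq_len = 32, 8, 128, 128_000
elements = 2 * layers * kv_heads * head_dim * seq_len   # keys + values
fp16_gb = elements * 16 / 8 / 1e9                       # 16 bits per element
q3_gb = elements * 3 / 8 / 1e9                          # 3 bits per element
print(f"fp16: {fp16_gb:.1f} GB, 3-bit: {q3_gb:.1f} GB, "
      f"ratio: {fp16_gb / q3_gb:.2f}x")
```

The raw bit-width ratio is 16/3 ≈ 5.3x; the reported factor of six presumably also reflects implementation details beyond raw bit width.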

The reported results also include gains in processing speed, with attention computations running up to eight times faster than standard 32-bit operations on high-end hardware. These results indicate that compression does not necessarily degrade performance under controlled conditions, although such outcomes depend on benchmark design and evaluation scope.

This system could also lower operation costs by reducing memory demands, while making it easier to deploy models on constrained devices where processing resources remain limited. At the same time, freed resources may instead be redirected toward running more complex models, rather than reducing infrastructure demands.

While the reported results appear consistent across multiple tests, they remain tied to specific experimental conditions. The broader impact will depend on real-world implementation, where variability in workloads and architectures may produce different outcomes.


Original Submission

posted by janrinok on Tuesday April 07, @03:43PM   Printer-friendly
from the flip-flop-it-was-doing-the-bop dept.

Sediment cores from North Atlantic reveal pole reversal dragged on for 70,000 years—far longer than previously known:

Earth's magnetic field is generated by the churn of its liquid nickel-iron outer core, but it is not a constant feature.

Every so often, the magnetic north and south poles swap places in what are called geomagnetic reversals, and the record of these flips is preserved in rocks and sediments, including those from the ocean floor. These reversals don't happen suddenly, but over several thousand years, during which the magnetic field fades and wobbles while the two poles wander before finally settling in opposite positions on the globe.

Over the past 170 million years, the magnetic poles have reversed 540 times, with the reversal process typically taking around 10,000 years to complete each time, according to years of research. Now, a new study by a University of Utah geoscientist and colleagues from France and Japan has upended this scenario after documenting instances 40 million years ago where the process took far longer to complete, upwards of 70,000 years. These findings offer a new perspective on the geomagnetic phenomenon that envelops our planet and shields it from solar radiation and harmful particles from space.
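As a quick sanity check on those figures, 540 reversals over 170 million years implies one roughly every 315,000 years, so a 70,000-year transition is a substantial fraction of the typical gap between flips:

```python
# Back-of-the-envelope arithmetic from the figures quoted above
span_years = 170_000_000   # past 170 million years
reversals = 540
long_transition = 70_000   # duration documented in the new study

avg_interval = span_years / reversals
print(f"average interval: {avg_interval:,.0f} years")
print(f"long reversals span {long_transition / avg_interval:.0%} of that interval")
```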

Extended periods of reduced geomagnetic shielding likely influenced atmospheric chemistry, climate processes and the evolution of living organisms, according to co-author Peter Lippert, an associate professor in the U Department of Geology & Geophysics.

"The amazing thing about the magnetic field is that it provides the safety net against radiation from outer space, and that radiation is observed and hypothesized to do all sorts of things. If you are getting more solar radiation coming into the planet, it'll change organisms' ability to navigate," said Lippert, who heads the Utah Paleomagnetic Center. "It's basically saying we are exposing higher latitudes in particular, but also the entire planet, to greater rates and greater durations of this cosmic radiation and therefore it's logical to expect that there would be higher rates of genetic mutation. There could be atmospheric erosion."

[...] "This finding unveiled an extraordinarily prolonged reversal process, challenging conventional understanding and leaving us genuinely astonished," Yamamoto wrote in a summary posted by Springer Nature.

[...] While the finding was a surprise, it may not have been unexpected, according to the study. Computer models of Earth's geodynamo—the process in the swirling outer core that generates the electrical currents supporting the magnetic field—had indicated that reversal durations vary, with many short ones but also occasional long, drawn-out transitions, some lasting up to 130,000 years.

In other words, Earth's geomagnetism may have always had this unpredictable streak, but scientists hadn't caught it in the rocks until now.

Journal Reference: Yamamoto, Y., Boulila, S., Takahashi, F. et al. Extraordinarily long duration of Eocene geomagnetic polarity reversals. Commun Earth Environ 7, 180 (2026). https://doi.org/10.1038/s43247-026-03205-8


Original Submission

posted by janrinok on Tuesday April 07, @11:01AM   Printer-friendly

$500 fiber optic HDMI cable delivers flawless 48 Gbps performance across a staggering 990 feet — crushes 8K at 60 Hz and 4K at 120 Hz over long distances

An expensive HDMI cable that's not snake oil?:

Now, these specs aren't special in a vacuum, but the fact that the cable can enable them over (up to) 990 feet — that's the impressive bit. The "entry-level" $116 version is only 3 feet long, and for that, it's quite expensive because you don't need fiber optic for this length. The best deal here is probably the 100-foot cable priced at $150, so only about $30 more for an extra 97 feet of fiber-optic goodness.

Ruipro has made the HDMI connectors on both ends removable, so you won't have to replace the entire cable if a plug breaks. When removed, the end of the cable can slot into keystone jacks and wall plates as well for easy storage. The cable itself is relatively thin for its size, and the connectors are made entirely of metal to ensure durability.

Another benefit of fiber optic is its resistance to electromagnetic interference, though that's not a huge issue to begin with for HDMI, and EMI is notoriously used as the bait to sell those aforementioned miracle cures. Regardless, this is still a solid HDMI 2.1 cable for those who value signal integrity, and even though the starting price is certainly not enticing, the subsequent options are priced rather fairly.


Original Submission

posted by janrinok on Tuesday April 07, @06:18AM   Printer-friendly

Contrary to long-standing beliefs, motion from eye movements helps the brain perceive depth—a finding that could enhance virtual reality:

When you go for a walk, how does your brain know the difference between a parked car and a moving car? This seemingly simple distinction is challenging because eye movements, such as the ones we make when watching a car pass by, make even stationary objects move across the retina—motion that has long been thought of as visual "noise" the brain must subtract out.

Now, researchers at the University of Rochester have discovered that instead of being meaningless interference, the visual motion of an image caused by eye movements helps us understand the world. The specific patterns of visual motion created by eye movements are useful to the brain for figuring out how objects move and where they are located in 3D space.

"The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance," says Greg DeAngelis, [...] "But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world."

[...] "We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements," says DeAngelis. "Contrary to conventional ideas, the brain doesn't ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object's motion and depth."

This research has important implications for understanding visual perception, which informs how the brain interprets everyday activities like reading and recognizing faces. But it could also provide insight and new applications for visual technologies, such as virtual reality headsets.

"VR headsets don't factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making," says DeAngelis. This could be what causes some people to experience motion sickness while using a VR headset.

Journal Reference: Xu, ZX., Pang, J., Anzai, A. et al. Flexible computation of object motion and depth based on viewing geometry inferred from optic flow. Nat Commun 17, 1092 (2026). https://doi.org/10.1038/s41467-025-67857-4


Original Submission

posted by hubie on Tuesday April 07, @01:33AM   Printer-friendly

Apple has finally discontinued its tower workstation:

While Apple is celebrating its upcoming 50th anniversary and looks forward to another 50 years, there’s one major product that has come to an end. The Mac Pro, as Apple confirmed to Macworld, has been discontinued by the company. The Mac Pro section of Apple.com has been removed from the website, though Mac Pros are still available through Apple’s Certified Refurbished store.

It’s a quiet end for a product that was last updated in 2023 with an M2 Ultra chip. But it wasn’t a surprise; Bloomberg’s Mark Gurman reported last November that Apple had “largely written off” the Mac Pro, believing that the Mac Studio is a better product. Why it took so long to finally pull the plug isn’t clear, but Apple hadn’t done any updates to the hardware since the M2 Ultra upgrade nearly three years ago.

Apple has been rumored to have an update to the Mac Studio in the works, with an announcement likely between now and WWDC26. Apple positions the Mac Studio as the machine for production environments that demand workstation performance, and seemingly feels confident that the Mac Studio can fill the Mac Pro’s shoes.

The discontinuation of the Mac Pro leaves Apple without a modular tower computer, but it’s been moving away from those types of machines for a while. In response to those who think an expandable tower is a gaping hole in the Mac lineup, Apple often counters with confidence that its silicon can make up for the need for expansion cards, and Thunderbolt can handle storage needs just as well.

Apple introduced the Mac Pro in 2006, the same year Apple completed its transition from PowerPC chips to Intel. It had two 64-bit Intel Xeon 5100 (Woodcrest) processors, four hard drive bays, eight RAM slots, and started at $2,499.


Original Submission

posted by hubie on Monday April 06, @08:52PM   Printer-friendly

The Pentagon is spending $13.4 billion on AI this year alone:

The designation enters Maven into the Future Years Defense Program as a protected line item, giving it visibility and stability across budget cycles that experimental programs lack. The U.S. Army will manage all Maven contracts going forward, and oversight will transfer from the National Geospatial-Intelligence Agency to the Chief Digital and AI Officer within 30 days, with program-of-record status expected before the close of fiscal year 2026 on September 30.

Palantir took over and built a full command-and-control platform that ingests data from more than 150 sources, according to Palantir's public demonstrations: satellite imagery, drone video, radar, infrared sensors, signals intelligence, and geolocation data. Computer vision algorithms trained on millions of labeled images automatically detect and classify battlefield objects, with yellow-outlined boxes marking potential targets, blue outlines flagging friendly forces and no-strike zones, and an ‘AI Asset Tasking Recommender’ proposing which weapons platforms and munitions should be assigned to each target.

NGA Director Vice Admiral Frank Whitworth stated at Palantir's AIPCON 9 conference in March that Maven can generate 1,000 targeting recommendations per hour, as reported by The Register, with the 18th Airborne Corps reportedly achieving comparable targeting output to the 2,000-person cell used during Operation Iraqi Freedom with roughly 20 people. Maven now has more than 20,000 active users, a figure that has quadrupled since March 2024. The platform was used during the 2021 Kabul airlift, to supply target coordinates to Ukrainian forces in 2022, and most recently during Operation Epic Fury against Iran in 2026, where it reportedly enabled processing of 1,000 targets within the first 24 hours, according to SpaceNews. NATO acquired a version in March 2025.

Meanwhile, the FY2026 defense budget reached $1.01 trillion, representing a 13% increase over FY2025, and for the first time included a dedicated AI and autonomy budget line of $13.4 billion, according to MeriTalk's analysis of the Pentagon budget request. That allocation covers unmanned aerial vehicles ($9.4 billion), maritime autonomous systems ($1.7 billion), and supporting AI software ($1.2 billion). The Pentagon now oversees more than 685 AI-related projects tied to weapons systems, per Congressional Research Service tracking.

[...] The Brennan Center for Justice, in a March 2026 report titled "The Business of Military AI," documented that Hegseth halved staffing at the Office of the Director of Operational Test and Evaluation and shuttered the Civilian Protection Center of Excellence. The center's researchers wrote that "the accelerating use of AI in warfighting has not been met with commensurate urgency to reckon with its dangers."

CSIS research has quantified AI-assisted targeting error propagation at 25% under variable conditions, according to a January 2026 analysis. Whitworth stated that by June 2026, Maven will begin transmitting "100 percent machine-generated" intelligence to combatant commanders. “No human hands actually participate in that particular template and that particular dissemination,” he added. “We want to use it for everything, not just targeting.”

Senator Elissa Slotkin introduced the AI Guardrails Act this month, which would prohibit the DoD from using autonomous weapons to kill without human authorization and bar AI use for domestic mass surveillance, The Hill reported. The FY2026 NDAA already declares targeting and launch authorization "inherently governmental" functions and requires reporting of autonomous weapons directive waivers to Congress.

[...] Meanwhile, a recent CSIS analysis documented Russian forces striking approximately 300 targets per day using unmanned systems in Ukraine, with data collection feeding AI platforms designated Platform-GNS and Avtomat. Russia voted against the December 2024 UN General Assembly draft resolution on lethal autonomous weapons alongside only North Korea and Belarus. That resolution passed 166-3 but remains non-binding; no international treaty currently governs lethal autonomous weapons systems. With AI reshaping the technology industry, its influence now extends into military use, and the implications of such deals remain to be seen.


Original Submission

posted by hubie on Monday April 06, @04:11PM   Printer-friendly

Claude source code leaked?

The date makes it suspicious, but both the accidental publishing of source and the teardown sound all too plausible.

https://neuromatch.social/@jonny/116324676116121930

  • Claude code source "leaks" in a mapfile
  • people immediately use the code laundering machines to code launder the code laundering frontend
  • now many dubious open source-ish knockoffs in python and rust being derived directly from the source

What's Anthropic going to do, sue them? Insist in court that an LLM recreating copyrighted code is a violation of copyright???

The 1 Apr Download of 'Leaked' Claude Code Source Contains Malware

Source code with a side of Vidar stealer and GhostSocks

Tens of thousands of people eagerly downloaded the leaked Claude Code source code this week, and some of those downloads came with a side of credential-stealing malware.

A malicious GitHub repository published by idbzoomh uses the Claude Code exposure as a lure to trick people into downloading malware, including Vidar, an infostealer that snarfs account credentials, credit card data, and browser history; and GhostSocks, which is used to proxy network traffic. 

Zscaler's ThreatLabz researchers came across the repo while monitoring GitHub for threats, and said it's disguised as a leaked TypeScript source code for Anthropic's Claude Code CLI. 

"The README file even claims the code was exposed through a .map file in the npm package and then rebuilt into a working fork with 'unlocked' enterprise features and no message limits," the security sleuths said in a Thursday blog.

They added that the GitHub repository link appeared near the top of Google results for searches like "leaked Claude Code." While that was no longer the case at The Register's time of publication, at least two of the developer's trojanized Claude Code source leak repos remained on GitHub, and one of them had 793 forks and 564 stars.

[...] In March, security shop Huntress warned about a similar malware campaign using OpenClaw, the already risky AI agent platform, as a GitHub lure to deliver the same two payloads.

Both of these illustrate how quickly criminals move to take a buzzy new product or news event (like OpenClaw and the Claude Code leak) and then abuse it for online scams and financial gain. "That kind of rapid movement increases the chance of opportunistic compromise, especially through trojanized repositories," the Zscaler team wrote.

The blog also includes a list of indicators of compromise, including the GitHub repositories with the trojanized Claude Code leak and malware hashes to help defenders in their threat-hunting efforts, so be sure to check that out - and, as always, be careful what you download.


Original Submission #1 | Original Submission #2

posted by hubie on Monday April 06, @11:23AM   Printer-friendly
from the exploding-on-the-scene dept.

New fossils from the Ediacaran Period show that some animal groups are older than we thought:

More than 539 million years ago, soft, clarinet-shaped animals anchored themselves to the seafloor on disc-shaped bases, swaying alongside stalked animals resembling worms and baskets. These woodwindlike creatures are just a few of those coming to life from a treasure trove of newly discovered fossils in southwestern China.

It’s surprising to see some of these weird creatures this far back in the fossil record, and their discovery is unearthing crucial new details about one of the most notable explosions in the diversity of animals in fossil history, researchers report April 2 in Science.

“This paper is absolutely fascinating,” says paleontologist Emily Mitchell at the University of Cambridge. “It provides vital insights into life around the end of the Ediacaran Period.”

The Ediacaran preceded a pivotal moment in animal prehistory called the Cambrian explosion, which started around 539 million years ago and marked a dramatic and rapid diversification, an “explosion” of physical forms and complexity. How that explosion happened isn’t clear. Fossils from the late Ediacaran Period, from 575 million to 539 million years ago, show this is when the first unambiguous animal fossils appear but don’t offer many details about the animals’ bodies or biology. Many of the Cambrian animal groups also do not appear in the Ediacaran record, suggesting that Cambrian animal diversity may have exploded from only a small number of species.

Now, a new trove of fossil specimens collected near Jiangcheng, China, is challenging that idea.

[...] Among the more eyebrow-raising findings were the animals with bilateral symmetry — similar features on the right and left sides. Fossilized bodies of bilaterians this early are rare, with only four species known from the Ediacaran until now. Li and the team found more than 180 bugle worm fossils, along with fossils of other bilaterian creatures, including those that looked like sausages on skewers, with feathery appendages around their mouth ends.

Emmy Smith, a paleontologist at Johns Hopkins University in Baltimore, was struck by the abundance and diversity of bilaterian fossil finds. Many of these show structures specialized for feeding. These weren’t simple progenitors of later lineages; these animals were already quite physically complex, she says. “That strengthens the view that major animal lineages were already diversifying before the Cambrian.”

The results suggest the explosion of animal diversity in the Cambrian didn’t appear out of nowhere, Li says. Instead, a gradual buildup of complex animal life was underway millions of years before.

Journal Reference: G Li et al. The dawn of the Phanerozoic: A transitional fauna from the late Ediacaran of Southwest China. Science. Published online: April 2, 2026. doi: 10.1126/science.adu2291


Original Submission

posted by hubie on Monday April 06, @06:35AM   Printer-friendly

Turns out massive caches are good for more than games. House of Zen boasts 5-13% perf boost over prior-gen part:

AMD aims to extend its lead in desktop gaming with a new CPU, dubbed the Ryzen 9 9950X3D2 Dual Edition. This top-of-the-line part has 16 cores fed by an absolutely massive 208 MB pool of cache, with memory spread across both CCDs.

The hotly anticipated processor is essentially a modified version of the 9950X3D announced in 2025, only both of the chip's two compute dies are now equipped with a 64 MB SRAM tile, boosting the L3 cache from 128 MB to 192 MB.

Larger caches benefit data-heavy workloads, in particular PC games, by keeping more of the working memory closer to the cores. Since the launch of the Ryzen 7 5800X3D in 2022, AMD has used advanced packaging to expand its chips' L3 cache without needing to design a larger die. This technology helped AMD overtake long-time rival Intel in gaming CPU performance.

While 3D V-Cache's most obvious benefits accrue to gamers, the additional cache also benefits a lot of high-powered production workloads, like 3D rendering, code compilation, AI, and data science because frequently accessed data can stay resident on the CPU for longer. (This is one of the reasons why caches of server CPUs have increased so dramatically over the past few years.)

[...] According to Jack Huynh, SVP of AMD's computing and graphics group, with the 9950X3D2, customers "no longer have to choose between a gaming or creator CPU."

[...] The 9950X3D2 is slated to hit store shelves on April 22. Pricing for the new part hasn't been released just yet, though with the 9950X3D currently retailing for north of $649, we don't expect it to be cheap.

This may make it a tough sell to gamers at a time when memory, storage, and GPU prices are at an all-time high. AMD's decision to launch a new flagship in the current climate stands in stark contrast to Intel's newly launched Core Ultra 200S Plus series processors, which we reviewed earlier this week and which promise 18 to 24 cores at prices ranging from $200 to $300.

While AMD's X3D chips still hold the advantage in gaming, Intel's latest parts may see wider adoption because they're cheap and also perform exceptionally well in production workloads.


Original Submission

posted by hubie on Monday April 06, @01:54AM   Printer-friendly

The science of smartphone addiction:

This is huge news, a landmark verdict that will inform hundreds of cases to come. While the plaintiff, a 20-year-old identified only as KGM, has been awarded $6m in damages, it's the verdict itself that's most damaging, as it opens the door to many more lawsuits against tech companies.

KGM's lawyers, in their closing remarks, said: “How do you make a child never put down the phone? That’s called the engineering of addiction. They engineered it, they put these features on the phones. These are Trojan horses: they look wonderful and great … but you invite them in and they take over.”

One literature review by Italian pediatricians linked digital addiction in children with depression, diet, and psychological issues, as well as 'sleep, addiction, anxiety, sex related issues, behavioral problems, body image, physical activity, online grooming, sight, headache, and dental care'. KGM was six years old when she first got addicted to social media, according to her testimony.

Researchers in Germany, Sweden, and the Netherlands have also linked 'high social media usage' among adolescents to 'a statistically significant change in the developmental trajectory of cerebellum volumes', a part of the brain associated with emotional control. It could literally influence the brain's physical development.

Another report says: "frequent social media use may be associated with distinct changes in the developing brain in the amygdala (important for emotional learning and behavior) and the prefrontal cortex (important for impulse control, emotional regulation, and moderating social behavior), and could increase sensitivity to social rewards and punishments".

However, it's worth noting that none of these findings are yet conclusive.

They're not entirely wrong. The basis of addiction is all about hijacking the 'mesolimbic system', the part of the brain responsible for associating certain behaviors with rewards, both natural (food, sex, play) and artificial (drugs such as alcohol and nicotine, and notifications). Once a reward is achieved, dopamine is released.

One study on teen addiction linked activation of the mesolimbic pathway to social media use, stating children are "often victims of an unrelenting 'dopamine cycle' created in a loop of 'desire' induced by endless social media feeds, 'seeking and anticipating rewards' in the way of photo tagging, likes, and comments," the latter being the triggers that continue to reinstate the 'desire' behavior.

"The overactivation of the dopamine system in such individuals can further increase the risk of addictive behaviors or pathological changes that lead to a decline in pleasure from natural rewards." Essentially, all you want to do is keep scrolling, just like an addict looking for an endless fix because natural rewards no longer provide the same pleasure as scrolling.

According to CNN, KGM's lawyer Mark Lanier said in his opening statement: “This case is about two of the richest corporations who have engineered addiction in children’s brains. The swipe, for a child like Kaley, this motion is a handle of a slot machine. But every time she swipes, it’s not for money, but for mental stimulation.”

KGM's lawyers mention the infinitely scrollable feeds and video autoplay as features designed to keep people on the apps, maintain attention, and encourage addictive behaviors. But it's ok, because the inventor of the scrollable feed, Aza Raskin, apologized when he unleashed this horror upon the world.

Combine this with the infinitely scrollable feed and addictive, casino-esque nature of social media platforms, and you get doomscrolling, a constant stream of bad news, enraging user-created content, and messaging that you're never going to be enough unless you do this, or buy that, or look like this.

[...] The bottom line? Children are easily impressionable, and if online negativity is more rewarding than positivity, unfettered access to an endless stream of content designed to make users feel worse to increase engagement is going to warp their worldview. According to the jury, in this case, the buck stops at the algorithm's designers.


Original Submission

posted by jelizondo on Sunday April 05, @09:04PM   Printer-friendly

'Shockingly bad': Nissan Leaf drivers voice anger over app shutdown:

Owners of some Nissan Leaf electric vehicles are angry after the carmaker announced it would shut down an app that lets them remotely control battery charging and other functions.

Drivers of Leaf cars made before May 2019 and the e-NV200 van (produced until 2022) have been told that the NissanConnect EV app linked to their vehicles will "cease operation" from 30 March. This means they will lose remote services, including turning on the heating, and some map features.

Experts said they expected other drivers to experience similar problems in future as "connected cars" – vehicles that can connect to the internet – get older.

One driver and Guardian Money reader, Alan Clucas, said he was upset by the switch-off, adding that some of the affected vehicles were less than four years old. "I think Nissan should do better," he said.

Talking about his seven-year-old Leaf, Clucas said the "most annoying thing will be not being able to smart-charge the car or remotely warm it up on frosty mornings". He added: "We could previously check the charge levels from a mobile phone."

Other affected motorists have been discussing the matter online. "Looks like going forward, only paid-for remote connectivity will be supported," said one, adding that it was "amazing" that Nissan "only supported a core EV feature for seven years. Considering [an] average car can last for 12-plus years, that is shockingly bad."

Another driver added: "My car is almost 10 years old now, but those with an early 2020 model won't be too happy that their not-even seven-year-old car is having remote access removed with a month's notice."

Nissan faced criticism in 2024 when it dropped the first generation of Leaf cars after the switch-off of the UK's 2G network. The carmaker said the latest move was because the app could not be "upgraded to support future enhancements".

In-car services such as climate control and charging timers would still be available through the infotainment system, Nissan said, but remote services and some map-related features would not.

Steve Walker from the motoring magazine Auto Express said the situation was a preview of what would happen when "today's cars" get old.

"As modern cars that are even more reliant on connected services and updates than the Leaf age, it is likely that manufacturer support for their systems will drop away, too," he said.

This could mean other features including navigation systems, touchscreen controls and even subscriptions for features such as heated seats, autonomous driving aids or extra engine power could stop working or be turned off further down the line, he said.

"Nobody wants to see cars rendered obsolete before their time," Walker said. "The best way to minimise the environmental impact of cars is to build them to last. Software and digital systems need to be as durable and reliable as mechanical components."

Benjamin Gorman, a senior lecturer at Bournemouth University, said the tech world was shifting towards software-as-a-service (SaaS) models.

"A good example is software like Adobe Photoshop – historically, you could buy it once and use it for as long as you liked, whereas now it typically requires an ongoing subscription," said Gorman.

This works well for things such as games and entertainment platforms, where people are used to subscriptions and shorter upgrade cycles, he said. However, it is more problematic when applied to expensive physical products such as cars, which people expect to keep working for a decade or more.

"I suspect we will see this issue more often in the coming years as vehicles become increasingly software-driven," said Gorman. "We are seeing more manufacturers experiment with subscription fees for connected features ... but it raises important questions about what consumers feel they should permanently own versus what they are effectively renting through software services."


Original Submission

posted by jelizondo on Sunday April 05, @04:13PM   Printer-friendly
from the heat-island dept.

The data centers at the heart of the AI boom are producing so much heat that they're spiking land temperatures for miles around them by up to 16 degrees Fahrenheit, new research suggests. The effect is so pronounced that the researchers say they're creating entire "heat islands."

The findings, detailed in a study that has yet to be peer-reviewed, add to an already grim picture of the environmental impact of these sprawling facilities, the largest of which consume enough energy to power entire cities. Their commensurate greenhouse gas emissions, however, apparently aren't the only way data centers are heating up the world around them.

The researchers focused on roughly 8,400 so-called "hyperscalers," the term used to describe data centers of incredible size that offer cloud computing and AI services. Their construction has surged in the past decade, and the AI boom has pushed their demand and scope to new heights; Meta's new "Hyperion" data center, for example, cost $27 billion to build and has an expected computing capacity of five gigawatts, an appetite that takes ten gas-powered plants to sate.

[...] The effects were local, but far-reaching. The researchers found that the temperature increases were felt up to 6.2 miles away — though they dropped off with distance — in all, affecting more than 340 million people. CNN's coverage notes that the trend held globally: Mexico's burgeoning data center hub in Bajio saw an uptick of around 3.6 degrees over the past 20 years, as did Aragon, Spain, itself a hot new hub for hyperscalers.

Link to Study: The data heat island effect: quantifying the impact of AI data centers in a warming world


Original Submission

posted by jelizondo on Sunday April 05, @11:41AM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/27/security_boffins_harvest_bumper_crop/

Computer security boffins have conducted an analysis of 10 million websites and found almost 2,000 API credentials strewn across 10,000 webpages.

The researchers detail their findings in a preprint paper titled "Keys on Doormats: Exposed API Credentials on the Web," and say they conducted the study because much of the attention on exposed credentials has focused on scouring code repositories and source code. They argue that dynamic analysis of production websites is essential to understand the scope of the problem.

"What we found were highly sensitive API credentials left publicly exposed on public webpages," Nurullah Demir, a PhD candidate at Stanford and corresponding author, told The Register in an email. "These act as access tokens that authorize applications to interact with third-party services, granting direct access to critical infrastructure like cloud platforms and payment providers."

Demir contends that API credentials are even more dangerous than exposed login details because they provide programmatic access to resources.

The researchers scanned approximately 10 million websites using a tool called TruffleHog, and found 1,748 valid credentials belonging to organizations including multinational corporations, critical infrastructure entities, and government agencies. The keys provide access to services like AWS, GitHub, Stripe, and OpenAI.

Demir said one of the affected organizations was a global bank. Another makes firmware for electronic devices.

"A 'Global Systemically Important Financial Institution' exposed its cloud credentials directly on its webpages," said Demir. "This gave direct access to multiple core cloud infrastructure services, including databases and key management systems."

The researchers also found repository credentials for a developer responsible for firmware used by various manufacturers of drones and remote-controlled devices. Attackers could use those credentials to modify source code and push malicious firmware updates to various devices, Demir said.

"Exposure is widespread across service categories, with cloud services (e.g., AWS, Cloudflare) and payment services (e.g., Stripe, Razorpay) accounting for the majority of verified credentials," the paper explains. "AWS credentials alone represent more than 16 percent of all verified exposures and were found on over 4,693 websites. Email and communication services such as SendGrid and Twilio also appear frequently, with a significant portion of their exposures originating from embedded third-party resources."

Most of the credentials the researchers found were present in JavaScript resources (84 percent), followed by HTML (eight percent) and JSON (seven percent) files. They also turned up unusual cases like a verified GitHub access token embedded in a CSS file.

In JavaScript files, 62 percent of credential exposures show up in bundles created by build tools like Webpack.
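The researchers used TruffleHog, which combines many per-provider detectors and then verifies matches against each service. As a rough, hypothetical illustration of how a single secret pattern can be flagged in a fetched page resource such as a JavaScript bundle — not a reproduction of TruffleHog's actual detectors — a minimal Python sketch might look like this (the `AKIA`/`ASIA` prefixes are AWS's documented access-key-ID format; a real scanner would also verify candidates with the provider before reporting them):

```python
import re

# AWS access key IDs are 20-character uppercase alphanumeric strings
# starting with a known prefix: "AKIA" (long-term) or "ASIA" (temporary).
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b")

def find_candidate_aws_keys(text: str) -> list[str]:
    """Return unique AWS-style key IDs found in a page resource."""
    return sorted(set(AWS_KEY_RE.findall(text)))

# Example input resembling a bundled JavaScript file; the key below is
# AWS's own documented placeholder, not a real credential.
bundle = 'var cfg={accessKeyId:"AKIAIOSFODNN7EXAMPLE",region:"us-east-1"};'
print(find_candidate_aws_keys(bundle))  # prints ['AKIAIOSFODNN7EXAMPLE']
```

Pattern matching alone produces false positives, which is why the study's figure of 1,748 counts only credentials that could be verified as valid.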

Demir said he and his co-authors – Yash Vekaria of UC Davis, Georgios Smaragdakis from TU Delft/Stanford, and Zakir Durumeric from Stanford – made a significant effort to contact affected organizations. The number of exposed credentials declined by half in about two weeks after the researchers started to report their findings.

"When we got feedback from the developers, we saw that a significant number of them were completely unaware of the exposures," he explained. "What is perhaps most concerning is that our historical analysis showed these credentials often remain exposed for an average of 12 months, in some cases for years."

Demir said that he and his co-authors only verified credentials for 14 different service providers, so the exposure figure represents a lower bound.

"We strongly believe that the actual number of exposed credentials across the web is much higher than what we captured in this study," he said.


Original Submission

posted by janrinok on Sunday April 05, @06:49AM   Printer-friendly

https://linuxiac.com/vitruvianos-0-3-debuts-as-haiku-inspired-linux-os/

VitruvianOS 0.3 has been released as the project’s first publicly available version, described by its developers as a pilot build. It is based on the Linux kernel and adopts a design inspired by Haiku OS and BeOS.

For reference, VitruvianOS’s development began in 2019, and now, in 2026, this version serves more as a functional foundation rather than a complete system. But before we go further, a few words about the project itself, since the name is probably unfamiliar to the general public.

VitruvianOS is not a Linux distribution in the usual sense. It uses the Linux kernel only for hardware support, while replacing the standard Linux userland and desktop stack with its own components. Its goal is to combine Linux compatibility with a BeOS-style architecture.

Let me explain. In a typical Linux desktop system, applications run on top of libraries and a display server such as X11 or Wayland. VitruvianOS removes this entire layer: it uses neither X11 nor Wayland. Instead, it implements its own graphics system, input handling, and application runtime.

A key feature is Nexus, an internal communication layer that manages messaging between system components.

The system features native desktop elements modeled after BeOS, including a Deskbar and a Tracker-style file manager. It also offers a compatibility layer to support applications built for Haiku and BeOS APIs.

Moreover, the system uses a Linux kernel with real-time patches. Regarding filesystems, VitruvianOS 0.3 supports XFS and SquashFS, as well as extended attributes.

In the announcement, the developers have also outlined a short-term roadmap. Version 0.3.1 will add missing components and bug fixes based on initial testing. Version 0.3.2 aims to move the system toward self-hosting, enabling VitruvianOS to build itself.

Next, the upcoming 0.4 release will focus on stability and broader hardware support, including ongoing ARM port development. Planned improvements also include enhanced input handling, a complete keymap system, and further user interface refinements.

For more details, see the announcement.

Finally, once again: keep in mind that VitruvianOS 0.3 is an experimental release intended mainly for testing and development.

-- Related:

- The BeOS Faithful Haven’t Given Up: Inside VitruvianOS, the Audacious Attempt to Build a Desktop Operating System From Scratch


Original Submission

posted by janrinok on Sunday April 05, @05:37AM   Printer-friendly

The second crew member of the F-15E shot down over Iran has been successfully rescued. The Pentagon has stated that he is a colonel and was the Weapons Systems Officer on board the aircraft. He has suffered 'minor injuries' but is otherwise reported to be in good condition.

A large operation involving more than a dozen aircraft was mounted, and he was recovered from high ground in a mountainous region. The rescue was conducted under fire from Iranian forces. No casualties have been reported.