

posted by hubie on Friday December 05, @01:24AM
from the couldnt-make-it-never dept.

Windows takes a backseat on Dell's latest AI workstation as Linux gets the priority:

Dell has a solid track record with Linux-powered OSes, particularly Ubuntu. The company has been shipping developer-focused laptops with Ubuntu pre-installed for years.

Many of their devices come with compatible drivers working out of the box. Audio, Wi-Fi, Thunderbolt ports, and even fingerprint readers mostly work without hassle. My daily workhorse is a Dell laptop that hasn't had a driver-related issue for quite some time now.

And a recent launch just reinforces their Linux approach.

Dell just launched the Pro Max 16 Plus. It is being marketed as the first mobile workstation with an enterprise-grade discrete NPU, the Qualcomm AI 100 PC Inference Card. It packs 64GB of dedicated AI memory and dual NPUs on a single card.

Under the hood, you get Intel Core Ultra processors (up to Ultra 9 285HX), memory up to 256GB CAMM2 at 7200MT/s, GPU options up to NVIDIA RTX PRO 5000 Blackwell with 24GB VRAM, and storage topping out at 12TB with RAID support.

Interestingly, Phoronix has received word that the Windows 11 version of the Dell Pro Max 16 Plus won't ship until early 2026, while the validated Ubuntu 24.04 LTS version is already available.

With this, Dell is targeting professionals who can't rely on cloud inferencing. It says that the discrete NPU keeps data on-device while eliminating cloud latency, enabling work in air-gapped environments, disconnected locations, and compliance-heavy industries.
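
[Ed. note: What "on-device inference" means in practice is that both the model and the data live on the workstation, with no network call involved. A minimal sketch with ONNX Runtime follows; the model file and the Qualcomm QNN execution provider are hypothetical stand-ins, since the article doesn't specify the AI 100 card's actual software stack.]

```python
# Minimal on-device inference sketch with ONNX Runtime. The "model.onnx"
# file and the QNN execution provider are assumptions for illustration;
# the point is that the input data never leaves the machine.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]
session = ort.InferenceSession("model.onnx", providers=providers)

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```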

Dell Pro Max 16 Plus

[Ed. note: NPU is a neural processing unit designed to accelerate AI and machine learning tasks]


Original Submission

posted by hubie on Thursday December 04, @08:42PM
from the enshittification-will-continue-until-morale-improves dept.

Netflix Quietly Drops Support for Casting to Most TVs

Netflix will only support Google Cast on older devices without remotes:

Have you been trying to cast Stranger Things from your phone, only to find that your TV isn't cooperating? It's not the TV—Netflix is to blame for this one, and it's intentional. The streaming app has recently updated its support for Google Cast to disable the feature in most situations. You'll need to pay for one of the company's more expensive plans, and even then, Netflix will only cast to older TVs and streaming dongles.

The Google Cast system began appearing in apps shortly after the original Chromecast launched in 2013. Since then, Netflix users have been able to start video streams on TVs and streaming boxes from the mobile app. That was vital for streaming targets without their own remote or on-screen interface, but times change.
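
[Ed. note: For the curious, the basic Google Cast flow described above (discover a device, connect, hand it a media URL) can be sketched with the third-party pychromecast library. This is not Netflix's implementation, which launches its own DRM-protected receiver app; it only illustrates the casting mechanics.]

```python
# Discover-connect-launch mechanics of Google Cast, via the third-party
# pychromecast library. Illustrative only: Netflix launches its own
# DRM-protected receiver rather than the default media player.
import pychromecast

chromecasts, browser = pychromecast.get_chromecasts()
if chromecasts:
    cast = chromecasts[0]
    cast.wait()  # block until connected to the device

    mc = cast.media_controller
    # Hypothetical open URL; the default receiver plays plain streams.
    mc.play_media("http://example.com/clip.mp4", "video/mp4")
    mc.block_until_active()
    print(f"Casting to {cast.name}")

browser.stop_discovery()
```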

Today, Google has moved beyond the remote-free Chromecast experience, and most TVs have their own standalone Netflix apps. Netflix itself is also allergic to anything that would allow people to share passwords or watch in a new place. Over the last couple of weeks, Netflix updated its app to remove most casting options, mirroring a change in 2019 to kill Apple AirPlay.

The company's support site (spotted by Android Authority) now clarifies that casting is only supported in a narrow set of circumstances. First, you need to be paying for one of the ad-free service tiers, which start at $18 per month. Those on the $8 ad-supported plan won't have casting support.

Even then, casting only appears for devices without a remote, like the earlier generations of Google Chromecasts, as well as some older TVs with Cast built in. For example, anyone still rocking Google's 3rd Gen Chromecast from 2018 can cast video in Netflix, but those with the 2020 Chromecast dongle (which has a remote and a full Android OS) will have to use the TV app. Essentially, anything running Android/Google TV or a smart TV with a full Netflix app will force you to log in before you can watch anything.

[...] Netflix has every reason to want people to log into its TV apps. After years of cheekily promoting password sharing, the company now takes a hardline stance against such things. By requiring people to log into more TVs, users are more likely to hit their screen limits. Netflix will happily sell you a more expensive plan that supports streaming to this new TV, though.

[...] So Netflix may have a good reason to think it can get away with killing casting. However, trying to sneak this one past everyone without so much as an announcement is pretty hostile to its customers.

Netflix Is Killing Casting From Your Phone

Unless you have older hardware, you can't cast Netflix to your TV anymore:

Smart TVs have undoubtedly taken over the streaming space, and it's not hard to see why. You download the apps you want to use, log into your accounts, and presto: You can stream anything with a few clicks of your remote.

But smart TV apps aren't the only way people watch shows and movies on platforms like Netflix. Among other methods, like plugging a laptop directly into the TV, many people still enjoy casting their content from small screens to big screens. For years, this has been a reliable way to switch from watching Netflix on your smartphone or tablet to watching on your TV—you just tap the cast button, select your TV, and in a few moments, your content is beamed to the proper place. Your device becomes its own remote, with search built right in, and it avoids the need to sign into Netflix on TVs outside your home, such as when staying in hotels.

At least it did, but Netflix no longer wants to let you do it.

[...] Netflix doesn't explain why it's making the change, so I can only speculate. First, it's totally possible this is simply a tech obsolescence issue. Many companies drop support for older or underused technologies, and perhaps Netflix sees now as the time to largely drop support for casting. Streamlining the tech the app has to support means less work for Netflix developers, and it wouldn't be the first time the company dropped support for older platforms. However, that doesn't really explain why the company still supports some devices for casting. Maybe it took a look at its user base, and made the calculation that enough subscribers relied on Google Cast devices for casting, but not enough rely on newer hardware for casting. We might not really know unless Netflix decides to issue a statement.

That said, I can't help but feel like this is related to Netflix's crackdown on password sharing. The company clearly doesn't want you using its services unless you have your own paid account—or have another user pay extra to have you on their account. Casting, however, makes it easy to continue using someone else's account without paying for it. Since Netflix only requires mobile users to log into the account owner's home wifi once a month to continue watching on a device, you could theoretically cast Netflix from your smartphone to your TV to continue enjoying your shows and movies "for free." By removing casting as an option for most users, those users will either need to connect a device to the TV by wire—like a laptop connected via HDMI—or log into the smart TV app. And if those users don't actually have permission to access that account via that app, they won't be able to stream.

If this really is the company's intention, it's doing so at the inconvenience of paying users, too. If you're traveling, you now need to bother with signing into your account on a TV you don't own. If you don't like using your smart TV apps, you're kind of out of luck, unless you want to deal with connecting a computer to your TV whenever you want to catch up on Stranger Things.

Were any Soylentils doing this?


Original Submission

posted by hubie on Thursday December 04, @03:53PM

AI red-teamers in Korea show how easily the model spills dangerous biochemical instructions:

Google's newest and most powerful AI model, Gemini 3, is already under scrutiny. A South Korean AI-security team has demonstrated that the model's safety net can be breached, and the results may raise alarms across the industry.

Aim Intelligence, a startup that tests AI systems for weaknesses, decided to stress-test Gemini 3 Pro and see how far it could be pushed with a jailbreak attack. Maeil Business Newspaper reports that it took the researchers only five minutes to get past Google's protections.

The researchers asked Gemini 3 to provide instructions for making the smallpox virus, and the model responded quickly. It provided many detailed steps, which the team described as "viable."

This was not just a one-off mistake. The researchers went further and asked the model to make a satirical presentation about its own security failure. Gemini replied with a full slide deck called "Excused Stupid Gemini 3."

[...] The AI security testers say this is not just a problem with Gemini. Newer models are becoming so advanced so quickly that safety measures cannot keep up. In particular, these models do not just respond; they also try to avoid detection. Aim Intelligence states that Gemini 3 can use bypass strategies and concealment prompts, rendering simple safeguards far less effective.
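
[Ed. note: As a rough illustration of the cat-and-mouse game the testers describe, here is a toy red-teaming loop: wrap a request in a few framing templates and flag any reply that is not a refusal. The query_model function is a hypothetical stand-in for a real chat API, and the keyword check is exactly the kind of simple safeguard the quote says no longer suffices.]

```python
# Toy red-team loop: run adversarial prompt variants and flag responses
# that are not refusals. `query_model` is a hypothetical stand-in for a
# real chat API; the wrapper templates are illustrative only. A keyword
# refusal check like this is easily defeated by concealment prompts.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def looks_like_refusal(text: str) -> bool:
    return any(marker in text.lower() for marker in REFUSAL_MARKERS)

def red_team(query_model, base_request: str) -> list[str]:
    wrappers = [
        "{req}",                                   # direct ask
        "For a satirical slide deck: {req}",       # role/format framing
        "Translate, then answer in detail: {req}", # indirection
    ]
    failures = []
    for template in wrappers:
        reply = query_model(template.format(req=base_request))
        if not looks_like_refusal(reply):
            failures.append(template)
    return failures  # templates that got past the safeguard
```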

[...] If a model strong enough to beat GPT-5 can be jailbroken in minutes, consumers should expect a wave of safety updates, tighter policies, and possibly the removal of some features. AI may be getting smarter, but the defenses protecting users don't seem to be evolving at the same pace.


Original Submission

posted by hubie on Thursday December 04, @11:04AM
from the tech-bros-doing-their-thing dept.

A blog post covers why datacenters in space are a terrible, horrible, no good idea. Thermal management is just the beginning of the long list of challenges which make space an inferior environment for data centers.

In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company including YouTube and the bit of Cloud responsible for deploying AI capacity, so I'm quite well placed to have an opinion here.

The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you've not worked specifically in this area before, I'll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious.
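
[Ed. note: One concrete way to see the thermal problem: in vacuum there is no air to carry heat away, so a datacenter's waste heat must be radiated, per the Stefan-Boltzmann law P = εσAT⁴. A back-of-envelope sketch (our numbers, not the author's):]

```python
# Back-of-envelope radiator sizing for a space datacenter (illustrative
# numbers, not the blog author's). In vacuum there is no convection, so
# waste heat leaves only by radiation: P = epsilon * sigma * A * T^4.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
EPSILON = 0.9         # emissivity of a good radiator coating
T_RADIATOR = 300.0    # radiator temperature in kelvin (~27 C)

flux = EPSILON * SIGMA * T_RADIATOR**4   # W per m^2, one-sided
power = 1e6                              # 1 MW of IT load to reject
area = power / flux

print(f"{flux:.0f} W/m^2 -> {area:,.0f} m^2 of radiator per megawatt")
# ~413 W/m^2 -> ~2,400 m^2 per MW, before accounting for sunlight
# and Earth IR loading, which make the real requirement larger.
```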

Previously:
(2025) The Data Center Resistance Has Arrived
(2025) Microsoft: the Company Doesn't Have Enough Electricity to Install All the AI GPUs in its Inventory
(2025) China Submerges a Data Center in the Ocean to Conserve Water, is That Even a Good Idea?
(2025) Data Centers Turn to Commercial Aircraft Jet Engines Bolted Onto Trailers as AI Power Crunch Bites
(2025) The Real (Economic) AI Apocalypse is Nigh
(2025) Real Datacenter Emissions Are A Dirty Secret
... and more.


Original Submission

posted by hubie on Thursday December 04, @06:17AM
from the get-the-flock-out-of-here dept.

An accidental leak revealed that Flock, which has cameras in thousands of US communities, is using workers in the Philippines to review and classify footage:

Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company.

The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business—creating a surveillance system that constantly monitors US residents' movements—means that footage might be more sensitive than other AI training jobs.

Flock's cameras continuously scan the license plate, color, brand, and model of all vehicles that drive by. Law enforcement are then able to search cameras nationwide to see where else a vehicle has driven. Authorities typically dig through this data without a warrant, leading the American Civil Liberties Union and Electronic Frontier Foundation to recently sue a city blanketed in nearly 500 Flock cameras.

Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race."

The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods.

The panel also included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website.

The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

One slide about audio told workers to "listen to the audio all the way through," then select from a drop-down menu including "car wreck," "gunshot," and "reckless driving." Another slide says tire screeching might be associated with someone "doing donuts," and another says that because it can be hard to distinguish between an adult and a child screaming, workers should use a second drop-down menu explaining their confidence in what they heard, with options like "certain" and "uncertain."

Another slide deck explains that workers should not label people inside cars but should label those riding motorcycles or walking.
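
[Ed. note: Pulling the details from the leaked guides together, the labeling schema appears to look roughly like the sketch below. The field names and enum values are inferred from the article's description, not taken from Flock's actual tooling.]

```python
# Rough reconstruction of the annotation schema the leaked guides
# describe. Names and values are inferred from the article, not
# taken from Flock's real tooling.
from dataclasses import dataclass
from enum import Enum

class AudioEvent(Enum):
    CAR_WRECK = "car wreck"
    GUNSHOT = "gunshot"
    RECKLESS_DRIVING = "reckless driving"

class Confidence(Enum):
    CERTAIN = "certain"
    UNCERTAIN = "uncertain"

@dataclass
class AudioAnnotation:
    clip_id: str
    event: AudioEvent
    confidence: Confidence  # second drop-down: how sure the worker is

def should_label_person(context: str) -> bool:
    # Per the guides: label pedestrians and motorcycle riders,
    # but not people inside cars.
    return context in {"walking", "motorcycle"}
```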

After 404 Media contacted Flock for comment, the exposed panel became unavailable. Flock then declined to comment.


Original Submission

posted by hubie on Thursday December 04, @01:36AM
from the another-X11-bites-the-dust dept.

The KDE project has made the call:

Well folks, it's the beginning of a new era: after nearly three decades of KDE desktop environments running on X11, the future KDE Plasma 6.8 release will be Wayland-exclusive! Support for X11 applications will be fully entrusted to Xwayland, and the Plasma X11 session will no longer be included.
        ↫ The Plasma Team

They're following in the footsteps of the GNOME project, which will also be leaving the legacy windowing system behind. What this means in practice is that official KDE X11 support will cease once KDE Plasma 6.7 is no longer supported, which should be sometime in early 2026. Do note that the KDE developers intend to release a few extra bugfix releases in the 6.7 release cycle to stabilise the X11 session as much as possible for those people who are going to stick with KDE Plasma 6.7 to keep X11 around.

For people who wish to keep using X11 after that point, the KDE project advises them to switch to LTS distributions like AlmaLinux, which intend to keep supporting Plasma X11 until 2032. Xwayland will handle virtually all X11 applications running inside the Wayland session, including X11 forwarding, with similar functionality for native Wayland applications provided by Waypipe. Also note that this only applies to Plasma as a whole; KDE applications will continue to support X11 when run in other desktop environments or on other platforms.
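
[Ed. note: For readers wondering which side of the transition they are already on, a session's windowing system can be checked heuristically from environment variables. A quick sketch, not an official KDE-supported method:]

```python
# Quick check of which windowing system the current session uses.
# XDG_SESSION_TYPE and WAYLAND_DISPLAY are the conventional signals;
# this is a heuristic, not an official KDE-supported check.
import os

session = os.environ.get("XDG_SESSION_TYPE", "unknown")
wayland_display = os.environ.get("WAYLAND_DISPLAY")
x_display = os.environ.get("DISPLAY")

print(f"Session type: {session}")
if wayland_display and x_display:
    # Both set usually means a Wayland session with Xwayland
    # available for legacy X11 clients.
    print("Wayland session; X11 apps will go through Xwayland.")
elif x_display:
    print("Plain X11 session (what Plasma 6.8 drops).")
```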

As for platforms other than Linux – FreeBSD already has relatively robust Wayland support, so if you intend to run KDE on FreeBSD in the near future, you'll have to move over to Wayland there, as well. The other BSD variants are also dabbling with Wayland support, so it won't be long before they, too, will be able to run the KDE Plasma Wayland session without any issues.

What this means is that the two desktop environments that probably make up like 95% of the desktop Linux user base will now be focusing exclusively on Wayland, which is great news. X11 is a legacy platform and aside from retrocomputing and artisanal, boutique setups, you simply shouldn't be using it anymore. Less popular desktop environments like Xfce, Cinnamon, Budgie, and LXQt are also adding Wayland support, so it won't be much longer before virtually no new desktop Linux installations will be using X11.

One X down, one more to go.


Original Submission

posted by hubie on Wednesday December 03, @08:49PM

The Japanese chipmaker is looking to take on established fabs:

Rapidus, Japan's homegrown challenger to Taiwan Semiconductor Manufacturing Company (TSMC), has announced that it will start building its next-generation 1.4-nanometer fab in fiscal year 2027, with production expected to commence in Hokkaido in 2029. According to Nikkei Asia, this move is expected to help the Japanese chipmaker close the gap with the Taiwanese chip-making giant, which revealed its own 1.4-nm technology earlier this year. The company also said that it will begin full-scale research and development on the node starting next year.

The company is backed by several Japanese companies, including giants such as Toyota and Sony, as well as private financing institutions. Aside from this, the Japanese government has also invested heavily in the startup through subsidies and direct fiscal support. Rapidus has already received a commitment of JPY 1.7 trillion, or more than US$10 billion, with several hundred billion Yen expected to be infused into the company in the coming months.

Despite these massive inflows, Rapidus is still facing an uphill battle as it competes with established fabs like TSMC, Samsung, and Intel. Intel has already started production of 18A, its 2-nm class node, while TSMC is also moving up plans to output its latest node at its Arizona site due to strong AI data center demand. On the other hand, the Japanese chip maker is only expected to begin 2-nm mass production in the latter half of 2027 at its Chitose manufacturing plant. More than that, all the established foundries have struggled with yield issues before they were able to proceed with mass production, suggesting that Rapidus is likely to face the same problems.

Nevertheless, the company is still intent on pushing forward with its more advanced nodes even though it's playing catch-up with its 2-nm process. Aside from the expected 1.4-nm node that will be produced in the Hokkaido plant, Nikkei Asia also said that more advanced 1-nm chips may also be manufactured at the site.

Rapidus aims to compete against TSMC but has previously said that it's only targeting a handful of companies — around five to ten, initially. The Japanese chipmaker has also claimed that its advanced packaging technique will shorten the production cycle, allowing it to streamline its processes versus its competitors. Nevertheless, former Intel CEO Pat Gelsinger has said that Rapidus needs to offer something more advanced than that to successfully compete with established chip makers.


Original Submission

posted by jelizondo on Wednesday December 03, @04:02PM

https://distrowatch.com/dwres.php?resource=showheadline&story=20099

People running the Tumbleweed branch of openSUSE will soon have the chance to try out the distribution's new bootloader package. An openSUSE blog post explains the change:


"openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.

This follows the trend started by MicroOS of adopting boot loaders that are compatible with the boot loader specification. MicroOS is using systemd-boot, which is a very small and fast boot loader from the systemd project.

One of the reasons for this change is to simplify the integration of new features. Among them is full disk encryption based on systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.

What is GRUB2-BLS? GRUB2-BLS is just GRUB2 but with some patches on top ported from the Fedora project, which includes some compatibility for the boot loader specification for Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu."
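
[Ed. note: For illustration, a Type #1 entry of the kind the post describes might look like this. The field names come from the Boot Loader Specification; the paths and version string below are invented.]

```
# Hypothetical /boot/efi/loader/entries/opensuse-tumbleweed.conf
# Field names per the Boot Loader Specification (Type #1 entries);
# paths and version are invented for illustration.
title      openSUSE Tumbleweed
version    6.12.1-1-default
linux      /opensuse-tumbleweed/6.12.1-1-default/linux
initrd     /opensuse-tumbleweed/6.12.1-1-default/initrd
options    root=UUID=1234abcd rw quiet
```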

The change will allow full disk encryption and do away with some of the GRUB maintenance steps. Details are discussed in the project's blog post.


Original Submission

posted by jelizondo on Wednesday December 03, @11:20AM

The reflected infrared light of bone-loving lichen can be detected by drones

Tiny life-forms with bright colors might point the way to big dinosaur bone discoveries.

In the badlands of western Canada, two species of lichen prefer making their homes on dinosaur bones instead of on the surrounding desert rock, and their distinct orange color can be detected by drones, possibly aiding future dino discoveries, researchers report November 3 in Current Biology.

"Rather than finding new sites serendipitously, this approach can help paleontologists to locate new areas that are likely to have fossils at the surface and then go there to investigate," says paleontologist Brian Pickles at the University of Reading in England.

Lichens are photosynthetic organisms built by a symbiotic relationship between fungi and algae or cyanobacteria. They come in many colors. Some are white or near-black; others appear green, yellow, orange or red. They often grow in challenging environments, such as deserts or polar regions.

Lichens tend to be quite picky about where they grow, says AJ Deneka, a lichenologist at Carleton University in Ottawa, Canada, who was not involved with the research. Species that grow on granite do not grow on sandstone or limestone and species that grow on wood don't grow on rock.

Dinosaur bones covered in lichen have long been known to paleontologists working in desert fossil hotspots of western North America. In 1922, paleontologists found an Ankylosaurus fossil covered in orange lichen in the Canadian badlands. In 1979, a similarly colored lichen was reported growing over a Centrosaurus bonebed in the same area. The orange-colored symbiote is often the first thing researchers notice when working in these regions, with the discovery of bone coming second.

By scrutinizing vibrantly colored lichen and where it grows in Dinosaur Provincial Park in Alberta, Pickles and his colleagues found that two species of lichen, Rusavskia elegans and Xanthomendoza trachyphylla, had a strict preference for colonizing fossil bones and were almost entirely absent from surrounding ironstone rock.

"The porous texture of fossils probably plays a role in making them [a] suitable lichen habitat, perhaps by retaining moisture or providing tiny pockets where lichen [can] become trapped and established," Deneka says.

Pickles and his colleagues next measured light frequencies reflected by the rock, bones and bone-inhabiting lichen and tested whether they could distinguish the lichen from these surroundings using drones. Spectral analyses found the lichen primarily reflected certain infrared light frequencies, which the researchers then used to develop drone sensors that could detect this light from above.
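
[Ed. note: The detection step can be sketched as a simple band-ratio threshold on multispectral pixels, in the spirit of NDVI. The band indices and threshold below are invented for illustration; the article doesn't describe the paper's actual classifier.]

```python
# Sketch of flagging lichen pixels in a multispectral drone image by
# thresholding a normalized band ratio. Band indices and the 0.6
# threshold are invented; the paper's real pipeline is not described
# in the article.
import numpy as np

def lichen_mask(image: np.ndarray, nir_band: int = 3,
                red_band: int = 2, threshold: float = 0.6) -> np.ndarray:
    """image: (height, width, bands) array of reflectances in [0, 1]."""
    nir = image[..., nir_band].astype(float)
    red = image[..., red_band].astype(float)
    # Normalized difference index, same form as NDVI; lichen's strong
    # infrared reflectance pushes its pixels toward high index values.
    index = (nir - red) / np.clip(nir + red, 1e-6, None)
    return index > threshold

# Example on synthetic data:
frame = np.random.rand(480, 640, 4)
print(f"{lichen_mask(frame).mean():.1%} of pixels flagged")
```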

Using these drones, the researchers were able to identify fossil bonebeds from a height of 30 meters. "We could only locate the fossils thanks to the lichen association," Pickles says.

The technique "has great potential for use in little-explored or difficult-to-access areas," says Renato García, a paleontologist at Universidad Nacional de Avellaneda in Buenos Aires, who was not involved with the research. In 2020, García and his colleagues uncovered a similar predilection of certain lichen toward fossil penguin bones in Antarctica, hinting at another region where this work may be fruitful.

Pickles and his team have their own plan: "Other badlands are our next target."

Journal Reference: Pickles, Brian J. et al. Remote sensing of lichens with drones for detecting dinosaur bones. [OPEN] Current Biology, Volume 35, Issue 21, R1044 - R1045 https://doi.org/10.1016/j.cub.2025.09.036


Original Submission

posted by jelizondo on Wednesday December 03, @06:58AM

https://www.osnews.com/story/143922/dell-about-1-billion-pcs-will-not-or-cannot-be-upgraded-to-windows-11/

During a Dell earnings call, the company mentioned some staggering figures regarding the number of PCs that will not or cannot be upgraded to Windows 11.

"We have about 500 million of them capable of running Windows 11 that haven't been upgraded," said Dell COO Jeffrey Clarke on a Q3 earnings call earlier this week, referring to the overall PC market, not just Dell's slice of machines. "And we have another 500 million that are four years old that can't run Windows 11." He sees this as an opportunity to guide customers towards the latest Windows 11 machines and AI PCs, but warns that the PC market is going to be relatively flat next year.
        ↫ Tom Warren at The Verge

The scale of the Windows 10 install base that simply won't or cannot upgrade to Windows 11 is monumental, and it's absolutely bonkers to me that we're mostly just letting Microsoft get away with leaving at least a billion users out in the cold when it comes to security updates and bug fixes. The US government (in better times) and the EU should've 100% forced the company's hand, as leaving this many people on outdated, unsupported operating system installations is several disasters waiting to happen.

Aside from the dangerous position Microsoft is forcing its Windows 10 users into, there's also the massive environmental and public health impact of huge swaths of machines, especially in enterprise environments, becoming obsolete overnight. Many of these will end up in landfills, often shipped to third-world countries so we in the west don't have to deal with our e-waste and its dangerous consequences directly. I can get fined for littering – rightfully so – but when a company like Microsoft makes sweeping decisions which cause untold amounts of dangerous chemicals to be dumped in countless locations all over the globe, governments shrug it off and move on.

At least we will get some cheap eBay hardware out of it, I guess.


Original Submission

posted by jelizondo on Wednesday December 03, @01:53AM

https://phys.org/news/2025-11-scientists-mountain-climate-faster-billions.html

Mountains worldwide are experiencing climate change more intensely than lowland areas, with potentially devastating consequences for billions of people who live in and/or depend on these regions, according to a major global review.

The international study, published in Nature Reviews Earth & Environment, examines what scientists call "elevation-dependent climate change" (EDCC)—the phenomenon where environmental changes can accelerate at higher altitudes.

It represents the most thorough analysis to date of how temperature, rainfall, and snowfall patterns are shifting across the world's mountain ranges.

Led by Associate Professor Dr. Nick Pepin from the University of Portsmouth, the research team analyzed data from multiple sources including global gridded datasets, alongside detailed case studies from specific mountain ranges including the Rocky Mountains, the Alps, the Andes, and the Tibetan Plateau.

The findings reveal alarming trends between 1980 and 2020:

  • Temperature: Mountain regions, on average, are warming 0.21°C per century faster than surrounding lowlands (see the quick arithmetic below)
  • Precipitation and snow: Mountains are experiencing more unpredictable rainfall and a significant shift from snow to rain
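
[Ed. note: To put the temperature figure in perspective, here is the quick arithmetic (ours, not the paper's) for the extra warming that rate difference implies over the 1980-2020 study window:]

```python
# Quick context for the headline number (our arithmetic, not the
# paper's): 0.21 C per century of *extra* warming, accumulated over
# the 1980-2020 study window.
extra_rate_per_century = 0.21   # degrees C per century
years = 2020 - 1980

extra_warming = extra_rate_per_century * years / 100.0
print(f"~{extra_warming:.2f} C of additional mountain warming "
      f"over {years} years, on top of the lowland trend")
# ~0.08 C of extra warming over the study period, compounding on
# whatever the lowlands themselves experienced.
```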

"Mountains share many characteristics with Arctic regions and are experiencing similarly rapid changes," said Dr. Pepin from the University of Portsmouth's Institute of the Earth and Environment.

"This is because both environments are losing snow and ice rapidly and are seeing profound changes in ecosystems. What's less well known is that as you go higher into the mountains, the rate of climate change can become even more intense."

The implications extend far beyond mountain communities. Over one billion people worldwide depend on mountain snow and glaciers for water, including in China and India—the world's two largest countries by population—which receive water from the Himalayas.

Dr. Pepin added, "The Himalayan ice is decreasing more rapidly than we thought. When you transition from snowfall to rain because it has become warmer, you're more likely to get devastating floods. Hazardous events also become more extreme."

"As temperatures rise, trees and animals are moving higher up the mountains, chasing cooler conditions. But eventually, in some cases, they'll run out of mountain and be pushed off the top. With nowhere left to go, species may be lost and ecosystems fundamentally changed."

Recent events highlight the urgency. Dr. Pepin points to this summer in Pakistan, which experienced some of its deadliest monsoon weather in years, with cloudbursts and extreme mountain rainfall killing over 1,000 people.

This latest review builds on the research team's 2015 paper in Nature Climate Change, which was the first to provide comprehensive evidence that mountain regions were warming more rapidly higher up in comparison to lower down. That study identified key drivers including the loss of snow and ice, increased atmospheric moisture, and aerosol pollutants.

Ten years on, scientists have made progress understanding the controls of such change and the consequences, but the fundamental problem remains.

"The issue of climate change has not gone away," explained Dr. Pepin. "We can't just tackle mountain climate change independently of the broader issue of climate change."

A major obstacle remains the scarcity of weather observations in the mountains. "Mountains are harsh environments, remote, and hard to get to," said Dr. Nadine Salzmann from the WSL Institute for Snow and Avalanche Research SLF in Davos, Switzerland. "Therefore, maintaining weather and climate stations in these environments remains challenging."

This data gap means scientists may be underestimating how quickly temperatures are changing and how fast snow will disappear. The review also calls for better computer models with higher spatial resolution—most current models can only resolve changes every few kilometers, but conditions can vary dramatically between slopes just meters apart.

Dr. Emily Potter from the University of Sheffield added, "The good news is that computer models are improving. But better technology alone isn't enough—we need urgent action on climate commitments and significantly improved monitoring infrastructure in these vulnerable mountain regions."

More information: Elevation-dependent climate change in mountain environments, Nature Reviews Earth & Environment (2025). DOI: 10.1038/s43017-025-00740-4


Original Submission

posted by jelizondo on Tuesday December 02, @09:29PM

Social ills solved:

Folks, we have some revolutionary sociological research to share with you today.

After making a guy dressed as Batman stand around in a subway car, a team of researchers found that the behavior of people around him suddenly improved the moment he showed up. No longer was everyone completely self-involved; with the presence of a superhero, commuters started helping each other more than they would've without him around.

Behold: the "Batman effect."

The findings of the unorthodox study, published in the journal npj Mental Health Research, demonstrate the power of introducing something offbeat into social situations to jolt people out of the mental autopilot they slip into to navigate the drudgery of everyday life.

Batman showing up is just one — albeit striking — way of promoting what's called "prosocial behavior," or the act of helping others around you, via introducing an unexpected event, the researchers write.

"Our findings are similar to those of previous research linking present-moment awareness (mindfulness) to greater prosociality," said study lead author Francesco Pagnini, a professor of clinical psychology at the Università Cattolica in Milan, in a statement about the work. "This may create a context in which individuals become more attuned to social cues."

In a series of experiments, the researchers had a woman who visibly appeared pregnant enter a busy train, and observed how often people offered to give up their seats. They then repeated this scenario with a crucial change: when the pregnant woman entered the train from one door, a man dressed as Batman entered from another.

In all, the team observed 138 passengers, and the results were clear-cut. Over 67 percent of passengers offered their seats when Batman was present, compared to just over 37 percent when Batman wasn't there. Most, in both cases, were women: 68 percent with Batman there, and 65 percent without him.
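
[Ed. note: As a sanity check on how clear-cut those numbers are, a two-proportion z-test can be run on the reported percentages. The article doesn't give the per-condition group sizes, so an even 69/69 split is assumed purely for illustration.]

```python
# Sanity check on the reported split. Assumption: the 138 observed
# passengers divided evenly, 69 per condition (the article does not
# give actual group sizes). Two-proportion z-test, standard library only.
from math import sqrt, erf

n1, p1 = 69, 0.67   # Batman present: share offering their seat
n2, p2 = 69, 0.37   # Batman absent

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
# z ~ 3.5, p ~ 0.0004: under this assumed split, the 30-point gap
# is far too large to be chance noise.
```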

But the strangest detail? 44 percent of the people who offered their seats later reported that they didn't even notice Batman was there in the first place, suggesting that they don't need to be consciously aware of the offbeat event itself to, in colloquial terms, pick up the prosocial vibes.

"Unlike traditional mindfulness interventions that require active engagement, this study highlights how situational interruptions alone may be sufficient to produce similar effects," Pagnini said.

In the study, he added, the findings "could inform strategies to promote altruistic behaviors in daily life, from public art installations to innovative social campaigns."

Journal Reference: Pagnini, F., Grosso, F., Cavalera, C. et al. Unexpected events and prosocial behavior: the Batman effect. npj Mental Health Res 4, 57 (2025).

See also: The 'Batman Effect' -- How Having an Alter Ego Empowers You


Original Submission

posted by hubie on Tuesday December 02, @04:22PM

Blender 5.0 Open-Source 3D Graphics App Is Now Available for Download

This release introduces support for displaying HDR and wide gamut colors on Linux when using Wayland and the Vulkan backend:

Blender 5.0, a free and open-source 3D computer graphics software, is now available for download as a major update that introduces numerous new features and improvements.

Highlights of Blender 5.0 include support for displaying HDR and wide gamut colors, which requires an HDR or wide gamut capable monitor. On Linux systems, this works only when using Wayland and setting the Vulkan backend in Blender's system preferences.

Blender 5.0 also introduces a working color space for Blend files, a new AgX HDR view, a new Convert to Display compositor node, new Rec.2100-PQ and Rec.2100-HLG displays that can be used for color grading for HDR video export, and new ACES 1.3 and 2.0 views as an alternative to AgX and Filmic.
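
[Ed. note: Color management is scriptable from Blender's Python console via the existing bpy API; switching views looks roughly like the sketch below. The property paths are Blender's real API, but the exact 5.0 enum names (such as an AgX HDR view or the Rec.2100 displays) are assumptions based on the notes above.]

```python
# Sketch: switching color management from Blender's Python console.
# The property paths exist in current bpy; the exact 5.0 enum names
# (AgX HDR view, Rec.2100 displays) are assumptions from the notes.
import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = "AgX"       # or "Filmic", "Standard"
scene.display_settings.display_device = "sRGB"   # Rec.2100-PQ/HLG on HDR setups

print(scene.view_settings.view_transform,
      scene.display_settings.display_device)
```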

[...] There are also many UI changes in Blender 5.0, including drag and drop support within the Shape Keys list, snapping support for sidebars, a new "Delete Other Workspaces" context menu entry for workspace tabs, the ability to collapse paint pressure curves, and per-camera composition guide overlay color.

Moreover, theme settings have changed significantly in Blender 5.0 to make creating custom themes easier, while numerous theme settings have been unified, and more than 300 settings have been removed. On top of that, Blender 5.0 introduces a new Storyboarding template and workspace.

Among other noteworthy changes, this release adds a human base mesh bundle for realistic skeleton assets, six new Geometry Nodes-based modifiers, a new volume rendering algorithm based on null scattering, and a new "Working Space" choice in the Convert Color Space compositor node to convert to and from the current working space that images are in by default.

Being a major update, Blender 5.0 removes support for LZMA and LZO compressed point caches, support for Intel Macs, support for pre-2.50 animation, and big-endian support, as well as unsupported access to runtime-defined property storage data in the Python API.

[...] Blender 5.0 requires NVIDIA GeForce 900 and newer GPUs, as well as Quadro Tesla GPU architecture and newer, including RTX-based cards, with the official NVIDIA drivers, AMD GCN 4th gen and newer GPUs, and Intel Kaby Lake architecture and newer GPUs.

Check out the release notes for more details about the changes included in Blender 5.0, which you can download right now from the official website as a universal binary that you can run on virtually any GNU/Linux distribution without installing anything on your personal computer.

Bottles 60.0 Launches with Native Wayland Support

Bottles 60.0, a Wine prefix manager for running Windows apps on Linux, adds native Wayland support, a refreshed UI, and more:

Bottles, an open-source software tool built on top of Wine that helps users run Windows applications and games on Linux systems by providing a user-friendly GUI, has just released its latest version, 60.0.

The update introduces a native Wayland option directly in the bottle settings, giving users a more predictable experience on modern Linux desktops that have already shifted away from X11.

Alongside this, the new WineBridge features expand how processes can be spawned and managed, supported by a consent prompt to ensure users maintain control over updates to that component.

For Steam Deck users, the release includes a fix for broken controls in Gaming Mode, resolving a regression that made some titles unusable. Several environment-related issues are also addressed, including problems with working directories not persisting, unclear environment variable creation, and cases where easyterm failed to run due to missing GTK variables.

[...] For more information on all the changes, visit the project's GitHub changelog.


Original Submission #1 | Original Submission #2

posted by hubie on Tuesday December 02, @11:40AM
from the is-this-your-card? dept.

Ethicists say AI-powered advances will threaten the privacy and autonomy of people who use neurotechnology:

Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as 'Twinkle, Twinkle, Little Star', rang out.

But there was a twist. For Smith, it seemed as if the piano played itself. "It felt like the keys just automatically hit themselves without me thinking about it," she said at the time. "It just seemed like it knew the tune, and it just did it on its own."

Smith's BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.
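
[Ed. note: Conceptually, the decode step is a classifier mapping neural feature vectors to intended keys. A toy sketch with scikit-learn on purely synthetic data follows; real BCIs decode spiking activity with far more sophisticated models.]

```python
# Toy version of the decode step described above: a classifier mapping
# neural feature vectors to intended piano keys. The data is purely
# synthetic; nothing here resembles the trial's actual decoder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_keys = 600, 96, 8

# Simulate firing-rate features with a key-dependent mean shift.
keys = rng.integers(0, n_keys, size=n_trials)
signal = rng.normal(size=(n_keys, n_channels))
X = rng.normal(size=(n_trials, n_channels)) + signal[keys]

decoder = LogisticRegression(max_iter=1000).fit(X[:500], keys[:500])
accuracy = decoder.score(X[500:], keys[500:])
print(f"Held-out decoding accuracy: {accuracy:.0%} (chance: {1/n_keys:.0%})")
```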

[...] Andersen's research also illustrates the potential of BCIs that access areas outside the motor cortex. "The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas," says Andersen. "There's a wide variety of things that we can decode."

The ability of these devices to access aspects of a person's innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private. It also poses ethical questions about how neurotechnologies might shape people's thoughts and actions — especially when paired with artificial intelligence.

Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people's internal reactions to online and other content.

Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. "Whole-brain interfacing is going to be the future," says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored. Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. "It leads you to the final question: how do we make that safe?"

[...] Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design in 2023 for EEG sensors for future use in its AirPods wireless earphones.

Yet unlike BCIs aimed at the clinic, which are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. "There's a wild west when it comes to the regulatory standards," he says.

In 2018, Ienca and his colleagues found that most consumer BCIs don't use secure data-sharing channels or implement state-of-the-art privacy technologies [2]. "I believe that has not changed," Ienca says. What's more, a 2024 analysis [3] of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all had complete control over the data users provided. That means most firms can use the information as they please, including selling it.

Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status. But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person's mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person.

"The data economy, in my view, is already quite privacy-violating and cognitive- liberty-violating," Ienca says. Adding neural data, he says, "is like giving steroids to the existing data economy".

Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development, have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.
Heading to the clinic

While their development advances at pace, so far no implanted BCI has been approved for general clinical use. Synchron's device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn't require neurosurgery. It has proved safe, robust and effective in initial trials [4], and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.

Elon Musk's neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials.

At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures. Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology.

As for what's next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. "All of them hope to go back further in time in the brain," she says, "and to get to that subconscious precursor to thought."


Original Submission

posted by hubie on Tuesday December 02, @06:57AM

This would involve Meta renting Google Cloud TPUs next year and outright purchasing them in 2027:

Meta may be on the cusp of spending billions on Google AI chips to power its future developments, as the social-media giant is reportedly in talks to both buy and rent Google compute power for its future AI endeavours, as reported by The Information, via Reuters. The ongoing negotiations reportedly involve Meta renting Google Cloud Tensor Processing Units (TPU) in 2026, before purchasing them outright in 2027.

This news shows continuing collaboration between the companies, despite a recent pause on their undersea cable projects.

To date, Google has mostly leveraged its TPUs for its internal efforts, so this move, if it comes to fruition, would be a change of tactic that could help it capture a sizeable portion of the AI chip business. Considering that few, if any, companies have figured out how to turn a profit from developing AI just yet, Google may be looking to get in on Nvidia's act. The long-time GPU maker has made untold billions since the start of the AI craze, propelling it to become the world's most valuable company within a short timeframe.

Indeed, Reuters reports some Google Cloud executives believe that the shifting strategy would give it the chance to capture as much as a 10% slice of Nvidia's data center revenue. Considering Nvidia made over $51 billion from data centers in Q2 2025 alone, Google cornering that much of Nvidia's revenue would be worth tens of billions of dollars.
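
[Ed. note: Annualizing the quoted figures makes the "tens of billions" claim concrete (our arithmetic, using the Q2 number as a run rate):]

```python
# Rough annualization of the article's figures (our arithmetic):
# 10% of Nvidia's data-center revenue, treating the quoted Q2 2025
# number as a steady run rate.
nvidia_dc_quarterly = 51e9   # "over $51 billion" in Q2 2025
google_share = 0.10          # the 10% slice executives cited

annual = nvidia_dc_quarterly * 4 * google_share
print(f"~${annual / 1e9:.0f}B per year")  # ~$20B+, i.e. tens of billions
```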

Markets reacted to the rumors of this deal, sending Meta and Google stock upwards. Alphabet rose several percent in pre-market trading, and Reuters has it on track to become the next $4 trillion company, potentially as soon as today. Meta stock prices are up, too, but Nvidia took a 3% hit.

Even if Google does clinch this deal and secures a huge order and long-term revenue stream for its TPUs outside of internal use, it's still going to be swallowed up by the AI industry as a whole. There isn't enough compute power, fabrication capacity, or supply-chain logistics to meet the enormous uptick in demand for AI data center buildouts that have been ongoing this year.

Memory prices are skyrocketing, GPU prices are expected to jump up next year, and just about everything electronic could be more expensive this time next year.

That's if the bubble doesn't burst, of course. Even 2026 feels a long way off when it comes to this ever-changing industry, but 2027 is a lifetime away. Who knows what the state of AI hardware will be like then, and there's no telling whether Google's TPUs will have a longer shelf life than Nvidia's top GPUs, especially with such an aggressive annual release schedule.


Original Submission