The Japanese chipmaker is looking to take on established fabs:
Rapidus, Japan's homegrown challenger to Taiwan Semiconductor Manufacturing Company (TSMC), has announced that it will start building its next-generation 1.4-nanometer fab in fiscal year 2027, with production expected to commence in Hokkaido in 2029. According to Nikkei Asia, this move is expected to help the Japanese chipmaker close the gap with the Taiwanese chip-making giant, which has already revealed its 1.4-nm technology earlier this year. The company also said that it will begin full-scale research and development on the node starting next year.
The company is backed by several Japanese companies, including giants such as Toyota and Sony, as well as private financing institutions. Aside from this, the Japanese government has also invested heavily in the startup through subsidies and direct fiscal support. Rapidus has already received a commitment of JPY 1.7 trillion, or more than US$10 billion, with several hundred billion Yen expected to be infused into the company in the coming months.
Despite these massive inflows, Rapidus is still facing an uphill battle as it competes with established fabs like TSMC, Samsung, and Intel. Intel has already started production of 18A, its 2-nm class node, while TSMC is also moving up plans to output its latest node at its Arizona site due to strong AI data center demand. On the other hand, the Japanese chip maker is only expected to begin 2-nm mass production in the latter half of 2027 at its Chitose manufacturing plant. More than that, all the established foundries have struggled with yield issues before they were able to proceed with mass production, suggesting that Rapidus will experience the same problems.
Nevertheless, the company is still intent on pushing forward with its more advanced nodes even though it's playing catch-up with its 2-nm process. Aside from the expected 1.4-nm node that will be produced in the Hokkaido plant, Nikkei Asia also said that more advanced 1-nm chips may also be manufactured at the site.
Rapidus aims to compete against TSMC but has previously said that it's only targeting a handful of companies — around five to ten, initially. The Japanese chipmaker has also claimed that its advanced packaging technique will shorten the production cycle, allowing it to streamline its processes versus its competitors. Nevertheless, former Intel CEO Pat Gelsinger said that Rapidus needs to offer something more advanced than that to successfully compete with established chip makers.
https://distrowatch.com/dwres.php?resource=showheadline&story=20099
People running the Tumbleweed branch of openSUSE will soon have the chance to try out the distribution's new bootloader package. An openSUSE blog post explains the change:
"openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.
This follows the trend started by MicroOS of adopting boot loaders that are compatible with the boot loader specification. MicroOS is using systemd-boot, which is a very small and fast boot loader from the systemd project.
One of the reasons for this change is to simplify the integration of new features. Among them is full disk encryption based on systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.
What is GRUB2-BLS? GRUB2-BLS is just GRUB2 but with some patches on top ported from the Fedora project, which includes some compatibility for the boot loader specification for Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu."
The change will allow full disk encryption and do away with some of the GRUB maintenance steps. Details are discussed in the project's blog post.
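For readers unfamiliar with the format, a Type #1 boot entry is just a small key-value text file. The following is a hypothetical example (the file name, kernel version, and UUID are illustrative, not taken from an actual openSUSE install):

```
# /boot/efi/loader/entries/opensuse-tumbleweed.conf  (hypothetical example)
title      openSUSE Tumbleweed
version    6.11.0-1-default
linux      /vmlinuz-6.11.0-1-default
initrd     /initrd-6.11.0-1-default
options    root=UUID=0a1b2c3d-1111-2222-3333-444455556666 rw quiet
```

Any bootloader that understands the boot loader specification — GRUB2-BLS, systemd-boot, or others — can read the same entries, which is what makes switching loaders less disruptive.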
The reflected infrared light of bone-loving lichen can be detected by drones
Tiny life-forms with bright colors might point the way to big dinosaur bone discoveries.
In the badlands of western Canada, two species of lichen prefer making their homes on dinosaur bones instead of on the surrounding desert rock, and their distinct orange color can be detected by drones, possibly aiding future dino discoveries, researchers report November 3 in Current Biology.
"Rather than finding new sites serendipitously, this approach can help paleontologists to locate new areas that are likely to have fossils at the surface and then go there to investigate," says paleontologist Brian Pickles at the University of Reading in England.
Lichens are photosynthetic organisms built by a symbiotic relationship between fungi and algae or cyanobacteria. They come in many colors. Some are white or near-black; others appear green, yellow, orange or red. They often grow in challenging environments, such as deserts or polar regions.
Lichens tend to be quite picky about where they grow, says AJ Deneka, a lichenologist at Carleton University in Ottawa, Canada, who was not involved with the research. Species that grow on granite do not grow on sandstone or limestone, and species that grow on wood don't grow on rock.
Dinosaur bones covered in lichen have long been known to paleontologists working in desert fossil hotspots of western North America. In 1922, paleontologists found an Ankylosaurus fossil covered in orange lichen in the Canadian badlands. In 1979, a similarly colored lichen was reported growing over a Centrosaurus bonebed in the same area. The orange-colored symbiont is often the first thing researchers notice when working in these regions, with the discovery of bone coming second.
By scrutinizing vibrantly colored lichen and where it grows in Dinosaur Provincial Park in Alberta, Pickles and his colleagues found that two species of lichen, Rusavskia elegans and Xanthomendoza trachyphylla, had a strict preference for colonizing fossil bones and were almost entirely absent from surrounding ironstone rock.
"The porous texture of fossils probably plays a role in making them [a] suitable lichen habitat, perhaps by retaining moisture or providing tiny pockets where lichen [can] become trapped and established," Deneka says.
Pickles and his colleagues next measured light frequencies reflected by the rock, bones and bone-inhabiting lichen and tested whether they could distinguish the lichen from these surroundings using drones. Spectral analyses found the lichen primarily reflected certain infrared light frequencies, which the researchers then used to develop drone sensors that could detect this light from above.
Using these drones, the researchers were able to identify fossil bonebeds from a height of 30 meters. "We could only locate the fossils thanks to the lichen association," Pickles says.
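The classification step described above — picking out lichen-covered bone by its spectral signature — can be sketched as a simple per-pixel threshold on reflectance bands. The band indices and threshold values below are illustrative assumptions, not those used in the study:

```python
import numpy as np

def flag_lichen_pixels(reflectance, nir_band=3, red_band=2,
                       nir_min=0.45, red_max=0.35):
    """Flag pixels whose spectra resemble the lichen signature.

    reflectance: array of shape (height, width, n_bands), values in [0, 1].
    Thresholds here are illustrative; a real workflow would calibrate
    them against field spectra of the target lichen species.
    """
    nir = reflectance[..., nir_band]
    red = reflectance[..., red_band]
    # The lichen reflects strongly in the near-infrared relative to
    # the surrounding ironstone, which stays spectrally flat.
    return (nir > nir_min) & (red < red_max)

# Tiny synthetic scene: one "lichen" pixel, one "rock" pixel.
scene = np.zeros((1, 2, 4))
scene[0, 0] = [0.20, 0.25, 0.30, 0.60]   # lichen-like: bright NIR
scene[0, 1] = [0.30, 0.30, 0.40, 0.40]   # rock-like: flat spectrum
mask = flag_lichen_pixels(scene)
print(mask)  # [[ True False]]
```

A drone survey would apply the same mask to each georeferenced frame and report clusters of flagged pixels as candidate bone exposures for ground-truthing.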
The technique "has great potential for use in little-explored or difficult-to-access areas," says Renato García, a paleontologist at Universidad Nacional de Avellaneda in Buenos Aires, who was not involved with the research. In 2020, García and his colleagues uncovered a similar predilection of certain lichen toward fossil penguin bones in Antarctica, hinting at another region where this work may be fruitful.
Pickles and his team have their own plan: "Other badlands are our next target."
Journal Reference: Pickles, Brian J. et al. Remote sensing of lichens with drones for detecting dinosaur bones. [OPEN] Current Biology, Volume 35, Issue 21, R1044 - R1045 https://doi.org/10.1016/j.cub.2025.09.036
During a Dell earnings call, the company mentioned some staggering numbers regarding the number of PCs that will not or cannot be upgraded to Windows 11.
"We have about 500 million of them capable of running Windows 11 that haven't been upgraded," said Dell COO Jeffrey Clarke on a Q3 earnings call earlier this week, referring to the overall PC market, not just Dell's slice of machines. "And we have another 500 million that are four years old that can't run Windows 11." He sees this as an opportunity to guide customers towards the latest Windows 11 machines and AI PCs, but warns that the PC market is going to be relatively flat next year.
↫ Tom Warren at The Verge
The scale of the Windows 10 install base that simply won't or can't upgrade to Windows 11 is massive, and it's absolutely bonkers to me that we're mostly just letting Microsoft get away with leaving at least a billion users out in the cold when it comes to security updates and bug fixes. The US government (in better times) and the EU should've 100% forced Microsoft's hand, as leaving this many people on outdated, unsupported operating system installations is several disasters waiting to happen.
Aside from the dangerous position Microsoft is forcing its Windows 10 users into, there's also the massive environmental and public health impact of huge swaths of machines, especially in enterprise environments, becoming obsolete overnight. Many of these will end up in landfills, often shipped to third-world countries so we in the west don't have to deal with our e-waste and its dangerous consequences directly. I can get fined for littering – rightfully so – but when a company like Microsoft makes sweeping decisions which cause untold amounts of dangerous chemicals to be dumped in countless locations all over the globe, governments shrug it off and move on.
At least we will get some cheap eBay hardware out of it, I guess.
https://phys.org/news/2025-11-scientists-mountain-climate-faster-billions.html
Mountains worldwide are experiencing climate change more intensely than lowland areas, with potentially devastating consequences for billions of people who live in and/or depend on these regions, according to a major global review.
The international study, published in Nature Reviews Earth & Environment, examines what scientists call "elevation-dependent climate change" (EDCC)—the phenomenon where environmental changes can accelerate at higher altitudes.
It represents the most thorough analysis to date of how temperature, rainfall, and snowfall patterns are shifting across the world's mountain ranges.
Led by Associate Professor Dr. Nick Pepin from the University of Portsmouth, the research team analyzed data from multiple sources including global gridded datasets, alongside detailed case studies from specific mountain ranges including the Rocky Mountains, the Alps, the Andes, and the Tibetan Plateau.
The findings reveal alarming trends between 1980 and 2020:
- Temperature: Mountain regions, on average, are warming 0.21°C per century faster than surrounding lowlands
- Precipitation and snow: Mountains are experiencing more unpredictable rainfall and a significant change from snow to rain
"Mountains share many characteristics with Arctic regions and are experiencing similarly rapid changes," said Dr. Pepin from the University of Portsmouth's Institute of the Earth and Environment.
"This is because both environments are losing snow and ice rapidly and are seeing profound changes in ecosystems. What's less well known is that as you go higher into the mountains, the rate of climate change can become even more intense."
The implications extend far beyond mountain communities. Over one billion people worldwide depend on mountain snow and glaciers for water, including in China and India — the world's two largest countries by population — which receive water from the Himalayas.
Dr. Pepin added, "The Himalayan ice is decreasing more rapidly than we thought. When you transition from snowfall to rain because it has become warmer, you're more likely to get devastating floods. Hazardous events also become more extreme."
"As temperatures rise, trees and animals are moving higher up the mountains, chasing cooler conditions. But eventually, in some cases, they'll run out of mountain and be pushed off the top. With nowhere left to go, species may be lost and ecosystems fundamentally changed."
Recent events highlight the urgency. Dr. Pepin points to this summer in Pakistan, which experienced some of its deadliest monsoon weather in years, with cloudbursts and extreme mountain rainfall killing over 1,000 people.
This latest review builds on the research team's 2015 paper in Nature Climate Change, which was the first to provide comprehensive evidence that mountain regions were warming more rapidly higher up in comparison to lower down. That study identified key drivers including the loss of snow and ice, increased atmospheric moisture, and aerosol pollutants.
Ten years on, scientists have made progress understanding the controls of such change and the consequences, but the fundamental problem remains.
"The issue of climate change has not gone away," explained Dr. Pepin. "We can't just tackle mountain climate change independently of the broader issue of climate change."
A major obstacle remains the scarcity of weather observations in the mountains. "Mountains are harsh environments, remote, and hard to get to," said Dr. Nadine Salzmann from the WSL Institute for Snow and Avalanche Research SLF in Davos, Switzerland. "Therefore, maintaining weather and climate stations in these environments remains challenging."
This data gap means scientists may be underestimating how quickly temperatures are changing and how fast snow will disappear. The review also calls for better computer models with higher spatial resolution — most current models can only resolve changes at a scale of a few kilometers, but conditions can vary dramatically between slopes just meters apart.
Dr. Emily Potter from the University of Sheffield added, "The good news is that computer models are improving. But better technology alone isn't enough—we need urgent action on climate commitments and significantly improved monitoring infrastructure in these vulnerable mountain regions."
More information: Elevation-dependent climate change in mountain environments, Nature Reviews Earth & Environment (2025). DOI: 10.1038/s43017-025-00740-4
Folks, we have some revolutionary sociological research to share with you today.
After making a guy dressed as Batman stand around in a subway car, a team of researchers found that the behavior of people around him suddenly improved the moment he showed up. No longer was everyone completely self-involved; with the presence of a superhero, commuters started helping each other more than they would've without him around.
Behold: the "Batman effect."
The findings of the unorthodox study, published in the journal npj Mental Health Research, demonstrate the power of introducing something offbeat into social situations to jolt people out of the mental autopilot they slip into to navigate the drudgery of everyday life.
Batman showing up is just one — albeit striking — way of promoting what's called "prosocial behavior," or the act of helping others around you, by introducing an unexpected event, the researchers write.
"Our findings are similar to those of previous research linking present-moment awareness (mindfulness) to greater prosociality," said study lead author Francesco Pagnini, a professor of clinical psychology at the Università Cattolica in Milan, in a statement about the work. "This may create a context in which individuals become more attuned to social cues."
In a series of experiments, the researchers had a woman who visibly appeared pregnant enter a busy train, and observed how often people offered to give up their seats. They then repeated this scenario with a crucial change: when the pregnant woman entered the train from one door, a man dressed as Batman entered from another.
In all, the team observed 138 passengers, and the results were clear-cut. Over 67 percent of passengers offered their seats when Batman was present, compared to just over 37 percent when Batman wasn't there. Most, in both cases, were women: 68 percent with Batman there, and 65 percent without him.
But the strangest detail? 44 percent of the people who offered their seats later reported that they didn't even notice Batman was there in the first place, suggesting that people don't need to be consciously aware of the offbeat event itself to, in colloquial terms, pick up the prosocial vibes.
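To see why a 67-versus-37-percent gap across 138 passengers counts as clear-cut, a standard two-proportion z-test gives a rough check. The per-condition split below (68 passengers with Batman, 70 without) is an assumption for this sketch; the paper reports the exact counts:

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-statistic."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 138 passengers total; the 68/70 split per condition is assumed here.
z = two_proportion_z(0.67, 68, 0.37, 70)
print(round(z, 2))  # well above 1.96, i.e. significant at p < 0.05
```

Under these assumptions the statistic comes out around 3.5, so a gap that size in a sample of 138 would be very unlikely to arise by chance.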
"Unlike traditional mindfulness interventions that require active engagement, this study highlights how situational interruptions alone may be sufficient to produce similar effects," Pagnini said.
In the study, he added, the findings "could inform strategies to promote altruistic behaviors in daily life, from public art installations to innovative social campaigns."
Journal Reference: Pagnini, F., Grosso, F., Cavalera, C. et al. Unexpected events and prosocial behavior: the Batman effect. npj Mental Health Res 4, 57 (2025).
See also: The 'Batman Effect' -- How Having an Alter Ego Empowers You
Blender 5.0, a free and open-source 3D computer graphics software, is now available for download as a major update that introduces numerous new features and improvements.
Highlights of Blender 5.0 include support for displaying HDR and wide gamut colors, which requires an HDR or wide gamut capable monitor. On Linux systems, this works only when using Wayland and setting the Vulkan backend in Blender's system preferences.
Blender 5.0 also introduces a working color space for Blend files, a new AgX HDR view, a new Convert to Display compositor node, new Rec.2100-PQ and Rec.2100-HLG displays that can be used for color grading for HDR video export, and new ACES 1.3 and 2.0 views as an alternative to AgX and Filmic.
[...] There are also many UI changes in Blender 5.0, including drag and drop support within the Shape Keys list, snapping support for sidebars, a new "Delete Other Workspaces" context menu entry for workspace tabs, the ability to collapse paint pressure curves, and per-camera composition guide overlay color.
Moreover, theme settings have changed significantly in Blender 5.0 to make creating custom themes easier, while numerous theme settings have been unified, and more than 300 settings have been removed. On top of that, Blender 5.0 introduces a new Storyboarding template and workspace.
Among other noteworthy changes, this release adds a human base mesh bundle for realistic skeleton assets, six new Geometry Nodes-based modifiers, a new volume rendering algorithm based on null scattering, and a new "Working Space" choice in the Convert Color Space compositor node to convert to and from the current working space that images are in by default.
Being a major update, Blender 5.0 removes support for LZMA or LZO compressed point caches, support for Intel Macs, support for pre-2.50 animation, big-endian support, as well as the unsupported access to runtime-defined properties storage data in the Python API.
[...] Blender 5.0 requires NVIDIA GeForce 900 and newer GPUs, as well as Quadro Tesla GPU architecture and newer, including RTX-based cards, with the official NVIDIA drivers, AMD GCN 4th gen and newer GPUs, and Intel Kaby Lake architecture and newer GPUs.
Check out the release notes for more details about the changes included in Blender 5.0, which you can download right now from the official website as a universal binary that runs on virtually any GNU/Linux distribution without installation.
Bottles, an open-source software tool built on top of Wine that helps users run Windows applications and games on Linux systems by providing a user-friendly GUI, has just released its latest version, 60.0.
The update introduces a native Wayland option directly in the bottle settings, giving users a more predictable experience on modern Linux desktops that have already shifted away from X11.
Alongside this, the new WineBridge features expand how processes can be spawned and managed, supported by a consent prompt to ensure users maintain control over updates to that component.
For Steam Deck users, the release includes a fix for broken controls in Gaming Mode, resolving a regression that made some titles unusable. Several environment-related issues are also addressed, including problems with working directories not persisting, unclear environment variable creation, and cases where easyterm failed to run due to missing GTK variables.
[...] For more information on all the changes, visit the project's GitHub changelog.
Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as 'Twinkle, Twinkle, Little Star', rang out.
But there was a twist. For Smith, it seemed as if the piano played itself. "It felt like the keys just automatically hit themselves without me thinking about it," she said at the time. "It just seemed like it knew the tune, and it just did it on its own."
Smith's BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.
[...] Andersen's research also illustrates the potential of BCIs that access areas outside the motor cortex. "The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas," says Andersen. "There's a wide variety of things that we can decode."
The ability of these devices to access aspects of a person's innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private. It also poses ethical questions about how neurotechnologies might shape people's thoughts and actions — especially when paired with artificial intelligence.
Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people's internal reactions to online and other content.
Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. "Whole-brain interfacing is going to be the future," says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored. Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. "It leads you to the final question: how do we make that safe?"
[...] Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design for EEG sensors for future use in its AirPods wireless earphones in 2023.
Yet unlike BCIs aimed at the clinic, which are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. "There's a wild west when it comes to the regulatory standards," he says.
In 2018, Ienca and his colleagues found that most consumer BCIs don't use secure data-sharing channels or implement state-of-the-art privacy technologies [2]. "I believe that has not changed," Ienca says. What's more, a 2024 analysis [3] of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all had complete control over the data users provided. That means most firms can use the information as they please, including selling it.
Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status. But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person's mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person.
"The data economy, in my view, is already quite privacy-violating and cognitive- liberty-violating," Ienca says. Adding neural data, he says, "is like giving steroids to the existing data economy".
Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development, have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.
Heading to the clinic
While their development advances at pace, so far no implanted BCI has been approved for general clinical use. Synchron's device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn't require neurosurgery. It has proved safe, robust and effective in initial trials [4], and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.
Elon Musk's neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials.
At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures. Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology.
As for what's next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. "All of them hope to go back further in time in the brain," she says, "and to get to that subconscious precursor to thought."
This would involve Meta renting Google Cloud TPUs next year and outright purchasing them in 2027:
Meta may be on the cusp of spending billions on Google AI chips to power its future developments, as the social-media giant is reportedly in talks to both buy and rent Google compute power for its future AI endeavours, as reported by The Information, via Reuters. The ongoing negotiations reportedly involve Meta renting Google Cloud Tensor Processing Units (TPU) in 2026, before purchasing them outright in 2027.
This news shows continuing collaboration between the companies, despite a recent pause on their undersea cable projects.
To date, Google has mostly leveraged its TPUs for its internal efforts, so this move, if it comes to fruition, would be a change of tactic that could help it capture a sizeable portion of the AI chip business. Considering that few, if any, companies have figured out how to turn a profit from developing AI just yet, Google may be looking to get in on Nvidia's act. The long-time GPU maker has made untold billions since the start of the AI craze, propelling it to become the world's most valuable company within a short timeframe.
Indeed, Reuters reports some Google Cloud executives believe that the shifting strategy would give it the chance to capture as much as a 10% slice of Nvidia's data center revenue. Considering Nvidia made over $51 billion from data centers in Q2 2025 alone, Google cornering that much of Nvidia's revenue would be worth tens of billions of dollars.
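A quick back-of-envelope check bears out the "tens of billions" figure. Annualizing the quarterly number as a flat run-rate and applying the rumored 10% share are both simplifying assumptions:

```python
# Figures from the article: Nvidia data center revenue of $51B+ in Q2 2025,
# and a rumored "up to 10%" slice for Google. Treating Q2 as a flat
# run-rate across four quarters is a naive assumption for this estimate.
nvidia_dc_q2_2025_usd = 51e9
annual_run_rate = nvidia_dc_q2_2025_usd * 4
google_slice_billion = annual_run_rate * 0.10 / 1e9
print(round(google_slice_billion, 1))  # → 20.4 (billions of dollars per year)
```

Roughly $20 billion a year, before accounting for any growth in data center spending — squarely in "tens of billions" territory.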
Markets reacted to the rumors of this deal, sending Meta and Google stock upwards. Alphabet rose several percent in pre-market trading, and Reuters has it on track to become the next $4 trillion company, potentially as soon as today. Meta stock prices are up, too, but Nvidia took a 3% hit.
Even if Google does clinch this deal and secures a huge order and long-term revenue stream for its TPUs outside of internal use, that capacity will still be swallowed up by the AI industry as a whole. There isn't enough compute power, fabrication capacity, or supply-chain logistics to meet the enormous uptick in demand from the AI data center buildouts that have been ongoing this year.
Memory prices are skyrocketing, GPU prices are expected to jump up next year, and just about everything electronic could be more expensive this time next year.
That's if the bubble doesn't burst, of course. Even 2026 feels a long way off when it comes to this ever-changing industry, but 2027 is a lifetime away. Who knows what the state of AI hardware will be like then, and there's no telling whether Google's TPUs will have any longer a shelf life than Nvidia's top GPUs, especially with an aggressive annual release schedule.
Kyivstar begins trials offering SMS connectivity when ground networks fail:
Ukrainian telco Kyivstar has launched Starlink's Direct to Cell satellite service for its subscribers, making the war-torn nation the first in Europe to offer it.
The technology provides phone connectivity when terrestrial networks are unavailable and is currently in trial for all Kyivstar customers. It initially supports only SMS messaging, but the company plans to add "light data with voice and video capabilities."
Access to this service will be provided to all Kyivstar subscribers at no additional cost, the firm says.
As Direct to Cell works with existing smartphones, subscribers should not have to upgrade their devices to use it. However, Kyivstar says access is currently only available to those with Android handsets, with Apple support promised later.
A Direct to Cell satellite service is important for Ukrainians in areas near the front line and regions where the terrestrial network is damaged or under restoration, as well as for rescuers and humanitarian missions. It will allow them to stay connected during blackouts, in hard-to-reach areas, and in remote villages.
Kyivstar already has coverage in almost all parts of Ukraine that are still free from occupation, although there are "not-spots" in some rural areas. It serves about 22.5 million customers.
Chief exec Oleksandr Komarov said Kyivstar has already equipped its cell network with batteries and generators to provide up to ten hours of coverage when grid power is not available, and the Starlink support extends availability for customers.
"Today we are introducing the cutting-edge Direct to Cell technology which will increase this resilience significantly, starting with a vital functionality that is critical for our people," he said in a statement.
Elsewhere in Europe, Virgin Media O2 (VMO2) recently confirmed it will offer a satellite service for UK customers, also using Starlink's Direct to Cell. It is scheduled for release during the first half of 2026.
It will be called O2 Satellite and initially provide messaging and data services, with "further improvements and applications to follow" across a range of handsets.
It will work automatically in not-spots with no existing mobile coverage, we're told, with the aim of expanding VMO2's footprint in the UK to more than 95 percent within 12 months of launch. This will increase further when next-generation Starlink satellites are deployed, VMO2 claims. Charges have yet to be disclosed.
Orange also announced it is launching a "Message Satellite" service, allowing customers in mainland France to send and receive SMS messages as well as their geolocation via satellite, when mobile coverage is unavailable.
It is partnering with satellite biz Skylo and the service is initially only available to customers with a Google Pixel 9 or 10 smartphone.
This will be offered from December 11 for consumers, and during 2026 for professional and corporate customers, Orange says. It will be free for the first six months, then €5 per month.
Vodafone is also aiming to offer a commercial direct-to-cell satellite service in Europe this year, using the satellite network operated by AST SpaceMobile. This follows trials in which it claimed to have made the first mobile video call using a satellite connection with standard smartphones.
The root cause of the collapse of Baltimore's Francis Scott Key Bridge after it was struck by the container ship Dali has been identified: the misplacement, by a few millimeters, of the label on one wire. As usual, the National Transportation Safety Board has taken its time and done a detailed investigation, summarized in this short video:
https://www.youtube.com/watch?app=desktop&v=bu7PJoxaMZg
tl;dr - the wire was not completely inserted into a terminal block because its label was wrapped over the ferrule. Over time the connection became intermittent and eventually cut power on the ship... after which it drifted into the bridge. Of course there were additional contributing problems as well.
The YT video comments include some more interesting details.
[Ed. note: For those not inclined to watch the YouTube video, the narrative summary of the video is listed in the spoiler below.]
1. The Dali electrical system distributes power and control signals throughout the vessel.
2. The control circuits contain hundreds of terminal blocks that organize thousands of wires.
3. The wires on the Dali were terminated with metal sleeves called ferrules that allowed for easier assembly into the terminal blocks.
4. Each wire was identified with a labeling band.
5. The video shows an image of several terminal blocks on the Dali with wires connected.
6. To assemble a wire into a terminal block, a tool inserted into a side port opens a spring clamp, which allows the wire's ferrule to slide into place.
7. Removing the tool closes the spring clamp, securing the ferrule firmly against the terminal block's internal conductor bar.
8. Labeling bands identify wires and are typically positioned on the wire insulation.
9. However, many labeling bands on the Dali wires were placed partially on the ferrules, which increased the ferrules' overall circumference.
10. As a result, during vessel construction, some of the ferrules could not be fully inserted in the terminal blocks, including the ferrule on wire 1 from Terminal Block 381.
11. On that wire, the labeling band prevented full insertion of the ferrule, so the spring clamp gripped only the ferrule's tip, resulting in an inadequate connection.
12. Due to this unstable connection, over time the ferrule on wire 1 slipped out of the spring clamp to rest atop the spring clamp face, resulting in a precarious electrical connection.
13. When a gap occurred between the ferrule and the spring clamp face, the electrical circuit was interrupted, leading to a blackout on the Dali.
Eric Migicovsky wants to ensure Pebble can't be killed again, and DIYers benefit most:
Pebble, the e-ink smartwatch with a tumultuous history, is making a move sure to please the DIY enthusiasts that make up the bulk of its fans: Its entire software stack is now fully open source, and key hardware design files are available too.
Pebble creator Eric Migicovsky announced the move on Monday in a blog post and video detailing the changes his reborn Pebble watchmaking firm has undertaken, and they're considerable.
For those unfamiliar with the saga of Pebble, the budget e-ink smartwatches are Migicovsky's brainchild, and first became widely available in 2013. Color models came later, but by 2016 the company had been acquired by Fitbit, which canned hardware sales and put the Pebble software ecosystem out to pasture. Support for the devices disappeared with the Fitbit acquisition too, leaving independent tinkerers operating under the name Rebble to take up support for the devices of their own accord.
Fitbit was later acquired by Google, which open sourced Pebble's operating system in January 2025. Migicovsky launched a new company, Core Devices, in March, with plans to release two new Pebble watches. Google's trademark on the Pebble brand had expired, Migicovsky told us, and he now owns it under a new filing.
First off, all the electrical and mechanical schematics for Pebble's one currently available device, the black-and-white Pebble 2 Duo, are now on GitHub, so anyone can tinker with the design or build their own Pebble 2 Duo.
The schematics for Core Devices' other new watch, the yet-to-be-released Pebble Time 2, aren't on GitHub yet. That device is expected to begin shipping early next year, Migicovsky said in his blog post, but he told us in an email that he hasn't yet decided whether to publish its schematics.
Things are getting just as open on the software side: the entirety of PebbleOS, plus the iOS and Android mobile apps used to push notifications and manage the device, is now available on GitHub for your own compilation and modification purposes, joining the Pebble SDK and other dev tools in open source software land.
Migicovsky noted in his video that he hopes the opening of PebbleOS to anyone who wants to tinker with it will lead to a new generation of products, both watches and beyond.
"I am excited that there may be people crazy enough to take Pebble OS and make it work in other products or other watches," Migicovsky said.
[...] Later this week, once Google and Apple approve the change, the Pebble mobile apps will have multiple app feeds that users can subscribe to. Additionally, anyone can create their own feed, Migicovsky explained. Core is also opening its own Pebble Appstore feed, which will be backed up to Archive.org daily, he added.
"This makes us not reliant on our servers, and at any point if our servers were to disappear you could download a copy of that, stand up your own Pebble app store feed, and continue to use it," the Pebble creator said. "We hope this sets a standard for openness. We encourage all app store feeds to publish a freely and publicly available archive of all the apps on their feed."
Monetization features are also being added to the Pebble app so that developers can make money off their creations, Migicovsky explained.
Whether this new model of openness will be enough to lift Pebble beyond footnote status in a wearable space now dominated by Apple, Samsung, and others is far from certain, but hey: for those who want more control over their device, you can't get better than this new generation of entirely open source hardware and software.
https://blog.clamav.net/2025/11/clamav-signature-retirement-announcement.html
ClamAV was first introduced in 2002; since then, the signature set has grown without bound, delivering as many detections as possible to the community. Due to continually increasing database sizes and user adoption, we are faced with significantly increasing costs of distributing the signature set to the community.
To address the issue, Cisco Talos has been working to evaluate the efficacy and relevance of older signatures. Signatures which no longer provide value to the community, based on today's security landscape, will be retired.
We are making this announcement as an advisory: the first pass of this retirement effort will result in a significant drop in database size for both daily.cvd and main.cvd.
Our goal is to ensure that detection content targets currently active threats and campaigns. We will judge this based on signature matches seen in our and our partners' data feeds over an extended period of time. We will continue to evaluate detection prevalence for retired signatures and will restore any signature to the active set as needed to protect the community. Going forward, we will continue to curate the signature set to match the security landscape. This may result in further reductions in the total number of signatures alongside the normal growth that comes from newly added coverage.
[...]
In addition to the reduction in size of the signature set, we will also begin to remove container images from Docker Hub. We are doing this to remove container images which may contain vulnerabilities either in ClamAV or in the base image, and to reduce the burden on Docker Hub itself, which presently hosts over 300 GiB of ClamAV container images.
When complete, we will only provide container images on Docker Hub for the supported versions of ClamAV.
[...]
We recommend that ClamAV container image users select a feature release tag rather than a specific minor release tag in order to stay up to date with security and bug fixes.
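In practice, that recommendation means pulling a tag that tracks a whole feature release rather than one frozen patch level. A minimal sketch, assuming hypothetical `1.4`-series tags on the `clamav/clamav` Docker Hub repository (check Docker Hub for the actual supported tags):

```shell
# Assumed tag names for illustration only; consult Docker Hub for
# the real list of supported ClamAV releases.
FEATURE_TAG="clamav/clamav:1.4"    # tracks every 1.4.x patch release
PINNED_TAG="clamav/clamav:1.4.2"   # frozen; will miss future security fixes

# Pulling the feature tag keeps the container current with
# security and bug-fix patch releases:
#   docker pull "$FEATURE_TAG"
echo "Recommended tag: $FEATURE_TAG"
```

The trade-off is the usual one: a pinned patch tag gives reproducible deployments, but for an anti-malware engine the vendors' advice here is that staying on the moving feature tag is the safer default.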
ClamAV Signature Retirement Open Source FAQ:
What if bad actors begin to reuse old malware and old exploits?
Our team is committed to reintroducing any signature in a timely fashion based on the activity of bad actors.

Can open-source users access the signatures that have been retired from main.cvd?
We intend to make the retired signatures available at a later date for researchers and corner cases.

Is this an ongoing process?
Cisco Talos will continue to curate the signature set and may retire signatures as they lose relevance to today's security landscape.

How will open source users benefit from these changes?
Smaller file downloads come with inherent advantages, and unbound growth is not sustainable: signature resource needs have already outgrown some server configurations used for scanning. We anticipate a noticeable reduction in RAM usage for the ClamAV engine, possibly by as much as 25%.

When will users see a change in file sizes?
Signature retirement and the file size reduction will begin on December 16, 2025.
Users will notice that the main.cvd and daily.cvd will be roughly 50% smaller than they have seen prior to that date.
Roblox plans to use AI to estimate user ages, but the Australian Labor government thinks more should be done to protect young people and that Roblox's current solution is insufficient. There is still debate over whether Roblox should count as "social media" and be included in the new age restriction laws.
Roblox rolling out new safety measures to stop kids chatting with adults has done little to win favour with Labor, with the Albanese government saying all digital platforms should be proactively protecting "young Australians".
[...] The new measures, which start in the first week of December, include age-based chats that restrict players from speaking to people outside their age group.
[...] Despite having social elements, Roblox insists it is not a social media platform.
The eSafety Commissioner agrees but is reviewing whether to include it in the social media ban.
https://phys.org/news/2025-11-particle-cancer-materials.html
Energy that would normally go to waste inside powerful particle accelerators could be used to create valuable medical isotopes, scientists have found.
Researchers at the University of York have shown that intense radiation captured in particle accelerator "beam dumps" could be repurposed to produce materials used in cancer therapy. The study is published in the journal Physical Review C.
Scientists have now found a way to make those leftover photons do a second job, without affecting the main physics experiments.
A beam of photons designed to investigate things like the matter that makes up our universe could, at the same time, be used to create medical isotopes for the diagnosis and treatment of cancer.
Dr. Mamad Eslami, a nuclear physicist from the University of York's School of Physics, Engineering and Technology, said, "We have shown the potential to generate copper-67, a rare isotope used in both diagnosing and treating cancers, by demonstrating that what we might view as waste from a particle accelerator experiment can be turned into something that can save lives.
"Our method lets high-energy accelerators support cancer medicine while continuing their core scientific work."
Copper-67 emits radiation that both destroys cancer cells and enables doctors to monitor treatment progress. Clinical trials are already exploring its use against conditions such as prostate cancer and neuroblastoma, but global supplies remain limited due to production challenges.
Because large research particle accelerators often run for long periods, the process could build up useful amounts of isotopes gradually in parallel with other experiments, rather than requiring dedicated beam time. This approach could allow existing physics facilities to double as sources of medical materials, helping in the creation of life-saving treatments while making better use of accelerator energy.
The next step for the team is to work with accelerator laboratories and medical partners to apply the method at other facilities and to explore how it could be scaled up to deliver clinically useful quantities of copper-67 and other useful isotopes in a reliable, cost-effective way.
More information: M. Eslami et al, Unconventional 67Cu production using high-energy bremsstrahlung and cross section evaluation, Physical Review C (2025). DOI: 10.1103/954z-cn34