

posted by janrinok on Friday October 10, @09:14PM   Printer-friendly
from the messy-wires dept.

Qualcomm on Tuesday said it has acquired Arduino, the Italian open-source electronics firm that makes hardware and software for developing prototypes of robots and other electronic gadgets, Reuters reports at https://www.reuters.com/world/asia-pacific/qualcomm-buys-open-source-electronics-firm-arduino-2025-10-07/.

Arduino's own announcement can be found at https://blog.arduino.cc/2025/10/07/a-new-chapter-for-arduino-with-qualcomm-uno-q-and-you/.

Along with news that might confuse those who could not imagine "Arduino" itself as a tangible sales item, Arduino introduced a new model in the Uno form factor. It comprises a Qualcomm Dragonwing QRB2210 to run Linux, an STM32U585 microcontroller for hardware interfacing, and new high-density connectors on the bottom side. It is priced at $44 in the Arduino store.

Reception of the news seems mixed across various channels; many doubt that Qualcomm, given its history, would be a good steward for an ecosystem like Arduino.

The new Uno Q moves squarely into Raspberry Pi territory, where the Pi 5 currently sells for around $55 with mostly comparable features, at least if the RP2040-like capabilities of the RP1 I/O controller are counted.


Original Submission

posted by jelizondo on Friday October 10, @04:31PM   Printer-friendly
from the trading-climate-abatement-for-microplastics-infiltration dept.

Turning dissolved carbon dioxide from seawater to biodegradable plastic is an especially powerful way to clean up the ocean:

Not-so-fun fact: our oceans hold 150 times more carbon dioxide than the Earth's atmosphere. Adding to that causes ocean acidification, which can disrupt marine food chains and reduce biodiversity.

Addressing this could not only help restore balance to underwater ecosystems, but also take advantage of an opportunity to sustainably use this stored CO2 for a variety of purposes – including producing the industrial chemicals needed to make plastic.

The first step towards this, called Direct Ocean Capture (DOC) – removing dissolved carbon directly from seawater – happens through electrochemical processes. While there are a bunch of companies working on this, it hasn't been applied extensively at scale yet, and the cost-benefit doesn't look great at the moment (it's estimated that removing 1 ton of CO2 from the ocean could cost at least US$373, according to Climate Interventions).

Scientists from the Chinese Academy of Sciences and the University of Electronic Science and Technology of China – both in Shenzhen, China – have devised a DOC method which involves converting the captured CO2 into biodegradable plastic precursors. This approach is also described as operating at 70% efficiency, while consuming a relatively small amount of energy (3 kWh per kg of CO2), and working out to an impressive $230 per ton of CO2.
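
For a rough sense of scale, here is a minimal back-of-the-envelope sketch of what the reported 3 kWh per kg figure implies per ton of CO2; the electricity price used is an assumption for illustration only, not a number from the article.

    # Back-of-the-envelope check on the reported energy figure (3 kWh per kg of CO2).
    # The electricity price below is an illustrative assumption, not from the article.
    ENERGY_PER_KG_KWH = 3.0           # reported energy consumption
    KG_PER_TON = 1000.0
    ASSUMED_PRICE_PER_KWH_USD = 0.05  # assumed industrial electricity price

    energy_per_ton = ENERGY_PER_KG_KWH * KG_PER_TON               # 3000 kWh per ton
    electricity_cost = energy_per_ton * ASSUMED_PRICE_PER_KWH_USD

    print(f"Energy per ton of CO2: {energy_per_ton:.0f} kWh")
    print(f"Electricity cost at ${ASSUMED_PRICE_PER_KWH_USD}/kWh: ~${electricity_cost:.0f} per ton")
    # Compare with the overall figure of about $230 per ton reported for the method.

Under that assumed price, electricity alone would account for roughly $150 of the reported $230 per ton, which is why the 3 kWh per kg figure matters.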

What's also worth noting is the use of modified marine bacteria for the last step. Here's a breakdown of the process, described in a paper appearing in Nature Catalysis:

First, electricity is used in a special reactor to acidify natural seawater. This converts the invisible, dissolved carbon into pure gas, which is collected. The system then restores the water's natural chemistry before returning it to the ocean.

Next, the captured CO2 gas is fed into a second reactor containing a bismuth-based catalyst to yield a concentrated, pure liquid called formic acid. Formic acid is a critical intermediate because it is an energy-rich food source for microbes.

Engineered marine microbes, specifically Vibrio natriegens, are fed the pure formic acid as their sole source of carbon. The microbes metabolize the formic acid and efficiently produce succinic acid, which is then used directly as the essential precursor to synthesize biodegradable plastics, such as polybutylene succinate (PBS).

That's a pretty good start. The researchers note there's room for optimization to boost yields and integrate this system into industrial processes. It could also be altered to produce chemicals for use in fuels, drugs, and foods.

It also remains to be seen how quickly the team can commercialize this DOC method, because it may have formidable competition. For example, Netherlands-based Brineworks says it will get to under $200/ton by 2030 with its electrolysis-based solution. The next couple of years will be worth watching in this fascinating niche of decarbonization.

Journal Reference: Li, C., Guo, M., Yang, B. et al. Efficient and scalable upcycling of oceanic carbon sources into bioplastic monomers. Nat Catal (2025). https://doi.org/10.1038/s41929-025-01416-4


Original Submission

posted by jelizondo on Friday October 10, @11:47AM   Printer-friendly
from the better-late-than-never-news dept.

The transistor was patented 75 years ago today:

75 years ago, the three Bell Labs scientists behind the invention of the transistor at last had the U.S. patent in their hands. This insignificant-looking semiconductor device with three electrodes sparked the third industrial revolution. Moreover, it ushered in the age of silicon and software, which still dominates business and human society to this day.

The first working transistor was demonstrated in 1947, but it wasn't until October 3, 1950, that the patent was secured by John Bardeen, Walter Brattain, and William Shockley. The patent was issued for a "three-electrode circuit element utilizing semiconductor materials." It would take several more years before the significant impacts transistors would have on business and society were realized.

Transistors replaced the bulky, fragile, and power-hungry valves, which stubbornly remain present in some guitar amplifiers, audiophile sound systems, and studio gear, where their 'organic' sound profile is sometimes preferred. We also still see valves in some military, scientific, and microwave/RF applications, where transistors might be susceptible to radiation or other interference. There are other niche use cases.

Beyond miniaturization, transistors would deliver dramatic boosts in computational speed, energy efficiency, and reliability. Moreover, they became the foundation for integrated circuits and processors, where billions of transistors can operate reliably in a much smaller footprint than that taken up by a single valve. Processors featuring a trillion transistors are now on the horizon.

For PC enthusiasts, probably the best-known piece of transistor lore comes from Intel co-founder Gordon Moore. Of course, we are talking about Moore's Law, an observation by the pioneering American engineer. Moore's most famous prediction was that "the number of transistors on an integrated circuit will double every two years with minimal rise in cost." (The law was revised from one year to two in 1975.)
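
As a quick illustration of what that doubling cadence implies, here is a minimal sketch; the 1971 starting point (roughly 2,300 transistors in the Intel 4004) is a commonly cited figure used purely as an assumption for the example.

    # Minimal illustration of a Moore's-Law-style doubling every two years.
    # The 1971 starting point (~2,300 transistors, Intel 4004) is an assumed,
    # commonly cited figure used only for illustration.
    START_YEAR, START_COUNT = 1971, 2_300

    def projected_transistors(year: int) -> float:
        """Projected transistor count assuming a doubling every two years."""
        return START_COUNT * 2 ** ((year - START_YEAR) / 2)

    for year in (1971, 1985, 2000, 2025):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

Run naively, the doubling alone carries a 1971-class chip to hundreds of billions of transistors by 2025, which is roughly the scale of today's largest processors and why trillion-transistor devices are described as being on the horizon.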

Obviously, prior to 1965, when Moore's Law was set out, the startling advance in transistor technology indicated that such an extrapolation would be reasonable. Even now, certain semiconductor companies, engineers, and commentators reckon that Moore's Law is still alive and well. You can see Intel's position in the slides accompanying the original article.

Whatever the case, it can't be denied that since the patenting of the transistor, we have seen incredible miniaturization and advances in computing and software, expanding the possibilities of minds and machines. The current tech universe is actually buzzing with firms that reckon they can make machines with minds - artificial intelligence.


Original Submission

posted by janrinok on Friday October 10, @11:11AM   Printer-friendly
from the party-in-stockholm-whoooo! dept.

Venezuelan opposition leader María Corina Machado has been awarded this year's Nobel Peace Prize

Somewhat better than the Ig Nobels are the actual Nobel Prizes. Winners started to be announced this week. So far Medicine and Physics have been announced, with the others to be revealed over the following days as I write this.

Physics. "for the discovery of macroscopic quantum mechanical tunnelling and energy quantisation in an electric circuit"
Medicine. "for their discoveries concerning peripheral immune tolerance"

Chemistry.
Literature.

Peace.
Economic Science.

https://www.nobelprize.org/all-nobel-prizes-2025/


Original Submission

posted by jelizondo on Friday October 10, @07:03AM   Printer-friendly

StatCounter reports that Windows 7 has climbed to almost 10% market share in the last month, just as Windows 10 support is coming to an end. It's clear people aren't ready to switch to Windows 11.

Someone must be wishing really hard, as according to StatCounter, Windows 7 is gaining market share in the year 2025, five years after support for it officially ended. As of this week, Windows 7 is now in use on 9.61% of Windows PCs within StatCounter's pool of data, up from the 3.59% it had just a month ago.

For years, Windows 7 has hovered around 2% market share on StatCounter. After mainstream support ended, the last few holdouts very quickly made the move to Windows 10, but with the end of Windows 10 support now just two weeks away, it looks like many are giving Microsoft's best version of Windows another try.

Of course, StatCounter isn't an entirely accurate measure when it comes to actual usage numbers, but it can give us a rough idea of how the market is trending, and it seems people are not happy with the idea of upgrading from Windows 10 to Windows 11. Windows 7's sudden market-share gain is likely a blip, but interesting nonetheless.

Taking a closer look at StatCounter, it appears Windows 11 market share stalled in the last month, maintaining around 48% share. Windows 10 continued to drop, as expected, and is now on just 40% of PCs. While I wouldn't be surprised if some people had experimented with going back to Windows 7 recently, I highly doubt it's a number as high as 9.61%.

[...] Windows 11 failing to gain any market share in the final month before Windows 10's end of support is frankly shocking and, if the numbers are accurate, should be setting off alarm bells inside Microsoft. It's clear that much of the market has rejected Windows 11, whether because of its high system requirements or its insistence on AI features; people aren't moving to it.

In recent months, it seems Windows' reputation has fallen off a cliff. With enshittification slowly moving in, a lack of innovative new features and experiences that aren't tied to AI, and monthly updates that consistently introduce unnecessary changes and issues, people are getting tired of Microsoft's antics.


Original Submission

posted by jelizondo on Friday October 10, @02:21AM   Printer-friendly

Are VPNs Under Attack? An Anti-Censorship Group Speaks Out:

"Leave VPNs alone." That's the plea from anti-online censorship and surveillance group Fight for the Future, which designated Sept. 25 as a VPN Day of Action to press lawmakers not to ban virtual private networks. A week later, there's still an opportunity to get involved and raise awareness.

The group of activists, artists, engineers and technologists is asking people to sign an open letter encouraging politicians to preserve the existence of VPNs and "defend privacy and to access knowledge and information online." Virtual private networks encrypt internet connections and can hide your physical location.

Joining the action on Thursday were the VPN Trust Initiative (which includes NordVPN, Surfshark and ExpressVPN) and the VPN Guild, which includes Amnezia VPN.

The open letter refers to recent "age-verification" laws propelling legislative moves to ban or restrict VPN usage. Such measures would lead to increased online surveillance and censorship, which "has a huge chilling effect on our freedoms, particularly the freedoms of traditionally marginalized people," the letter notes.

Lia Holland, Fight for the Future's campaigns and communication director, said VPNs are vital for "people living under authoritarian regimes" to avoid censorship and surveillance, and have become an essential tool in exercising basic human rights.

Half of all US states have passed age-verification laws requiring internet users to prove their age with government-issued IDs, credit card checks and other methods. The laws have spurred consumers to sign up for VPNs to avoid giving out sensitive information, with one recent sign-up spike seen in the UK.

Michigan is considering a bill banning adult content online and VPNs. If it becomes law, Michigan would be the first US state to ban VPNs. Many countries, including China, India and Iran, already ban or heavily restrict VPNs.

"Amid a moral panic, ignorant 'save-the-children' politicians are getting very close to kicking the hornet's nest of millions of people who know how important VPNs are," Holland said.

Banning VPNs would be "difficult," according to attorney Mario Trujillo of the Electronic Frontier Foundation, an international digital rights group.

Trujillo told CNET that VPNs are best for routing your network connection through a different network. "They can be used to help avoid censorship, but they are also used by employees in every sector to connect to their company's network," he said. "That is a practical reality that would make any ban difficult."

Trujillo added that the US lags behind the rest of the world in privacy regulation, and that lawmakers should focus more on privacy than VPN bans.

Fight for the Future identifies local lawmakers and provides templates for contacting them. This information is on the same page as the open letter.


Original Submission

posted by jelizondo on Thursday October 09, @09:43PM   Printer-friendly

The ship wasn't designed to withstand the powerful ice compression forces—and Shackleton knew it:

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice, with no single cause for the sinking. Furthermore, the ship wasn't designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition's exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he's ever seen; Endurance was "in a brilliant state of preservation."

[...] Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Vasarhelyi, particularly, noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton's diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

[...] Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them which stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship's hull.

[...] Based on his analysis, Tuhkuri concluded that the rudder wasn't the sole or primary reason for the ship's sinking. "Endurance would have sunk even if it did not have a rudder at all," Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes "simply annihilating the ship."

Perhaps the most surprising finding is that Shackleton knew of Endurance's structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Things progressed much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic's fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner's 1911-1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship's hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so and as a result, Deutschland survived eight months of being trapped in compressive ice, until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the ship builders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914-1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907-1909 expedition. So Shackleton had to know he was taking a big risk.

"Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it," said Tuhkuri. "The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints but the truth is we may never know. At least we now have more concrete findings to flesh out the stories."

Both TFA and the open-access journal reference are very interesting reads.

Journal Reference: Polar Record, 2025. 10.1017/S0032247425100090


Original Submission

posted by jelizondo on Thursday October 09, @04:55PM   Printer-friendly

https://phys.org/news/2025-09-forensic-recovers-fingerprints-ammunition-casings.html

A pioneering new test that can recover fingerprints from ammunition casings, once thought nearly impossible, has been developed by two Irish scientists.

Dr. Eithne Dempsey and her recent Ph.D. student Dr. Colm McKeever, of the Department of Chemistry at Ireland's Maynooth University, have developed a unique electrochemical method which can visualize fingerprints on brass casings, even after they have been exposed to the high-temperature conditions experienced during gunfire. The study is published in the journal Forensic Chemistry.

For decades, investigators have struggled to recover fingerprints from weapons because any biological trace is usually destroyed by the high temperatures, friction and gas released after a gun is fired. As a result, criminals often abandon their weapons or casings at crime scenes, confident that they leave no fingerprint evidence behind.

"The Holy Grail in forensic investigation has always been retrieving prints from fired ammunition casings," said Dr. Dempsey. "Traditionally, the intense heat of firing destroys any biological residue. However, our technique has been able to reveal fingerprint ridges that would otherwise remain imperceptible."

The team found they could coat brass casings with a thin layer of specialized materials to make hidden fingerprint ridges visible. Unlike existing methods that need dangerous chemicals or high-powered equipment, the new process uses readily available non-toxic polymers and minimal amounts of energy to quickly reveal prints from seemingly blank surfaces.

It works by placing the brass casing of interest in an electrochemical cell containing specific chemical substances. When a small voltage is applied, chemicals in the solution are attracted to the surface, coating the spaces between fingerprint ridges and creating a clear, high contrast image of the print. The fingerprint appears within seconds as if by magic!

"Using the burnt material that remains on the surface of the casing as a stencil, we can deposit specific materials in between the gaps, allowing for the visualization," said Dr. McKeever.

Tests showed that this technique also worked on samples aged up to 16 months, demonstrating remarkable durability.

The research has significant implications for criminal investigations, where the current assumption is that firing a gun eliminates fingerprint residues on casings.

"Currently, the best case of forensic analysis of ammunition casings is to match it to the gun that fired it," said Dr. McKeever. "But we hope a method like this could match it back to the actual person who loaded the gun."

The team focused specifically on brass ammunition casings, a material that has traditionally resisted fingerprint detection and is the most common casing material used globally.

The researchers believe that the test for fingerprints on brass they have developed could be adapted for other metallic surfaces, expanding its range of potential forensic applications, from firearm-related crimes to arson.

This technique uses a device called a potentiostat, which controls voltage and can be as portable as a mobile phone, making it possible to create a compact forensic testing kit.

"With this method, we have turned the ammunition casing into an electrode, allowing us to drive chemical reactions at the surface of the casing," said Dr. McKeever.

While promising, the new technology faces rigorous testing and validation before it could potentially be adopted by law enforcement agencies worldwide.

More information: Colm McKeever et al, Electrodeposition of redox materials with potential for enhanced visualisation of latent finger-marks on brass substrates and ammunition casings., Forensic Chemistry (2025). DOI: 10.1016/j.forc.2025.100663


Original Submission

posted by janrinok on Thursday October 09, @12:13PM   Printer-friendly

Guess how much of Britain's direct transatlantic data capacity runs through two cables in Bude?:

The first transatlantic cable, laid in 1858, delivered a little over 700 messages before promptly dying a few weeks later. 167 years on, the undersea cables connecting the UK to the outside world process £220 billion in daily financial transactions. Now, the UK Parliament's Joint Committee on the National Security Strategy (JCNSS) has told the government that it has to do a better job of protecting them.

The Committee's report, released on September 19, calls the government "too timid" in its approach to protecting the cables that snake from the UK to various destinations around the world. It warns that "security vulnerabilities abound" in the UK's undersea cable infrastructure, when even a simple anchor-drag can cause major damage.

There are 64 cables connecting the UK to the outside world, according to the report, carrying most of the country's internet traffic. Satellites can't shoulder the data volumes involved, are too expensive, and only account for around 5 percent of traffic globally.

These cables are invaluable to the UK economy, but they're also difficult to protect. They are heavily shielded in the shallow water close to their landing points, because accidental damage from fishing operations and other vessels is common there. On average, around 200 cables suffer faults each year. But further out, the shielding is less robust. Instead, the companies that lay the cables rely on the depth of the sea to do the job (you'll be pleased to hear that sharks don't generally munch on them).

The report praises the strength of the cable infrastructure and admits that, in some areas at least, there is enough redundancy to handle disruptions. For example, it notes that 75 percent of UK transatlantic traffic routes through two cables that come ashore in Bude, Cornwall. That seems like quite the vulnerability, but the report acknowledges that there is plenty of infrastructure to route around them if anything happened. There is "no imminent threat to the UK's national connectivity," it soothes.

But it simultaneously cautions against adopting what it describes as "business-as-usual" views in the industry. The government "focuses too much on having 'lots of cables' and pays insufficient attention to the system's actual ability to absorb unexpected shocks," it frets. It warns that "the impacts on connectivity would be much more serious," if onward connections to Europe suffered as part of a coordinated attack.

"While our national connectivity does not face immediate danger, we must prepare for the possibility that our cables can be threatened in the event of a security crisis," it says.

Who is the most likely to mount such an attack, if anyone? Russia seems front and center, according to experts. It has reportedly been studying the topic for years. Keir Giles, director at The Centre for International Cyber Conflict and senior consulting fellow of the Russia and Eurasia Programme at Chatham House, argues that Russia has a long history of information warfare that stepped up after it annexed Crimea in 2014.

"The thinking part of the Russian military suddenly decided 'actually, this information isolation is the way to go, because it appears to win wars for us without having to fight them'," Giles says, adding that this approach is often combined with choke holds on land-based information sources. Cutting off the population in the target area from any source of information other than what the Russian troops feed them achieves results at low cost.

In a 2021 paper he co-wrote for the NATO Cooperative Cyber Defence Centre of Excellence, he pointed to the Glavnoye upravleniye glubokovodnykh issledovaniy (Main Directorate for Deep-Water Research, or GUGI), a secretive Russian agency responsible for analyzing undersea cables for intelligence or disruption. According to the JCNSS report, this organization operates the Losharik, a titanium-hulled submarine capable of targeting cables at extreme depth.

You don't need a fancy submarine to snag a cable, as long as you're prepared to do it in plain sight closer to the coast. The JCNSS report points to several incidents around the UK and the Baltics. November last year saw two incidents. In the first, the Chinese-flagged cargo vessel Yi Peng 3 dragged its anchor for 300 km and cut two cables between Sweden and Lithuania. That same month, the UK and Irish navies shadowed Yantar, a Russian research ship loitering around UK cable infrastructure in the Irish Sea.

The following month saw Cook Islands-flagged ship Eagle S damage one power cable and three data cables linking Finland and Estonia. This May, unaffiliated vessel Jaguar approached an underseas cable off Estonia and was escorted out of the country's waters.

The real problem with brute-force physical damage from vessels is that it's difficult to prove that it's intentional. On one hand, it's perfect for an aggressor's plausible deniability, and could also be a way to test the boundaries of what NATO is willing to tolerate. On the other, it could really be nothing.

"Attribution of sabotage to critical undersea infrastructure is difficult to prove, a situation significantly complicated by the prevalence of under-regulated and illegal shipping activities, sometimes referred to as the shadow fleet," a spokesperson for NATO told us.

"I'd push back on an assertion of a coordinated campaign," says Alan Mauldin, research director at analyst company TeleGeography, which examines undersea cable infrastructure warns. He questions assumptions that the Baltic cable damage was anything other than a SNAFU.

The Washington Post also reported comment from officials on both sides of the Atlantic that the Baltic anchor-dragging was probably accidental. Giles scoffs at that. "Somebody had been working very hard to persuade countries across Europe that this sudden spate of cables being broken in the Baltic Sea, one after another, was all an accident, and they were trying to say that it's possible for ships to drag their anchors without noticing," he says.

One would hope that international governance frameworks could help. The UN Convention on the Law of the Sea [PDF] has a provision against messing with undersea cables, but many states haven't enacted the agreement. In any case, plausible deniability makes things more difficult.

"The main challenge in making meaningful governance reforms to secure submarine cables is figuring out what these could be. Making fishing or anchoring accidents illegal would be disproportionate," says Anniki Mikelsaar, doctoral researcher at Oxford University's Oxford Internet Institute. "As there might be some regulatory friction, regional frameworks could be a meaningful avenue to increase submarine cable security."

The difficulty in pinning down intent hasn't stopped NATO from stepping in. In January it launched Baltic Sentry, an initiative to protect undersea infrastructure in the region. That effort includes frigates, patrol aircraft, and naval drones to keep an eye on what happens both above and below the waves.

Regardless of whether vessels are doing this deliberately or by accident, we have to be prepared for it, especially as cable installation shows no sign of slowing. Increasing bandwidth needs will boost global cable kilometers by 48 percent between now and 2040, says TeleGeography, which adds that annual repairs will increase 36 percent over the same period.

"Many cable maintenance ships are reaching the end of their design life cycle, so more investment into upgrading the fleets is needed. This is important to make repairs faster," says Mikelsaar.

There are 62 vessels capable of cable maintenance today, and TeleGeography predicts that'll be enough for the next 15 years. However, it takes time to build these vessels and train the operators, meaning that we'll need to start delivering new vessels soon.

The problem for the UK is that it doesn't own any of that repair capacity, says the JCNSS. It can take a long time to travel to a cable and repair it, and ships can only work on one at a time. The Committee advises that the UK acquire sovereign repair capacity, prescribing a repair ship by 2030.

"This could be leased to industry on favorable terms during peacetime and made available for Government use in a crisis," it says, adding that the Navy should establish a set of reservists that will be trained and ready to operate the vessel.

Sir Chris Bryant MP, the Minister for Data Protection and Telecoms, told the Committee that it was being apocalyptic and "over-egging the pudding" by examining the possibility of a coordinated attack. "We disagree," the Committee said in the report, arguing that the security situation in the next decade is uncertain.

"Focusing on fishing accidents and low-level sabotage is no longer good enough," the report adds. "The UK faces a strategic vulnerability in the event of hostilities. Publicly signaling tougher defensive preparations is vital, and may reduce the likelihood of adversaries mounting a sabotage effort in the first place."

To that end, it has made a battery of recommendations. These include building the risk of a coordinated campaign against undersea infrastructure into its risk scenarios, and protecting the stations - often in remote coastal locations - where the cables come onto land.

The report also recommends that the Department for Science, Innovation and Technology (DSIT) ensures all lead departments have detailed sector-by-sector technical impact studies addressing widespread cable outages.

"Government works around the clock to ensure our subsea cable infrastructure is resilient and can withstand hostile and non-hostile threats," DSIT told El Reg, adding that when breaks happen, the UK has some of the fastest cable repair times in the world, and there's usually no noticeable disruption."

"Working with NATO and Joint Expeditionary Force allies, we're also ensuring hostile actors cannot operate undetected near UK or NATO waters," it added. "We're deploying new technologies, coordinating patrols, and leading initiatives like Nordic Warden alongside NATO's Baltic Sentry mission to track and counter undersea threats."

Nevertheless, some seem worried. Vili Lehdonvirta, head of the Digital Economic Security Lab (DIESL) and professor of Technology Policy at Aalto University, has noticed increased interest from governments and private-sector organizations alike in how much their daily operations depend on overseas connectivity. He says that this likely plays into increased calls for digital sovereignty.

"The rapid increase in data localization laws around the world is partly explained by this desire for increased resilience," he says. "But situating data and workloads physically close as opposed to where it is economically efficient to run them (eg. because of cheaper electricity) comes with an economic cost."

So the good news is that we know exactly how vulnerable our undersea cables are. The bad news is that so does everyone else with a dodgy cargo ship and a good poker face. Sleep tight.


Original Submission

posted by janrinok on Thursday October 09, @07:25AM   Printer-friendly

https://www.abortretry.fail/p/the-qnx-operating-system

Gordon Bell and Dan Dodge were finishing their time at the University of Waterloo in Ontario in 1979. In pursuit of their master's degrees, they'd worked on a system called Thoth in their real-time operating systems course. Thoth was interesting not only for having been real-time and having featured synchronous message passing, but also for originally having been written in the B programming language. It was then rewritten in the UW-native Eh language (fitting for a Canadian university), and then finally rewritten in Zed. It is this last, Zed-written, version of Thoth to which Bell and Dodge would have been exposed. Having always been written in a high-level language, the system was portable, and programs were the same regardless of the underlying hardware. Both by convention and by design, Thoth strongly encouraged programs to be structured as networks of communicating processes. As the final project for the RTOS course, students were expected to implement a real-time system of their own. This experience was likely pivotal to their next adventure.
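
For readers unfamiliar with the model, here is a minimal, purely illustrative Python sketch of the synchronous send/receive/reply pattern that Thoth (and later QNX) was built around; the class and method names are assumptions for illustration and are not the actual Thoth or QNX API.

    # Illustrative sketch of Thoth/QNX-style synchronous message passing using
    # Python threads and queues. send() blocks the client until the server replies,
    # giving the rendezvous semantics the article describes.
    import queue
    import threading

    class Channel:
        def __init__(self):
            self._requests = queue.Queue()

        def send(self, msg):
            reply_box = queue.Queue(maxsize=1)
            self._requests.put((msg, reply_box))
            return reply_box.get()          # block until the server replies

        def receive(self):
            return self._requests.get()     # block until a client sends

        def reply(self, reply_box, answer):
            reply_box.put(answer)           # unblock the waiting client

    def server(chan):
        for _ in range(3):                  # handle one request per client below
            msg, reply_box = chan.receive()
            chan.reply(reply_box, f"echo: {msg}")

    def client(chan, name):
        print(name, "got", chan.send(f"hello from {name}"))

    chan = Channel()
    threading.Thread(target=server, args=(chan,)).start()
    for n in ("A", "B", "C"):
        threading.Thread(target=client, args=(chan, n)).start()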

The linked article is a very deep and excellent dive into the world and history of QNX.


Original Submission

posted by janrinok on Thursday October 09, @02:41AM   Printer-friendly

https://phys.org/news/2025-09-human-skin-cells-fertilisable-eggs.html

Scientists said Tuesday they have turned human skin cells into eggs and fertilized them with sperm in the lab for the first time—a breakthrough that is hoped to one day let infertile people have children.

The technology is still years away from potentially becoming available to aspiring parents, the US-led team of scientists warned.

But outside experts said the proof-of-concept research could eventually change the meaning of infertility, which affects one in six people worldwide.

If successful, the technology called in-vitro gametogenesis (IVG) would allow older women or women who lack eggs for other reasons to genetically reproduce, Paula Amato, the co-author of a new study announcing the achievement, told AFP.

"It also would allow same-sex couples to have a child genetically related to both partners," said Amato, a researcher at the Oregon Health & Science University in the United States.

Scientists have been making significant advances in this field in recent years, with Japanese researchers announcing in July they had created mice with two biological fathers.

But the new study, published in the journal Nature Communications, marks a major advance by using DNA from humans, rather than mice.

The scientists first removed the nucleus from a normal skin cell and transferred it into a donor egg which had its own nucleus removed. This technique, called somatic cell nuclear transfer, was used to clone Dolly the sheep in 1996.

However a problem still had to be overcome: skin cells have 46 chromosomes, but eggs have 23.

The scientists managed to remove these extra chromosomes using a process they are calling "mitomeiosis", which mimics how cells normally divide.

They created 82 developing eggs called oocytes, which were then fertilized by sperm via in vitro fertilization (IVF).

After six days, less than 9% of the embryos developed to the point that they could hypothetically be transferred to the uterus for a standard IVF process.

However the embryos displayed a range of abnormalities, and the experiment was ended.

While the 9% rate was low, the researchers noted that during natural reproduction only around a third of embryos make it to the IVF-ready "blastocyst" stage.

Amato estimated the technology was at least a decade away from becoming widely available.

"The biggest hurdle is trying to achieve genetically normal eggs with the correct number and complement of chromosomes," she said.

Ying Cheong, a reproductive medicine researcher at the UK's University of Southampton, hailed the "exciting" breakthrough.

"For the first time, scientists have shown that DNA from ordinary body cells can be placed into an egg, activated, and made to halve its chromosomes, mimicking the special steps that normally create eggs and sperm," she said.

"While this is still very early laboratory work, in the future it could transform how we understand infertility and miscarriage, and perhaps one day open the door to creating egg- or sperm-like cells for those who have no other options."

Other researchers trying to create eggs in the lab are using a different technique. It involves reprogramming skin cells into what are called induced pluripotent stem cells—which have the potential to develop into any cell in the body—then turning those into eggs.

"It's too early to tell which method will be more successful," Amato said. "Either way, we are still many years away."

The researchers followed existing US ethical guidelines regulating the use of embryos, the study said.

More information: Shoukhrat Mitalipov, Induction of experimental cell division to generate cells with reduced chromosome ploidy, Nature Communications (2025). DOI: 10.1038/s41467-025-63454-7. www.nature.com/articles/s41467-025-63454-7
       


Original Submission

posted by janrinok on Wednesday October 08, @09:58PM   Printer-friendly
from the sky-not-falling dept.

There have been a lot of recent stories about Google restricting sideloading to apps from developers who have registered with Google. Google has issued the very important clarification that adb will still be able to be used to sideload unverified apps: https://support.google.com/android-developer-console/answer/16561738

So, if you own your phone, you can still install whatever you want on it. You just might have to install adb and enable the Developer Options menu first.


Original Submission

posted by janrinok on Wednesday October 08, @05:14PM   Printer-friendly

https://phys.org/news/2025-10-ultra-thin-sodium-alternative-gold.html

From solar panels to next-generation medical devices, many emerging technologies rely on materials that can manipulate light with extreme precision. These materials—called plasmonic materials—are typically made from expensive metals like gold or silver. But what if a cheaper, more abundant metal could do the job just as well or better?

That's the question a team of researchers set out to explore. The challenge? While sodium is abundant and lightweight, it's also notoriously unstable and difficult to work with in the presence of air or moisture—two unavoidable parts of real-world conditions. Until now, this has kept it off the table for practical optical applications.

Researchers from Yale University, Oakland University, and Cornell University have teamed up to change that. By developing a new technique for structuring sodium into ultra-thin, precisely patterned films, they found a way to stabilize the metal and make it perform exceptionally well in light-based applications.

Their approach, published in the journal ACS Nano, involved combining thermally-assisted spin coating with phase-shift photolithography—essentially using heat and light to craft nanoscopic surface patterns that trap and guide light in powerful ways.

Even more impressively, the team used ultrafast laser spectroscopy to observe what happens when these sodium surfaces interact with light on time scales measured in trillionths of a second. The results were surprising: sodium's electrons responded in ways that differ from traditional metals, suggesting it could offer new advantages for light-based technologies like photocatalysis, sensing, and energy conversion.

More information: Conrad A. Kocoj et al, Ultrafast Plasmon Dynamics of Low-Loss Sodium Metasurfaces, ACS Nano (2025). DOI: 10.1021/acsnano.5c04946


Original Submission

posted by hubie on Wednesday October 08, @12:31PM   Printer-friendly

https://www.reuters.com/sustainability/society-equity/apple-removes-ice-tracking-apps-after-pressure-by-trump-administration-2025-10-03/:

Apple said on Thursday that it had removed ICEBlock and other similar ICE-tracking apps from its App Store after it was contacted by President Donald Trump's administration, in a rare instance of apps being taken down due to a U.S. federal government demand.

Alphabet's Google also removed similar apps on Thursday for policy violations, but the company said it was not approached by the Justice Department before taking the action.

The app alerts users to Immigration and Customs Enforcement agents in their area, which the Justice Department says could increase the risk of assault on U.S. agents.

[...] Apple removed more than 1,700 apps from its App Store in 2024 in response to government demands, but the vast majority — more than 1,300 — came from China, followed by Russia with 171 and South Korea with 79.


Original Submission

posted by jelizondo on Wednesday October 08, @07:44AM   Printer-friendly

First Dark Matter Sub-Halo Found In The Milky Way:

There are plenty of theories about what dark matter is and how it might be gravitationally affecting the universe. However, proving those theories out is hard since it hardly ever interacts with anything, especially on "small" scales like galaxies. So when a research team claims to have found evidence for dark matter in our own galaxy, it's worth taking a look at how. A new paper from Dr. Sukanya Chakrabarti and her lab at the University of Alabama in Huntsville (UAH) does just that. They found evidence for a dark matter "sub-halo" in the galactic neighborhood by looking at signals from binary pulsars.

A sub-halo is a clumping of dark matter that is brought together inside of a larger "halo" that is thought to form the core of galaxies. Since dark matter primarily interacts through gravity, the prevailing theory suggests that it should attract "baryonic" (i.e. normal) matter when it clumps together. This clumping is thought to be the scaffolding that galaxies are built on.

Sub-halos are even denser groupings of dark matter that coalesce because of their gravitational attraction. Since they are relatively small compared to the big dark matter halos they are contained in, they can be difficult to detect. To do so, cosmologists would have to find a gravitational signal that deviates from what would be expected given the normal matter surrounding the sub-halo. So far, no one has been able to isolate that kind of signal, despite looking throughout our galactic neighborhood.

Enter binary pulsars - these star pairs contain at least one pulsar, a type of neutron star which emits a large amount of energy on a regular cycle (hence their name). These bursts can be measured so accurately they rival atomic clocks in terms of regularity. The researchers had a theory that they could use deviations in that expected cycle to detect the gravitational effects of a dark matter sub-halo, so they began looking at binary pairs in the galaxy to see if they could find any hint of it.

Overall they looked at 27 binary pulsars, and in particular were looking for correlated gravitational changes between pairs of pulsars, to increase the chance there was indeed a structure causing the deviation. They found two, PSR J1640+2224 and PSR J1713+0747, that had the kind of significant correlated gravitational change they were looking for.

To isolate that gravitational change, the researchers had to eliminate other forms of gravitational acceleration that could be caused by things other than dark matter. One is "gravitational radiation", the acceleration caused when the system gives off gravitational waves, and predicted by the theory of general relativity. Another is the Shklovskii Effect, which is an artifact caused by a binary system moving across our line of sight. Thankfully, both of these effects are well understood and can easily be removed from the calculation of the gravitational influence on the binary system.

Some of that gravitational influence can still come from baryonic matter, but in the case of these two binaries there appeared to be a substantial component that couldn't be explained that way. In fact, the statistics of that additional component were so compelling it's hard to argue that it was caused by anything other than an unseen gravitational mass.

Defining that mass was the next step. The researchers pinpointed it at about 2,340 light years away, and determined its mass to be around 2.45 x 10^7 solar masses. An equivalent amount of baryonic matter causing that gravitational change would be 100 times what is observable in that part of the galaxy.
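
To get a feel for how small the implied signal is, here is a rough, purely illustrative order-of-magnitude sketch treating the sub-halo as a point mass; the physical constants are standard values and are not taken from the paper.

    # Order-of-magnitude estimate of the acceleration a point mass of the reported
    # size would impose at the reported distance: a = G * M / r^2.
    # Constants are standard values; this is an illustration, not the paper's analysis.
    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30         # solar mass, kg
    LIGHT_YEAR = 9.461e15    # metres

    M = 2.45e7 * M_SUN       # reported sub-halo mass
    r = 2_340 * LIGHT_YEAR   # reported distance

    a = G * M / r**2
    print(f"Characteristic acceleration: {a:.2e} m/s^2")   # on the order of 1e-12 m/s^2

Accelerations that small only show up as tiny, slowly accumulating shifts in pulse arrival times, which is why atomic-clock-grade pulsar timing is needed to see them.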

This research represents the first time a dark matter sub-halo has been detected in the general galactic neighborhood, after having been predicted by theory for years. It also offers a technique by which other researchers could do the same with other sets of binary pulsars. Though binary pulsars are rare, astronomers are continually collecting new data on them, giving cosmologists even more to analyze. Likely this won't be the last time we'll hear of this technique being used to find dark matter sub-halos; there are plenty more places to search for them, and likely many more to discover.

arXiv paper: https://doi.org/10.48550/arXiv.2507.16932

Learn More:
    UAH - UAH researchers use pulsar accelerations to detect a dark matter sub-halo in the Milky Way for the first time
    S. Chakrabarti et al - Constraints on a dark matter sub-halo near the Sun from pulsar timing
    UT - Tying Theory To Practice When Searching For Dark Energy
    UT - Astronomers Search for Dark Matter Using Far Away Galaxies


Original Submission