The ship wasn't designed to withstand the powerful ice compression forces—and Shackleton knew it:
In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.
However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice, with no single cause for the sinking. Furthermore, the ship wasn't designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.
Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition's exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he had ever seen; Endurance was "in a brilliant state of preservation."
[...] Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Vasarhelyi, who directed the documentary, particularly noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.
It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton's diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.
[...] Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them which stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship's hull.
[...] Based on his analysis, Tuhkuri concluded that the rudder wasn't the sole or primary reason for the ship's sinking. "Endurance would have sunk even if it did not have a rudder at all," Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes "simply annihilating the ship."
Perhaps the most surprising finding is that Shackleton knew of Endurance's structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An earlier Swedish expedition was forced to abandon its ship, Antarctic, in February 1903 when it became trapped in the ice. Events unfolded much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.
Shackleton knew of Antarctic's fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner's 1911-1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship's hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so and as a result, Deutschland survived eight months of being trapped in compressive ice, until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)
The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914-1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907-1909 expedition. So Shackleton had to know he was taking a big risk.
"Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it," said Tuhkuri. "The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints but the truth is we may never know. At least we now have more concrete findings to flesh out the stories."
Both TFA and the open-access journal reference are very interesting reads.
Journal Reference: Polar Record, 2025. DOI: 10.1017/S0032247425100090
https://phys.org/news/2025-09-forensic-recovers-fingerprints-ammunition-casings.html
A pioneering new test that can recover fingerprints from ammunition casings, once thought nearly impossible, has been developed by two Irish scientists.
Dr. Eithne Dempsey and her recent Ph.D. student Dr. Colm McKeever, of the Department of Chemistry at Ireland's Maynooth University, have developed a unique electrochemical method which can visualize fingerprints on brass casings, even after they have been exposed to the high temperature conditions experienced during gunfire. The study is published in the journal Forensic Chemistry.
For decades, investigators have struggled to recover fingerprints from weapons because any biological trace is usually destroyed by the high temperatures, friction and gas released after a gun is fired. As a result, criminals often abandon their weapons or casings at crime scenes, confident that they leave no fingerprint evidence behind.
"The Holy Grail in forensic investigation has always been retrieving prints from fired ammunition casings," said Dr. Dempsey. "Traditionally, the intense heat of firing destroys any biological residue. However, our technique has been able to reveal fingerprint ridges that would otherwise remain imperceptible."
The team found they could coat brass casings with a thin layer of specialized materials to make hidden fingerprint ridges visible. Unlike existing methods that need dangerous chemicals or high-powered equipment, the new process uses readily available non-toxic polymers and minimal amounts of energy to quickly reveal prints from seemingly blank surfaces.
It works by placing the brass casing of interest in an electrochemical cell containing specific chemical substances. When a small voltage is applied, chemicals in the solution are attracted to the surface, coating the spaces between fingerprint ridges and creating a clear, high contrast image of the print. The fingerprint appears within seconds as if by magic!
"Using the burnt material that remains on the surface of the casing as a stencil, we can deposit specific materials in between the gaps, allowing for the visualization," said Dr. McKeever.
Tests showed that this technique also worked on samples aged up to 16 months, demonstrating remarkable durability.
The research has significant implications for criminal investigations, where the current assumption is that firing a gun eliminates fingerprint residues on casings.
"Currently, the best case of forensic analysis of ammunition casings is to match it to the gun that fired it," said Dr. McKeever. "But we hope a method like this could match it back to the actual person who loaded the gun."
The team focused specifically on brass ammunition casings, a material that has traditionally resisted fingerprint detection and is the most commonly used casing material globally.
The researchers believe that the test for fingerprints on brass they have developed could be adapted for other metallic surfaces, expanding its range of potential forensic applications, from firearm-related crimes to arson.
This technique uses a device called a potentiostat, which controls voltage and can be as portable as a mobile phone, making it possible to create a compact forensic testing kit.
"With this method, we have turned the ammunition casing into an electrode, allowing us to drive chemical reactions at the surface of the casing," said Dr. McKeever.
While promising, the new technique must undergo rigorous testing and validation before it could be adopted by law enforcement agencies worldwide.
More information: Colm McKeever et al, Electrodeposition of redox materials with potential for enhanced visualisation of latent finger-marks on brass substrates and ammunition casings, Forensic Chemistry (2025). DOI: 10.1016/j.forc.2025.100663
Guess how much of Britain's direct transatlantic data capacity runs through two cables in Bude?:
Feature The first transatlantic cable, laid in 1858, delivered a little over 700 messages before promptly dying a few weeks later. 167 years on, the undersea cables connecting the UK to the outside world process £220 billion in daily financial transactions. Now, the UK Parliament's Joint Committee on National Security Strategy (JCNSS) has told the government that it has to do a better job of protecting them.
The Committee's report, released on September 19, calls the government "too timid" in its approach to protecting the cables that snake from the UK to various destinations around the world. It warns that "security vulnerabilities abound" in the UK's undersea cable infrastructure, when even a simple anchor-drag can cause major damage.
There are 64 cables connecting the UK to the outside world, according to the report, carrying most of the country's internet traffic. Satellites can't shoulder the data volumes involved, are too expensive, and only account for around 5 percent of traffic globally.
These cables are invaluable to the UK economy, but they're also difficult to protect. They are heavily shielded in the shallow water close to where they come ashore, because accidental damage from fishing operations and other vessels is common there. On average, around 200 cables suffer faults each year. But further out, the shielding is less robust. Instead, the companies that lay the cables rely on the depth of the sea to protect them (you'll be pleased to hear that sharks don't generally munch on them).
The report acknowledges that the UK's cable infrastructure is strong and that, in some areas at least, there is enough redundancy to handle disruptions. For example, it notes that 75 percent of UK transatlantic traffic routes through two cables that come ashore in Bude, Cornwall. That seems like quite the vulnerability, but the report says there is plenty of infrastructure to route around them if anything were to happen. There is "no imminent threat to the UK's national connectivity," it soothes.
But it simultaneously cautions against adopting what it describes as "business-as-usual" views in the industry. The government "focuses too much on having 'lots of cables' and pays insufficient attention to the system's actual ability to absorb unexpected shocks," it frets. It warns that "the impacts on connectivity would be much more serious," if onward connections to Europe suffered as part of a coordinated attack.
"While our national connectivity does not face immediate danger, we must prepare for the possibility that our cables can be threatened in the event of a security crisis," it says.
Who is most likely to mount such an attack, if anyone? Russia seems front and center, according to experts. It has reportedly been studying the topic for years. Keir Giles, director at The Centre for International Cyber Conflict and senior consulting fellow of the Russia and Eurasia Programme at Chatham House, argues that Russia has a long history of information warfare that stepped up after it annexed Crimea in 2014.
"The thinking part of the Russian military suddenly decided 'actually, this information isolation is the way to go, because it appears to win wars for us without having to fight them'," Giles says, adding that this approach is often combined with choke holds on land-based information sources. Cutting off the population in the target area from any source of information other than what the Russian troops feed them achieves results at low cost.
In a 2021 paper he co-wrote for the NATO Cooperative Cyber Defence Centre of Excellence, he pointed to the Glavnoye upravleniye glubokovodnykh issledovaniy (Main Directorate for Deep-Water Research, or GUGI), a secretive Russian agency responsible for analyzing undersea cables for intelligence or disruption. According to the JCNSS report, this organization operates the Losharik, a titanium-hulled submarine capable of targeting cables at extreme depth.
You don't need a fancy submarine to snag a cable, as long as you're prepared to do it in plain sight closer to the coast. The JCNSS report points to several incidents around the UK and the Baltics. November last year saw two incidents. In the first, the Chinese-flagged cargo vessel Yi Peng 3 dragged its anchor for 300 km and cut two cables between Sweden and Lithuania. That same month, the UK and Irish navies shadowed Yantar, a Russian research ship loitering around UK cable infrastructure in the Irish Sea.
The following month saw the Cook Islands-flagged ship Eagle S damage one power cable and three data cables linking Finland and Estonia. This May, the unaffiliated vessel Jaguar approached an undersea cable off Estonia and was escorted out of the country's waters.
The real problem with brute-force physical damage from vessels is that it's difficult to prove that it's intentional. On one hand, it's perfect for an aggressor's plausible deniability, and could also be a way to test the boundaries of what NATO is willing to tolerate. On the other, it could really be nothing.
"Attribution of sabotage to critical undersea infrastructure is difficult to prove, a situation significantly complicated by the prevalence of under-regulated and illegal shipping activities, sometimes referred to as the shadow fleet," a spokesperson for NATO told us.
"I'd push back on an assertion of a coordinated campaign," says Alan Mauldin, research director at analyst company TeleGeography, which examines undersea cable infrastructure warns. He questions assumptions that the Baltic cable damage was anything other than a SNAFU.
The Washington Post also reported comment from officials on both sides of the Atlantic that the Baltic anchor-dragging was probably accidental. Giles scoffs at that. "Somebody had been working very hard to persuade countries across Europe that this sudden spate of cables being broken in the Baltic Sea, one after another, was all an accident, and they were trying to say that it's possible for ships to drag their anchors without noticing," he says.
One would hope that international governance frameworks could help. The UN Convention on the Law of the Sea [PDF] has a provision against messing with undersea cables, but many states haven't enacted the agreement. In any case, plausible deniability makes things more difficult.
"The main challenge in making meaningful governance reforms to secure submarine cables is figuring out what these could be. Making fishing or anchoring accidents illegal would be disproportionate," says Anniki Mikelsaar, doctoral researcher at Oxford University's Oxford Internet Institute. "As there might be some regulatory friction, regional frameworks could be a meaningful avenue to increase submarine cable security."
The difficulty in pinning down intent hasn't stopped NATO from stepping in. In January it launched Baltic Sentry, an initiative to protect undersea infrastructure in the region. That effort includes frigates, patrol aircraft, and naval drones to keep an eye on what happens both above and below the waves.
Regardless of whether vessels are doing this deliberately or by accident, we have to be prepared for it, especially as cable installation shows no sign of slowing. Increasing bandwidth needs will boost global cable kilometers by 48 percent between now and 2040, says TeleGeography, which also expects annual repairs to increase 36 percent over the same period.
"Many cable maintenance ships are reaching the end of their design life cycle, so more investment into upgrading the fleets is needed. This is important to make repairs faster," says Mikelsaar.
There are 62 vessels capable of cable maintenance today, and TeleGeography predicts that'll be enough for the next 15 years. However, it takes time to build these vessels and train the operators, meaning that we'll need to start delivering new vessels soon.
The problem for the UK, says the JCNSS, is that it doesn't own any of that repair capacity. It can take a long time to travel to a cable and repair it, and ships can only work on one at a time. The Committee advises that the UK acquire sovereign repair capacity of its own, prescribing a repair ship by 2030.
"This could be leased to industry on favorable terms during peacetime and made available for Government use in a crisis," it says, adding that the Navy should establish a set of reservists that will be trained and ready to operate the vessel.
Sir Chris Bryant MP, the Minister for Data Protection and Telecoms, told the Committee that it was being apocalyptic and "over-egging the pudding" by examining the possibility of a co-ordinated attack. "We disagree," the Committee said in the report, arguing that the security situation in the next decade is uncertain.
"Focusing on fishing accidents and low-level sabotage is no longer good enough," the report adds. "The UK faces a strategic vulnerability in the event of hostilities. Publicly signaling tougher defensive preparations is vital, and may reduce the likelihood of adversaries mounting a sabotage effort in the first place."
To that end, it has made a battery of recommendations. These include building the risk of a coordinated campaign against undersea infrastructure into its risk scenarios, and protecting the stations - often in remote coastal locations - where the cables come onto land.
The report also recommends that the Department for Science, Innovation and Technology (DSIT) ensures all lead departments have detailed sector-by-sector technical impact studies addressing widespread cable outages.
"Government works around the clock to ensure our subsea cable infrastructure is resilient and can withstand hostile and non-hostile threats," DSIT told El Reg, adding that when breaks happen, the UK has some of the fastest cable repair times in the world, and there's usually no noticeable disruption."
"Working with NATO and Joint Expeditionary Force allies, we're also ensuring hostile actors cannot operate undetected near UK or NATO waters," it added. "We're deploying new technologies, coordinating patrols, and leading initiatives like Nordic Warden alongside NATO's Baltic Sentry mission to track and counter undersea threats."
Nevertheless, some seem worried. Vili Lehdonvirta, head of the Digital Economic Security Lab (DIESL) and professor of Technology Policy at Aalto University, has noticed increased interest from governments and private sector organizations alike in how much their daily operations depend on overseas connectivity. He says that this likely plays into increased calls for digital sovereignty.
"The rapid increase in data localization laws around the world is partly explained by this desire for increased resilience," he says. "But situating data and workloads physically close as opposed to where it is economically efficient to run them (eg. because of cheaper electricity) comes with an economic cost."
So the good news is that we know exactly how vulnerable our undersea cables are. The bad news is that so does everyone else with a dodgy cargo ship and a good poker face. Sleep tight.
https://www.abortretry.fail/p/the-qnx-operating-system
Gordon Bell and Dan Dodge were finishing their time at the University of Waterloo in Ontario in 1979. In pursuit of their master's degrees, they'd worked on a system called Thoth in their real-time operating systems course. Thoth was interesting not only for having been real-time and having featured synchronous message passing, but also for originally having been written in the B programming language. It was then rewritten in the UW-native Eh language (fitting for a Canadian university), and then finally rewritten in Zed. It is this last, Zed-written, version of Thoth to which Bell and Dodge would have been exposed. Having always been written in a high-level language, the system was portable, and programs were the same regardless of the underlying hardware. Both by convention and by design, Thoth strongly encouraged programs to be structured as networks of communicating processes. As the final project for the RTOS course, students were expected to implement a real-time system of their own. This experience was likely pivotal to their next adventure.
A very deep and excellent dive into the world/history of QNX:
https://phys.org/news/2025-09-human-skin-cells-fertilisable-eggs.html
Scientists said Tuesday they have turned human skin cells into eggs and fertilized them with sperm in the lab for the first time—a breakthrough that, it is hoped, could one day let infertile people have children.
The technology is still years away from potentially becoming available to aspiring parents, the US-led team of scientists warned.
But outside experts said the proof-of-concept research could eventually change the meaning of infertility, which affects one in six people worldwide.
If successful, the technology called in-vitro gametogenesis (IVG) would allow older women or women who lack eggs for other reasons to genetically reproduce, Paula Amato, the co-author of a new study announcing the achievement, told AFP.
"It also would allow same-sex couples to have a child genetically related to both partners," said Amato, a researcher at the Oregon Health & Science University in the United States.
Scientists have been making significant advances in this field in recent years, with Japanese researchers announcing in July they had created mice with two biological fathers.
But the new study, published in the journal Nature Communications, marks a major advance by using DNA from humans, rather than mice.
The scientists first removed the nuclei from normal skin cells and transferred them into donor eggs that had their own nuclei removed. This technique, called somatic cell nuclear transfer, was used to clone Dolly the sheep in 1996.
However a problem still had to be overcome: skin cells have 46 chromosomes, but eggs have 23.
The scientists managed to remove these extra chromosomes using a process they are calling "mitomeiosis", which mimics how cells normally divide.
They created 82 developing eggs called oocytes, which were then fertilized by sperm via in vitro fertilization (IVF).
After six days, less than 9% of the embryos developed to the point that they could hypothetically be transferred to the uterus for a standard IVF process.
However the embryos displayed a range of abnormalities, and the experiment was ended.
While the 9% rate was low, the researchers noted that during natural reproduction only around a third of embryos make it to the IVF-ready "blastocyst" stage.
Amato estimated the technology was at least a decade away from becoming widely available.
"The biggest hurdle is trying to achieve genetically normal eggs with the correct number and complement of chromosomes," she said.
Ying Cheong, a reproductive medicine researcher at the UK's University of Southampton, hailed the "exciting" breakthrough.
"For the first time, scientists have shown that DNA from ordinary body cells can be placed into an egg, activated, and made to halve its chromosomes, mimicking the special steps that normally create eggs and sperm," she said.
"While this is still very early laboratory work, in the future it could transform how we understand infertility and miscarriage, and perhaps one day open the door to creating egg- or sperm-like cells for those who have no other options."
Other researchers trying to create eggs in the lab are using a different technique. It involves reprogramming skin cells into what are called induced pluripotent stem cells—which have the potential to develop into any cell in the body—then turning those into eggs.
"It's too early to tell which method will be more successful," Amato said. "Either way, we are still many years away."
The researchers followed existing US ethical guidelines regulating the use of embryos, the study said.
More information: Shoukhrat Mitalipov, Induction of experimental cell division to generate cells with reduced chromosome ploidy, Nature Communications (2025). DOI: 10.1038/s41467-025-63454-7. www.nature.com/articles/s41467-025-63454-7
There have been a lot of recent stories about Google restricting sideloading to apps from developers who have registered with Google. Google has issued the very important clarification that adb can still be used to sideload unverified apps: https://support.google.com/android-developer-console/answer/16561738
So, if you own your phone, you can still install whatever you want on it. You just might have to install adb and enable the Developer Options menu first.
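As a rough illustration, here is a minimal sketch (not from Google's documentation) of driving that workflow from a desktop machine with Python's standard subprocess module. It assumes the Android SDK platform-tools (which provide adb) are on your PATH, USB debugging is already enabled under Developer Options, and the APK filename is a placeholder:

    # Sketch only: sideloading an APK over adb from a connected computer.
    # Assumes adb is installed and the phone has USB debugging enabled.
    import subprocess

    def sideload(apk_path: str) -> None:
        # List attached devices to confirm the phone is connected and authorized.
        subprocess.run(["adb", "devices"], check=True)
        # Install the package; -r reinstalls/updates if the app is already present.
        subprocess.run(["adb", "install", "-r", apk_path], check=True)

    if __name__ == "__main__":
        sideload("my_unverified_app.apk")  # hypothetical filename

The same thing can, of course, be done by typing the two adb commands directly into a terminal.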
https://phys.org/news/2025-10-ultra-thin-sodium-alternative-gold.html
From solar panels to next-generation medical devices, many emerging technologies rely on materials that can manipulate light with extreme precision. These materials—called plasmonic materials—are typically made from expensive metals like gold or silver. But what if a cheaper, more abundant metal could do the job just as well or better?
That's the question a team of researchers set out to explore. The challenge? While sodium is abundant and lightweight, it's also notoriously unstable and difficult to work with in the presence of air or moisture—two unavoidable parts of real-world conditions. Until now, this has kept it off the table for practical optical applications.
Researchers from Yale University, Oakland University, and Cornell University have teamed up to change that. By developing a new technique for structuring sodium into ultra-thin, precisely patterned films, they found a way to stabilize the metal and make it perform exceptionally well in light-based applications.
Their approach, published in the journal ACS Nano, involved combining thermally-assisted spin coating with phase-shift photolithography—essentially using heat and light to craft nanoscopic surface patterns that trap and guide light in powerful ways.
Even more impressively, the team used ultrafast laser spectroscopy to observe what happens when these sodium surfaces interact with light on time scales measured in trillionths of a second. The results were surprising: sodium's electrons responded in ways that differ from traditional metals, suggesting it could offer new advantages for light-based technologies like photocatalysis, sensing, and energy conversion.
More information: Conrad A. Kocoj et al, Ultrafast Plasmon Dynamics of Low-Loss Sodium Metasurfaces, ACS Nano (2025). DOI: 10.1021/acsnano.5c04946
Apple said on Thursday that it had removed ICEBlock and other similar ICE-tracking apps from its App Store after it was contacted by President Donald Trump's administration, in a rare instance of apps being taken down due to a U.S. federal government demand.
Alphabet's Google also removed similar apps on Thursday for policy violations, but the company said it was not approached by the Justice Department before taking the action.
The app alerts users to Immigration and Customs Enforcement agents in their area, which the Justice Department says could increase the risk of assault on U.S. agents.
[...] Apple removed more than 1,700 apps from its App Store in 2024 in response to government demands, but the vast majority — more than 1,300 — came from China, followed by Russia with 171 and South Korea with 79.
First Dark Matter Sub-Halo Found In The Milky Way:
There are plenty of theories about what dark matter is and how it might be gravitationally affecting the universe. However, proving those theories out is hard since it hardly ever interacts with anything, especially on "small" scales like galaxies. So when a research team claims to have found evidence for dark matter in our own galaxy, it's worth taking a look at how. A new paper from Dr. Sukanya Chakrabarti and her lab at the University of Alabama in Huntsville (UAH) does just that. They found evidence for a dark matter "sub-halo" in the galactic neighborhood by looking at signals from binary pulsars.
A sub-halo is a clumping of dark matter that is brought together inside a larger "halo" that is thought to form the core of galaxies. Since dark matter primarily interacts through gravity, the prevailing theory suggests that it should attract "baryonic" (i.e. normal) matter when it clumps together. This clumping is thought to be the scaffolding that galaxies are built on.
Sub-halos are even denser groupings of dark matter that coalesce because of their gravitational attraction. Since they are relatively small compared to the big dark matter halos they are contained in, they can be difficult to detect. To do so, cosmologists would have to find a gravitational signal that deviates from what would be expected given the normal matter surrounding the sub-halo. So far, no one has been able to isolate that kind of signal, despite looking throughout our galactic neighborhood.
Enter binary pulsars - these star pairs contain at least one pulsar, a type of neutron star which emits a large amount of energy on a regular cycle (hence their name). These bursts can be measured so accurately they rival atomic clocks in terms of regularity. The researchers had a theory that they could use deviations in that expected cycle to detect the gravitational effects of a dark matter sub-halo, so they began looking at binary pairs in the galaxy to see if they could find any hint of it.
Overall they looked at 27 binary pulsars, and in particular looked for correlated gravitational changes between at least two of them, to increase the chance that a real structure was causing the deviation. They found two, PSR J1640+2224 and PSR J1713+0747, that showed the kind of significant, correlated gravitational change they were looking for.
To isolate that gravitational change, the researchers had to eliminate other forms of gravitational acceleration that could be caused by things other than dark matter. One is "gravitational radiation", the acceleration caused when the system gives off gravitational waves, and predicted by the theory of general relativity. Another is the Shklovskii Effect, which is an artifact caused by a binary system moving across our line of sight. Thankfully, both of these effects are well understood and can easily be removed from the calculation of the gravitational influence on the binary system.
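To make that bookkeeping concrete, here is a minimal, hedged sketch of the subtraction (this is not the team's actual pipeline, and the numbers are placeholders rather than the measured values for PSR J1640+2224 or PSR J1713+0747). It uses the standard decomposition in which the observed orbital-period derivative divided by the period is the intrinsic general-relativistic term, plus the Shklovskii term mu^2 * d / c, plus the line-of-sight acceleration divided by c:

    # Sketch only: estimate the excess line-of-sight acceleration of a binary
    # pulsar after removing the general-relativistic decay and the Shklovskii
    # effect. Input values below are made up for illustration.
    C = 2.998e8            # speed of light, m/s
    KPC = 3.086e19         # metres per kiloparsec

    def shklovskii_term(proper_motion_mas_yr: float, distance_kpc: float) -> float:
        """Apparent (Pb_dot / Pb) caused purely by transverse motion: mu^2 * d / c."""
        mu = proper_motion_mas_yr * 4.848e-9 / 3.156e7   # mas/yr -> rad/s
        return mu ** 2 * distance_kpc * KPC / C          # units: 1/s

    def excess_acceleration(pbdot_over_pb_obs: float, pbdot_over_pb_gr: float,
                            proper_motion_mas_yr: float, distance_kpc: float) -> float:
        """Line-of-sight acceleration (m/s^2) left over after the subtraction."""
        shk = shklovskii_term(proper_motion_mas_yr, distance_kpc)
        return C * (pbdot_over_pb_obs - pbdot_over_pb_gr - shk)

    # Placeholder inputs: observed and GR-predicted Pb_dot/Pb in 1/s,
    # proper motion in mas/yr, distance in kpc.
    print(excess_acceleration(1.2e-18, 0.9e-18, 4.0, 1.2))

Whatever acceleration remains after this step, once the pull of visible baryonic matter is also modeled, is the kind of residual signal attributed here to a dark matter sub-halo.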
Some of that gravitational influence can still come from baryonic matter, but in the case of these two binaries there appeared to be a substantial component that couldn't be explained that way. In fact, the statistics of that additional component were so compelling it's hard to argue that it was caused by anything other than an unseen gravitational mass.
Defining that mass was the next step. The researchers pinpointed it at about 2,340 light years away, and determined its mass to be around 2.45 × 10⁷ solar masses. An equivalent amount of baryonic matter causing that gravitational change would be 100 times what is observable in that part of the galaxy.
This research represents the first time a dark matter sub-halo has been detected in the general galactic neighborhood, after having been predicted by theory for years. It also offers a technique by which other researchers could do the same with other sets of binary pulsars. Though binary pulsars are rare, astronomers are continually collecting new data on them, giving cosmologists even more to analyze. This likely won't be the last time we hear of this technique being used to find dark matter sub-halos - there are plenty more places to search, and likely many more to discover.
arXiv paper: https://doi.org/10.48550/arXiv.2507.16932
Learn More:
UAH - UAH researchers use pulsar accelerations to detect a dark matter sub-halo in the Milky Way for the first time
S. Chakrabarti et al - Constraints on a dark matter sub-halo near the Sun from pulsar timing
UT - Tying Theory To Practice When Searching For Dark Energy
UT - Astronomers Search for Dark Matter Using Far Away Galaxies
Instagram says it is not listening to users' microphones to serve ads:
Adam Mosseri, the head of Instagram, has shared a video on his account to dismiss the myth that Instagram is actively listening to users to show them relevant ads. Now, why would you say that? Unless it was true! Right?
Jokes aside, the timing couldn't be worse. Yesterday, Meta announced that it will be updating its privacy policy by December 16. Why? Because Meta says it will use the data collected from user interactions with its AI to sell targeted ads across its social networks. So, how is this going to work, privacy-wise? Well, that's another story.
The story continues at https://www.ghacks.net/2025/10/02/instagram-says/
https://phys.org/news/2025-10-track-medicines-hitchhike-cholesterol.html
Researchers at The University of Queensland have developed a test that could change our understanding of cholesterol and its potential to ferry deadly cancer messages and life-saving medicines around our body.
Technology developed by Ph.D. scholar Raluca Ghebosu and Associate Professor Joy Wolfram has been designed to assess how different medicines bind to cholesterol particles in our bloodstream.
Ghebosu said mapping this cholesterol "hitchhiking" was crucial to establishing how long medicines remain active, which organs they travel to and, ultimately, how effective they are.
"The problem is that this process requires costly, complex, and time-consuming techniques," Ghebosu said.
"All are significant barriers to what could be a crucial breakthrough in medicine. Our test seeks to simplify the mapping process while also making it much cheaper to get answers."
The new test, named lipoprotein association fluorometry (LAF), was developed at UQ's Australian Institute for Bioengineering and Nanotechnology (AIBN) under the mentorship of Associate Professor Wolfram from the AIBN and the UQ School of Chemical Engineering.
Ghebosu said LAF works by using a fluorescent signal to detect when a "molecular handshake" occurs between cholesterol particles and test agents such as medicines.
Crucially, each test is low cost and delivers results in an hour instead of a week.
"By predicting cholesterol binding to various synthetic nanoparticles—as well as polymers, proteins, peptides, and small molecules—we are better placed to uncover information beneficial to drug development," Ghebosu said.
As well as measuring how medicines bind to cholesterol, Ghebosu said the LAF also showed how cancer cells could be exploiting the same hitchhiking system.
The technology revealed that tiny messenger particles released by cancer cells—called extracellular vesicles (EVs)—display a particular affinity for latching onto "bad" cholesterol particles.
Previous studies have shown that these messenger particles help cancers grow and spread, meaning the binding patterns of these particles with cholesterol could open up new inquiries on how to slow or stop the disease.
"Cholesterol is essential for normal body function, but EVs released by cancer cells may be hijacking this system," Ghebosu said.
Ghebosu said their team had already used the LAF technology to show EVs released by metastatic, or "super-spreading," cancer cells were much better at binding to bad cholesterol compared to those from healthy cells and less aggressive cancer cells.
Associate Professor Wolfram said the binding patterns revealed by LAF could advance our understanding of biological processes and inform future medical applications.
"Ultimately, this raises the possibility that cholesterol may contribute to the spread of cancer through EV binding," Associate Professor Wolfram said.
"Thankfully, our technique can also be used to explore therapies that help block this binding and assess whether this could slow cancer progression."
The findings are the subject of a patent application, which is available for licensing from UniQuest, the commercialization company of UQ, and are detailed in the Journal of Extracellular Vesicles.
More information: Raluca Ghebosu et al, Lipoprotein Association Fluorometry (LAF) as a Semi-Quantitative Characterization Tool to Assess Extracellular Vesicle-Lipoprotein Binding, Journal of Extracellular Vesicles (2025). DOI: 10.1002/jev2.70172
Forget Sparta - Leonidas wiped out all 61 drones in the demo:
Drones have become one of the most important elements of modern warfare, which is why finding effective ways of disabling them is so important. US defense technology firm Epirus has developed a countermeasure that could interest a lot of nations: a high-powered microwave (HPM) weapon that recently demonstrated how it can down 49 drones with a single blast.
[...] During the demo, Leonidas successfully disabled all 61 drones taking part. The headline feat was taking down a 49-drone swarm with a single shot.
[...] Because of how cheaply they can be procured, huge numbers of drones are being used in warfare and for reconnaissance. Leonidas' bursts of electromagnetic energy are designed to disable or destroy the electronics inside hostile drones, including swarms.
The latest, second-generation Leonidas introduces gallium nitride semiconductors. This enables a more compact, reliable, and scalable system than legacy microwave weapons that relied on bulkier magnetron tubes.
[...] Leonidas' software-defined architecture allows operators to adjust waveforms on the fly, tailoring effects to different targets and minimizing risks to nearby friendly systems. The modular build also means the weapon can be deployed in multiple configurations, from vehicle-mounted platforms to fixed-site defenses.
Epirus has also developed specialized versions of the weapon, such as Leonidas Expeditionary, optimized for rapid deployment by ground forces, and Leonidas H2O, designed for maritime operations capable of disabling small boat motors.
[...] The latest Leonidas weapon comes at a time when AI-powered drone swarms are moving closer to frontline deployment. Able to fly and fight as one coordinated force, these drones could overwhelm traditional air defenses, which is why HPM weapons are becoming increasingly important.
https://phys.org/news/2025-10-graphene-reveals-dome-superconductivity-electric.html
Superconductivity is a phenomenon where certain materials can conduct electricity with zero resistance. Obviously, this has enormous technological advantages, which makes superconductivity one of the most intensely researched fields in the world.
But superconductivity is not straightforward. Take, for example, the double-dome effect. When scientists plot where superconductivity appears in a material as they change how many electrons are in it, the material's superconducting regions sometimes look like two separate "domes" on a graph.
In other words, the material becomes superconducting, then stops, then becomes superconducting again as we keep changing its electron density.
Double-dome superconductivity has been seen before in some complex materials, such as graphene. Graphene is essentially a sheet of carbon atoms just one atom thick linked together in a honeycomb pattern. Still, it has transformed the field of quantum materials research because it features some really strange effects.
For example, when we stack two graphene layers and twist them at specific angles, the electrons in the graphene behave in new and unexpected ways, creating quantum phases like magnetism, electrical insulation, and, of course, superconductivity.
But there is a form of graphene that takes this further by adding a third layer, making the system even more complex and tunable: magic-angle twisted trilayer graphene (MATTG). With MATTG, researchers can now observe and control a double-dome pattern of superconductivity that was previously only suspected in graphene systems.
In a new study published in Nature Physics, a team led by Mitali Banerjee at EPFL, together with partners in Switzerland, the U.K., and Japan, has shown that MATTG allows direct control of the double-dome superconductivity pattern. By carefully stacking the layers and adjusting the electric field, the researchers could fine-tune the system and track where superconductivity appeared or disappeared as they varied the number of electrons.
Their experiments, supported by theory, revealed that two distinct superconducting regions—the domes—show up as they gradually changed the electron count in MATTG. The work sheds light on how unconventional superconductivity can be created and controlled in 2D materials.
The researchers built devices consisting of three layers of graphene, stacked so the middle one is twisted by about 1.55 degrees relative to the others. They placed the stack between thin layers of insulating hexagonal boron nitride, then added electrodes and gates to precisely control the electron density and apply an electric "displacement field," which let the researchers adjust how electrons move in the material, making it possible to turn superconductivity on or off.
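For readers unfamiliar with dual-gated devices, the sketch below shows the standard electrostatic bookkeeping generally used to convert the two gate voltages into a carrier density and a displacement field. This is a generic textbook relation, assumed here rather than taken from the paper, and the hBN thickness and voltages are placeholders:

    # Generic sketch of dual-gate electrostatics for a 2D device such as MATTG.
    # The carrier density n is set by the sum of the gate charges,
    # the displacement field D by their difference.
    E0 = 8.854e-12        # vacuum permittivity, F/m
    E_CHARGE = 1.602e-19  # elementary charge, C

    def gate_to_n_and_D(v_top: float, v_bottom: float,
                        c_top: float, c_bottom: float) -> tuple[float, float]:
        """c_top / c_bottom are gate capacitances per unit area (F/m^2), fixed by
        the hBN dielectric thickness. Returns (n in carriers/m^2, D in V/m)."""
        n = (c_top * v_top + c_bottom * v_bottom) / E_CHARGE
        d = (c_top * v_top - c_bottom * v_bottom) / (2 * E0)
        return n, d

    # Placeholder example: ~30 nm hBN (relative permittivity ~3.4) on each side,
    # with +1 V on the top gate and -1 V on the bottom gate.
    c_hbn = 3.4 * E0 / 30e-9
    print(gate_to_n_and_D(1.0, -1.0, c_hbn, c_hbn))

Sweeping the two gate voltages together changes the electron density at a fixed displacement field, while sweeping them in opposition changes the displacement field at a fixed density, which is how the two knobs can be varied independently.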
The scientists then measured how MATTG's resistance changed as they varied the electron density, magnetic field, and applied current at temperatures close to absolute zero (100 millikelvin). This allowed them to map out the regions where superconductivity appeared.
By tuning the displacement field, they could further tune the material's band structure (the set of rules that determines how electrons can move and behave inside the material), letting them control the emergence and disappearance of the double-dome pattern.
The team observed that superconductivity in twisted trilayer graphene does not form a single, smooth region but instead splits into two separate domes as the electron density is tuned. Between the domes, superconductivity is strongly suppressed, indicating a possible competition or change in the underlying pairing mechanism.
Each dome displayed unique features: one side showed a sharper and more sudden switch into the superconducting state, and the measurements showed a kind of "memory" in how the material responded to electrical current: how it reacted to increasing current wasn't the same as how it reacted to decreasing current. The other dome had a gentler, slower transition into superconductivity with no evidence of "memory."
The researchers developed theoretical work (Hartree-Fock calculations) to interpret their experimental findings, showing that subtle changes in how the electrons arrange themselves, which are shaped by both interactions and the applied displacement field, determine where superconductivity is favored. The data point to different types of electron pairing in the two domes, possibly linked to changes in the electronic "order" of the system.
The study highlights MATTG as the first system where double-dome superconductivity can be directly controlled by an electric field. It offers a new way to study how unconventional superconductivity emerges and how it can be tuned, opening up possibilities for designing quantum devices or exploring new states of matter in engineered materials.
More information: Zekang Zhou et al, Gate-tunable double-dome superconductivity in twisted trilayer graphene, Nature Physics (2025). DOI: 10.1038/s41567-025-03040-2
A group of physicists from Harvard and MIT just built a quantum computer that ran continuously for more than two hours. Although it doesn't sound like much versus regular computers (like servers that run 24/7 for months, if not years), this is a huge breakthrough in quantum computing. As reported by The Harvard Crimson, most current quantum computers run for only a few milliseconds, with record-breaking machines only able to operate for a little over 10 seconds.
Although two hours is still a bit limited, researchers say that the concept behind this could allow future quantum computers to run for much longer, maybe even indefinitely. "There is still a way to go and scale from where we are now," says research associate Tout T. Wang, "But the roadmap is now clear based on the breakthrough experiments that we've done here at Harvard."
[Source]: The Harvard Crimson
Artificial intelligence startups are attracting record sums of venture capital, but senior executives at some of the world's largest investors warned on Friday that early-stage valuations are starting to look frothy.
"There's a little bit of a hype bubble going on in the early-stage venture space," said Bryan Yeo, group chief investment officer at Singapore sovereign wealth fund GIC (GIC.UL), as part of a panel discussion at the Milken Institute Asia Summit 2025 in Singapore.
"Any company startup with an AI label will be valued right up there at huge multiples of whatever the small revenue (is)," he said. "That might be fair for some companies and probably not for others."
In the first quarter of 2025, AI startups raised $73.1 billion globally, accounting for 57.9% of all venture capital funding, according to PitchBook. The surge was driven by funding rounds like OpenAI's $40 billion capital raising, as investors raced to catch the AI wave.
"Market expectations could be way ahead of what the technology could deliver," Yeo said. "We're seeing a major AI capex boom today. It is masking some of the potential weaknesses that might be going on in the economy."
Todd Sisitsky, president of alternative asset manager TPG, said the fear of missing out is dangerous for investors, though he added that views were divided on whether the AI sector had formed a bubble.
Some AI firms are hitting $100 million in revenue within months, he said, while others in early-stage ventures command valuations of between $400 million and $1.2 billion per employee. He said that was "breathtaking."
And perhaps a case in point? . . .
Database startup Supabase notches $5 billion valuation in funding round - The Economic Times:
Open-source database startup Supabase said on Friday it has secured a valuation of $5 billion in its latest funding round, as investors continue to back companies riding the wave of the artificial intelligence boom.
[...] Supabase is a backend platform that helps developers build applications quickly and has benefited from the rise in AI-powered coding assistants.
The platform is built on the PostgreSQL open-source database, a system for organizing and managing information online.
The latest capital-raise comes just months after Supabase's Series D round, which reportedly valued it at $2 billion.
[...] Coding platforms such as Lovable and Bolt run on Supabase, which caters to more than four million developers. The company's customers also include enterprises such as PwC, McDonald's and GitHub Next.
"This new financing aims to accelerate Supabase's efforts to become the backend for everyone, from startups to some of the most demanding, data-intensive enterprise workloads," said Caryn Marooney, general partner at investment management firm Coatue.