Microsoft has recently come under fire for how its Edge browser handles saved passwords. Security expert Tom Jøran Sønstebyseter Rønning has shared a worrying discovery about the Microsoft Edge web browser: when you use Edge to save your passwords, the browser decrypts them into plaintext in memory as soon as the app starts.
For context, plaintext means the passwords are not scrambled or hidden. They sit in the computer's memory as readable text that anyone with administrative privileges or SYSTEM-level access can read.
Rønning shared these findings at a tech event in Oslo called Big Bite of Tech 26, hosted by the security firm Palo Alto Networks Norway. He explained that Edge is the only browser he tested that works this way; other browsers such as Google Chrome are safer because they use a method called App-Bound Encryption (ABE).
This feature locks the passwords to the specific browser app and only unscrambles them when you actually need to log in to a site. Once you are done, the browser hides them again.
The main worry is that these passwords stay in the computer memory even if you never visit the websites they belong to. To show how easy it is to see this data, Rønning created a tool called EdgeSavedPasswordsDumper and put it on GitHub.
This tool proves that if a hacker or an infostealer gets control of a computer, they can scan the process memory of the browser to find these saved passwords.
This is a big deal for offices that use terminal servers, Citrix, or Virtual Desktop Infrastructure (VDI), where many people share one machine. In these shared setups, an attacker with administrative rights can perform cross-process memory access to see the data of every user who is logged in and then steal passwords from people who aren't even using the browser at that moment.
When Rønning told Microsoft about this, the company said the setup was by design. The company maintains that they have to balance how fast the browser works with how safe it is. They believe that if a hacker has already gained in-depth access to your computer to scan the memory, the device is already in big trouble.
Because Microsoft doesn't plan to change this soon, some experts suggest changing how you save your details. While Chrome uses better protection to stop other processes from stealing its keys, no browser is perfect. So, it's better to use a dedicated password manager instead of saving passwords inside your web browser, as this keeps your data out of the browser's memory, where attackers can easily find it.
Experts shared their thoughts with Hackread.com, warning that this design choice creates a massive safety gap. Craig Lurey, from the Chicago-based firm Keeper Security, noted that while Windows tries to keep apps separate, one program can still often "pillage" the memory of another.
He added that since plaintext passwords exist in Edge's memory, other processes can read them "without restriction." To fight this, his firm created Keeper Forcefield, which uses kernel-level protection to block hackers from reading app memory even if the computer is already compromised.
Morey Haber, from the Atlanta-based firm BeyondTrust, also criticised the move. He explained that passwords should be "transient secrets" that are used and then quickly discarded. "The moment a password is retained in clear text memory... it stops being an authentication mechanism and becomes a liability," Haber warned. He added that if a password can be read in memory by a human or a malicious process, "it is already compromised."
The Chinese AI chip market has bifurcated with unusual speed. Just 18 months ago, Nvidia supplied the vast majority of AI training and inference silicon used by Chinese cloud providers. Today, Huawei's Ascend 950PR is the primary procurement target for China's largest tech companies, and a training-focused successor named the 950DT is scheduled for Q4 this year.
The 950PR is currently the only Chinese-made AI processor that supports FP8, a compressed numerical format that allows more operations per second and lowers per-query costs. DeepSeek's V4 model uses a Mixture-of-Experts architecture with up to 1 trillion total parameters but activates only around 37 billion per inference pass. That favors inference-efficient hardware, which plays to the 950PR's strengths while sidestepping its limitations in raw training throughput.
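The Mixture-of-Experts idea is easy to sketch: a router scores a pool of experts for each token and only the top-k actually run, which is why only about 37 billion of V4's trillion total parameters (roughly 3.7%) are active on any given pass. The toy routing function below is purely illustrative; the expert count, scores, and k=2 are made-up values, not DeepSeek's actual configuration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(gate_scores, k=2):
    """Pick the top-k experts for one token; only those experts run."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([gate_scores[i] for i in chosen])
    return list(zip(chosen, weights))

# 8 hypothetical experts; only 2 run for this token, so the other
# 6 experts' parameters stay idle on this pass.
scores = [0.05, 0.90, 0.10, 0.70, 0.20, 0.15, 0.30, 0.02]
print(route_token(scores))  # experts 1 and 3 are selected
```

The sparsity is the whole point: compute per token scales with the active experts, not the total parameter count, which is what makes inference-tuned chips like the 950PR a good fit.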
DeepSeek gave Huawei early optimization access, but didn’t extend the same to Nvidia or AMD. While V4's open weights are released in standard formats compatible with CUDA-based frameworks, DeepSeek's own infrastructure runs on Huawei Ascend silicon. The collaboration has pulled forward procurement timelines across the Chinese cloud industry, and chip prices for the 950PR have reportedly risen by about 20% as a result of the demand.
Meanwhile, SMIC has been working on expanding its advanced-node capacity for more than a year. The goal is a five-fold increase over a period of two years that’ll lift 7nm and 5nm production to 100,000 wafers per month and half a million by 2030. In addition, the combined capacity for 22nm and below could rise from 30,000-50,000 wafer starts per month in 2025 to 50,000-60,000 or higher this year. Huawei is adding two dedicated fabrication plants, though ownership structures remain unclear. Once fully operational, those facilities could exceed the current output of comparable lines at SMIC.
Yields remain a thorn in China’s side, with SMIC’s 7nm-class process delivering substantially fewer good dies per wafer than TSMC’s equivalent nodes, and the 950PR is likely to be a much larger chip than a TSMC equivalent. SMIC’s cycle time from wafer start to a finished, packaged Ascend processor is also a problem, currently sitting at around eight months, according to estimates from JP Morgan. For similar nodes at TSMC, it’s around three months.
Then there’s HBM — Huawei announced in September that it had developed its own HBM chips with up to 1.6 TB/s bandwidth, HiBL 1.0, and HiZQ 2.0, in partnership with CXMT, but how quickly CXMT can ramp production of competitive HBM remains an open question.
The H200, which Nvidia received U.S. licenses to sell to China earlier this year, hasn’t shipped a single unit despite receiving orders. Contradictory regulatory requirements from Washington and Beijing created a stalemate at customs: U.S. regulators require that H200 chips ordered by Chinese customers be used only inside China, while Beijing has instructed domestic technology companies to limit Nvidia hardware to overseas operations.
Nvidia confirmed in its FY2026 10-K filing that it’s "effectively foreclosed from competing in China's data center computing market" and is not assuming any data center compute revenue from the region in its current outlook. Bernstein analysts estimated earlier this year that Nvidia’s share of the China AI GPU market could fall to roughly 8% in the coming years, down from 66% in 2024, both due to U.S. restrictions and because domestic vendors are being pushed to cover up to 80% of demand from domestic sources. TrendForce projected in December that China's high-end AI chip market would grow by more than 60% in 2026, with domestic suppliers capturing about half of the total.
Huawei compensates by linking large numbers of processors via optical interconnects. Its CloudMatrix 384 system combines twelve racks of Ascend modules into a 384-processor fabric delivering roughly 300 PFLOPS, though at nearly four times the power draw of Nvidia's comparable GB200-based configurations.
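A quick back-of-envelope check on those figures shows why the interconnect strategy matters: the aggregate number is large, but the per-chip throughput implied by it is modest. These are article-level approximations, not official specifications.

```python
# Rough figures from the article above (approximate, not official specs).
total_pflops = 300   # aggregate throughput of the CloudMatrix 384 fabric
num_chips = 384      # Ascend processors linked by optical interconnect

per_chip = total_pflops / num_chips
print(f"~{per_chip:.2f} PFLOPS per chip")  # roughly 0.78
```

In other words, Huawei reaches competitive system-level numbers by scaling out across many weaker chips, which is also consistent with the roughly fourfold power-draw penalty versus Nvidia's GB200-based configurations.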
The 950PR is primarily an inference chip, though; the training-focused 950DT, expected in Q4, is designed for deep learning workloads and could narrow the gap with Nvidia's Hopper generation for model training tasks. Until it ships, Chinese firms that need to train large foundation models domestically face constraints that inference silicon can’t fully solve.
As for Huawei's CANN software ecosystem, it’s now thought to have more than four million developers, but it remains far smaller than Nvidia's CUDA install base. Whether CANN can attract enough third-party development to become self-sustaining remains to be seen. For now, commercial momentum is running in Huawei's favor inside China, driven by the simple absence of alternatives.
[Ed's Comment - From Wikipedia, the free encyclopedia:
The French HADOPI law (French: Haute Autorité pour la Diffusion des Œuvres et la Protection des droits d'auteur sur Internet, English: "Supreme Authority for the Distribution of Works and Protection of Copyright on the Internet") or Creation and Internet law (French: la loi Création et Internet) was introduced during 2009, providing what is known as a graduated response as a means to encourage compliance with copyright laws. HADOPI is the acronym of the government agency created to administer it.
Comment Ends --JR]
Today, the Conseil d’État (the French Administrative Supreme Court) ruled [PDF in French -Ed] in favor of La Quadrature du Net, French Data Network (FDN), Franciliens.net and Fédération FDN [sites in French -Ed]. It recognised that Hadopi's surveillance system (operated by Arcom since 2021) is a breach of fundamental rights protected by the European Union. As a result, it has ordered the government to repeal the core provisions of the key Hadopi decree that organises the "graduated response" system. This fight against Hadopi, in which La Quadrature has been involved since the first legislative debates in the National Assembly in 2009, is emblematic of the archaic view held by successive governments, both left-wing and right-wing, on the question of sharing culture and knowledge online. It is now up to the government to acknowledge the death of Hadopi and, instead of attempting to bring it back to life, to finally admit that online cultural sharing for non-commercial purposes must not be criminalised.
La Quadrature du Net began its court challenge back in 2009, questioning whether the law was actually compatible with European Union law and human rights. The law shares its name with the French copyright authority (HADOPI) created to administer it.
Previously:
(2026) France Keeps Breaking the Internet to Stop Piracy, Even Though It's Not Working
(2021) France Gets a New Anti-Piracy Agency in 2022
Apple has agreed to pay $250 million to settle a class action lawsuit that accused it of misleading customers about the availability of its Apple Intelligence features. The proposed settlement would apply to people in the US who purchased any model of the iPhone 16, or the iPhone 15 Pro, between June 10th, 2024 and March 29th, 2025.
People who submit qualifying claims can receive $25 for each eligible device, "which may decrease or increase up to $95 per device, depending on claim volume and other factors," according to Clarkson Law Firm, the legal team behind the class action lawsuit.
The settlement will resolve a 2025 lawsuit alleging that Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."
In a statement to The Verge, Apple spokesperson Marni Goldberg said the company "resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users." You can read Apple's full statement at the bottom of this article.
Apple previewed a series of AI-powered features coming to its iPhones during its June 2024 Worldwide Developers Conference, including a more personalized Siri. But when the iPhone 16 launched in September, Apple labeled it as "built for Apple Intelligence," even though it lacked many of the capabilities teased months earlier.
Instead, Apple gradually rolled out its new AI features, including Image Playground, Genmoji, and a ChatGPT integration in Siri. The company also delayed the launch of its more personalized Siri, which is now expected to arrive later this year.
Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for the Apple Intelligence page on its website. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Apple denied any wrongdoing. Here's the company's full statement:
Since the launch of Apple Intelligence, we have introduced dozens of features across many languages that are integrated across Apple's platforms, relevant to what users do every day, and built with privacy protections at every step. These include Visual Intelligence, Live Translation, Writing Tools, Genmoji, Clean Up and many more.
Apple has reached a settlement to resolve claims related to the availability of two additional features. We resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users.
Ah, nostalgia. The taste of Mum's secret-sauce pasta, the endless summers, that one time Fat Nadya was going to show her boobs in the bushes behind Ms Wolowitz's house ... and soon, dear reader, the indescribable pleasure of wasting time selecting cars, fire hydrants, traffic lights and the like for the fourteenth time just to read or buy something online.
For Google has declared that the Olden Ways are over, as these are agentic times, and it is necessary to let your computer do the routine stuff for you, like booking a month-long cruise in the Caribbean or something. So, no more old captcha: it's ReCaptcha Version II now, and you, yes you, will from now on be obligated to prove you're not (another) machine by taking a picture with your smartphone (machine) which, of course, must be authenticated itself to the Google Machine, to prove you're not a, you guessed it, machine. (Oblig funny monkey clip here [Video not reviewed. -Ed])
Somehow I got the feeling that the only purpose of a human in the not so distant future will be to sign off (minute 21 and beyond) for a machine, and pay its bills.
I guess that's called winning -- by the machines.
https://archive.ph/TCsXg (Actually a NYT article)
There is a moment when internet companies get the stink of death on them. For AOL, it was 2003, when it became clear that its users were abandoning its clunky dial-up internet service for far-faster broadband. For Yahoo, it was 2015, when its last-ditch acquisition spree failed and it sold itself to Verizon.
For Meta, that time is now. I believe the company — one of the most powerful media organizations in the world and one of the most valuable members of the S&P 500 — is at the start of a long, slow decline that will trigger aftershocks to our economy and our society.
It may be named Meta, but the company's biggest asset is still Facebook. Started from a Harvard dorm, the original online social network has dominated our world for two decades. Its three billion users are still bigger than any single country. Its platforms can help sway an election, fuel an insurrection or spark a genocide.
But if you look carefully, you can see chinks in the armor. Meta's earnings are starting to show the strain from years of growing consumer disaffection and reckless spending. The latest earnings, released on April 29, revealed a dip in user numbers for the first time since it started reporting these figures. And the slumping stock confirms what we have all known in our guts for a while: This is a company entering its zombie era.
This directive — first uncovered by Russian independent journalist Maria Kolomychenko, and reported by the Russian version of Radio Free Europe — [site in Russian- Ed] marks a major escalation in the Kremlin's long-running effort to control what its citizens see online and cut them off from the open internet.
The subsidy document allocates roughly 20 billion rubles annually for the operation of ASBI. This figure corroborates a September 2024 report that authorities intended to spend 60 billion rubles (around $650 million) over the next five years to update its internet-blocking system.
A critical detail is that the Russian government hasn’t defined what "92% effectiveness" actually means. Kolomychenko noted it could refer to the number of VPN applications removed from stores, the volume of traffic blocked, or the percentage of people unable to connect.
This marks a fundamental shift in how Russia governs the internet. Rather than chasing down individual services one by one, the state is now pouring money into the underlying network layer to build a permanent filter.
By placing these filters directly in the network path, Roskomnadzor aims to make bypassing blocks a constant uphill battle for users.
Since the invasion of Ukraine, censorship has expanded from specific news outlets to targeting major social media platforms and messaging tools.
Millions of websites have been blocked, and as of 2025, authorities have started cutting off mobile internet across entire regions. They’ve also officially blocked major platforms like WhatsApp and Telegram.
So far, more than 400 VPN services have been banned, with over 1,000 restricted, according to another Russian journalist, Aleksandar Djokic. This, even though it’s still legal to use a VPN in Russia.
Starting April 15, 2026, major Russian service providers are legally required to detect whether a user is connected via a VPN, raising concerns about data privacy and potential future profiling.
At the same time, the Ministry of Digital Development is also pushing a new "foreign traffic tax". It would charge mobile users 150 rubles per gigabyte for any data over a 15GB monthly limit. This fee, which has been facing technical delays, hits the international routes VPNs rely on, making it too costly for most people to bypass the blocks.
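The proposed fee structure described above is simple to work out: nothing up to the 15 GB monthly allowance, then a flat per-gigabyte surcharge. This small helper just illustrates that arithmetic using the figures from the report; the function name and interface are made up.

```python
def foreign_traffic_fee(gb_used, free_gb=15, rate_rub_per_gb=150):
    """Monthly surcharge under the proposed tax: 150 RUB per GB past 15 GB."""
    over_limit = max(0, gb_used - free_gb)
    return over_limit * rate_rub_per_gb

print(foreign_traffic_fee(40))  # (40 - 15) * 150 = 3750 rubles
print(foreign_traffic_fee(10))  # 0, under the free allowance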
Since their introduction in the 1960s, lasers have fueled major advances in science and everyday technology, from supermarket scanners to eye surgery. Traditional lasers operate by controlling photons, which are particles of light. Over the past two decades, researchers have expanded this concept to other particles, including phonons, which represent tiny units of vibration or sound. Learning to control phonons could unlock new capabilities, including access to unusual quantum effects such as entanglement.
A team from the University of Rochester and Rochester Institute of Technology has developed a new squeezed phonon laser that can precisely control vibrations at the nanoscale. This level of control may help scientists better understand gravity, particle acceleration, and the principles of quantum physics. In their study published in Nature Communications, the researchers explain how they guided these small units of mechanical motion to behave in a coordinated, laser-like manner.
Nick Vamivakas, the Marie C. Wilson and Joseph C. Wilson Professor of Optical Physics with the URochester Institute of Optics, previously demonstrated a phonon laser in 2019. In that work, phonons were trapped and levitated using an optical tweezer inside a vacuum. However, turning this concept into a practical tool for precise measurement required addressing a major limitation shared by both photon and phonon lasers: noise. These unwanted fluctuations can interfere with signals and reduce measurement accuracy.
“While a laser looks to the naked eye like a steady beam, there’s actually a lot of fluctuation, which causes noise when you’re using lasers for measurement,” says Vamivakas. “By pushing and pulling on a phonon laser with light in the right way, we can reduce that phonon laser fluctuation significantly.”
The researchers tackled this challenge by using a method known as squeezing to lower the thermal noise within the phonon laser. Reducing this background disturbance makes it possible to take more precise measurements. According to Vamivakas, this improvement allows acceleration to be measured more accurately than with approaches that rely on photon lasers or radio frequency waves.
With its enhanced sensitivity, the phonon laser could become a valuable tool for measuring gravity and other forces with high precision. This capability may support new navigation technologies. Scientists have proposed quantum compasses as highly accurate, “unjammable” alternatives to GPS navigation that do not depend on satellites. Vamivakas is interested in exploring whether phonon lasers could contribute to the development of such systems.
Journal Reference: Zhang, K., Xiao, K., Bhattacharya, M. et al. A two-mode thermomechanically squeezed phonon laser. Nat Commun 17, 2882 (2026). https://doi.org/10.1038/s41467-026-70564-3
The Trump administration is said to be discussing an executive order that would establish a government review process for new AI models before they're released to the public, The New York Times has reported, citing unnamed U.S. officials.
The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week. If accurately reported, these discussions would represent a sharp departure from the administration's current stance as something of a deregulatory champion — immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks.
The sudden reversal coincides with a leadership vacuum in White House AI policy. David Sacks, who led the administration's deregulation push as AI czar, left the role in March, with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent having since taken a more active role in shaping AI policy, according to The New York Times.
The new approach sounds a lot like the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment. Officials told the New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review. Critically, the system would grant the government early access to models without blocking their release.
Perhaps unsurprisingly, the catalyst for all this appears to have been Anthropic’s Mythos model, which the company’s marketing described as capable of finding thousands of critical software vulnerabilities and too dangerous for public release.
That naturally attracted a lot of unwanted government attention at a time when the Trump administration is already locking horns with Anthropic over the collapsed $200 million Pentagon contract. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance, though a federal judge later called that "Orwellian."
The NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments, even as other agencies remain cut off from Anthropic's tools. Some analysts have questioned whether Mythos's capabilities justify Anthropic's dramatic framing, with some studies finding that cheaper models can achieve comparable results in vulnerability discovery.
A White House official told The New York Times that talk of an executive order is "speculation," and that any announcement would come from Trump himself. Dean Ball, a former senior adviser on AI in the Trump administration, told the newspaper that officials are trying to avoid overregulation while keeping pace with the technology, calling it a “tricky balance.”
Daemon Tools users: It's time to check your machines for stealthy infections, stat:
Daemon Tools, a widely used app for mounting disk images, has been backdoored in a monthlong compromise that has pushed malicious updates from the servers of its developer, researchers said Tuesday.
Kaspersky, the security firm reporting the supply-chain attack, said it began on April 8 and remained active as of the time its post went live. Installers that are signed by the developer's official digital certificate and downloaded from its website infect Daemon Tools executables, causing the malware to run at boot time. Kaspersky didn't explicitly say so, but based on technical details, the infected versions appear to be only those that run on Windows. Versions 12.5.0.2421 through 12.5.0.2434 are affected. Neither Kaspersky nor developer AVB could be contacted immediately for additional details.
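As a quick self-check, the affected build range quoted above can be compared against an installed version string using ordinary tuple comparison. This is a hypothetical helper, not a tool from Kaspersky's post, and it only checks the version number; a clean-looking version is no substitute for a full antivirus scan.

```python
# Affected range reported for the compromise (from the article above).
AFFECTED_LOW = (12, 5, 0, 2421)
AFFECTED_HIGH = (12, 5, 0, 2434)

def parse_version(s):
    """Turn '12.5.0.2430' into a comparable tuple (12, 5, 0, 2430)."""
    return tuple(int(part) for part in s.split("."))

def is_affected(version_string):
    """True if the build falls inside the reported compromised range."""
    v = parse_version(version_string)
    return AFFECTED_LOW <= v <= AFFECTED_HIGH

print(is_affected("12.5.0.2430"))  # True
print(is_affected("12.5.0.2440"))  # False
```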
Infected versions contain an initial payload that collects MAC addresses, hostnames, DNS domain names, running processes, installed software, and system locales, and sends them to an attacker-controlled server. Thousands of machines in more than 100 countries were targeted. About a dozen of the infected machines, belonging to retail, scientific, government, and manufacturing organizations, received a follow-on payload, an indication that the supply-chain attack targets select groups.
[...] One of the follow-on payloads pushed to about a dozen organizations was what Kaspersky described as a "minimalistic backdoor." It has the ability to execute commands, download files, and run shellcode payloads in memory—making the infection harder to detect.
Kaspersky said that it observed a more complex backdoor dubbed QUIC RAT, installed on a single machine belonging to an educational institution located in Russia. Initial analysis found that it can inject payloads into the notepad.exe and conhost.exe processes and supports a variety of C2 communication protocols, including HTTP, UDP, TCP, WSS, QUIC, DNS, and HTTP/3.
The 100 infected organizations were primarily located in Russia, Brazil, Turkey, Spain, Germany, France, Italy, and China. Kaspersky's visibility into the attack is limited because it's based solely on telemetry provided by its own products.
[...] More recent supply-chain attacks have hit Trivy, Checkmarx, and Bitwarden and more than 150 packages available through open source repositories. Last year, there were at least six notable such attacks.
Anyone who uses Daemon Tools should take time to scan the entirety of their machines using reputable antivirus software. Windows users should additionally check for indicators of compromise listed in the Kaspersky post. For more technically advanced users, Kaspersky recommends monitoring "suspicious code injections into legitimate system processes, especially when the source is executables launched from publicly accessible directories such as Temp, AppData, or Public."
The disbelief was palpable when Mozilla's CTO last month declared that AI-assisted vulnerability detection meant "zero-days are numbered" and "defenders finally have a chance to win, decisively."
[...]
Mindful of the skepticism, Mozilla on Thursday provided a behind-the-scenes look into its use of Anthropic Mythos—an AI model for identifying software vulnerabilities—to ferret out 271 Firefox security flaws over two months. In a post, Mozilla engineers said the breakthrough they achieved, now finally ready for prime time, was primarily the result of two things: (1) improvement in the models themselves and (2) Mozilla's development of a custom "harness" that supported Mythos as it analyzed Firefox source code.
[...]
The biggest differentiating factor was the use of an agent harness, a piece of code that wraps around an LLM to guide it through a series of specific tasks. For such a harness to be useful, it requires significant resources to customize it to the project-specific semantics, tooling, and processes it will be used for.
Grinstead described the harness his team built as "the code that drives the LLM in order to accomplish a goal. It gives the model instructions (e.g., 'find a bug in this file'), provides it tools (e.g., allowing it to read/write files and evaluate test cases), then runs it in a loop until completion."
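That loop (instruct, let the model call tools, repeat until it reports completion) can be sketched in a few lines. Everything below is a hypothetical stand-in: `call_model`, the tool names, and the message format are illustrative, not Mozilla's actual harness.

```python
def run_harness(call_model, tools, goal, max_steps=20):
    """Drive an LLM in a loop: instruct it, execute the tools it
    requests, feed back observations, and stop when it says done."""
    transcript = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = call_model(transcript)      # model chooses the next step
        if action["type"] == "done":
            return action["result"]
        tool = tools[action["tool"]]         # e.g. read_file, run_tests
        observation = tool(*action.get("args", ()))
        transcript.append({"role": "tool", "content": str(observation)})
    return None  # gave up after max_steps without a result

# Toy "model" that reads one file, then reports a finding.
def toy_model(transcript):
    if any(m["role"] == "tool" for m in transcript):
        return {"type": "done", "result": "possible overflow in parse()"}
    return {"type": "tool", "tool": "read_file", "args": ("parser.c",)}

tools = {"read_file": lambda path: f"<contents of {path}>"}
print(run_harness(toy_model, tools, "find a bug in this file"))
```

The project-specific work Mozilla describes lives in the tools and instructions, not the loop itself: wiring in the codebase's build system, test runners, and conventions is what makes the generic pattern productive.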
[...]
Thursday's behind-the-scenes view includes the unhiding of full Bugzilla reports for 12 of the 271 vulnerabilities Mozilla discovered using Mythos and, to a lesser extent, Claude Opus 4.6.
[...]
At least one researcher said Thursday that a cursory look at the reports showed they were "pretty impressive."
[...]
The critics are right to keep pushing back. Hype is a key method for inflating the already-lofty valuations of AI companies. Given the extensive praise Mozilla has given to Mythos, it's easy for even more trusting people to wonder: what is it getting in return? Far from settling the debate, Thursday's elaborations are likely to only further stoke the controversy.
As Americans struggle with the price of gas and groceries, Starbucks CEO Brian Niccol made the case for a $9 cup of coffee while speaking with The Wall Street Journal's What's News AM podcast.
"We're doing really well with Gen Z and millennials, and then really had strong performance across all income cohorts," Niccol said. "It can start with as little as $3 for a traditional cup of coffee. And then obviously you can build your way into all sorts of customized drinks that people love that move that ticket up."
Podcast host Luke Vargas asked, "You mentioned sort of strength across income cohorts. We've heard so much this week about the K-shaped economy. Fortunes for some Americans, very different than for others. Is that not really something that's coming up in your sales?"
"You know, we're not seeing that in our business," Niccol said. "What we're seeing is people, you know, they want to have a special experience, and regardless of what your income level is. In some cases, you know, a $9 experience does feel like you're splurging. And then, what that means is we have to make it worthwhile, right?"
"And then in other cases, certain people believe, 'Well, this is a really affordable premium experience.' Because they're saying like, 'Well it's less than $10 and I get a really premium experience,'" Niccol said. "So, regardless of where you're stationed in those income cohorts, we want to make that experience worth your while. And what we know is what's definitely something that drives that value is to be able to have a great seat, have a great moment of connection with a barista."
"We just saw on Friday, I'm sure you've seen the US consumer confidence reading, perceptions of the economy are worse than they've been since the '70s, since '08, since the pandemic," Vargas said. "These are some pretty bad reference points here. Just how do you market to that consumer?"
"Yeah. Look, when we've spent the time talking to customers, 'What is it that you're looking for in your experience?' They do talk about how they use their Starbucks experience as a moment of escapism. And my hope is we get more than our fair share of all those occasions," replied Niccol.
"Part of that is you're not playing the value game," Vargas suggested.
"Well, I think we're just playing it in a different way, which is the way we're going to play the value game is you're going to feel like it was worth it," said Niccol. "And it's not going to be a game of discounting or one-off promotions. I think people actually really do appreciate knowing, 'Hey, if this is a $3 cup of coffee or a $5 latte, I know I'm going to get a great experience for that $5 experience, I'm in.'"
https://news.mit.edu/2026/astronomers-pin-down-origins-planetary-odd-couple-0505
Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally "lonely" hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that's had astronomers puzzled since the system's discovery in 2020.
Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.
In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune's atmosphere, made using NASA's James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.
Their measurements reveal that the smaller planet has a "heavy" atmosphere that is rich with water vapor, carbon dioxide, sulfur dioxide, and hints of methane. Such a heavy atmosphere would not have been acquired by the planet if it had formed in its current location, very close to its star.
Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the system's early disk of protoplanetary material. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.
The team's results are the first to show that mini-Neptunes can form beyond a star's "frost line." This boundary is the minimum distance from a star beyond which the temperature is low enough for water to condense into ice.
"This is the first time we've observed the atmosphere of a planet that is inside the orbit of a hot Jupiter," says Saugata Barat, a postdoc in MIT's Kavli Institute for Astrophysics and Space Research and the lead author of the study. "This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist."
The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions including the Center for Astrophysics | Harvard & Smithsonian, the University of Southern Queensland, the University of Texas at Austin, and Lund University.
As their name implies, mini-Neptunes are planets that are less massive than Neptune. They are considered gas dwarfs: made mostly of gas, with an inner rocky core. Mini-Neptunes are the most commonly found type of planet in the Milky Way, though, interestingly, no such world exists in our own solar system. Astronomers have observed them circling a wide variety of stars in a range of planetary systems, so mini-Neptunes are generally considered garden-variety planets.
But in 2020, Chelsea X. Huang, then a Torres Postdoctoral Fellow at MIT (now on the faculty at the University of Southern Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.
The astronomers made their discovery using NASA's Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS's measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter, orbiting the star every four and eight days, respectively.
"This was a one-of-a-kind system," says Huang. "Hot Jupiters are 'lonely,' meaning they don't have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form."
The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.
Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in "mean motion resonance," meaning that each can affect the other's motion, pulling and tugging, and slightly varying the time each takes to orbit their star. This made it tricky to predict when JWST could get a clear view.
The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.
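The kind of prediction described above, a linear ephemeris perturbed by resonance-driven transit timing variations (TTVs), can be sketched roughly as follows. Everything here is illustrative: the function and all numbers are hypothetical stand-ins, not the team's actual model or the real TOI-1130 ephemeris.

```python
# Hypothetical sketch: predicting transit windows for a planet whose
# orbital period varies because of a mean motion resonance with a
# companion (transit timing variations, TTVs). All values below are
# illustrative, not the actual TOI-1130 parameters.
import math

def transit_time(n, t0, period, ttv_amp, ttv_period, ttv_phase=0.0):
    """Time of the n-th transit: a linear ephemeris (t0 + n * period)
    plus a sinusoidal TTV term modeling the companion's pull."""
    linear = t0 + n * period
    ttv = ttv_amp * math.sin(2 * math.pi * (n * period) / ttv_period + ttv_phase)
    return linear + ttv

# Illustrative values: a ~4.1-day inner planet with a 30-minute TTV
# amplitude on a ~700-day resonance "super-period".
t0 = 2458658.0        # reference transit epoch (days, e.g. BJD)
period = 4.1          # days
ttv_amp = 30 / 1440   # 30 minutes, expressed in days
ttv_period = 700.0    # days

for n in range(3):
    print(f"transit {n}: {transit_time(n, t0, period, ttv_amp, ttv_period):.4f}")
```

With the TTV amplitude set to zero the function reduces to the familiar linear ephemeris; fitting the amplitude, super-period, and phase to as many archival transits as possible is what narrows the prediction window enough to schedule an observation.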
"It was a challenging prediction, and we had to be spot-on," Barat says.
In the end, the team was able to catch a direct and detailed snapshot of both planets.
"The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths," Barat explains. "And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere."
From JWST's measurements, the team found that the planet absorbed light at wavelengths specific to water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than hydrogen and helium, which constitute lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.
But the team's new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.
The planet likely accumulated its heavy atmosphere of water and other volatiles such as carbon dioxide and sulfur dioxide in the icy region beyond the star's frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. The water evaporates as the planet slowly migrates in closer to its star.
Barat says the team's detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.
"This system represents one of the rarest architectures that astronomers have ever found," Barat says. "The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature."
Nissan has reversed course on plans to build electric vehicles at its Mississippi assembly plant and will instead equip the factory to produce a range of body-on-frame trucks and SUVs, a shift that changes the company's manufacturing footprint and signals a renewed focus on larger, conventional vehicles. The decision affects local supply chains, workforce planning, and Nissan's place in the broader auto market as demand patterns evolve:
The move away from EV production at the Mississippi site is a clear operational pivot for Nissan, trading a battery-driven future for heavier, body-on-frame vehicles built for hauling and towing. That choice reflects a reassessment of where the company sees near-term returns and where it wants to allocate manufacturing capacity. It is notable because it alters expectations that U.S. plants would be central to Nissan's electric vehicle rollout.
From a market perspective, trucks and large SUVs remain strong sellers in the United States, and automakers chase profitable segments when margins are tight. Body-on-frame designs are traditionally preferred for towing and rugged use, and customers who prioritize those capabilities have kept demand elevated. Nissan's decision seems tied to serving established buyer preferences rather than betting exclusively on a rapid surge in EV adoption.
[...] Local employment effects will be significant but mixed, and the specifics will depend on how Nissan structures the transition and retraining programs. Body-on-frame production can support a wide range of skilled positions, but the mix of jobs differs from an EV-focused plant where battery technicians and electrical specialists are more in demand.
[...] There are also ripple effects for suppliers and battery ecosystem plans that may have counted on Nissan's EV commitment at that location. Battery cell makers, electric motor suppliers, and companies building charging infrastructure could see fewer business opportunities tied to this plant. At the same time, chassis, frame, and drivetrain suppliers that serve conventional trucks may find fresh demand.
[...] Regulatory and incentive environments sometimes nudge automakers toward or away from certain investments, but manufacturers also follow clear market signals. Incentives, fuel-economy rules, and consumer tax credits all play roles in decision making, yet companies still prioritize segments with stable, profitable demand. Nissan's choice suggests a pragmatic approach to those competing pressures.
Research scientist and avid skier Erik Johannes Husom has built his own pair of wooden skis from scratch and documented the process in images. That includes felling the tree and splitting it. He has a short video demonstrating the effectiveness of his new skis in actual use.
The main stages of the process included felling the tree, debarking it, cutting it down to suitable length, and splitting it into two halves. This was followed by shaping the wood, first using an axe, and then hand planes. The final polish was done using a knife and finally sandpaper.
The most challenging part came afterwards: Steam bending the wood to give the skis a raised tip (a "shovel"). I had trouble getting the wood soft enough to get a proper bend on it, and ended up using a combination of boiling and steaming to achieve this. The result was less than optimal, but I learnt some lessons on how to achieve a proper bend for the next pair of skis.
I'll add that split wood is quite flexible and far stronger than sawn wood, so by splitting, one can get greater strength with much less weight. Splitting was integral to how Viking ships were made strong enough to be seaworthy on the open ocean, yet light enough for extended river ventures and even occasional portages.
Previously:
(2018) Attention Backcountry Skiers: Scientists Want Your Help - SoylentNews