
posted by janrinok on Monday January 05, @11:19PM   Printer-friendly

The Worst CPUs Ever Made:

Processors are built by multi-billion-dollar corporations using some of the most cutting-edge technologies known to man. But even with all their expertise, investment, and know-how, sometimes these CPU makers drop the ball. Some CPUs have just been poor performers for the money or their generation, while others easily overheated or drew too much power.

Some CPUs were so bad that they set their companies back generations, taking years to recover.

But years on from their release and the fallout, we no longer need to feel let down, disappointed, or ripped off by these lame-duck processors. We can enjoy them for the catastrophic failures they were, and hope the companies involved learned a valuable lesson.

Here are some of the worst CPUs ever made.

Note: Plenty of people will bring up the Pentium FDIV bug here, but the reason we didn't include it is simple: Despite being an enormous marketing failure for Intel and a considerable expense, the actual bug was tiny. It affected no one who wasn't already doing scientific computing, and, in technical terms, the scale and scope of the problem were never estimated to be much of anything. The incident is recalled today more for the disastrous way Intel handled it than for any overarching problem in the Pentium microarchitecture.
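
The flaw itself is easy to illustrate. The widely circulated test case divides 4,195,835 by 3,145,727, a quotient the affected Pentiums got wrong from around the fifth significant digit, so the remainder check below famously came back as roughly 256 instead of 0. A minimal sketch of that classic check (any correctly functioning FPU prints 0.0):

    # Classic FDIV check: exact on a correct FPU, famously ~256 on a flawed Pentium
    # because the quotient 4195835/3145727 came back wrong past the 4th decimal digit.
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)   # 0.0 on any modern CPU; ~256 on an affected Pentium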

Intel Itanium

Intel's Itanium was a radical attempt to push hardware complexity into software optimizations. All the work to determine which instructions to execute in parallel was handled by the compiler before the CPU ran a byte of code.

Analysts predicted that Itanium would conquer the world. It didn't. Compilers were unable to extract necessary performance, and the chip was radically incompatible with everything that had come before it. Once expected to replace x86 entirely and change the world, Itanium limped along for years with a niche market and precious little else.

Itanium's failure was particularly egregious because it represented the death of Intel's entire 64-bit strategy (at the time). Intel had originally planned to move the entire market to IA64 rather than extend x86. AMD's x86-64 (AMD64) proved quite popular, partly because Intel had no luck bringing a competitive Itanium to market. Not many CPUs can claim to have failed so egregiously that they killed their manufacturers' plans for an entire instruction set.

Intel Pentium 4 (Prescott)

Prescott doubled down on the Pentium 4's already-long pipeline, extending it to nearly 40 stages, while Intel simultaneously shrank it down to a 90nm die. This was a mistake.

The new chip was crippled by pipeline stalls that even its new branch prediction unit couldn't prevent, and parasitic leakage drove high power consumption, preventing the chip from hitting the clocks it needed to be successful. Prescott and its dual-core sibling, Smithfield, are the weakest desktop products Intel ever fielded relative to its competition at the time. Intel set revenue records with the chip, but its reputation took a beating.

Its reputation for running rather toasty would be a recurring issue for Intel in the future, too.

AMD Bulldozer

AMD's Bulldozer was supposed to steal a march on Intel by cleverly sharing certain chip capabilities to improve efficiency and reduce die size. AMD wanted a smaller core with higher clocks to offset any penalties from the shared design. What it got was a disaster.

Bulldozer couldn't hit its target clocks, drew too much power, and its performance was a fraction of what it needed to be. It's rare that a CPU is so bad that it nearly kills the company that invented it. Bulldozer nearly did. AMD did penance for Bulldozer by continuing to use it. Despite the core's flaws, it formed the backbone of AMD's CPU family for the next six years.

Fortunately, during the intervening years, AMD went back to the drawing board, and in 2017, Ryzen was born. And the rest is history.

Cyrix 6x86

Cyrix was one of the x86 manufacturers that didn't survive the late 1990s. (VIA now holds its x86 license.) Chips like the 6x86 were a major part of the reason why.

Cyrix has the dubious distinction of being the reason why some games and applications carry compatibility warnings. The 6x86 was significantly faster than Intel's Pentium in integer code, but its FPU was abysmal, and its chips weren't particularly stable when paired with Socket 7 motherboards. If you were a gamer in the late 1990s, you wanted an Intel CPU but could settle for AMD. The 6x86 was one of the terrible "everybody else" chips you didn't want in your Christmas stocking.

The 6x86 failed because it couldn't differentiate itself from Intel or AMD in a way that made sense or gave Cyrix an effective niche of its own. The company tried to develop a unique product and wound up earning itself a second place on this list instead.

Cyrix MediaGX

The Cyrix MediaGX was the first attempt to build an integrated SoC processor for the desktop, with graphics, CPU, PCI bus, and memory controller all on one die. Unfortunately, this happened in 1997, which means all those components were really terrible.

Motherboard compatibility was incredibly limited, the underlying CPU architecture (Cyrix 5x86) was equivalent to Intel's 80486, and the CPU couldn't connect to an off-die L2 cache (the only kind of L2 cache there was, back then). Chips like the Cyrix 6x86 could at least claim to compete with Intel in business applications. The MediaGX couldn't compete with a dead manatee.

The entry for the MediaGX on Wikipedia includes the sentence "Whether this processor belongs in the fourth or fifth generation of x86 processors can be considered a matter of debate." The 5th generation of x86 CPUs is the Pentium generation, while the 4th generation refers to 80486 CPUs. The MediaGX shipped in 1997 with a CPU core stuck somewhere between 1989 and 1992, at a time when people really did replace their PCs every 2-3 years if they wanted to stay on the cutting edge.

It also notes, "The graphics, sound, and PCI bus ran at the same speed as the processor clock also due to tight integration. This made the processor appear much slower than its actual rated speed." When your 486-class CPU is being choked by its own PCI bus, you know you've got a problem.

Texas Instruments TMS9900

The TMS9900 is a noteworthy failure for one enormous reason: When IBM was looking for a chip to power the original IBM PC, it had two basic choices to hit its own ship date: the TMS9900 and the Intel 8086/8088 (the Motorola 68K was under development but wasn't ready in time).

The TMS9900 only had 16 bits of address space, while the 8086 had 20. That made the difference between addressing 1MB of RAM and just 64KB. TI also neglected to develop a 16-bit peripheral chip, which left the CPU stuck with performance-crippling 8-bit peripherals. The TMS9900 also had no on-chip general purpose registers; all 16 of its 16-bit registers were stored in main memory. TI had trouble securing partners for second-sourcing and when IBM had to pick, it picked Intel.

Good choice.
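
To put the address-space gap in perspective, here is the arithmetic behind the 64KB-versus-1MB comparison above, using nothing beyond the quoted bit widths:

    # Directly addressable memory for 16 vs. 20 address bits, per the figures above.
    for name, bits in [("TMS9900", 16), ("Intel 8086", 20)]:
        addressable_kb = (2 ** bits) // 1024
        print(f"{name}: {bits} address bits -> {addressable_kb} KB")
    # TMS9900: 16 address bits -> 64 KB
    # Intel 8086: 20 address bits -> 1024 KB (1 MB)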

Intel Core i9-14900K

It's rare to call a top chip of its generation a "bad" CPU, and even rarer to denigrate a company's current fastest gaming CPU, but the Intel Core i9-14900K deserves its place on this list. Although it is fantastically fast in gaming and some productivity workloads, and can compete with some of the best chips available at the end of 2025, it is still a bad CPU for a range of key reasons.

For starters, it barely moved the needle. The 14900K is basically an overclocked 13900K (or 13900KS if we're considering special editions), which wasn't much different from the 12900K that came before it. The 14900K was the poster child for Intel's lack of innovation, which is saying a lot considering how long Intel languished on its 14nm node.

The 14900K also pulled way too much power and got exceptionally hot. I had to underclock it when reviewing it just to get it to stop thermal throttling—and that was on a 360mm AIO cooler, too.

The 14th generation was plagued with bugs and microcode issues, too, causing crashes and instability that required regular BIOS updates to try to fix.

The real problem was that the rest of the lineup was simply a better buy. The 14600K is almost as fast in gaming despite being far cheaper, easier to cool, easier to overclock, and less prone to crashes. The rest of the range wasn't too exciting, though the 14100 remains a stellar gaming CPU under $100 today.

The 14900K was the most stopgap of stopgap flagships. It was a capstone on years of Intel stagnation, and a weird pinnacle of performance at the same time. It's not as big a dud as the other chips on this list, but it did nothing to help Intel's modern reputation, and years later, the company is still trying to course-correct.

Dishonorable Mention: Qualcomm Snapdragon 810

The Snapdragon 810 was Qualcomm's first attempt to build a big.LITTLE CPU and was based on TSMC's short-lived 20nm process. The SoC was easily Qualcomm's least-loved high-end chip in recent memory—Samsung skipped it altogether, and other companies ran into serious problems with the device.

Qualcomm claimed that the issues with the chip were caused by poor OEM power management, but whether the problem was related to TSMC's 20nm process, problems with Qualcomm's implementation, or OEM optimization, the result was the same: A hot-running chip that won precious few top-tier designs and is missed by no one.

Dishonorable Mention: IBM PowerPC G5

Apple's partnership with IBM on the PowerPC 970 (marketed by Apple as the G5) was supposed to be a turning point for the company. When it announced the first G5 products, Apple promised to launch a 3GHz chip within a year. But IBM failed to deliver components that could hit these clocks at reasonable power consumption, and the G5 was incapable of replacing the G4 in laptops due to high power draw.

Apple was forced to move to Intel and x86 in order to field competitive laptops and improve its desktop performance. The G5 wasn't a terrible CPU, but IBM wasn't able to evolve the chip to compete with Intel.

Ironically, years later it would be Intel's inability to compete with ARM that led Apple to build its own silicon in the M-series.

Dishonorable Mention: Pentium III 1.13GHz

The Coppermine Pentium III was a fine architecture. But during the race to 1GHz against AMD, Intel was desperate to maintain a performance lead, even as shipments of its high-end systems slipped further and further away (at one point, AMD was estimated to have a 12:1 advantage over Intel when it came to actually shipping 1GHz systems).

In a final bid to regain the performance crown, Intel tried to push the 180nm Coppermine P3 up to 1.13GHz. It failed. The chips were fundamentally unstable, and Intel recalled the entire batch.

Dishonorable Mention: Cell Broadband Engine

We'll take some heat for this one, but we'd toss the Cell Broadband Engine on this pile as well. Cell is an excellent example of how a chip can be phenomenally good in theory, yet nearly impossible to leverage in practice.

Sony may have used it as the general processor for the PS3, but Cell was far better at multimedia and vector processing than it ever was at general-purpose workloads (its design dates to a time when Sony expected to handle both CPU and GPU workloads with the same processor architecture). It's quite difficult to multi-thread the CPU to take advantage of its SPEs (Synergistic Processing Elements), and it bears little resemblance to any other architecture.

It did end up as part of a linked-PS3 supercomputer built by the Department of Defense, which shows just how capable these chips could be. But that's hardly a daily-driver use case.

What's the Worst CPU Ever?

It's surprisingly difficult to pick an absolute worst CPU. All of the ones on this list were bad in their own way at that specific time. Some of them would have been amazing if they'd been released just a year earlier, or if other technologies had kept pace.

Some of them just failed to meet overinflated expectations (Itanium). Another nearly killed the company that built it (Bulldozer). Do we judge Prescott on its heat and performance (bad, in both cases) or on the revenue records Intel smashed with it?

Evaluated in the broadest possible meanings of "worst," I think one chip ultimately stands feet and ankles below the rest: the Cyrix MediaGX. Even then, it is impossible not to admire the forward-thinking ideas behind this CPU. Cyrix was the first company to build what we would now call an SoC, with PCI, audio, video, and RAM controller all on the same chip. More than 10 years before Intel or AMD would ship their own CPU+GPU configurations, Cyrix was out there, blazing a trail.

It's unfortunate that the trail led straight into what the locals affectionately call "Alligator Swamp."

Designed for the extreme budget market, the Cyrix MediaGX disappointed just about anyone who ever came in contact with it. Performance was poor—a Cyrix MediaGX 333 had 95% of the integer performance and 76% of the FPU performance of a Pentium 233 MMX, a CPU running at just 70% of its clock. The integrated graphics had no video memory at all, and there was no option to add an off-die L2 cache, either.
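
Those ratios look even worse once clock speed is factored in. A quick per-MHz comparison derived only from the figures above (benchmark ratios vary by workload, so treat this as a rough sketch):

    # Rough per-clock comparison from the quoted ratios: a MediaGX 333 delivers 95% of
    # the integer and 76% of the FPU performance of a Pentium 233 MMX.
    mediagx_mhz, pentium_mhz = 333, 233
    int_ratio, fpu_ratio = 0.95, 0.76
    print(f"Integer work per MHz vs. Pentium MMX: {int_ratio * pentium_mhz / mediagx_mhz:.0%}")  # ~66%
    print(f"FPU work per MHz vs. Pentium MMX:     {fpu_ratio * pentium_mhz / mediagx_mhz:.0%}")  # ~53%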

If you found this under your tree, you cried. If you had to use this for work, you cried. If you needed to use a Cyrix MediaGX laptop to upload a program to sabotage the alien ship that was going to destroy all of humanity, you died.

All in all, not a great chip. Others were bad, sure, but none embody that quite like the Cyrix MediaGX.

You might not agree with these choices. If you do not, then tell us your own favourite "worst" CPU and why you think it deserves a mention. Has anybody got information on the worst CPUs that have been produced in Russia or China yet?


Original Submission

posted by janrinok on Monday January 05, @06:37PM   Printer-friendly

Parkinson's is the canary in the coal mine warning us that our environment is sick:

  Parkinson's disease occurs worldwide, affects people of all ages and backgrounds, has an enormous societal impact, and is rising at an alarming rate. According to neurologist Bas Bloem, Parkinson's literally meets all the criteria of a pandemic, except that the disease is not infectious. In a recent publication in The Lancet Neurology, Bloem and a group of internationally recognised scientists place this development in historical perspective, beginning with James Parkinson, who first described the disease in 1817.

This historical view is needed, Bloem says, because the search for the causes of Parkinson's is anything but new. As early as the 1990s, researchers and pesticide manufacturers knew that the pesticide Paraquat was linked to Parkinson's—yet the substance is still used in parts of the world (for example, the United States). In the Netherlands, Paraquat has fortunately been banned since 2007. Two other environmental factors, dry-cleaning chemicals and air pollution, also occur on a large scale. This strengthens Bloem's conviction that this largely human-made disease can also be reduced through human intervention.

As a young medical student, Bloem found himself in the midst of groundbreaking research in California, where he worked at the age of 21. "I did not yet see the enormous impact of the research being carried out there," he recalls. One of the groundbreaking studies of that era was conducted by J. William Langston in 1983. He investigated seven young drug users who suddenly developed symptoms of advanced Parkinson's after using a contaminated heroin variant.

It turned out that this so-called designer drug contained the substance MPTP, which in the body is converted into a compound that closely resembles the pesticide Paraquat. The study demonstrated that an external chemical substance could cause Parkinson's disease. Whereas the heroin users had received a high dose all at once, most people in daily life are exposed to small amounts over long periods, with ultimately a similar effect.

During the same period, and at the same Parkinson Institute in Sunnyvale, California, researcher Carlie Tanner also carried out key work. Bloem explains: "Her hypothesis was simple: if Parkinson's is hereditary, then identical twins who share the same DNA should develop it far more often than fraternal twins, as we see for conditions such as diabetes." But this was not the case.

[...] These insights became the starting point for new research into pesticides. "When researchers exposed laboratory animals to these substances, they developed Parkinson-like symptoms, and damage occurred precisely in the substantia nigra, the area of the brain affected in Parkinson's," Bloem says, convincing evidence of a causal link.

A third important study comes from Canadian neurologist André Barbeau, who in 1987 investigated the role of environmental factors in the province of Quebec. If the disease were evenly distributed across the region, this would suggest a hereditary or random cause. But this was not the case: Parkinson's occurred in clear clusters.

These clusters were located precisely in areas where high concentrations of pesticides were found in groundwater, another strong indication that environmental factors play a causal role.

Discussions about pesticides evoke strong emotions, Bloem notes. "People are frightened, farmers feel attacked, and industry attempts to sow doubt. But farmers or horticulturalists are not the problem. They work with what they are permitted to use. The responsibility lies with the systems that allow such substances."

He advocates for policies based on the precautionary principle. "The burden of proof now lies with scientists and citizens, who must demonstrate that a substance is harmful. But doubt should benefit humans, not chemical products."

"The most hopeful message," Bloem says, "is that Parkinson's appears to be at least partly—perhaps even largely—preventable. That is revolutionary: a brain disease that we can prevent through better environmental policy." Yet hardly any funding goes into prevention. "In the US, only 2 percent of Parkinson's research focuses on prevention. Meanwhile, billions are spent on treatments instead of turning off the tap."

[...] His message is clear: "Parkinson's is not an unavoidable fate. It is the canary in the coal mine warning us that our environment is sick and that toxic substances are circulating. If we act now—by reducing toxins, improving air quality, and enforcing stricter regulations—we can reverse this pandemic. And in doing so, we will likely reduce other health risks such as dementia and cancer."


Original Submission

posted by janrinok on Monday January 05, @01:53PM   Printer-friendly

https://www.tomshardware.com/tech-industry/semiconductors/u-s-allows-tsmc-to-import-chipmaking-equipment-to-its-china-fabs-samsung-sk-hynix-likewise-receive-go-signal-from-commerce-department

The U.S. Department of Commerce has issued a permit to Taiwan Semiconductor Manufacturing Company (TSMC) to import U.S.-made chip-making equipment into China for its Nanjing fab. According to Reuters, Samsung and SK hynix were also given import licenses to bring specialized equipment containing American-made components into their Chinese factories. These three chipmakers used to enjoy validated end-user status, meaning they could freely import restricted items into China without asking for individual licenses. However, this privilege expired at the end of 2025, meaning they now have to seek annual approval from Washington, D.C., to continue receiving advanced tools.

"The U.S. Department of Commerce has granted TSMC Nanjing an annual export license that allows U.S. export-controlled items to be supplied to TSMC Nanjing without the need for individual vendor licenses," the company said in a statement to Reuters. It also said that this "ensures uninterrupted fab operations and product deliveries." This move to require annual licenses for the Chinese factories of these chipmakers is a part of the White House's effort to keep advanced chipmaking tools out of China.

Beijing has been working hard to achieve "semiconductor sovereignty," just as the U.S. has been trying hard to prevent it from acquiring the latest chips. Aside from that, ASML, the only manufacturer of cutting-edge EUV lithography machines, has been banned from exporting them to China and from servicing those that are already installed. Because of this, we've seen reports that the country is covertly working on reverse engineering EUV lithography tools, and that it has even come up with a "Frankenstein" EUV chipmaking tool, but has yet to produce a single chip with it.

The U.S. does not allow EUV lithography machines with U.S. technology to be exported to China, even to companies like TSMC and Samsung that have Chinese factories. This means these fabs are limited to mature nodes of 16nm and larger. The revocation of the validated end-user status for the China-based fabs of these companies shows that Washington is tightening its grip on chipmaking machines, even older DUV tech, to make it difficult for Beijing to create its own technology.

Despite this, the East Asian nation is pushing hard to develop its own equipment. The central government has even told its chipmakers to use homegrown tools for half of new capacity. And while the country is still years behind cutting-edge tech from ASML and other Western companies, it's slowly taking steps in the right direction.


Original Submission

posted by janrinok on Monday January 05, @09:03AM   Printer-friendly
from the looking-at-you-cloudflare dept.

When an associate of mine accessed their personal email account on their work computer, they opened an email from a friend purporting to be an invitation to a holiday party, with a link that claimed to be for RSVPing. In fact, the link led to a malicious MSI file hosted on Cloudflare's r2.dev service. Not knowing what an MSI file was, the associate ran the file and installed an instance of ConnectWise's ScreenConnect software operated by an attacker. The attacker promptly took control of the associate's computer for a couple of minutes before the associate wisely powered the computer off. Sure, the obvious answers are that people shouldn't click on suspicious links in emails they weren't expecting, even if they come from a friend or trusted colleague, and that they really shouldn't use work computers for personal tasks and vice versa. But this incident also raised troubling concerns about how some large companies like Cloudflare apply double standards to security.

The friend's computer had been compromised by the same attacker, who accessed their Gmail account and apparently sent the phishing email as a single message with the friend's entire contact list as Bcc recipients. This was probably a large number of contacts, and it really should have been automatically flagged by Google as potential spam. A reasonable approach might be to delay sending such an email until the sender confirms they really intended to Bcc a large number of people on a potentially suspicious message. The sender would then get a notification on their phone asking whether they really intended to send a mass email, which they could either confirm or reject. Google is keen to push multi-factor authentication and to require that users associate phone numbers with their accounts, so it seems like this might be a rational approach for outbound emails that ought to be flagged as suspicious.
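
As a minimal sketch only (the thresholds, names, and notification hook are invented for illustration, and this is not how Gmail actually works), the kind of outbound check being proposed could look roughly like this:

    # Hypothetical sketch of the proposed outbound check: hold a message for sender
    # confirmation when it Bcc's an unusually large slice of the contact list and
    # contains a link. All thresholds and names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class OutboundMessage:
        sender: str
        bcc: list[str]
        contains_link: bool
        contact_count: int              # size of the sender's contact list

    def needs_confirmation(msg: OutboundMessage,
                           min_bcc: int = 25,
                           contact_fraction: float = 0.5) -> bool:
        """Flag mass-Bcc messages with links for explicit sender confirmation."""
        mass_bcc = (len(msg.bcc) >= min_bcc or
                    len(msg.bcc) >= contact_fraction * msg.contact_count)
        return mass_bcc and msg.contains_link

    def handle_outbound(msg: OutboundMessage) -> str:
        if needs_confirmation(msg):
            # In the proposed flow, the message would be queued here and a push
            # notification sent to the sender's registered phone to approve or reject.
            return "held-for-confirmation"
        return "sent"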

But I'm more frustrated with Cloudflare, which acts as a gatekeeper for many websites, arbitrarily blocking browsers and locking people out of sites, especially for the dastardly crime of using a non-Chromium browser like Pale Moon. The malicious file was hosted on r2.dev, which is a cloud-based object storage system. Although the actual file might not trip malware scanners because ScreenConnect has legitimate purposes, R2 storage buckets and Cloudflare's other hosting services are often used to host malware and phishing content. This is probably because Cloudflare has a free tier and is easy to use, making it a convenient tool for attackers to abuse. One of the logical actions I took was to try to report the malicious content to Cloudflare so they would take it down. They encourage reporting of abuse through an online reporting form. The first time I accessed the abuse reporting form, it was blank. I reloaded the page, and Cloudflare informed me that I had been blocked from accessing their abuse reporting page. The irony here is that Cloudflare arbitrarily blocked me, as if I were malicious, preventing me from reporting actual malicious content being hosted on their platform.

The problem here is that large companies like Google and Cloudflare have positioned themselves as gatekeepers of the internet, demanding that users conform to their security standards while not taking reasonable steps to prevent attacks originating from their own platforms. In the case of Google, reCAPTCHA is mostly security theatre, making users jump through hoops to prove they're not malicious while harvesting data that can be used to track users through browser fingerprinting. As for Cloudflare, they use methods like blocking browsers with low market share, supposedly in the name of blocking malicious traffic. The hypocrisy is blatant when Cloudflare's arbitrary and opaque blocking prevents users from reporting actual malicious content hosted by Cloudflare itself. Unfortunately, this doesn't seem particularly uncommon.

It's becoming increasingly difficult not to see companies like Google and Cloudflare as bad actors. In the case of Cloudflare, I finally sent complaints to their abuse@ and noc@ email addresses, but I expect little will be done to actually address the problem. How do we demand accountability from companies that act as gatekeepers of the internet and treat ordinary users like potential criminals while doing little to prevent their own platforms from being vectors for abuse? In this case, is the best solution to complain to a government agency like the state attorney general, stating that the malware may have caused harm and that Cloudflare has made it next to impossible to get the content taken down?


Original Submission

posted by janrinok on Monday January 05, @04:19AM   Printer-friendly

IFL Science has an interesting story about Michel Siffre, a guy who went into a cave for two months and emerged to invent a new field of biology:

The year is 1962. The place: Scarasson, a glacial cave in the French Alps. Climbing out of the abyss for the first time in more than two months is a lone man, eyes covered in dark goggles to protect them from the light of the Sun. He has no idea what the date is; he has not interacted with another human in seven weeks. His thoughts are slow; he feels, in his own words, like "a half-crazed, disjointed marionette."

"You have to understand, I was a geologist by training," Michel Siffre told Cabinet magazine in 2008. Nevertheless, he admitted, "without knowing it, I [...] created the field of human chronobiology."

"At first, my idea was to prepare a geological expedition, and to spend about fifteen days underground studying the glacier," Siffre recalled, "but a couple of months later, I said to myself, 'Well, fifteen days is not enough. I shall see nothing.' So, I decided to stay two months."

"I decided to live like an animal, without a watch, in the dark, without knowing the time," he said.

For 63 days, then, he lived 130 meters (427 feet) below the surface, in an icy cavern devoid of natural light or any timekeeping device. The temperature was below freezing; the humidity was 98 percent. He had no contact with the outside world.

"I had bad equipment, and just a small camp with a lot of things cramped inside," Siffre told Cabinet. "My feet were always wet, and my body temperature got as low as 34°C (93°F)."

It was, it seems, no vacation. But it was worth it: when he returned to the surface, he brought with him a whole new area of scientific research – one significant enough that it would one day merit a Nobel Prize for Siffre's academic successors.

"I raised the funds myself, picked the two months arbitrarily and invented the experimental protocol," he told New Scientist in 2018. Other scientists, he said, "thought I was mad."

But what was it that so earned Siffre the ire of the scientific establishment? Not the gall of living underground for two months – it was the 1960s, after all; they were all too busy mentally torturing people (for science!) to worry about some dude in a French cave – but rather, what he learned there: that the human body had its own internal "clock", independent of the rhythm of the Sun.

"There was a very large perturbation in my sense of time," he told Cabinet. "My psychological time [...] compressed by a factor of two."

This was true in the short term – in psychological tests during his stay, counting to 120 took him five minutes, corresponding to an internal clock 2.5 times slower than external time – and longer term, too. "I descended into the cave on July 16 and was planning to finish the experiment on September 14," Siffre recalled. "When my surface team notified me that the day had finally arrived, I thought that it was only August 20. I believed I still had another month to spend in the cave."
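
The arithmetic behind that 2.5x figure is simple, using only the numbers quoted above:

    # Counting to 120 at the intended rate of one number per second should take
    # 120 s of external time; it took Siffre five minutes instead.
    intended_seconds = 120
    actual_seconds = 5 * 60
    print(actual_seconds / intended_seconds)   # 2.5 -> internal clock ~2.5x slower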

At first, his days went from 24 hours to 24.5 – but 10 years later, in a second period of cave-bound timelessness, his "day" stretched all the way out to 48 hours.

"I would have thirty-six hours of continuous wakefulness, followed by twelve hours of sleep," he explained. "I couldn't tell the difference between these long days and the days that lasted just twenty-four hours."

He wasn't the only one. Since his first trip underground, quite a few people have followed – some working hand-in-hand with Siffre himself – and all have reported weird, irregular, and unpredictable changes to their sleep-wake cycle. Some had 25-hour "days" followed by 12-hour "nights"; others would occasionally stay awake for three days at a time. "In 1964, the second man after me to go underground had a microphone attached to his head," Siffre recalled. "One day he slept thirty-three hours, and we weren't sure if he was dead."

Siffre faced a lot of criticism in his day – and not all of it was without merit. His style of research was flashy, people said; he was accused of being reckless with his own and others' lives in pursuit of headline-grabbing results. Cavers and environmental scientists feared that his experiments might disturb fragile underground ecosystems, unused to the heat, light, and carbon dioxide brought by a human and his camping equipment.

But claims that his status as a non-specialist in biology made his results dubious, or that his work was somehow trivial or unimportant, were shown to be unfounded. Siffre's work kickstarted the entire field of human chronobiology – an area that today has yielded insights into issues as diverse as avoiding jet lag, gene transcription, and even how certain cancers develop and spread.

And Siffre's work would prove too tempting for the US and French military to ignore. "I came at the right time," he told Cabinet. "It was the Cold War [...] Not only was there a competition between the US and Russia to put men into space, but France had also just begun its nuclear submarine program. French headquarters knew nothing about how best to organize the sleep cycle of submariners."

"This is probably why I received so much financial support," he added. "NASA analyzed my first experiment in 1962 and put up the money to do sophisticated mathematical analysis."

While Siffre's very hands-on, personal brand of experimentation is unlikely to be recreated any time soon – not least because spending lengthy amounts of time alone underground has proved distressing and injurious to just about everyone who's tried it, Siffre included – its knock-on effects are still echoing through science today.

"Caves are a place of hope," he said in 2008. "We go into them to find minerals and treasures, and it's one of the last places where it is still possible to have adventures and make new discoveries."


Original Submission

posted by janrinok on Sunday January 04, @11:36PM   Printer-friendly

A scientist's unconventional project illustrates many challenges in developing new vaccines:

Chris Buck stands barefoot in his kitchen holding a glass bottle of unfiltered Lithuanian farmhouse ale. He swirls the bottle gently to stir up a fingerbreadth blanket of yeast and pours the turbulent beer into a glass mug.

Buck raises the mug and sips. "Cloudy beer. Delightful!"

He has just consumed what may be the world's first vaccine delivered in a beer. It could be the first small sip toward making vaccines more palatable and accessible to people around the world. Or it could fuel concerns about the safety and effectiveness of vaccines. Or the idea may go nowhere. No matter the outcome, the story of Buck's unconventional approach illustrates the legal, ethical, moral, scientific and social challenges involved in developing potentially life-saving vaccines.

Buck isn't just a home brewer dabbling in drug-making. He is a virologist at the National Cancer Institute in Bethesda, Md., where he studies polyomaviruses, which have been linked to various cancers and to serious health problems for people with weakened immune systems. He discovered four of the 13 polyomaviruses known to infect humans.

The vaccine beer experiment grew out of research Buck and colleagues have been doing to develop a traditional vaccine against polyomavirus. But Buck's experimental sips of vaccine beer are unsanctioned by his employer. A research ethics committee at the National Institutes of Health told Buck he couldn't experiment on himself by drinking the beer.

Buck says the committee has the right to determine what he can and can't do at work but can't govern what he does in his private life. So today he is Chef Gusteau, the founder and sole employee of Gusteau Research Corporation, a nonprofit organization Buck established so he could make and drink his vaccine beer as a private citizen. His company's name was inspired by the chef in the film Ratatouille, Auguste Gusteau, whose motto is "Anyone can cook."

Buck's body made antibodies against several types of the virus after drinking the beer and he suffered no ill effects, he and his brother Andrew Buck reported December 17 at the data sharing platform Zenodo.org, along with colleagues from NIH and Vilnius University in Lithuania. Andrew and other family members have also consumed the beer with no ill effects, he says. The Buck brothers posted a method for making vaccine beer December 17 at Zenodo.org. Chris Buck announced both publications in his blog Viruses Must Die on the online publishing platform Substack, but neither has been peer-reviewed by other scientists.

[...] Buck's unconventional approach has also sparked concerns among other experts about the safety and efficacy of the largely untested vaccine beer. While he has promising data in mice that the vaccine works, he has so far only reported antibody results in humans from his own sips of the brew. Normally, vaccines are tested in much larger groups of people to see how well they work and whether they trigger any unanticipated side effects. This is especially important for polyomavirus vaccines, because one of the desired uses is to protect people who are about to get organ transplants. The immune-suppressing drugs these patients must take can leave them vulnerable to harm from polyomaviruses.

Michael Imperiale, a virologist and emeritus professor at the University of Michigan Medical School in Ann Arbor, first saw Buck present his idea at a scientific conference in Italy in June. The beer approach disturbed him. "We can't draw conclusions based on testing this on two people," he says, referring to Buck and his brother. It's also not clear which possible side effects Buck was monitoring for. Vaccines for vulnerable transplant patients should go through rigorous safety and efficacy testing, he says. "I raised a concern with him that I didn't think it was a good idea to be sidestepping that process."

Other critics warn that Buck's unconventional approach could fuel antivaccine sentiments. Arthur Caplan, who until recently headed medical ethics at the New York University Grossman School of Medicine, is skeptical that a vaccine beer will ever make it beyond Buck's kitchen.

"This is maybe the worst imaginable time to roll out something that you put on a Substack about how to get vaccinated," he says. Many people won't be interested because of antivaccine rhetoric. Beer companies may fear that having a vaccine beer on the market could sully the integrity of their brands. And Buck faces potential backlash from "a national administration that is entirely hostile to vaccines," Caplan says. "This is not the place for do-it-yourself."

But the project does have supporters who say it could instead calm vaccine fears by allowing everyday people to control the process. Other researchers are on the fence, believing that an oral vaccine against polyomavirus is a good idea but questioning whether Buck is going about introducing such a vaccine correctly.

[...] Buck says his self-experiment illustrates that a person can be safely immunized against BK polyomaviruses through drinking beer. But even though Buck produced antibodies, there is no guarantee others will. And right now, people who drink the vaccine beer won't know whether they produce antibodies or if any antibodies they do produce will be sufficient to protect them from developing cancer or other serious health problems later.

Other scientists familiar with Buck and his yeast project also have conflicting opinions about how it might influence public trust and acceptance of vaccines.

If something were to go wrong when a person tried to replicate Buck's beer experiment, Imperiale worries about "the harm that it could do to our ability to administer vaccines that have been tested, tried and true, and just the more general faith that the public has in us scientists. Right now, the scientific community has to think about everything it does and answer the question, 'Is what we're doing going to cause more distrust amongst the public?'"

That's especially true now that health officials in the Trump administration are slashing funding for vaccine research, undermining confidence in vaccines and limiting access to them. A recent poll by the Pew Research Center found that a majority of Americans are still confident that childhood vaccines are highly effective at preventing illness. But there has been an erosion of trust in the safety of those vaccines, particularly among Republicans.

[...] Buck feels a moral imperative to move forward with his self-experiments and to make polyomavirus vaccine beer available to everyone who wants it. "This is the most important work of my whole career," he says. "It's important enough to risk my career over." What he's doing in his home lab is consistent with his day job, he adds. "At the NIH in my contract it says my job is to generate and disseminate scientific knowledge," he says. "This is my only job, to make knowledge and put it out there and try to sell it to the public."

He doesn't see himself as a maverick. "I'm not a radical who's trying to subvert the system. I'm obeying the system, and I'm using the only thing that is left available to me."


Original Submission

posted by janrinok on Sunday January 04, @06:52PM   Printer-friendly

OSNews brings us the news that HP-UX reached the end of its life on December 31st:

It's 31 December 2025 today, the last day of the year, but it also happens to mark the end of support for the last and final version of one of my favourite operating systems: HP-UX. Today is the day HPE puts the final nail in the coffin of their long-running UNIX operating system, marking the end of another vestige of the heyday of the commercial UNIX variants, a reign ended by cheap x86 hardware and the increasing popularisation of Linux.

HP-UX' versioning is a bit of a convoluted mess for those not in the know, but the versions that matter are all part of the HP-UX 11i family. HP-UX 11i v1 and v2 (also known as 11.11 and 11.23, respectively) have been out of support for exactly a decade now, while HP-UX 11i v3 (also known as 11.31) is the version whose support ends today. To further complicate matters, like 11i v2, HP-UX 11i v3 supports two hardware platforms: HP 9000 (PA-RISC) and HP Integrity (Intel Itanium). Support for the HP-UX 11i v3 variant for HP 9000 ended exactly four years ago, and today marks the end of support for HP-UX 11i v3 for HP Integrity.

And that's all she wrote.

HP-UX 11i v1 was the last PA-RISC version of the operating system to officially support workstations, with 11i v2 only supporting Itanium workstations. There are some rumblings online that 11i v2 will still work just fine on PA-RISC workstations, but I have not yet tried this out. My c8000 also has a ton of other random software on it, of course, and only yesterday I discovered that the most recent release of sudo configures, compiles, and installs from source just fine on it. Sadly, a ton of other modern open source code does not run on it, considering the slightly outdated toolchain on HP-UX and few people willing and/or able to add special workarounds for such an obscure platform.

Over the past few years, I've been trying to get into contact with HPE about the state of HP-UX' patches, software, and drivers, which are slowly but surely disappearing from the web. A decent chunk is archived on various websites, but a lot of it isn't, which is a real shame. Most patches from 2009 onwards are unavailable, various software packages and programs for HP-UX are lost to time, HP-UX installation discs and ISOs later than 2006-2009 are not available anywhere, and everything that is available is only available via non-sanctioned means, if you know what I mean. Sadly, I never managed to get into contact with anyone at HPE, and my concerns about HP-UX preservation seem to have fallen on deaf ears. With the end-of-life date now here, I'm deeply concerned even more will go missing, and the odds of making the already missing stuff available are only decreasing.

I've come to accept that very few people seem to hold any love for or special attachment to HP-UX, and that very few people care as much about its preservation as I do. HP-UX doesn't carry the movie star status of IRIX, nor Solaris' benefit of being available both as open source and on commodity hardware, so far fewer people have any experience with it or have developed a fondness for it. HP-UX didn't star in a Steven Spielberg blockbuster, and it didn't leave behind influential technologies like ZFS. Despite being supported up until today, it's mostly forgotten – and not even HPE itself seems to care.

And that makes me sad.

When you raise your glasses tonight to mark the end of 2025 and welcome the new year, spare a thought for the UNIX everyone forgot still exists. I know I will.

Did you work with HP-UX? What did you think of it? How does it compare to more modern OSes? More widely, can we still learn things from older software, and are they worth archiving as historical items?


Original Submission

posted by jelizondo on Sunday January 04, @02:06PM   Printer-friendly

https://www.tomshardware.com/tech-industry/cryptocurrency/americans-lost-usd333-million-to-bitcoin-atm-fraud-in-2025-fbi-says-there-is-a-clear-and-constant-rise-of-this-scam-and-that-it-is-not-slowing-down

The FBI says Americans lost at least $333 million to Bitcoin ATM scams in 2025, as the cryptocurrency has continued to gain popularity for use in fraudulent transactions. The law enforcement agency told CNBC that this is a "clear and constant rise" that is "not slowing down." Reported losses to crypto ATM scams first broke $100 million in 2023, with the amount hitting $114 million — this then more than doubled the following year to $247 million. While the reported losses in 2025 weren't as big a jump, the scams are still costing private citizens a huge amount of money, with most scammers targeting older victims.

The authorities are acting against cryptocurrency ATM providers, saying that they're "pocketing hundreds of thousands of dollars in undisclosed fees on the backs of scam victims." The District of Columbia's attorney general even sued Athena Bitcoin, with the lawsuit pointing out that 93% of the transactions on its ATMs "are the product of outright fraud," with victims having a median age of 71 years. In its defense, Athena told ABC News that it has "strong safeguards against fraud, including transparent instructions, prominent warnings, and customer education." An Athena rep also said, "Just as a bank isn't held responsible if someone willingly sends funds to someone else, Athena does not control users' decisions."

Earlier this year, we saw one local government take things into its own hands, using a power tool to recover almost $32,000 that a victim deposited into a Bitcoin Depot ATM. The sheriff's office was able to do this after securing a warrant, but the company said that it will seek damages, especially as each machine costs around $14,000. Furthermore, the victim will not be able to get the recovered money immediately, as it will have to go through the legal system before the scammed amount is returned to them.

The U.S. isn't the only place seeing a growing number of crypto ATM scam cases — Australian authorities also said that most crypto ATM users are either scam victims or money mules who were forced to deposit cash into these machines. Cryptocurrency does, of course, have some advantages and legitimate uses. But because it's still fairly new, many people don't understand how it works and often assume these machines work just like any other bank's ATMs. And with crypto ATMs becoming more ubiquitous in the U.S., it's also becoming much easier for scammers to extort and steal money from their unsuspecting victims.


Original Submission

posted by jelizondo on Sunday January 04, @09:22AM   Printer-friendly

The Guardian has an article about the forthcoming upgrade of the Large Hadron Collider at CERN in Switzerland, which will be overseen by a new CERN Director General, Mark Thomson.

The LHC is famous for its use in discovering the Higgs boson, a fundamental particle whose existence was predicted in the 1960s as the means by which some other particles gain mass.

The latest upgrade, the high-luminosity LHC, will begin in June and take approximately five years. The superconducting magnets will be upgraded to increase the luminosity of the proton beams being collided and the detectors are also being upgraded.

It is hoped that the improved performance of the LHC will allow it to explore the interactions of Higgs bosons.

If the upgrade works, the LHC will make more precise measurements of particles and their interactions, which could find cracks in today's theories that become the foundations for tomorrow's. One remaining mystery surrounds the Higgs boson. Elementary particles gain their masses from the Higgs, but why the masses vary as they do is anyone's guess. It is not even clear how Higgs bosons interact with one another. "We could see something completely unexpected," Thomson says.

CERN also has plans to replace the LHC with a larger and more powerful collider called the Future Circular Collider, which will require a new 91km circular tunnel (compared with the LHC's 27km). There is no certainty as to what new science might be discovered with the FCC, and there are challenges obtaining sufficient funding. However, there are several fundamental questions to be explored by the new machine such as: what is the dark matter that clumps around galaxies; what is the dark energy that pushes the universe apart; why is gravity so weak; and why did matter win out over antimatter when the universe formed?


Original Submission

posted by jelizondo on Sunday January 04, @04:34AM   Printer-friendly

Ozempic is changing the foods Americans buy:

When Americans begin taking appetite-suppressing drugs like Ozempic and Wegovy, the changes extend well beyond the bathroom scale. According to new research, the medications are associated with meaningful reductions in how much households spend on food, both at the grocery store and at restaurants.

The study, published Dec. 18 in the Journal of Marketing Research, links survey data on GLP-1 receptor agonist use – a class of drugs originally developed for diabetes and now widely prescribed for weight loss – with detailed transaction records from tens of thousands of U.S. households. The result is one of the most comprehensive looks yet at how GLP-1 adoption is associated with changes in everyday food purchasing in the real world.

The headline finding is striking: Within six months of starting a GLP-1 medication, households reduce grocery spending by an average of 5.3%. Among higher-income households, the drop is even steeper, at more than 8%. Spending at fast-food restaurants, coffee shops and other limited-service eateries falls by about 8%.

Among households that continue using the medication, lower food spending persists for at least a year, though the magnitude of the reduction becomes smaller over time, say co-authors, assistant professor Sylvia Hristakeva and professor Jura Liaukonyte, both in the Charles H. Dyson School of Applied Economics and Management in the Cornell SC Johnson College of Business.

"The data show clear changes in food spending following adoption," Hristakeva said. "After discontinuation, the effects become smaller and harder to distinguish from pre-adoption spending patterns."

[...] The reductions were not evenly distributed across the grocery store.

Ultra-processed, calorie-dense foods – the kinds most closely associated with cravings – saw the sharpest declines. Spending on savory snacks dropped by about 10%, with similarly large decreases in sweets, baked goods and cookies. Even staples like bread, meat and eggs declined.

Only a handful of categories showed increases. Yogurt rose the most, followed by fresh fruit, nutrition bars and meat snacks.

"The main pattern is a reduction in overall food purchases. Only a small number of categories show increases, and those increases are modest relative to the overall decline," Hristakeva said.

The effects extended beyond the supermarket. Spending at limited-service restaurants such as fast-food chains and coffee shops fell sharply as well.

[...] Notably, about one-third of users stopped taking the medication during the study period. When they did, their food spending reverted to pre-adoption levels – and their grocery baskets became slightly less healthy than before they started, driven in part by increased spending on categories such as candy and chocolate.

That movement underscores an important limitation, the authors caution. The study cannot fully separate the biological effects of the drugs from other lifestyle changes users may make at the same time. However, evidence from clinical trials, combined with the observed reversion in spending after discontinuation, suggests appetite suppression is likely a key mechanism behind the spending changes.

Journal Reference: Hristakeva, S., Liaukonytė, J., & Feler, L. (2025). EXPRESS: The No-Hunger Games: How GLP-1 Medication Adoption is Changing Consumer Food Demand. Journal of Marketing Research, 0(ja). https://doi.org/10.1177/00222437251412834


Original Submission

posted by jelizondo on Saturday January 03, @11:48PM   Printer-friendly

https://phys.org/news/2025-12-scientists-outline-atomic-scale-polaritons.html

Controlling light at dimensions thousands of times smaller than the thickness of a human hair is one of the pillars of modern nanotechnology.

An international team led by the Quantum Nano-Optics Group of the University of Oviedo and the Nanomaterials and Nanotechnology Research Center (CINN/Principality of Asturias-CSIC) has published a review article in Nature Nanotechnology detailing how to manipulate fundamental optical phenomena when light couples to matter in atomically thin materials.

The study focuses on polaritons, hybrid quasiparticles that emerge when light and matter interact intensely. By using low-symmetry materials, known as van der Waals materials, light ceases to propagate in a conventional way and instead travels along specific directions, a characteristic that gives rise to phenomena that challenge conventional optics.

Among the findings reviewed are behaviors such as negative refraction, where light bends in the opposite direction to the usual one when crossing a boundary between materials, or canalized propagation, which makes it possible to guide energy without it dispersing.

"These properties offer unprecedented control over light–matter interaction in regions of the spectrum ranging from the visible to the terahertz," the team describes in the article.

This research is part of the TWISTOPTICS project, led by University of Oviedo professor Pablo Alonso González. This project is dedicated to the study of how twisting or stacking nanometric layers—a technique reminiscent of atomic-scale "Lego" pieces—makes it possible to design physical properties à la carte.

The publication is the result of an international collaboration in which—alongside the University of Oviedo—leading centers such as the Beijing Institute of Technology (BIT), the Donostia International Physics Center (DIPC), and the Max Planck Institute have participated.

The theoretical and experimental framework presented in this work lays the foundations for future practical implementations in various technological sectors, including integrated optical circuits, high-sensitivity biosensors, thermal management, and super-resolution imaging.

More information: Yixi Zhou et al, Fundamental optical phenomena of strongly anisotropic polaritons at the nanoscale, Nature Nanotechnology (2025). DOI: 10.1038/s41565-025-02039-3


Original Submission

posted by hubie on Saturday January 03, @07:04PM   Printer-friendly

One small step for chips, one giant leap for a lack of impurities:

A team from Cardiff, Wales, is experimenting with the feasibility of building semiconductors in space, and its most recent success is another step forward towards its goal. According to the BBC, Space Forge's microwave-sized furnace has been switched on in space and has reached 1,000°C (1,832°F) — one of the most important parts of the manufacturing process that the company needs to validate in space.

"This is so important because it's one of the core ingredients that we need for our in-space manufacturing process," Payload Operations Lead Veronica Vera told the BBC. "So being able to demonstrate this is amazing." Semiconductor manufacturing is a costly and labor-intensive endeavor on Earth, and while putting it in orbit might seem far more complicated, making chips in space offers some theoretical advantages. For example, microgravity conditions would help the atoms in semiconductors line up perfectly, while the lack of an atmosphere would also reduce the chance of contaminants affecting the wafer.

These two things would help reduce imperfections in the final wafer output, resulting in a much more efficient fab. "The work that we're doing now is allowing us to create semiconductors up to 4,000 times purer in space than we can currently make here today," Space Forge CEO Josh Western told the publication. "This sort of semiconductor would go on to be in the 5G tower in which you get your mobile phone signal, it's going to be in the car charger you plug an EV into, it's going to be in the latest planes."

Space Forge launched its first satellite in June 2025, hitching a ride on the SpaceX Transporter-14 rideshare mission. However, it still took the company several months before it finally succeeded in turning on its furnace, showing how complicated this project can get. Nevertheless, this advancement is quite promising, with Space Forge planning to build a bigger space factory with the capacity to output 10,000 chips. Aside from that, it also needs to work on a way to bring the finished products back to the surface. Other companies are also experimenting with orbital fabs, with U.S. startup Besxar planning to send "Fabships" into space on Falcon 9 booster rockets.

Putting semiconductor manufacturing in space could help reduce the massive amounts of power and water that these processes consume on Earth, while also yielding more wafers with fewer impurities. However, we also have to consider the huge environmental impact of launching multiple rockets per day just to deliver the raw materials and pick up the finished products from orbit.


Original Submission

posted by hubie on Saturday January 03, @02:19PM   Printer-friendly

Consumes 1/3 the power of optical, and costs about 1/3 as much:

Scale-up connectivity is crucial for the performance of rack-scale AI systems, but achieving high bandwidth and low latency over copper wires is becoming more difficult with each generation. Optical interconnects for scale-up connectivity are a possibility, but they may be overkill, so start-ups Point2 and AttoTude propose radio-based interconnects operating at millimeter-wave and terahertz frequencies over waveguides that connect to systems using standard pluggable connectors, reports IEEE Spectrum.

Point2's implementation uses what it calls an 'active radio cable' built from eight 'e-Tube' waveguides. Each waveguide carries data on two frequencies, 90 GHz and 225 GHz, and plug-in modules at both ends convert digital signals directly into modulated millimeter-wave radio and back again. A full cable delivers 1.6 Tb/s, measures 8.1 mm across (about half the volume of a comparable active copper cable), and can reach up to seven meters, more than enough for scale-up connectivity. Point2 says the design consumes roughly one-third the power of optical links, costs about one-third as much, and adds as little as one-thousandth the latency.
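For a rough sense of what those figures imply per lane, the back-of-the-envelope sketch below divides the quoted 1.6 Tb/s aggregate evenly across the eight e-Tube waveguides and the two carriers each one uses. The even split is an assumption on our part, since the article only gives the total.

```python
# Back-of-the-envelope split of Point2's quoted 1.6 Tb/s aggregate rate.
# Assumption (not stated in the article): the total is divided evenly
# across the eight e-Tube waveguides and the two carriers per waveguide.
TOTAL_TBPS = 1.6
WAVEGUIDES = 8
CARRIERS_PER_WAVEGUIDE = 2  # the 90 GHz and 225 GHz carriers

per_waveguide_gbps = TOTAL_TBPS * 1000 / WAVEGUIDES               # 200 Gb/s
per_carrier_gbps = per_waveguide_gbps / CARRIERS_PER_WAVEGUIDE    # 100 Gb/s

print(f"Per waveguide: {per_waveguide_gbps:.0f} Gb/s")
print(f"Per carrier:   {per_carrier_gbps:.0f} Gb/s")
```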

A notable aspect of Point2's approach is the relative maturity of its technology. The radio transceivers can be built at standard semiconductor fabs on well-established process nodes; the company has already demonstrated this with a 28nm chip developed with the Korea Advanced Institute of Science and Technology (KAIST). Its partners Molex and Foxconn Interconnect Technology have also shown that the specialized cables can be produced on existing lines without major retooling.

AttoTude is pursuing a similar concept at even higher frequencies. Its system combines a digital interface, a terahertz signal generator, and a mixer that encodes data onto carriers between 300 and 3,000 GHz and feeds the signal into a narrow dielectric waveguide. Early versions used hollow copper tubes, while later generations rely on fibers roughly 200 micrometers across with losses as low as 0.3 dB per meter, considerably lower than copper. The company has demonstrated 224 Gb/s transmission over four meters at 970 GHz and projects viable reaches of around 20 meters.
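Scaling only the quoted 0.3 dB/m figure gives a feel for what the demonstrated and projected reaches cost in signal power. This is a sketch built from that single number; the article gives no transmit power, receiver sensitivity, or connector losses.

```python
# Scale AttoTude's quoted waveguide loss of 0.3 dB/m to the two reaches
# mentioned in the article. Nothing else about the link budget is known.
LOSS_DB_PER_M = 0.3

for reach_m in (4, 20):  # demonstrated reach and projected reach
    total_db = LOSS_DB_PER_M * reach_m
    power_fraction = 10 ** (-total_db / 10)
    print(f"{reach_m:>2} m: {total_db:.1f} dB waveguide loss "
          f"(~{power_fraction * 100:.0f}% of the launched power arrives)")
```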

Both companies use waveguides instead of cables because copper simply fails at millimeter-wave and terahertz frequencies. Copper cables can carry very high data rates, but only by becoming thicker, shorter, and more power-hungry, and their losses and jitter rise so quickly that the link budget collapses, making them unusable for such applications. Waveguides, meanwhile, are not an exotic choice; they are among the few viable options for interconnects with terabit-per-second-class bandwidth.


Original Submission

posted by hubie on Saturday January 03, @09:30AM   Printer-friendly

A proof-of-concept is now available on the internet:

MongoBleed, a high-severity vulnerability affecting multiple versions of MongoDB, is now easy to exploit: a proof-of-concept (PoC) has been published on the web.

Earlier this week, security researcher Joe Desimone published code that exploits a "read of uninitialized heap memory" vulnerability tracked as CVE-2025-14847. This vulnerability, rated 8.7/10 (high), stems from "mismatched length fields in Zlib compressed protocol headers".

By sending a poisoned message that claims a larger size when decompressed, an attacker can cause the server to allocate an oversized memory buffer; the portion of that buffer the real payload never fills is read back uninitialized, leaking in-memory data that can contain sensitive information such as credentials, cloud keys, session tokens, API keys, configurations, and other data.
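The sketch below illustrates the general pattern being described, trusting an attacker-supplied decompressed-length field. It is not MongoDB's actual code; the function and parameter names are made up, and because Python zero-fills new buffers it can only show the flawed control flow, not the stale heap contents a C/C++ server would leak.

```python
import zlib

def handle_compressed_message(claimed_size: int, compressed: bytes) -> bytes:
    """Illustrative vulnerable pattern, NOT MongoDB's implementation.

    The output buffer is sized from the attacker-controlled length field in
    the protocol header instead of from what actually decompresses. In a
    C/C++ server the bytes past len(actual) would be stale heap contents.
    """
    buf = bytearray(claimed_size)      # sized from the untrusted header field
    actual = zlib.decompress(compressed)
    buf[:len(actual)] = actual
    return bytes(buf)                  # claimed_size bytes go back on the wire

def handle_compressed_message_fixed(claimed_size: int, compressed: bytes) -> bytes:
    """Reject messages whose real decompressed size disagrees with the header."""
    actual = zlib.decompress(compressed)
    if len(actual) != claimed_size:
        raise ValueError("compressed length field does not match payload")
    return actual
```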

What's more, attackers exploiting MongoBleed do not need valid credentials to pull off the attack.

In its writeup, BleepingComputer confirms that roughly 87,000 potentially vulnerable instances are exposed on the public internet, according to data from Censys. Most are located in the United States (20,000), followed by China (17,000) and Germany (around 8,000).

Here is a list of all the vulnerable versions:

  • MongoDB 8.2.0 through 8.2.3
  • MongoDB 8.0.0 through 8.0.16
  • MongoDB 7.0.0 through 7.0.26
  • MongoDB 6.0.0 through 6.0.26
  • MongoDB 5.0.0 through 5.0.31
  • MongoDB 4.4.0 through 4.4.29
  • All MongoDB Server v4.2 versions
  • All MongoDB Server v4.0 versions
  • All MongoDB Server v3.6 versions

If you are running any of the above, make sure to patch: a fix for self-hosted instances has been available since December 19. Users running MongoDB Atlas don't need to do anything, since their instances were patched automatically.
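Here is a minimal sketch of an affected-version check, built only from the ranges listed above. It assumes a release newer than each listed "through" ceiling on the same branch carries the fix, and it knows nothing about branches that are not listed.

```python
# Check a MongoDB version string against the affected ranges listed above.
# Assumption: anything newer than a listed "through" ceiling on the same
# branch is fixed; branches not listed here are not handled at all.
VULNERABLE_CEILINGS = {
    (8, 2): (8, 2, 3),
    (8, 0): (8, 0, 16),
    (7, 0): (7, 0, 26),
    (6, 0): (6, 0, 26),
    (5, 0): (5, 0, 31),
    (4, 4): (4, 4, 29),
}
FULLY_VULNERABLE_BRANCHES = {(4, 2), (4, 0), (3, 6)}  # every release affected

def is_vulnerable(version: str) -> bool:
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    if (major, minor) in FULLY_VULNERABLE_BRANCHES:
        return True
    ceiling = VULNERABLE_CEILINGS.get((major, minor))
    return ceiling is not None and (major, minor, patch) <= ceiling

print(is_vulnerable("7.0.26"))  # True  -> patch now
print(is_vulnerable("8.0.17"))  # False -> already past the affected range
```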

So far, there are no confirmed reports of in-the-wild abuse, although some researchers are linking MongoBleed to the recent Ubisoft Rainbow Six Siege breach.


Original Submission

posted by hubie on Saturday January 03, @04:45AM   Printer-friendly
from the year-of-the-QNX-desktop dept.

https://www.osnews.com/story/144075/qnx-releases-new-desktop-focused-image-qnx-8-0-with-xfce-on-wayland/

Christmas is already behind us, but since this is an announcement from 11 December – that I missed – I'm calling this a very interesting and surprising Christmas present.

The team and I are beyond excited to share what we've been cooking up over the last little while: a full desktop environment running on QNX 8.0, with support for self-hosted compilation! This environment both makes it easier for newly-minted QNX developers to get started with building for QNX, but it also vastly simplifies the process of porting Linux applications and libraries to QNX 8.0.
        ↫ John Hanam at the QNX Developer Blog

What we have here is QNX 8.0 running the Xfce desktop environment on Wayland, a whole slew of build and development tools (clang, gcc, git, etc.), a ton of popular code editors and IDEs, a web browser (it looks like GNOME Web?), access to all the ports on the QNX Open-Source Dashboard, and more. For now, it's only available as a Qemu image to run on top of Ubuntu, but the plan is to also release an x86 image in the coming months so you can run it directly on real hardware.

This isn't quite the same as the QNX of old with its unique Photon microGUI, but it's been known for a while now that Photon hasn't been actively developed in a long time and is basically abandoned. Running Xfce on Wayland is obviously a much more sensible solution, and one that's quite future-proof, too. As a certified QNX desktop enthusiast of yore, I can't wait for the x86 image to arrive so I can try this out properly.

There are downsides. This image, too, is encumbered by annoying non-commercial license requirements and sign-ups, and this wouldn't be the first time QNX has started an enthusiast effort only to abandon it shortly afterwards. Buyer beware, then, but I'm cautiously optimistic.

Related: QNX at Wikipedia


Original Submission