posted by janrinok on Saturday March 28, @04:51PM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/22/cern_eggheads_burn_ai_into/

Like the major league pitcher who comes to his kid's take-your-parent-to-school day, CERN's Thea Aarrestad gave a presentation at the virtual Monster Scale Summit earlier this month about meeting a set of ultra-stringent requirements that few of her peers may ever experience.

Aarrestad is an assistant professor of particle physics at ETH Zurich. At CERN (the European Organization for Nuclear Research), she uses machine learning to optimize data collection from the Large Hadron Collider (LHC). Her specialty is anomaly detection, a core component of any proper observability system.

Each year the LHC produces 40,000 exabytes of unfiltered sensor data alone, or about a fourth of the size of the entire Internet, Aarrestad estimated. CERN can't store all that data. As a result, "We have to reduce that data in real time to something we can afford to keep."

By "real time," she means extreme real time. The LHC detector systems process data at speeds up to hundreds of terabytes per second, far more than Google or Netflix, whose latency requirements are also far easier to hit as well. 

"Algorithms processing this data must be extremely fast," Aarrestad said. So fast that decisions must be burned into the chip design itself.

Housed in a 27-kilometer ring a hundred meters underground, straddling the border between Switzerland and France, the LHC smashes subatomic particles together at near-light speeds. The resulting collisions are expected to produce new types of matter that fill out our understanding of the Standard Model of particle physics — the operating system of the universe.

At any given time, there are about 2,800 bunches of protons whizzing around the ring at nearly the speed of light, separated by 25-nanosecond intervals. Just before they reach one of the four underground detectors, specialized magnets squeeze these bunches together to increase the odds of an interaction. Nonetheless, a direct hit is incredibly rare: out of the billions of protons in each bunch, only about 60 pairs actually collide during a crossing.

When particles do collide, their energy is converted into a mass of new outgoing particles (E=mc² in the house!). These new particles "shower" through CERN's detectors, leaving traces "which we try to reconstruct," she said, in order to identify any new particles produced in the ensuing melee.

Each collision produces a few megabytes of data, and there are roughly a billion collisions per second, resulting in about a petabyte of data (about the size of the entire Netflix library). 

Rather than try to transport all this data up to ground level, CERN found it more feasible to create a monster-sized edge compute system to sort out the interesting bits at the detector level instead.

"If we had infinite compute we could look at all of it," Aarrestad said. But less than 0.02% of this data actually gets saved and analyzed. It is up to the detectors themselves to pick out the action scenes.

The detectors, built on ASICs, buffer the captured data for up to 4 microseconds, after which the data "falls over the cliff," forever lost to history if it is not saved.

Making that decision is the "Level One Trigger," an aggregate of about 1,000 FPGAs that digitally reconstructs each event from the reduced event information provided by the detector via fiber optic line at about 10 TB/sec. The trigger produces a single value: either an "accept" (1) or a "reject" (0).

Making the decision to keep or lose a collision is the job of the anomaly-detection algorithm. It has to be incredibly selective, rejecting more than 99.7 percent of the input outright. The algo, affectionately named AXOL1TL, is trained on the "background" — the areas of the Standard Model that have largely been sussed out already. It knows the typical topology of a standard collision, allowing it to instantly flag events that fall outside those boundaries. As Aarrestad put it, it's hunting for "rare physics."
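
The article doesn't detail AXOL1TL's internals, but the recipe it describes (train only on known "background" physics, then keep whatever the model handles badly) is the classic autoencoder approach to anomaly detection. Below is a minimal sketch in PyTorch; the 57-feature input, layer sizes, and threshold are purely illustrative assumptions, not CERN's actual configuration.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for reduced event features (energies, angles, etc.).
    background = torch.randn(10000, 57)    # "known physics" training sample

    model = nn.Sequential(                 # tiny encoder/decoder pair
        nn.Linear(57, 16), nn.ReLU(),
        nn.Linear(16, 4),                  # bottleneck forces compression
        nn.Linear(4, 16), nn.ReLU(),
        nn.Linear(16, 57),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(200):                   # learn to reconstruct the background
        opt.zero_grad()
        loss = loss_fn(model(background), background)
        loss.backward()
        opt.step()

    # At trigger time: events the model reconstructs poorly look unlike the
    # background it was trained on, so they get the accept bit.
    def accept(event, threshold=2.0):      # threshold is a made-up number
        with torch.no_grad():
            err = ((model(event) - event) ** 2).mean()
        return int(err > threshold)        # 1 = keep, 0 = reject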

The algorithm must make a decision within 50 nanoseconds. Only about 0.02% of all collision data, or about 110,000 events per second, make the cut, and are subsequently saved and transported to ground level. Even this slimmed-down throughput results in terabytes per second being sent up to the on-ground servers. 

Once on the surface, the data goes through a second round of filtering, called the "High Level Trigger," which again discards the vast majority of captured collisions, identifying only about 1,000 interesting collisions from the 100,000 events per second that come through the pipe. This system uses 25,600 CPUs and 400 GPUs to reconstruct the original collisions and analyze the results, and produces about a petabyte a day.

"This is the data we will actually analyze," Aarrestad said.

From there the data is replicated across 170 sites in 42 countries, where it can be analyzed by researchers worldwide, with an aggregate power of 1.4 million computer cores. 

The LHC detectors are a hothouse environment rarely encountered by AI. So much so that the CERN engineers had to create their own toolbox.

Sure, there are already plenty of real-time libraries for consumer applications such as noise-cancelling headphones, things like MLPerfMobile and MLPerfTiny. But they don't come anywhere close to supporting the streaming data rates and ultra-low latencies CERN requires.

So CERN trained machine learning models "to be small from the get-go," she said. They were quantized, pruned, parallelized, and distilled down to the essential knowledge only. Every operation on an FPGA is quantized. Unique bitwidths were defined for each parameter, and the quantized operations were made differentiable, so they could be optimized using gradient descent.
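
Quantization-aware training of this kind is what tools like Google's QKeras automate; the core trick, often called a straight-through estimator, can be sketched in a few lines of PyTorch (the bitwidths and shapes here are made-up examples):

    import torch

    class FakeQuant(torch.autograd.Function):
        # Forward pass rounds weights to a fixed bitwidth; backward pass
        # pretends the rounding never happened, so gradients still flow.
        @staticmethod
        def forward(ctx, x, bits):
            scale = 2 ** (bits - 1) - 1          # symmetric integer range
            return torch.round(x.clamp(-1, 1) * scale) / scale

        @staticmethod
        def backward(ctx, grad_out):
            return grad_out, None                # straight-through gradient

    def qlinear(x, weight, bits):
        # Each parameter tensor can carry its own bitwidth, mirroring the
        # per-parameter heterogeneous quantization described above.
        return x @ FakeQuant.apply(weight, bits).T

    w = torch.randn(8, 57, requires_grad=True)
    x = torch.randn(32, 57)
    y = qlinear(x, w, bits=6)     # 6-bit weights in the forward pass...
    y.sum().backward()            # ...yet w still receives a gradient
    print(w.grad.shape)           # torch.Size([8, 57])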

The engineering team developed a transpiler, HLS4ML, that rewrites the model as C++ code targeted at specific platforms, so it can run on an accelerator or system-on-a-chip or a custom FPGA, or even be used to "print silicon" as an ASIC.
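
HLS4ML is open source, and the basic flow is short enough to show. A hedged sketch (API details vary by version, and the FPGA part number and output directory are examples, not CERN's actual targets):

    import hls4ml
    from tensorflow import keras

    # A deliberately tiny model standing in for a trigger algorithm.
    model = keras.Sequential([
        keras.layers.Dense(16, activation='relu', input_shape=(57,)),
        keras.layers.Dense(1, activation='sigmoid'),   # accept/reject score
    ])

    # Per-layer precision and parallelism knobs live in this config dict.
    config = hls4ml.utils.config_from_keras_model(model, granularity='name')

    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        output_dir='l1-trigger-prj',      # hypothetical project directory
        part='xcvu13p-flga2577-2-e',      # example Xilinx part
    )
    hls_model.compile()   # C simulation of the generated C++
    # hls_model.build()   # full synthesis requires the vendor toolchain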

The detector architecture breaks from the traditional Von Neumann model of memory-processor-I/O. Nothing is sequentially driven. Rather, it is based on the "availability of data," she said. "As soon as this data becomes available, the next process will start."

Most crucially, decisions must be made on-chip – nothing can be handed off to even very fast memory. Every piece of hardware is tailored for a specific model. Decisions take place at design time. Each layer of FPGAs is a separate compute unit.

A good chunk of the on-chip silicon is taken up by pre-calculations, saving the processor from redoing each calculation anew. The output for every possible input is stored in a lookup table.
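
The lookup-table trick works because quantization makes the input space finite: an 8-bit input has only 256 possible values, so an expensive function can be tabulated once at design time. A toy illustration in Python:

    import math

    BITS = 8
    SCALE = (2 ** (BITS - 1)) - 1   # 127 for 8-bit symmetric codes

    # Precompute, say, a sigmoid for every representable 8-bit input code.
    LUT = [1.0 / (1.0 + math.exp(-(code / SCALE) * 8.0))
           for code in range(-(SCALE + 1), SCALE + 1)]    # codes -128..127

    def sigmoid_lut(code: int) -> float:
        # Runtime cost is one array index and no exp(), which is
        # essentially what the FPGA's on-chip tables buy.
        return LUT[code + SCALE + 1]

    assert abs(sigmoid_lut(0) - 0.5) < 1e-9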

Naturally, you can't put huge models on these slivers of silicon. No room for huge transformer deep learning models here. This is where CERN found that tree-based models are very powerful compared to the deep learning ones.

In CERN's experience, tree-based models offer the same performance at a fraction of the cost of deep learning models. This is not surprising, given that the Standard Model's output can be viewed as a collection of tabular data. For each collision, the LHC spits out a structured set of discrete measurements.
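
That intuition is easy to reproduce on any tabular dataset. A quick sketch with scikit-learn, using synthetic stand-in data rather than real collision records:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in tabular events: one row per collision, one column per measurement.
    X, y = make_classification(n_samples=20000, n_features=57,
                               n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # A boosted tree ensemble: no GPU, no quantization, no distillation needed.
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
    clf.fit(X_tr, y_tr)
    print(f"accuracy: {clf.score(X_te, y_te):.3f}")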

CERN is trying to measure all of the parameters of collisions to the 5-sigma level – roughly 99.99994 percent confidence, the gold standard for claiming a discovery. The Higgs boson was found using this standard.

The LHC has found at least 80 other hadrons, or particles held together by the strong nuclear force (including one last week).

The hunt is on for new processes that occur in fewer than one in a trillion collisions. 

At the end of this year, the LHC is shutting down to make way for the High Luminosity LHC, due to become operational in 2031. It will provide more of the sweet, sweet data particle physicists crave.

It will have more powerful magnets to focus the beams on very tiny spots. The bunches of protons will be doubled in size ("so there is more of a probability that those protons will talk to each other"). 

That means a lot more collisions and a 10-fold increase of data, leading to a much denser "event complexity." The event size jumps from 2MB to 8MB, but the resulting trails of data will jump from 4 Tb/sec to 63 Tb/sec.

The detectors are being upgraded to identify each collision, then track each particle-pairing back to its original collision point – all within a few microseconds. 

While the frontier AI labs build ever-larger models, CERN is, in many ways, heading in the opposite direction, embracing aggressive anomaly detection, heterogeneously quantized transformers and other tricks to make the AI smaller and faster than ever. When building our understanding of the universe, it is sometimes better to know what information to throw away.


Original Submission

posted by janrinok on Saturday March 28, @12:09PM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/23/musk_terafab/

Elon Musk has put Tesla, SpaceX, and xAI in harness to build a chip fabrication outfit called "Terafab" capable of producing a terawatt's worth of computing power each year, then sending most of it into space.

In a Sunday afternoon presentation, Musk said the world's chipmakers currently produce 20 gigawatts' worth of compute power each year, and that whatever new capacity his key suppliers Nvidia, Samsung, and Micron produce, he will buy.

But he can't see how they could produce the terawatt of compute power he wants each year, so he has built an "advanced fab" in Austin, Texas, that he says can produce "any kind of chip," as well as lithography masks.

Musk said his companies have developed a recursive process that allows rapid chip production, plus frequent redesigns to improve performance.

He mentioned "some very interesting new physics" that he is "confident will work. It's just a question of when."

"We are going to push the limits of physics in compute and do some wild and crazy things," he said.

He plans to produce two chips. One will be dedicated to inference and for use on Earth, mostly in humanoid robots that he thinks will sell in volumes of one to ten billion a year. The upper end of that range would mean a single year's production of robots outnumbers humanity.

The second chip will power orbiting computers that ride in satellites packing just 100 kW of compute power – about the energy consumption of a rack packed full of high-end AI gear. In time, Musk expects to launch megawatt-scale satellites.

He also mentioned building a bigger version of SpaceX's Starship that can carry 200 tons into space and shared his back-of-the-envelope math that suggests putting a terawatt of compute into space, along with all the necessary solar power and other infrastructure, means launching 10 million tons into space every year.

Our back-of-the-envelope math suggests that means Musk needs to launch 50,000 Starships a year, or about 137 a day, a rate of one giant rocket roughly every ten minutes.
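
For the curious, here is the envelope in question (all inputs are Musk's own figures from the presentation):

    tons_per_year = 10_000_000          # mass Musk says must reach orbit yearly
    tons_per_ship = 200                 # uprated Starship payload
    launches = tons_per_year / tons_per_ship
    print(launches)                     # 50000.0 launches a year
    print(launches / 365)               # ~137 a day
    print(24 * 60 / (launches / 365))   # one launch roughly every 10.5 minutes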

The reason for doing this, Musk said, is to ensure humans find a home among the stars and a future that will be "like the best science fiction you have ever read. Like Star Trek, Iain Banks, Asimov, or Heinlein."

Don't mention the Borg, R. Daneel Olivaw, the Mule, hegemonizing swarms, or the soup at the end of Stranger in a Strange Land.

Musk didn't explain how he will find sufficient resources to make any of this happen, a question that's especially important at this moment given the war in Iran has seen production of helium – an essential component in semiconductor manufacturing – fall by 30 percent.

Musk challenged doubters by pointing out Tesla and SpaceX defied critics who predicted electric cars and reusable rockets would not be feasible or economical.

"I think it's important to consider the grandness of the universe and what we can do that is much greater than what we've done before, as opposed to worrying about sort of small squabbles on Earth."

Might that have been a reference to his unproductive time at the head of the so-called Department of Government Efficiency? Or perhaps it was earthly spats alone that prevented Musk from delivering on his 2019 prediction that Tesla would deploy one million self-driving taxis in 2020? Robocab-watchers estimate about 200 self-driving Tesla taxis are currently undergoing tests.

As his appreciative audience cheered him on, Musk discussed his vision for launching a petawatt of computing power each year, made on the Moon and sent out into the solar system on a gadget he called an "electromagnetic mass driver" that looks like a kind of railgun.

"I want to live long enough to see the mass driver on the Moon," the 54-year-old said.

US government data suggests he's got 22 years in which to make it happen.

[Ed's Comment: One of our reviewing editors wrote the following in response to the line 'He mentioned "some very interesting new physics" that he is "confident will work. It's just a question of when"': So magic. Got it.]


Original Submission

posted by janrinok on Saturday March 28, @07:24AM   Printer-friendly

https://www.theregister.com/2026/03/23/asia_tech_news_roundup/

Australia's government on Monday announced a set of datacenter "expectations" to guide would-be bit barn builders who contemplate breaking ground down under.

The expectations strongly suggest that datacenter builders create their own electricity generation capacity, and pay for energy transmission and infrastructure costs. "Energy-intensive data centre proposals not closely aligned with the expectations will not be prioritised by Commonwealth regulatory assessments," states the formal expectations document.

The expectations also call on datacenter operators to prioritise Australia's national interest, use water sustainably and responsibly, invest in local skills and jobs, and do all that while strengthening the nation's "research, innovation and local capability."

Industry lobby group the Tech Council of Australia welcomed the expectations, as did the Electrical Trades Union.


Original Submission

posted by janrinok on Saturday March 28, @02:36AM   Printer-friendly
from the are-we-not-doomed-betteridge-says-no dept.

https://spaceweather.com/archive.php?view=1&day=17&month=03&year=2026

Ten thousand Starlink satellites: On March 16th, a Falcon 9 rocket lifted off from Vandenberg Space Force Base carrying 25 Starlink satellites. It was a routine launch for SpaceX, the 33rd of 2026. But those 25 Starlinks crossed a milestone. For the first time in history, more than 10,000 Starlink satellites were simultaneously circling Earth.

Consider where we started: When SpaceX launched its first operational Starlinks in May 2019, there were roughly 2,000 active satellites of all kinds orbiting Earth. Starlink alone now outnumbers the entire pre-2019 fleet five to one. The constellation has utterly transformed the orbital environment.

[...] The numbers are sobering. Since 2019, more than 11,596 Starlinks have been launched. Of those, more than 1,500 have already reentered the atmosphere as SpaceX retires older satellites to make room for newer models. Each re-entry deposits about 30 kg of aluminum oxide into the upper atmosphere--an uncontrolled chemistry experiment on a planetary scale.

[...] With so many Starlinks circling Earth, the orbital environment is increasingly unstable. It's "an orbital house of cards," according to a study led by Sarah Thiele of Princeton University, which finds that a severe solar storm could kickstart widespread catastrophic collisions in as little as 2-3 days. SpaceX itself reported to the FCC that Starlink satellites performed roughly 300,000 collision-avoidance maneuvers in 2025 alone.

References:
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2024GL109280
arXiv:2512.09643 [astro-ph.EP] (v2): https://doi.org/10.48550/arXiv.2512.09643


Original Submission

posted by janrinok on Friday March 27, @09:50PM   Printer-friendly

Concerns Raised Over Shahed Kamikaze Drone Listings on Alibaba

https://www.tomshardware.com/tech-industry/concerns-raised-over-shahed-kamikaze-drone-listings-on-alibaba-they-featured-ai-guidance-to-lock-onto-people-building-vehicles-ships-etc

Chinese eTail giant Alibaba has removed listings and suspended the accounts of sellers that were found to be advertising “cruise missiles” and “suicide attack drones.” Australia’s ABC News uncovered the concerning sales of several one-way attack drone models, some of which looked strikingly similar to the Iranian Shahed design, others with a cruise missile profile.

The Alibaba “commercial” listings touted the drones as “pesticide sprayers,” or for “aerial mapping”. However, ABC dug into the product catalogs to confirm the Shahed-a-likes were “suicide attack drones” capable of carrying 2 kg (4.4 lb) warheads for distances up to 100 km. Moreover, with their thermal imaging and AI guidance, these devices could "achieve autonomous locking of targets (people, building, vehicles, ships, etc.)”

These kamikaze drones would not be casual impulse buys. ABC reports that the listing prices of the cruise missile style drones were approaching $50,000. If that sum was reported in Australian dollars, it equates to approximately USD $35,000.

ABC continued to look closely through the various supplier catalogs it found from the Alibaba suppliers. One of the China-based suppliers offered five kinds of "suicide attack drones" with two having near identical dimensions and specs to the Iranian-made Shahed 136, says the news report.

Drones inhabit a twilight dual-use segment of the commercial landscape. Many can quickly and easily be adapted for peaceful purposes or war duties. An Alibaba statement received by ABC News was clear, though. The online retailer stated that it “strictly prohibits the sale of military weapons.” It also acted quickly to remove what it characterized as non-compliant third-party listings.

Talking to a handful of the suppliers, the Australian news organization found that the sellers generally didn’t care what the drones they sold were used for. For example, one of the retailers contacted shrugged: “After the customer makes a purchase, what they use it for has nothing to do with us.”

Importantly, just because these kamikaze drone adverts exist, it doesn’t mean that the advertisers would actually ship these exact products.

'Cruise Missile' Drones and Low-Cost Shahed Knockoffs Listed on Alibaba:

One-way attack drones described as "cruise missiles" were listed on the popular online retail platform Alibaba for less than $50,000. Sellers described these long-range fixed-wing drones, similar in design to those used by Iran to attack nearby Gulf states, as suitable for "aerial mapping". But the same sellers' PDF sales catalogues, obtained by the ABC through the Alibaba platform, made it clear the drones were also designed for war.

After being notified, Alibaba removed the listings and said it suspended the sellers' accounts.

But experts said combat-drone proliferation was a growing problem, with the drones typically being sold under a pretence of "commercial" use.

One China-based supplier's catalogue listed two kinds of autonomous "cruise missile", equipped with thermal imaging "AI guidance".

[...] A small drone described in the catalogue as able to carry a 2-kilogram bomb 100 kilometres was listed by the seller on Alibaba as suitable for "pesticide spraying".

In another catalogue, a China-based supplier listed five kinds of "suicide attack drones" including two with near-identical dimensions and capabilities to the Iranian-made Shahed 136 one-way attack drone.

The threat of attack from Iran's Shahed drones, as well as ballistic missiles and drone boats, has effectively closed the Strait of Hormuz and choked global oil supplies.

Some sellers also publicly listed military hardware on Alibaba itself, rather than just in their catalogues. One seller listed a range of small "kamikaze drones", similar to the type being used to intercept drones in Ukraine and the Gulf, and "aerial delivery" drones depicted with mortar rounds.

Alibaba's official policies prohibit the sale of military equipment. In a statement, the Chinese-owned company said it "strictly prohibits the sale of military weapons" and "acted immediately upon notification to remove the non-compliant third-party listings".

Unlike guns, tanks or fighter jets, drones are "dual-use". Commercial versions can be relatively easily converted to military use.

As a result, long-range drones can be legitimately sold for commercial logistics and survey work, despite the fact they are also capable of flying hundreds of kilometres to deliver a 50kg warhead.

This blurring of conventional boundaries between commercial and military hardware makes it very hard to regulate or otherwise control the sale and spread of these drones as dangerous weapons.

[...] Malcolm Davis, a senior analyst with the Australian Strategic Policy Institute, said adapting small Chinese-made quadcopter drones for military use was "nothing new", but China would be keeping "a very close eye" on the export of larger long-range drones such as those listed on Alibaba.

Dr Davis said the war in the Middle East showed how long-range drones could neutralise conventional air defences. "This is the problem the Americans are facing," he said.


Original Submission #1 | Original Submission #2

posted by jelizondo on Friday March 27, @04:03PM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/23/palantir_fca/

US data miner Palantir has quietly landed inside the UK's financial watchdog, plugging into a trove of sensitive data as Whitehall simultaneously insists it wants to wean itself off exactly this kind of dependency.

The Financial Conduct Authority (FCA) has handed the American analytics biz a three-month trial contract worth more than £30,000 a week to analyze its internal "data lake," a sprawling repository of regulatory intelligence covering fraud, money laundering, insider trading, and consumer complaints.

According to The Guardian, which first reported on the deal, Palantir will gain access to data including case files, reports from banks and crypto firms, and even communications data such as emails, phone records, and social media material tied to investigations.

The idea, at least on paper, is straightforward: use Palantir's software to help sift signal from noise across the roughly 42,000 businesses the FCA oversees, and spot patterns of financial crime faster than human analysts can manage alone.

If this sounds familiar, that's because it is. Palantir has spent the past few years embedding itself across the British state – from the NHS to policing and defense – racking up more than £500 million in public sector contracts in the process.

Critics have long described this as a classic "land and expand" strategy: start with a narrowly scoped deployment, prove value, then become very hard to remove. The FCA deal, which appears to follow the same pattern, arrives just days after the government signaled that it wants to rethink how it buys technology, amid concerns about overreliance on a small number of large vendors and the need for more "sovereign" capability.

Yet here is another sensitive system being handed, at least temporarily, to a US company whose entire business is built on ingesting and analyzing other people's data.

The FCA, for its part, has stressed that Palantir is acting strictly as a "data processor," that all data remains hosted in the UK, and that the company cannot use the information to train its own models.

"Effective use of technology is vital in the fight against financial crime and helps us identify risks to the consumers we serve and markets we oversee," an FCA spokesperson told The Register. "We ran a competitive procurement process and have strict controls in place to ensure data is protected."

Those assurances mirror language used in earlier public sector deals, particularly in the NHS, where officials have repeatedly argued that contractual controls and technical safeguards govern use. Whether that is enough to calm critics is another matter.

There's also the small matter of optics. Palantir's track record – spanning US defense, intelligence, and immigration enforcement – has made it a lightning rod for concerns about surveillance and civil liberties, especially when deployed in civilian contexts.

Still, for regulators under pressure to do more with less, the appeal is clear. The FCA is sitting on vast amounts of data, much of it underused, and AI vendors are lining up to promise that they can turn it into actionable intelligence.

Whether that promise outweighs the risks of handing the keys – even temporarily – to a company that has made a habit of sticking around is a question the UK keeps asking, and so far, keeps answering the same way.


Original Submission

posted by jelizondo on Friday March 27, @11:20AM   Printer-friendly

Juan Carlos Pino says charcoal-conversion 'the best option we have'

Juan Carlos Pino, a Cuban mechanic with an eighth-grade education, may have found a way to outsmart the U.S. oil blockade.

Employing the kind of ingenuity many Cubans have developed over decades of U.S. sanctions, Pino, 56, modified his 1980 Polish-built Fiat Polski to run on charcoal, a cheaper and more abundant fuel than gasoline since Washington cut off oil shipments to the Caribbean island in January.

[...] "In a crisis like this, it's the best option we have," said Pino, who wants to modify a tractor next. "We need mobility, we need to be able to plant crops."

Pino built his device entirely from scrap and repurposed items. The charcoal burns inside a converted propane tank that is sealed shut with the lid of a transformer. A filter is made from a stainless steel milk jug stuffed with old clothes.

[...] Enter the inventor. Pino once created a machine, built from a motorcycle, to milk three cows at a time. He said he'd been contemplating the charcoal-fired automobile for several years, inspired at first by his late uncle. Pino also credited open-source technology promoted by Edmundo Ramos, an Argentine innovator behind DriveOnWaste.com.

[...] He said just about any engine can be converted to run on charcoal by drawing hot gas instead of gasoline into the carburetor.

Pino rolled out the charcoal-powered Polski on March 4. In one early test run, the car completed an 85-kilometre trip, reaching a top speed of 70 km/h.

[...] Cruz knows something about Cuban jury-rigging. He drives a 1953 Pontiac that runs on a 1940s Perkins engine with a Mercedes transmission, a steering system from the Czech group AVIA, and a differential made by the East German company Ifa.


Original Submission

posted by jelizondo on Friday March 27, @06:38AM   Printer-friendly

The FCC has deemed even US-based companies' products a security risk if they're made anywhere overseas.

The Federal Communications Commission today released a notice designating any consumer router manufactured outside the US as a security risk. The rule states that new foreign-made product models of network routers will land on the Covered List, a set of communications equipment seen as posing an unacceptable risk to national security. Previously purchased routers can still be used, and retailers can still sell models that were approved under the prior FCC policies. In an exception to the usual rule, routers included on the Covered List can continue to receive updates at least through March 1, 2027, although the date could potentially be extended.

The move stems from a goal in the White House's 2025 national security strategy that reads: "the United States must never be dependent on any outside power for core components—from raw materials to parts to finished products—necessary to the nation’s defense or economy."

[Source]: engadget


Original Submission

posted by jelizondo on Friday March 27, @01:55AM   Printer-friendly
from the now-with-even-more-QDs dept.

"This is a serious warning shot"

Germany recently banned TCL from marketing some of its TVs as QLED (quantum dot light-emitting diode), with a Munich court ruling that the TVs lack the quantum dot (QD) structure and performance associated with QLED TVs. The decision increases pressure on TV companies to be more honest with their marketing.

Samsung has actively campaigned against TCL's use of the term QLED. A year ago, Samsung sent Ars Technica results from testing performed by Intertek, a London-headquartered testing and certification company, on TCL's 65Q651G, 65Q681G, and 75Q651G. The results showed that the TVs lacked sufficient amounts of cadmium and indium (two chemicals used in QD TVs, either individually or in combination). Intertek reportedly tested the optical sheet, diffuser plate, and LED modules in each TV using a minimum detection standard of 0.5 mg/kg for cadmium and 2 mg/kg for indium.

At the time, a TCL representative told me that TCL had "definitive substantiation for the claims made regarding its QLED televisions."

But based on previous dissections of TCL TVs shared online [Videos not reviewed - Ed.] and conversations with industry experts, it seems those TVs may employ some QDs, but not enough to offer a significantly wider color gamut than similarly specced, non-QD rivals. It's common for TVs marketed as QD, especially budget sets, to rely primarily on phosphors, or on a combination of phosphors and QDs in varying ratios, for color conversion, rather than on QDs alone, as the terms QD TV and QLED suggest. Phosphors are cheaper than QDs, and their associated color performance in displays is not as good.

Other manufacturers, including Samsung [Videos not reviewed - Ed.], have been accused of marketing TVs that rely heavily on phosphors as QD or QLED.

[...] "Some products marketed as 'QLED' use conventional backlight architectures (standard phosphors, optical films, diffuser plates) and rely on picture modes or software tuning to create a more saturated 'vivid' look," a January whitepaper by TÜV Rheinland and QD supplier Nanosys reads. The whitepaper, "Re-defining a 'true' Quantum Dot Display," also points to devices that have QD material at "trace levels, or in packaging and integration designs that limit excitation and light extraction of certain wavelengths."

"In these cases, the display may still achieve competitive headline gamut coverage, yet the measurable optical signature of an effective QD system is absent or minimal," the whitepaper says. "The spectrum, color, volume behavior at high luminance, chromaticity stability, and temporal response can remain similar to those of non-QD LCD solutions."

For now, the German ruling brings needed scrutiny to "QLED" and other potentially misleading display terms.

[...] With TV marketing remaining murky—and often misleading—digging into detailed performance reviews remains the most reliable way to gauge how a display might perform in the real world.


Original Submission

posted by hubie on Thursday March 26, @09:05PM   Printer-friendly
from the global-hard-drive-enrichment dept.

From CNN:
Supreme Court says internet service provider isn't liable for bootlegged music downloads

In a major loss for the nation's music industry, the Supreme Court on Wednesday ruled that a major internet service provider is not liable for copyright infringement because it failed to kick known copyright violators off its network.

Justice Clarence Thomas wrote the opinion for a unanimous court.

The nation's largest record labels want to hold internet providers liable for copyright infringement because they declined to cut off online access to users they know are downloading bootlegged music.

[....] "Under our precedents, a company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights," Thomas wrote.

[.... rest omitted ....]

This is a sad shocking blow to the poor sad music industry. I shed so many tears for them. (Okay, not really)


Original Submission

posted by hubie on Thursday March 26, @04:23PM   Printer-friendly

As a managerial strategy, it seems a little misguided:

According to a column by the New York Times’ Kevin Roose, employees at companies including Meta and OpenAI compete on “internal leaderboards that show how many tokens[…]each worker consumes.” At Meta in particular (and also Shopify), Roose says volume of A.I. used has become a metric that goes into people’s evaluations, with managers “rewarding workers who make heavy use of A.I. tools and chastening those who don’t.”

Analogies are tricky here. One is tempted to say it’s like making painters compete to use the most paint, but even if the paint is just being splattered as quickly as possible, it’s at least going to be visible when the project is done. It’s a bit more like telling soldiers to gauge their battlefield success by the number of bullets fired, but suppressive fire that doesn’t hit anything has its place in war strategy. The best analogy I can come up with is this: it’s like NBA mascots being evaluated by how many t-shirts they fire out of their t-shirt cannons, but the t-shirts are made by Hermès.

The resulting numbers, in terms of both tokens and money, are absolutely staggering. One OpenAI engineer, according to Roose, burned through 210 billion tokens, which Roose equates to 33 Wikipedias. A Swedish software engineer claims to Roose that his company spends more than his salary on his Claude Code tokens alone.

This “tokenmaxxing” trend clearly stems in part from the use of “claws,” agentic AI platforms like OpenClaw, which are this year’s biggest supposed innovation in AI. OpenClaw’s virality was part of the big shift away from OpenAI’s GPT models and toward Claude this year by AI fanatics, and OpenAI subsequently hired OpenClaw’s creator, seemingly in a bid to maintain its position as the industry leader.

But even when used without an external claw platform, Claude Code is becoming more and more like OpenClaw lately, with a feature rolling out last week that enables more on-the-go vibe coding by letting users communicate with Claude Code more easily on their phones.


Original Submission

posted by hubie on Thursday March 26, @11:40AM   Printer-friendly

Self-driving cars are essentially AI supercomputers on wheels:

As the company is raking in cash from the AI infrastructure build-out, it's also expanding its output with several planned fabs in Japan, Singapore, and even a "megafab" in New York. These projects are expected to come online between 2028 and 2029, and the Micron CEO said the company is looking to boost output by 20% in 2026, which could help alleviate some of the pressure on the supply side. However, even as these new factories start production, Mehrotra predicts that there will be a new market that demands massive amounts of high-speed memory — self-driving cars.

There are six levels of vehicle autonomy, starting at L0 for cars that have no driving automation whatsoever. A vehicle with a single automated system (such as cruise control) counts as L1, while those equipped with advanced driver assistance systems (ADAS) that control both steering and acceleration, such as Tesla's Autopilot and Cadillac's Super Cruise, are considered L2. Vehicles with L4 autonomy, on the other hand, basically do not need human intervention in any task, like overtaking or deciding when to cross a busy intersection. However, they still give the driver the option to take control and manually drive the vehicle.

Nvidia announced that it’s working with Chinese carmakers BYD and Geely and Japanese marques Isuzu and Nissan to adopt the Nvidia Drive Hyperion platform. This is the AI chip maker’s end-to-end autonomous vehicle platform meant to deliver an L4 system to car manufacturers. Since this is an AI system, it will likely demand a lot of high-speed memory to be able to run effectively.

Most modern vehicles require at least 16GB of memory, but if car makers introduce L4 autonomy, they will definitely need a lot more RAM. We've seen this with the shortage of high-end Macs with up to 512GB of Unified Memory, as many users have become interested in running the likes of OpenClaw on their own systems. It has even gotten to the point that Apple pulled the $4,000 512GB Mac Studio from its online store and raised the 256GB version to $2,000. So, if carmakers started churning out hundreds of thousands, if not millions, of vehicles with AI-powered driverless features, Micron expects demand for automotive memory to pick up as well.


Original Submission

posted by hubie on Thursday March 26, @06:59AM   Printer-friendly

When particles in volcanic ash cloud rub together, some pick up positive charge and others negative – now physicists have finally elucidated how these different charges are determined:

Physicists have solved a longstanding mystery around the process that creates volcanic lightning: when similar particles rub together, why do some become positively charged while others become negatively charged?

The exchange of electric charge when two objects touch, called the triboelectric effect, is what causes hair to be attracted towards a balloon after rubbing.

In a cloud of volcanic ash, swirling particles of silicon dioxide exchange electric charge as they collide. The positively and negatively charged particles separate and lightning occurs when current flows between the two.

But physicists couldn’t explain what breaks the symmetry between two particles of the same material and causes charge to flow one way or the other.

“There are a lot of candidates,” says Galien Grosjean, now at the Autonomous University of Barcelona. “People suspect that humidity is important, or roughness, or the crystalline structure.”

While working at the Institute of Science and Technology Austria in Klosterneuburg, Grosjean wondered if the answer lay in carbon-containing molecules on the surface of the particles. Such molecules are ubiquitous in nature, and materials scientists try to keep these contaminants to a minimum. But Grosjean and his colleagues kept track of what cleaning their samples did to the electrification.

With ultrasound, they levitated a small particle of silicon dioxide, let it bounce once onto a target plate made of the same material and then measured its charge. “It might charge positive or negative. If positive, we would bake or clean it and redo the experiment – and then it would charge negative,” says Grosjean.

Analysis of the samples showed that the removal of carbon-containing molecules was indeed the controlling factor. "We saw that this effect overcomes everything else," says Grosjean.

Another giveaway was that a cleaned sample would become positively charged again after about a day, which is also how quickly it would acquire a fresh coat of carbon molecules from the air.

Daniel Lacks at Case Western Reserve University in Cleveland, Ohio, is impressed by the study. “People know surfaces have a lot of crap on them. But I’ve never seen that come up in triboelectric charging,” he says.

The discovery could be bad news for physicists, he fears. If carbon contamination determines the charging direction, precisely calculating how particles become charged will be very hard. “Prediction may just be something that will never happen,” says Lacks.

Journal Reference: Grosjean, G., Ostermann, M., Sauer, M. et al. Adventitious carbon breaks symmetry in oxide contact electrification. Nature 651, 626–631 (2026). https://doi.org/10.1038/s41586-025-10088-w


Original Submission

posted by janrinok on Thursday March 26, @02:12AM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/03/23/nasa_rfp_shuttle_relocation/

NASA has issued a draft Request for Proposals to move a flown space vehicle, a step some lawmakers see as progress toward relocating Space Shuttle Discovery from the Smithsonian Museum in Virginia to Houston, Texas.

The agency emphasized it was seeking feedback on transporting something like a flown Orion capsule as well as a Space Shuttle orbiter.

Administrator Jared Isaacman has yet to name the vehicle moving to Houston under the Trump administration's budget. Space Shuttle Discovery was not mentioned in the bill, although several US lawmakers have long sought to relocate the retired orbiter to Texas.

In a statement, US Senator John Cornyn (R-TX) said: "My law authorizing and funding the Space Shuttle Discovery's movement to Houston is being set into motion thanks to NASA's announcement, and I applaud Administrator Isaacman for keeping this process moving.

"Today is real progress in our mission to bring Discovery home, and I look forward to welcoming the shuttle home to Space City soon."

NASA's request signals movement on the issue, if not full resolution. It should also clarify how a vehicle transfer would be conducted and at what cost, though the agency has stopped short of asking bidders to commit to a specific price.

The request also contains language that should give bidders pause. It describes the vehicles as "irreplaceable national assets requiring preservation-focused handling." This is a standard that demands careful consideration before committing to a bid.

The Keep The Shuttle group was "delighted" with the document, however, a spokesperson told The Register: "NASA has instructed that any proposal to move Discovery keeps the shuttle intact – no 'disassembly' allowed. However, there is no way to move an intact shuttle ~40 miles to Quantico (as NASA suggested) or anywhere else on the Potomac. In short, NASA's first RFP is asking for the impossible."

Moving an Orion capsule is a far simpler task. The spokesperson pointed out: "NASA has used USAF cargo jets to move Orions in the past, and the RFP indicates that this will be the likely solution."

"So we're delighted, because NASA has committed to keeping the shuttle intact, and is on a direct path to send Artemis II to Houston – with a stop at the Moon first of course!"

The equipment NASA used to transport the Space Shuttles has long been retired or scrapped, meaning that something as large as an orbiter will require considerable effort and likely cost considerably more than allocated. An Orion capsule, less so.


Original Submission

posted by janrinok on Wednesday March 25, @09:26PM   Printer-friendly

If actively distracting the readers of your own website were an Olympic sport, news publications would top the podium every time:

I went to the New York Times to glance at four headlines and was greeted with 422 network requests and 49 megabytes of data. It took two minutes before the page settled. And then you wonder why every sane tech person has an adblocker installed on the systems of all their loved ones.

It is the same story across top publishers today.

To truly wrap your head around the phenomenon of a 49 MB web page, let's quickly travel back a few decades. This single page load is larger than Windows 95, which shipped on 28 floppy disks. The OS that ran the world fits comfortably inside a single modern page load. In 2006, the iPod reigned supreme and digital music was precious. A standard high-quality MP3 at 192 kbps took up around 4 to 5 MB, so this one page represents roughly 10 to 12 full-length songs. I essentially downloaded an entire album's worth of data just to read a few paragraphs of text. According to the International Telecommunication Union, the global average broadband speed back then was about 1.5 Mbps. Your browser would have kept loading this monstrosity for several minutes, enough time for you to walk away and make a cup of coffee.
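
If you want to reproduce that measurement yourself, here's a rough sketch using Playwright for Python (pip install playwright, then playwright install chromium). The URL is whichever front page you want to weigh, and "networkidle" is only a heuristic; ad stacks often keep loading long after it fires:

    from playwright.sync_api import sync_playwright

    responses = []

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("response", lambda r: responses.append(r))
        page.goto("https://www.nytimes.com", wait_until="networkidle")

        total_bytes = 0
        for r in responses:
            try:
                total_bytes += len(r.body())    # decoded body size
            except Exception:
                pass                            # redirects etc. have no body
        browser.close()

    print(f"{len(responses)} requests, {total_bytes / 1e6:.1f} MB")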

If hardware has improved so much over the last 20 years, has the modern framework/ad-tech stack completely negated that progress with abstraction and poorly architected bloat?

For the example above, a cursory look at the network waterfall for a single article load reveals a sprawling, unregulated programmatic ad auction happening entirely in the client's browser. Before the user finishes reading the headline, the browser is forced to process dozens of concurrent bidding requests to exchanges like Rubicon Project (fastlane.json) and Amazon Ad Systems. While these requests are asynchronous over the network, their payloads are incredibly hostile to the browser's main thread: to facilitate the auction, the browser must download, parse, and compile megabytes of JS. As a publisher, you shouldn't be spending compute cycles calculating ad yields before rendering the actual journalism.

  1. The user requests text.
  2. The browser downloads 5MB of tracking JS.
  3. A silent auction happens in the background, taxing the mobile CPU.
  4. The winning bidder injects a carefully selected interstitial ad you didn't ask for.

Beyond the sheer weight of the programmatic auction, the frequency of behavioral surveillance was surprising. Real user monitoring runs in parallel with a relentless barrage of POST beacons firing to first-party tracking endpoints (a.et.nytimes.com/track). Invisible pixel drops and redirects to doubleclick.net and casalemedia in the background help stitch the user's cross-site identity together across different ad networks.

When you open a website on your phone, it's like participating in a high-frequency financial trading market. That heat you feel on the back of your phone? The sudden whirring of fans on your laptop? Contributing to that plus battery usage are a combination of these tiny scripts.

Ironically, this surveillance apparatus initializes alongside requests fetching purr.nytimes.com/tcf, which I can only assume is Europe's IAB Transparency and Consent Framework. They named the consent framework endpoint purr. A cat purring while it rifles through your pockets.

So therein lies the paradox of modern news UX. The mandatory cookie banners you are forced to click are merely legal shields deployed to protect the publisher while they happily mine your data in the background. But that's enough about NYT.

Publishers aren't evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. The modern ad industry is slowly de-coupling the creator from the advertiser. They weaponize the UI because they think they have to.

[...] No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

They built a system that treats your attention as an extractable resource. The most radical thing you can do is refuse to be extracted. Close the tab. Use RSS. Let the bounce rate speak for itself. These are vanity metrics until enough people stop vanishing into them and then suddenly they become a crisis.
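
Taking the "use RSS" advice literally costs about eight lines. A minimal sketch with the feedparser package (the feed URL is an example):

    # pip install feedparser
    import feedparser

    # Fetch only the feed XML: a few hundred kilobytes at most,
    # with no JS, no ad auctions, and no tracking beacons.
    feed = feedparser.parse("https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml")

    for entry in feed.entries[:4]:    # the four headlines I came for
        print(entry.title)
        print("  ", entry.link)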

The article goes into detailed explanations for the different processes going on and has suggestions for how web sites could improve the situation for everyone.

See also: The Web Bloat Crisis: How RSS Readers Are Saving Us from Bloated Websites


Original Submission