

posted by janrinok on Saturday March 28, @07:24AM

https://www.theregister.com/2026/03/23/asia_tech_news_roundup/

Australia's government on Monday announced a set of datacenter "expectations" to guide would-be bit barn builders who contemplate breaking ground down under.

The expectations strongly suggest that datacenter builders create their own electricity generation capacity, and pay for energy transmission and infrastructure costs. "Energy-intensive data centre proposals not closely aligned with the expectations will not be prioritised by Commonwealth regulatory assessments," states the formal expectations document.

The expectations also call on datacenter operators to prioritise Australia's national interest, use water sustainably and responsibly, invest in local skills and jobs, and do all that while strengthening the nation's "research, innovation and local capability."

Industry lobby group the Tech Council of Australia welcomed the expectations, as did the Electrical Trades Union.


Original Submission

posted by janrinok on Saturday March 28, @02:36AM
from the are-we-not-doomed-betteridge-says-no dept.

https://spaceweather.com/archive.php?view=1&day=17&month=03&year=2026

Ten thousand Starlink satellites: On March 16th, a Falcon 9 rocket lifted off from Vandenberg Space Force Base carrying 25 Starlink satellites. It was a routine launch for SpaceX, the 33rd of 2026. But those 25 Starlinks crossed a milestone. For the first time in history, more than 10,000 Starlink satellites were simultaneously circling Earth.

Consider where we started: When SpaceX launched its first operational Starlinks in May 2019, there were roughly 2,000 active satellites of all kinds orbiting Earth. Starlink alone now outnumbers the entire pre-2019 fleet five to one. The constellation has utterly transformed the orbital environment.

[...] The numbers are sobering. Since 2019, more than 11,596 Starlinks have been launched. Of those, more than 1,500 have already reentered the atmosphere as SpaceX retires older satellites to make room for newer models. Each re-entry deposits about 30 kg of aluminum oxide into the upper atmosphere--an uncontrolled chemistry experiment on a planetary scale.
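A quick back-of-envelope check of those figures, taking the article's numbers at face value:

```python
# The article's re-entry figures, taken at face value.
reentered_satellites = 1_500    # "more than 1,500 have already reentered"
alumina_per_reentry_kg = 30     # "about 30 kg of aluminum oxide" each

total_alumina_kg = reentered_satellites * alumina_per_reentry_kg
print(total_alumina_kg)  # 45000 -> roughly 45 tonnes deposited so far
```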

[...] With so many Starlinks circling Earth, the orbital environment is increasingly unstable. It's "an orbital house of cards," according to a study led by Sarah Thiele of Princeton University, which finds that a severe solar storm could kickstart widespread catastrophic collisions in as little as 2-3 days. SpaceX itself reported to the FCC that Starlink satellites performed roughly 300,000 collision-avoidance maneuvers in 2025 alone.

https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2024GL109280 https://arxiv.org/abs/2512.09643



Original Submission

posted by janrinok on Friday March 27, @09:50PM

Concerns Raised Over Shahed Kamikaze Drone Listings on Alibaba

https://www.tomshardware.com/tech-industry/concerns-raised-over-shahed-kamikaze-drone-listings-on-alibaba-they-featured-ai-guidance-to-lock-onto-people-building-vehicles-ships-etc

Chinese eTail giant Alibaba has removed listings and suspended the accounts of sellers that were found to be advertising “cruise missiles” and “suicide attack drones.” Australia’s ABC News uncovered the concerning sales of several one-way attack drone models, some of which looked strikingly similar to the Iranian Shahed design, others with a cruise missile profile.

The Alibaba “commercial” listings touted the drones as “pesticide sprayers,” or for “aerial mapping”. However, ABC dug into the product catalogs to confirm the Shahed-a-likes were “suicide attack drones” capable of carrying 2kg (4.41 pound) warheads for distances up to 100km. Moreover, with their thermal imaging and AI guidance, these devices could "achieve autonomous locking of targets (people, building, vehicles, ships, etc.)”

These kamikaze drones would not be casual impulse buys. ABC reports that the listing prices of the cruise-missile-style drones were approaching $50,000. If that sum is in Australian dollars, it equates to approximately US$35,000.

ABC continued to look closely through the various supplier catalogs it found from the Alibaba suppliers. One of the China-based suppliers offered five kinds of "suicide attack drones" with two having near identical dimensions and specs to the Iranian-made Shahed 136, says the news report.

Drones inhabit a twilight dual-use segment of the commercial landscape. Many can quickly and easily be adapted for peaceful purposes or war duties. An Alibaba statement received by ABC News, was clear, though. The online retailer stated that it “strictly prohibits the sale of military weapons.” It also acted quickly to remove what it characterized as non-compliant third-party listings.

Talking to a handful of the suppliers, the Australian news organization saw that the sellers generally didn’t care what the drones they sold were used for. For example, one of the retailers contacted shrugged “After the customer makes a purchase, what they use it for has nothing to do with us.”

Importantly, just because these kamikaze drone adverts exist, it doesn’t mean that the advertisers would actually ship these exact products.

'Cruise Missile' Drones and Low-Cost Shahed Knockoffs Listed on Alibaba:

One-way attack drones described as "cruise missiles" were listed on the popular online retail platform Alibaba for less than $50,000. Sellers described these long-range fixed-wing drones, similar in design to those used by Iran to attack nearby Gulf states, as suitable for "aerial mapping". But the same sellers' PDF sales catalogues, obtained by the ABC through the Alibaba platform, made it clear the drones were also designed for war.

After being notified, Alibaba removed the listings and said it suspended the sellers' accounts.

But experts said combat-drone proliferation was a growing problem, with the drones typically being sold under a pretence of "commercial" use.

One China-based supplier's catalogue listed two kinds of autonomous "cruise missile", equipped with thermal imaging "AI guidance".

[...] A small drone described in the catalogue as able to carry a 2-kilogram bomb 100 kilometres was listed by the seller on Alibaba as suitable for "pesticide spraying".

In another catalogue, a China-based supplier listed five kinds of "suicide attack drones" including two with near-identical dimensions and capabilities to the Iranian-made Shahed 136 one-way attack drone.

The threat of attack from Iran's Shahed drones, as well as ballistic missiles and drone boats, has effectively closed the Strait of Hormuz and choked global oil supplies.

Some sellers also publicly listed military hardware on Alibaba itself, rather than just in their catalogues. One seller listed a range of small "kamikaze drones", similar to the type being used to intercept drones in Ukraine and the Gulf, and "aerial delivery" drones depicted with mortar rounds.

Alibaba's official policies prohibit the sale of military equipment. In a statement, the Chinese-owned company said it "strictly prohibits the sale of military weapons" and "acted immediately upon notification to remove the non-compliant third-party listings".

Unlike guns, tanks or fighter jets, drones are "dual-use". Commercial versions can be relatively easily converted to military use.

As a result, long-range drones can be legitimately sold for commercial logistics and survey work, despite the fact they are also capable of flying hundreds of kilometres to deliver a 50kg warhead.

This blurring of conventional boundaries between commercial and military hardware makes it very hard to regulate or otherwise control the sale and spread of these drones as dangerous weapons.

[...] Malcolm Davis, a senior analyst with the Australian Strategic Policy Institute, said adapting small Chinese-made quadcopter drones for military use was "nothing new", but China would be keeping "a very close eye" on the export of larger long-range drones such as those listed on Alibaba.

Dr Davis said the war in the Middle East showed how long-range drones could neutralise conventional air defences. "This is the problem the Americans are facing," he said.


Original Submission #1 | Original Submission #2

posted by jelizondo on Friday March 27, @04:03PM

https://go.theregister.com/feed/www.theregister.com/2026/03/23/palantir_fca/

US data miner Palantir has quietly landed inside the UK's financial watchdog, plugging into a trove of sensitive data as Whitehall simultaneously insists it wants to wean itself off exactly this kind of dependency.

The Financial Conduct Authority (FCA) has handed the American analytics biz a three-month trial contract worth more than £30,000 a week to analyze its internal "data lake," a sprawling repository of regulatory intelligence covering fraud, money laundering, insider trading, and consumer complaints.

According to The Guardian, which first reported on the deal, Palantir will gain access to data including case files, reports from banks and crypto firms, and even communications data such as emails, phone records, and social media material tied to investigations.

The idea, at least on paper, is straightforward: use Palantir's software to help sift signal from noise across the roughly 42,000 businesses the FCA oversees, and spot patterns of financial crime faster than human analysts can manage alone.

If this sounds familiar, that's because it is. Palantir has spent the past few years embedding itself across the British state – from the NHS to policing and defense – racking up more than £500 million in public sector contracts in the process.

Critics have long described this as a classic "land and expand" strategy: start with a narrowly scoped deployment, prove value, then become very hard to remove. The FCA deal, which appears to follow the same pattern, arrives just days after the government signaled that it wants to rethink how it buys technology, amid concerns about overreliance on a small number of large vendors and the need for more "sovereign" capability.

Yet here is another sensitive system being handed, at least temporarily, to a US company whose entire business is built on ingesting and analyzing other people's data.

The FCA, for its part, has stressed that Palantir is acting strictly as a "data processor," that all data remains hosted in the UK, and that the company cannot use the information to train its own models.

"Effective use of technology is vital in the fight against financial crime and helps us identify risks to the consumers we serve and markets we oversee," an FCA spokesperson told The Register. "We ran a competitive procurement process and have strict controls in place to ensure data is protected."

Those assurances mirror language used in earlier public sector deals, particularly in the NHS, where officials have repeatedly argued that contractual controls and technical safeguards govern use. Whether that is enough to calm critics is another matter.

There's also the small matter of optics. Palantir's track record – spanning US defense, intelligence, and immigration enforcement – has made it a lightning rod for concerns about surveillance and civil liberties, especially when deployed in civilian contexts.

Still, for regulators under pressure to do more with less, the appeal is clear. The FCA is sitting on vast amounts of data, much of it underused, and AI vendors are lining up to promise that they can turn it into actionable intelligence.

Whether that promise outweighs the risks of handing the keys – even temporarily – to a company that has made a habit of sticking around is a question the UK keeps asking, and so far, keeps answering the same way.


Original Submission

posted by jelizondo on Friday March 27, @11:20AM

Juan Carlos Pino says charcoal-conversion 'the best option we have'

Juan Carlos Pino, a Cuban mechanic with an eighth-grade education, may have found a way to outsmart the U.S. oil blockade.

Employing the kind of ingenuity many Cubans have developed over decades of U.S. sanctions, Pino, 56, modified his 1980 Polish-built Fiat Polski to run on charcoal, a cheaper and more abundant fuel than gasoline since Washington cut off oil shipments to the Caribbean island in January.

[...] "In a crisis like this, it's the best option we have," said Pino, who wants to modify a tractor next. "We need mobility, we need to be able to plant crops."

Pino built his device entirely from scrap and repurposed items. The charcoal burns inside a converted propane tank that is sealed shut with the lid of a transformer. A filter is made from a stainless steel milk jug stuffed with old clothes.

[...] Enter the inventor. Pino once created a machine, built from a motorcycle, to milk three cows at a time. He said he'd been contemplating the charcoal-fired automobile for several years, inspired at first by his late uncle. Pino also credited open-source technology promoted by Edmundo Ramos, an Argentine innovator behind DriveOnWaste.com.

[...] He said just about any engine can be converted to run on charcoal by drawing hot gas instead of gasoline into the carburetor.

Pino rolled out the charcoal-powered Polski on March 4. In one early test run, the car completed an 85-kilometre trip, reaching a top speed of 70 km/h.

[...] Cruz knows something about Cuban jury-rigging. He drives a 1953 Pontiac that runs on a 1940s Perkins engine with a Mercedes transmission, a steering system from the Czech group AVIA, and a differential made by the East German company Ifa.


Original Submission

posted by jelizondo on Friday March 27, @06:38AM

The FCC has deemed even US-based companies' products a security risk if they're made anywhere overseas.

The Federal Communications Commission has released a notice today designating any consumer routers manufactured outside the US as a security risk. The rule states that new foreign-made product models for network routers will land on the Covered List, a set of communications equipment seen as having an unacceptable risk to national security. Previously purchased routers can still be used and retailers can still sell models that were approved by the prior FCC policies. In an exception to the usual rule, routers included on the Covered List can continue to receive updates at least through March 1, 2027, although the date could potentially be extended.

The move stems from a goal in the White House's 2025 national security strategy that reads: "the United States must never be dependent on any outside power for core components—from raw materials to parts to finished products—necessary to the nation’s defense or economy."

[Source]: engadget


Original Submission

posted by jelizondo on Friday March 27, @01:55AM
from the now-with-even-more-QDs dept.

"This is a serious warning shot"

Germany recently banned TCL from marketing some of its TVs as QLED (quantum dot light-emitting diode), with a Munich court ruling that the TVs lack the quantum dot (QD) structure and performance associated with QLED TVs. The decision increases pressure on TV companies to be more honest with their marketing.

Samsung has actively campaigned against TCL's use of the term QLED. A year ago, Samsung sent Ars Technica results from testing performed by Intertek, a London-headquartered testing and certification company, on TCL's 65Q651G, 65Q681G, and 75Q651G. The results showed that the TVs lacked sufficient amounts of cadmium and indium (two chemicals used in QD TVs, either individually or in combination). Intertek reportedly tested the optical sheet, diffuser plate, and LED modules in each TV using a minimum detection standard of 0.5 mg/kg for cadmium and 2 mg/kg for indium.

At the time, a TCL representative told me that TCL had "definitive substantiation for the claims made regarding its QLED televisions."

But based on previous dissections of TCL TVs shared online [Videos not reviewed - Ed.] and conversations with industry experts, it seems those TVs may employ some QDs, but not enough to offer a significantly wider color gamut than similarly specced, non-QD rivals. It's common for TVs marketed as QD, especially budget sets, to rely primarily on phosphors, or on a mix of phosphors and QDs at varying ratios, for color conversion, rather than on QDs alone, as the terms QD TV and QLED suggest. Phosphors are cheaper than QDs, and their associated color performance in displays is not as good.

Other manufacturers, including Samsung [Videos not reviewed - Ed.], have been accused of marketing TVs that rely heavily on phosphors as QD or QLED.

[...] "Some products marketed as 'QLED' use conventional backlight architectures (standard phosphors, optical films, diffuser plates) and rely on picture modes or software tuning to create a more saturated 'vivid' look," a January whitepaper by TÜV Rheinland and QD supplier Nanosys reads. The whitepaper, "Re-defining a 'true' Quantum Dot Display," also points to devices that have QD material at "trace levels, or in packaging and integration designs that limit excitation and light extraction of certain wavelengths."

"In these cases, the display may still achieve competitive headline gamut coverage, yet the measurable optical signature of an effective QD system is absent or minimal," the whitepaper says. "The spectrum, color, volume behavior at high luminance, chromaticity stability, and temporal response can remain similar to those of non-QD LCD solutions."

For now, the German ruling brings needed scrutiny to "QLED" and other potentially misleading display terms.

[...] With TV marketing remaining murky—and often misleading—digging into detailed performance reviews remains the most reliable way to gauge how a display might perform in the real world.


Original Submission

posted by hubie on Thursday March 26, @09:05PM
from the global-hard-drive-enrichment dept.

From CNN:
Supreme Court says internet service provider isn't liable for bootlegged music downloads

In a major loss for the nation's music industry, the Supreme Court on Wednesday ruled that a major internet service provider is not liable for copyright infringement merely because it failed to kick known copyright violators off its network.

Justice Clarence Thomas wrote the opinion for a unanimous court.

The nation's largest record labels want to hold internet providers liable for copyright infringement because they declined to cut off online access to users they know are downloading bootlegged music.

[....] "Under our precedents, a company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights," Thomas wrote.

[.... rest omitted ....]

This is a sad shocking blow to the poor sad music industry. I shed so many tears for them. (Okay, not really)


Original Submission

posted by hubie on Thursday March 26, @04:23PM

As a managerial strategy, it seems a little misguided:

According to a column by the New York Times’ Kevin Roose, employees at companies including Meta and OpenAI compete on “internal leaderboards that show how many tokens [...] each worker consumes.” At Meta in particular (and also Shopify), Roose says volume of A.I. used has become a metric that goes into people’s evaluations, with managers “rewarding workers who make heavy use of A.I. tools and chastening those who don’t.”

Analogies are tricky here. One is tempted to say it’s like making painters compete to use the most paint, but even if the paint is just being splattered as quickly as possible, it’s at least going to be visible when the project is done. It’s a bit more like telling soldiers to gauge their battlefield success by the number of bullets fired, but suppressive fire that doesn’t hit anything has its place in war strategy. The best analogy I can come up with is this: it’s like NBA mascots being evaluated by how many t-shirts they fire out of their t-shirt cannons, but the t-shirts are made by Hermès.

The resulting numbers, in terms of both tokens and money, are absolutely staggering. One OpenAI engineer, according to Roose, burned through 210 billion tokens, which Roose equates to 33 Wikipedias. A Swedish software engineer claims to Roose that his company spends more than his salary on his Claude Code tokens alone.
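Roose's comparison implies a rough tokens-per-Wikipedia figure; a quick check, using only the article's two numbers:

```python
tokens_consumed = 210_000_000_000   # one OpenAI engineer's reported usage
wikipedia_equivalents = 33          # Roose's "33 Wikipedias" comparison

tokens_per_wikipedia = tokens_consumed / wikipedia_equivalents
print(f"{tokens_per_wikipedia / 1e9:.1f} billion tokens per Wikipedia")
```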

This “tokenmaxxing” trend clearly stems in part from the use of “claws,” agentic AI platforms like OpenClaw, which are this year’s biggest supposed innovation in AI. OpenClaw’s virality was part of the big shift away from OpenAI’s GPT models and toward Claude this year by AI fanatics, and OpenAI subsequently hired OpenClaw’s creator, seemingly in a bid to maintain its position as the industry leader.

But even when used without an external claw platform, Claude Code is becoming more and more like OpenClaw, with a feature rolling out last week that expands on-the-go vibe coding by letting users communicate with Claude Code more easily on their phones.


Original Submission

posted by hubie on Thursday March 26, @11:40AM

Self-driving cars are essentially AI supercomputers on wheels:

As the company rakes in cash from the AI infrastructure build-out, it’s also expanding its output with several planned fabs in Japan, Singapore, and even a “megafab” in New York. These projects are expected to come online between 2028 and 2029, and the Micron CEO said the company is looking to boost output by 20% in 2026, which could help alleviate some of the pressure on the supply side. However, even as these new factories start production, Mehrotra predicts that there will be a new market that demands massive amounts of high-speed memory — self-driving cars.

There are six levels of vehicle autonomy, starting at L0 for cars with no driving automation whatsoever. A vehicle with a single automated system (such as cruise control) counts as L1, while those equipped with advanced driver assistance systems (ADAS) that control both steering and acceleration, such as Tesla’s Autopilot and Cadillac’s Super Cruise, are considered L2. Vehicles with L4 autonomy, on the other hand, basically do not need human intervention for any task, like overtaking or deciding when to cross a busy intersection, though they still give the driver the option to take control and drive manually.
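The six levels in question correspond to the widely used SAE J3016 scale; a minimal sketch for reference (the L3 and L5 descriptions paraphrase that standard, not the article):

```python
# SAE J3016 driving-automation levels. L0-L2 and L4 paraphrase the
# article; L3 and L5 paraphrase the SAE standard itself.
SAE_LEVELS = {
    0: "No driving automation",
    1: "Driver assistance: one automated system, e.g. cruise control",
    2: "Partial automation: ADAS controls steering and acceleration",
    3: "Conditional automation: driver must take over when requested",
    4: "High automation: no human intervention needed within its domain",
    5: "Full automation: no human driver needed under any conditions",
}

assert len(SAE_LEVELS) == 6  # "There are six levels of vehicle autonomy"
```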

Nvidia announced that it’s working with Chinese carmakers BYD and Geely and Japanese marques Isuzu and Nissan to adopt the Nvidia Drive Hyperion platform. This is the AI chip maker’s end-to-end autonomous vehicle platform meant to deliver an L4 system to car manufacturers. Since this is an AI system, it will likely demand a lot of high-speed memory to be able to run effectively.

Most modern vehicles require at least 16GB of memory, but if car makers introduce L4 autonomy, they will definitely need a lot more RAM. We’ve seen this with the shortage of high-end Macs with up to 512GB of Unified Memory, as many users have become interested in running the likes of OpenClaw on their own systems. It has even gotten to the point that Apple pulled the $4,000 512GB Mac Studio from its online store and raised the 256GB version to $2,000. So, if carmakers started churning out hundreds of thousands, if not millions, of vehicles with AI-powered driverless features, Micron expects demand for automotive memory to pick up as well.


Original Submission

posted by hubie on Thursday March 26, @06:59AM

When particles in a volcanic ash cloud rub together, some pick up positive charge and others negative – now physicists have finally elucidated how these different charges are determined:

Physicists have solved a longstanding mystery around the process that creates volcanic lightning: when similar particles rub together, why do some become positively charged while others become negatively charged?

The exchange of electric charge when two objects touch, called the triboelectric effect, is what causes hair to be attracted towards a balloon after rubbing.

In a cloud of volcanic ash, swirling particles of silicon dioxide exchange electric charge as they collide. The positively and negatively charged particles separate and lightning occurs when current flows between the two.

But physicists couldn’t explain what breaks the symmetry between two particles of the same material and causes charge to flow one way or the other.

“There are a lot of candidates,” says Galien Grosjean, now at the Autonomous University of Barcelona. “People suspect that humidity is important, or roughness, or the crystalline structure.”

While working at the Institute of Science and Technology Austria in Klosterneuburg, Grosjean wondered if the answer lay in carbon-containing molecules on the surface of the particles. Such molecules are ubiquitous in nature, and materials scientists try to keep these contaminants to a minimum. But Grosjean and his colleagues kept track of what cleaning their samples did to the electrification.

With ultrasound, they levitated a small particle of silicon dioxide, let it bounce once onto a target plate made of the same material and then measured its charge. “It might charge positive or negative. If positive, we would bake or clean it and redo the experiment – and then it would charge negative,” says Grosjean.

Analysis of the samples showed that the removal of carbon-containing molecules was indeed the controlling factor. "We saw that this effect overcomes everything else," says Grosjean.

Another giveaway was that a cleaned sample would become positively charged again after about a day, which is also how quickly it would acquire a fresh coat of carbon molecules from the air.

Daniel Lacks at Case Western Reserve University in Cleveland, Ohio, is impressed by the study. “People know surfaces have a lot of crap on them. But I’ve never seen that come up in triboelectric charging,” he says.

The discovery could be bad news for physicists, he fears. If carbon contamination determines the charging direction, precisely calculating how particles become charged will be very hard. “Prediction may just be something that will never happen,” says Lacks.

Journal Reference: Grosjean, G., Ostermann, M., Sauer, M. et al. Adventitious carbon breaks symmetry in oxide contact electrification. Nature 651, 626–631 (2026). https://doi.org/10.1038/s41586-025-10088-w


Original Submission

posted by janrinok on Thursday March 26, @02:12AM

https://go.theregister.com/feed/www.theregister.com/2026/03/23/nasa_rfp_shuttle_relocation/

NASA has issued a draft Request for Proposals to move a flown space vehicle, a step some lawmakers see as progress toward relocating Space Shuttle Discovery from the Smithsonian Museum in Virginia to Houston, Texas.

The agency emphasized it was seeking feedback on transporting something like a flown Orion capsule as well as a Space Shuttle orbiter.

Administrator Jared Isaacman has yet to name the vehicle moving to Houston under the Trump administration's budget. Space Shuttle Discovery was not mentioned in the bill, although several US lawmakers have long sought to relocate the retired orbiter to Texas.

In a statement, US Senator John Cornyn (R-TX) said: "My law authorizing and funding the Space Shuttle Discovery's movement to Houston is being set into motion thanks to NASA's announcement, and I applaud Administrator Isaacman for keeping this process moving.

"Today is real progress in our mission to bring Discovery home, and I look forward to welcoming the shuttle home to Space City soon."

NASA's request signals movement on the issue, if not full resolution. It should also clarify how a vehicle transfer would be conducted and at what cost, though the agency has stopped short of asking bidders to commit to a specific price.

The request also contains language that should give bidders pause. It describes the vehicles as "irreplaceable national assets requiring preservation-focused handling." This is a standard that demands careful consideration before committing to a bid.

The Keep The Shuttle group was "delighted" with the document, however. A spokesperson told The Register: "NASA has instructed that any proposal to move Discovery keeps the shuttle intact – no 'disassembly' allowed. However, there is no way to move an intact shuttle ~40 miles to Quantico (as NASA suggested) or anywhere else on the Potomac. In short, NASA's first RFP is asking for the impossible."

Moving an Orion capsule is a far simpler task. The spokesperson pointed out: "NASA has used USAF cargo jets to move Orions in the past, and the RFP indicates that this will be the likely solution."

"So we're delighted, because NASA has committed to keeping the shuttle intact, and is on a direct path to send Artemis II to Houston – with a stop at the Moon first of course!"

The equipment NASA used to transport the Space Shuttles has long been retired or scrapped, meaning that something as large as an orbiter will require considerable effort and likely cost considerably more than allocated. An Orion capsule, less so.


Original Submission

posted by janrinok on Wednesday March 25, @09:26PM

If active distraction of readers of your own website was an Olympic Sport, news publications would top the charts every time:

I went to the New York Times to glance at four headlines and was greeted with 422 network requests and 49 megabytes of data. It took two minutes before the page settled. And then you wonder why every sane tech person installs an adblocker on the systems of all their loved ones.

It is the same story across top publishers today.

To truly wrap your head around the phenomenon of a 49 MB web page, let's quickly travel back a few decades. With this page load, you would be leaping ahead of the size of Windows 95 (28 floppy disks). The OS that ran the world fits perfectly inside a single modern page load. In 2006, the iPod reigned supreme and digital music was precious. A standard high-quality MP3 song at 192 kbps bitrate took up around 4 to 5 MB. This singular page represents roughly 10 to 12 full-length songs. I essentially downloaded an entire album's worth of data just to read a few paragraphs of text. According to the International Telecommunication Union, the global average broadband internet speed back then was about 1.5 Mbps. Your browser would continue loading this monstrosity for several minutes, enough time for you to walk away and make a cup of coffee.
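The comparisons above check out arithmetically; a quick sketch (the 1.44 MB floppy capacity and the 4.5 MB average song size are assumptions, not figures from the article):

```python
page_mb = 49.0                   # the measured page load

# Windows 95 shipped on 28 floppies; assuming standard 1.44 MB disks.
win95_mb = 28 * 1.44             # ~40.3 MB, so the page load is larger

# "10 to 12 full-length songs" at an assumed 4.5 MB per 192 kbps MP3.
songs = page_mb / 4.5            # ~10.9 songs

# Download time on 2006's ~1.5 Mbps average broadband.
download_s = page_mb * 8 / 1.5   # ~261 s, i.e. over four minutes

print(round(win95_mb, 1), round(songs, 1), round(download_s))
```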

If hardware has improved so much over the last 20 years, has the modern framework/ad-tech stack completely negated that progress with abstraction and poorly architected bloat?

For the example above, taking a cursory look at the network waterfall for a single article load reveals a sprawling, unregulated programmatic ad auction happening entirely in the client's browser. Before the user finishes reading the headline, the browser is forced to process dozens of concurrent bidding requests to exchanges like Rubicon Project (fastlane.json) and Amazon Ad Systems. While these requests are asynchronous over the network, their payloads are incredibly hostile to the browser's main thread. To facilitate this, the browser must download, parse and compile megabytes of JS. As a publisher, you shouldn't be burning the reader's compute cycles calculating ad yields before rendering the actual journalism.

  1. The user requests text.
  2. The browser downloads 5MB of tracking JS.
  3. A silent auction happens in the background, taxing the mobile CPU.
  4. The winning bidder injects a carefully selected interstitial ad you didn't ask for.
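
The steps above can be put into a toy cost model of the main-thread work done before any article text renders. This is a rough sketch, not a measurement: the JS size, parse throughput, bidder count and per-bid CPU cost are all illustrative assumptions, not figures from the article.

```python
# Toy model of the client-side auction described above.
# All figures are illustrative assumptions: JS size, parse throughput,
# number of bidders and per-bid CPU cost are made up for this sketch.

def main_thread_cost_ms(tracking_js_mb=5, parse_mb_per_s=1.5,
                        bidders=30, cpu_ms_per_bid=8):
    """Estimate main-thread time burned before any article text renders."""
    parse_ms = tracking_js_mb / parse_mb_per_s * 1000  # step 2: parse/compile JS
    auction_ms = bidders * cpu_ms_per_bid              # step 3: process bid responses
    return parse_ms + auction_ms                       # step 4's ad injection is extra

cost = main_thread_cost_ms()
print(f"~{cost:.0f} ms of main-thread work before the journalism renders")
```

Even with these modest assumptions, the reader pays several seconds of CPU time before the first paragraph appears.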

Beyond the sheer weight of the programmatic auction, the frequency of behavioral surveillance was surprising. User monitoring runs in parallel with a relentless barrage of POST beacons firing to first-party tracking endpoints (a.et.nytimes.com/track). In the background, invisible pixel drops and redirects to doubleclick.net and casalemedia help stitch the user's cross-site identity together across different ad networks.

When you open a website on your phone, it's like participating in a high-frequency financial trading market. That heat you feel on the back of your phone? The sudden whirring of fans on your laptop? These tiny scripts, taken together, contribute to that and to your battery drain.

Ironically, this surveillance apparatus initializes alongside requests fetching purr.nytimes.com/tcf which I can only assume is Europe's IAB transparency and consent framework. They named the consent framework endpoint purr. A cat purring while it rifles through your pockets.

So therein lies the paradox of modern news UX. The mandatory cookie banners you are forced to click are merely legal shields deployed to protect the publisher while they happily mine your data in the background. But that's enough about NYT.

Publishers aren't evil but they are desperate. Caught in this programmatic ad-tech death spiral, they are trading long-term reader retention for short-term CPM pennies. The modern ad industry is slowly de-coupling the creator from the advertiser. They weaponize the UI because they think they have to.

[...] No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

They built a system that treats your attention as an extractable resource. The most radical thing you can do is refuse to be extracted. Close the tab. Use RSS. Let the bounce rate speak for itself. These are vanity metrics until enough people stop vanishing into them and then suddenly they become a crisis.

The article goes into detailed explanations for the different processes going on and has suggestions for how web sites could improve the situation for everyone.

See also: The Web Bloat Crisis: How RSS Readers Are Saving Us from Bloated Websites


Original Submission

posted by janrinok on Wednesday March 25, @04:41PM   Printer-friendly
from the sexy-suicide-coach dept.

OpenAI has postponed the launch of its controversial "adult mode" feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors:

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to "treat adult users like adults" by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI's own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a "sexy suicide coach" — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI's age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

[...] Altman has been publicly conflicted. During an August podcast, when asked about decisions that were "best for the world, but not best for winning," he said: "We haven't put a sex bot avatar in ChatGPT yet." He acknowledged erotica would boost revenue but said it conflicted with the company's long-term goals. Two months later, he announced on X that adult content would launch in December – a post that blindsided staff, arriving just hours after the company unveiled its advisory council on well-being. He followed up the next day: "We aren't the elected moral police of the world."

Also reported at:

  • https://www.breitbart.com/tech/2026/03/19/sexy-suicide-coach-openai-delays-ai-porn-feature-over-safety-uproar/
  • https://www.theguardian.com/technology/2026/mar/09/openai-delays-adult-mode-for-chatgpt-to-focus-on-work-of-higher-priority

Original Submission

posted by janrinok on Wednesday March 25, @11:55AM   Printer-friendly

I hacked ChatGPT and Google's AI:

It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and it's actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools "can make mistakes".

But for now, the problem isn't close to being solved. "They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

When you talk to chatbots, you often get information that's built into large language models, the underlying technology behind the AI. This is based on the data used to train the model. But some AI tools will search the internet when you ask for details they don't have, though it isn't always clear when they're doing it. In those cases, experts say the AIs are more susceptible. That's how I targeted my attack.
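
That mechanism can be sketched as a toy retrieval pipeline. Real chatbots are far more complex, and the index and page below are invented for illustration, but the failure mode is the same: on a niche query with a single source, the answer is whatever that source says.

```python
# Toy illustration of why web-searching chatbots are easy to poison:
# if the retrieval step finds only one page on a niche query, the answer
# is whatever that page claims. The index and page below are invented.

WEB_INDEX = {
    "best hot-dog-eating tech journalists":
        ["myblog.example/best-tech-journalists-at-eating-hot-dogs"],
}

PAGES = {
    "myblog.example/best-tech-journalists-at-eating-hot-dogs":
        "Ranked #1: Thomas Germain, 2026 South Dakota champion.",
}

def answer(query):
    sources = WEB_INDEX.get(query, [])
    if not sources:
        return "I don't have information on that."
    # A single self-published page becomes the 'ground truth'.
    return PAGES[sources[0]]

print(answer("best hot-dog-eating tech journalists"))
```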

Thomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs". Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out episode 2 of The Interface, the BBC's new tech podcast.)

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire". For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria "The Spinner" Rodriguez.

I asked multiple times to see how responses changed and had other people do the same. Gemini didn't bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)

"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry's work to keep people safe. These AI tricks are so basic they're reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. "We're in a bit of a Renaissance for spammers."

Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results you had to go to a website to get the information. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'." But with AI, the information usually looks like it's coming straight from the tech company.

Even when AI tools provide sources, people are far less likely to check them out than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.

"In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised," Chatha says. OpenAI and Google say they take safety seriously and are working to address these problems.

This issue isn't limited to hot dogs. Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company full of false claims, such as the product "is free from side effects and therefore safe in every respect". (In reality, these products have known side effects and can be risky if you take certain medications, and experts warn about contamination in unregulated markets.)

If you want something more effective than a blog post, you can pay to get your material on more reputable websites. Harpreet sent me Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies", which help you invest in gold for retirement accounts. The information came from press releases published online by paid-for distribution services and sponsored advertising content on news sites. (Find out more about how AI chatbots give inaccurate medical advice.)

You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised "between slices of leftover pizza". Soon, ChatGPT and Google were spitting out her story, complete with the pizza. Ray says she subsequently took down the post and "deindexed" it to stop the misinformation from spreading.

Google's own analytics tool says a lot of people search for "the best hair transplant clinics in Turkey" and "the best gold IRA companies". But a Google spokesperson pointed out that most of the examples I shared "are extremely uncommon searches that don't reflect the normal user experience".

But Ray says that's the whole point. Google itself says 15% of the searches it sees every day are completely new. And according to Google, AI is encouraging people to ask more specific questions. Spammers are taking advantage of this.

Google says there may not be a lot of good information for uncommon or nonsensical searches, and these "data voids" can lead to low quality results. A spokesperson says Google is working to stop AI Overviews showing up in these cases.

Experts say there are solutions to these issues. The easiest step is more prominent disclaimers.

AI tools could also be more explicit about where they're getting their information. If, for example, the facts are coming from a press release, or if there is only one source that says I'm a hot dog champion, the AI should probably let you know, Ray says.

Google and OpenAI say they're working on the problem, but right now you need to protect yourself.

If you want things like product recommendations or details about something with real consequences, understand that AI tools can be tricked or just get things wrong. Look for follow-up information. Is the AI citing sources? How many? Who wrote them?

Most importantly, consider the confidence problem. AI tools deliver lies with the same authoritative tone as facts. In the past, search engines forced you to evaluate information yourself. Now, AI wants to do it for you. Don't let your critical thinking slip away.

"It feels really easy with AI to just take things at face value," Ray says. "You have to still be a good citizen of the internet and verify things."


Original Submission