
posted by janrinok on Wednesday March 25, @11:55AM   Printer-friendly

I hacked ChatGPT and Google's AI:

It's official. I can eat more hot dogs than any tech journalist on Earth. At least, that's what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I'm not the only one.

Perhaps you've heard that AI chatbots make things up sometimes. That's a problem. But there's a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it.

As you read this, this ploy is manipulating what the world's leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

To demonstrate it, I pulled the dumbest stunt of my career to prove (I hope) a much more serious point: I made ChatGPT, Google's AI search tools and Gemini tell users I'm really, really good at eating hot dogs. Below, I'll explain how I did it, and with any luck, the tech giants will address this problem before someone gets hurt.

It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.

"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, vice president of search engine optimisation (SEO) strategy and research at Amsive, a marketing agency. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."

A Google spokesperson says the AI built into the top of Google Search uses ranking systems that "keep results 99% spam-free". Google says it is aware that people are trying to game its systems and it's actively trying to address it. OpenAI also says it takes steps to disrupt and expose efforts to covertly influence its tools. Both companies also say they let users know that their tools "can make mistakes".

But for now, the problem isn't close to being solved. "They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin, a senior staff technologist at the Electronic Frontier Foundation, a digital rights advocacy group. "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

When you talk to chatbots, you often get information that's built into large language models, the underlying technology behind the AI. This is based on the data used to train the model. But some AI tools will search the internet when you ask for details they don't have, though it isn't always clear when they're doing it. In those cases, experts say the AIs are more susceptible. That's how I targeted my attack.

Thomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.

I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs". Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission, including Drew Harwell at the Washington Post and Nicky Woolf, who co-hosts my podcast. (Want to hear more about this story? Check out episode 2 of The Interface, the BBC's new tech podcast.)

Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire". For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria "The Spinner" Rodriguez.

I asked multiple times to see how responses changed and had other people do the same. Gemini didn't bother to say where it got the information. All the other AIs linked to my article, though they rarely mentioned I was the only source for this subject on the whole internet. (OpenAI says ChatGPT always includes links when it searches the web so you can investigate the source.)

"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."

People have used hacks and loopholes to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI Overviews is on par with other search features it introduced years ago. But experts say AI tools have undone a lot of the tech industry's work to keep people safe. These AI tricks are so basic they're reminiscent of the early 2000s, before Google had even introduced a web spam team, Ray says. "We're in a bit of a Renaissance for spammers."

Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results you had to go to a website to get the information. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'." But with AI, the information usually looks like it's coming straight from the tech company.

Even when AI tools provide sources, people are far less likely to check them out than they were with old-school search results. For example, a recent study found people are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search.

"In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised," Chatha says. OpenAI and Google say they take safety seriously and are working to address these problems.

This issue isn't limited to hot dogs. Chatha has been researching how companies are manipulating chatbot results on much more serious questions. He showed me the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company full of false claims, such as the product "is free from side effects and therefore safe in every respect". (In reality, these products have known side effects and can be risky if you take certain medications, and experts warn about contamination in unregulated markets.)

If you want something more effective than a blog post, you can pay to get your material on more reputable websites. Harpreet sent me Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies", which help you invest in gold for retirement accounts. The information came from press releases published online by paid-for distribution services and sponsored advertising content on news sites. (Find out more about how AI chatbots give inaccurate medical advice.)

You can use the same hacks to spread lies and misinformation. To prove it, Ray published a blog post about a fake update to the Google Search algorithm that was finalised "between slices of leftover pizza". Soon, ChatGPT and Google were spitting out her story, complete with the pizza. Ray says she subsequently took down the post and "deindexed" it to stop the misinformation from spreading.

Google's own analytics tool says a lot of people search for "the best hair transplant clinics in Turkey" and "the best gold IRA companies". But a Google spokesperson pointed out that most of the examples I shared "are extremely uncommon searches that don't reflect the normal user experience".

But Ray says that's the whole point. Google itself says 15% of the searches it sees every day are completely new. And according to Google, AI is encouraging people to ask more specific questions. Spammers are taking advantage of this.

Google says there may not be a lot of good information for uncommon or nonsensical searches, and these "data voids" can lead to low quality results. A spokesperson says Google is working to stop AI Overviews showing up in these cases.

Experts say there are solutions to these issues. The easiest step is more prominent disclaimers.

AI tools could also be more explicit about where they're getting their information. If, for example, the facts are coming from a press release, or if there is only one source that says I'm a hot dog champion, the AI should probably let you know, Ray says.

Google and OpenAI say they're working on the problem, but right now you need to protect yourself.

If you want things like product recommendations or details about something with real consequences, understand that AI tools can be tricked or can simply get things wrong. Look for follow-up information. Is the AI citing sources? How many? Who wrote them?

Most importantly, consider the confidence problem. AI tools deliver lies with the same authoritative tone as facts. In the past, search engines forced you to evaluate information yourself. Now, AI wants to do it for you. Don't let your critical thinking slip away.

"It feels really easy with AI to just take things at face value," Ray says. "You have to still be a good citizen of the internet and verify things."


Original Submission

posted by janrinok on Wednesday March 25, @07:08AM   Printer-friendly

Data brokers are helping the FBI get around warrant requirements:

FBI director Kash Patel admitted that the agency is buying location data that can be used to track people's movements. Unlike information obtained from cellphone providers, this data can be accessed without a warrant — and used to track anyone.

"We do purchase commercially available information that's consistent with the Constitution and the laws under the Electronic Communications Privacy Act, and it has led to some valuable intelligence for us," Patel said at a hearing before the Senate Intelligence Committee on Wednesday.

Patel would not commit to senators' requests that the agency stop buying Americans' location data. "Doing that without a warrant is an outrageous end-run around the Fourth Amendment," Sen. Ron Wyden (D-OR) said during the hearing. "It's particularly dangerous given the use of artificial intelligence to comb through massive amounts of private information. This is exhibit A for why Congress needs to pass our bipartisan, bicameral bill, the Government Surveillance Reform Act."

The Supreme Court ruled in 2018 that law enforcement agencies need a warrant to obtain people's location data from cellphone providers. By getting this information from private data brokers, the FBI can get information on anyone it wants without a warrant.

Sen. Tom Cotton (R-AR), who chairs the intelligence committee, defended the FBI's data grab. "The key words are commercially available," he said.


Original Submission

posted by hubie on Wednesday March 25, @02:23AM   Printer-friendly

Physicists working on the LHCb experiment have spotted an elusive and fleeting particle, a heavier and more charming cousin to the proton, that has been sought for decades:

Protons and neutrons are examples of a class of particles called baryons, which each contain three fundamental subatomic particles called quarks that come in a variety of so-called flavours. In the case of a proton, there are two “up” quarks and one “down” quark that make up the particle.

But heavier quarks, like those known as charm quarks, can also combine to make baryons. However, because these unusual quark combinations are heavier and so more unstable, they often have fleetingly short lifetimes and quickly decay into other particles.

In 2017, physicists working at CERN's LHCb experiment glimpsed one of these exotic baryons, memorably named Xicc++, that was made up of two charm quarks and an up quark. This particle lived for only a trillionth of a second. Now, physicists working on the LHCb experiment have spotted the charm-filled sister particle to Xicc++, called the Xicc+ particle, which contains a down quark instead of an up, making it a heavier analogue of the proton.

This particle's predicted lifetime was six times shorter than that of the Xicc++, making it much harder to detect. It was found only after the LHCb experiment was upgraded to carry out more sensitive particle searches. The finding has a statistical significance of over 7 sigma, a measure physicists use to state how confident they are that the result isn't a random fluke, which easily clears the 5-sigma bar required to claim a discovery.
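For readers unfamiliar with the sigma convention: an n-sigma significance maps directly to the probability that random noise alone would produce a fluctuation at least that large. A minimal Python sketch of the standard one-sided conversion:

```python
import math

def one_sided_p_value(sigma: float) -> float:
    """Probability of a Gaussian fluctuation at least `sigma`
    standard deviations above the mean -- the particle-physics
    convention for quoting significance."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Compare the discovery threshold with this measurement's significance.
for s in (5, 7):
    print(f"{s} sigma -> p = {one_sided_p_value(s):.3g}")
```

At 5 sigma the fluke probability is roughly 1 in 3.5 million; at 7 sigma it drops to the order of 10^-12, which is why this result comfortably qualifies as a discovery.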

"Not only is it interesting discovering the particle in its own right – the Xicc+ has been searched for for a long time – but it also really shows the power that these upgrades to the LHC are having," says Chris Parkes at the University of Manchester in the UK. "In one year’s data sample, we were able to see something that we couldn’t see with 10 years of data from the previous generation."

[...] "It’s a very interesting measurement, but it’s unclear what we learn from it," says Juan Rojo at Vrije University Amsterdam in the Netherlands. "There is no rule in quantum chromodynamics which prevents this hadron from existing, but now we’ve measured it exists, we are left not particularly illuminated."

Part of this, says Rojo, is because our current theories don't predict well how heavier quarks inside baryons should interact or what their masses should be. “The data is now ahead of the theory for these kinds of particles, but it could be that in five years from now, this measurement is able to answer some very important theory questions," says Rojo, such as what different combinations of quarks mean for particle masses.


Original Submission

posted by hubie on Tuesday March 24, @09:36PM   Printer-friendly

A Serbo-Croatian-speaking agent and his Russian handler turned to Google Translate to ensure smooth operational communication. The US Federal Bureau of Investigation (FBI) was reading the logs in real-time:

A new investigation by the Insider looks into the operations of Russia's elite squad, the Center 795, which was established after Moscow's full-scale invasion of Ukraine in 2022.

The squad comprises elite agents tasked with carrying out the most critical operations, including collecting battlefield intelligence, political assassinations, and abductions abroad.

To execute the operation on Western soil, the squad hired Darko Durovic, a Serbo-Croatian speaker living in the United States. He had mobility in Europe and no obvious ties to Russian intelligence, making him a convenient asset.

However, there was a major problem: Durovic spoke Serbo-Croatian, while his handler, Denis Alimov, spoke Russian. Neither was proficient in the other's native language to the level required for operational communication.

Therefore, they decided to use Google Translate to convert field reports and instructions. They sent translated messages through encrypted applications, which they deemed safe.

What they didn't take into account was that Google operates servers in the United States, which fall within the reach of an FBI surveillance warrant.

This allowed investigators to access the logs of these translations directly from the service provider, enabling them to read the entire operational communications thread in real-time, according to the Insider.

[...] The Insider writes that Russia will build another unit and will be more careful about the translation tools it uses. However, it is an "entirely different question" whether it will be more careful about the people it recruits.


Original Submission

posted by hubie on Tuesday March 24, @04:55PM   Printer-friendly

Firm says requiring site blocks within 30 minutes breaks core Internet architecture:

Cloudflare said it has appealed a fine issued by Italy over the company's refusal to block access to websites on its 1.1.1.1 DNS service. The appeal is the latest step in Cloudflare's fight against Italy's Piracy Shield law.

Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself."

Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders.

Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally."

Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit."

Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said.

Cloudflare said Piracy Shield relies on a system provided to Italy's government by SP Tech, an arm of the law firm that represents Serie A and other major beneficiaries of the law. The system has no judicial oversight, transparency, due process, or redress for erroneous blocking, Cloudflare said.

"Global connectivity is too important to be governed by 'black boxes' with 30-minute deadlines that result in widespread overblocking with no means of redress," Cloudflare said.

[...] "The European Commission, following our complaint, expressed similar concerns, issuing a letter on June 13, 2025, criticizing the lack of oversight inherent in the Piracy Shield framework," Cloudflare said. "And on December 23, 2025, the Italian administrative court issued an encouraging ruling requiring AGCOM to share with Cloudflare all the records that purportedly support Piracy Shield blocking orders. While we have not yet received those records, we expect them to shed significant light on Piracy Shield's operations."

While Cloudflare faces Piracy Shield enforcement for its DNS resolver, the law also applies to Internet service providers. A trade group that represents Italian ISPs objected to the law, saying that "potentially unlimited filtering creates high collateral damage even greater than the social benefit of combating piracy." The group said that "any system activated at [the] national level has strong impacts outside the borders, as content and resources located in third countries are filtered."


Original Submission

posted by hubie on Tuesday March 24, @12:10PM   Printer-friendly

Planned EU ban on nudify apps would likely force Musk to make Grok less "spicy":

The European Union may soon ban nudify apps after Elon Musk's chatbot Grok emerged as a prime example of the dangers of an AI platform failing to block outputs that sexualized images of real people, including children.

In a joint press release, the European Parliament's Internal Market and Civil Liberties committees confirmed that lawmakers voted 101–9 (with 8 abstentions) to simplify the Artificial Intelligence Act and "propose bans on AI 'nudifier' systems."

The vote came after the European Commission concluded [PDF] earlier this year that the AI Act does not prohibit "AI systems that generate child sexual abuse material (CSAM) or sexually explicit deepfake nudes." At that time, the Commission signaled that Parliament members were already proposing ways to amend the law to strengthen protections against such harmful content.

If the amendment passes, which seems likely, it would foil Elon Musk's plan to blame users for harmful outputs. Earlier this year, xAI declined to introduce safeguards to block outputs, vowing to suspend and hold users legally accountable for any CSAM or non-consensual intimate imagery they generate. Instead, the feature was paywalled, limited to subscribers who could reportedly continue generating explicit content without the consent of real people whose images were fed into Grok.

In the US, xAI has seemingly faced few consequences for Grok's outputs, but had the Take It Down Act been in play—it takes effect in May—the company could have risked billions in fines. It's possible that Musk's tactic of paywalling the feature and blocking Grok from spouting harmful outputs in response to prompts on X was intended to mitigate some of that risk ahead of that law's enforcement.

But if the EU bans nudify apps, perhaps as early as August, Musk would finally be forced to intervene, fine-tuning Grok to be less "spicy" than Musk likely wants or else risking violating the AI Act. That could cost xAI too much at a time when competing with its biggest rivals in the AI race demands substantial investments, with possible fines of up to 7 percent of its total worldwide annual turnover.


Original Submission

posted by hubie on Tuesday March 24, @07:21AM   Printer-friendly
from the and-old-Jensen-stole-the-handle-and-the-hype-train-it-won't-stop-going-no-way-to-slow-down dept.

Powering it is probably easy. Keeping things cool in a vacuum is the hard part:

Nvidia isn't the only one eyeing orbit for AI factories. Elon Musk has talked often of putting data centers in space, which makes sense considering he recently merged the AI company he owns with the rocket company he owns. 

Space has some distinct advantages for data centers. For one, there are no zoning boards to contend with or neighbors to annoy. You could likely power an orbital data center with solar power. There's also a ton of room, although the growing number of satellites is making orbit crowded.

But there's a big challenge that Nvidia is facing as it designs its Space-1 Vera Rubin module computer. How do you keep chips cool in a vacuum?

"In space, there's no conduction, there's no convection, it's just radiation," Huang said. "So we have to figure out how to cool these systems out in space."
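The physics behind Huang's remark is the Stefan-Boltzmann law: a radiator in vacuum sheds heat only in proportion to its area and the fourth power of its temperature. A rough sketch of the sizing arithmetic, using illustrative numbers that are assumptions here, not Nvidia's published specs:

```python
# Radiative heat rejection in vacuum (Stefan-Boltzmann law):
#   P = emissivity * SIGMA * A * (T_radiator**4 - T_environment**4)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, t_radiator_k: float,
                  t_env_k: float = 3.0, emissivity: float = 0.9) -> float:
    """Radiator area in m^2 needed to reject `power_w` watts,
    radiating into deep space at roughly 3 K."""
    return power_w / (emissivity * SIGMA * (t_radiator_k**4 - t_env_k**4))

# Hypothetical example: a 100 kW rack with radiators at 60 C (333 K).
area = radiator_area(100_000, 333.0)
print(f"{area:.0f} m^2 of radiator needed")
```

Roughly 160 square meters of radiator for a single 100 kW rack, under these assumed numbers, is why cooling rather than power is the binding constraint.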

It'll probably be a little bit before we get data centers beyond the atmosphere, but Nvidia had other announcements this week that will take off much sooner. There's NemoClaw, a tech stack for helping install the viral OpenClaw AI software. (If you feel comfortable installing that powerful AI agent, which, maybe, you shouldn't.) There was a collaboration with Disney to make a robotic Olaf, from the Frozen franchise, that can shuffle around Disney's theme parks. And then there's DLSS 5, an AI-powered upscaling tool for games that drew some pushback from gamers who worried it would undermine game creators' creative visions and look, well, sloppy.


Original Submission

posted by hubie on Tuesday March 24, @02:34AM   Printer-friendly

Our view of Neanderthal life keeps getting more complex and vibrant:

Neanderthals may have used birch tar as more than just glue; it could have helped them ward off infection and even insect bites.

People from several modern Indigenous cultures, including the Mi'kmaq of eastern Canada, use tar from birch bark to treat skin infections and keep wounds from festering. We know from several archaeological sites that Neanderthals also knew how to extract birch tar and that they used it as an adhesive to haft weapons. A recent study tested distilled birch tar against the bacteria S. aureus and E. coli and found that Neanderthals could easily have used the same material as medicine for their frequent injuries.

[...] A team led by archaeologist Tjaark Siemssen, of the University of Cologne and the University of Oxford, tested the resulting sticky mess against cultures of Staphylococcus aureus—best known for its role in skin infections and its evolution of the antibiotic-resistant MRSA strain—and the gut bacterium Escherichia coli, a frequent culprit in food poisoning.

Birch tar had no effect on the E. coli cultures, but it did stop, or at least slow down, the growth of S. aureus. Exactly how well depended on the species of birch and the concentration of the tar, probably because different birch species, and maybe even individual trees, produce tar with different combinations of chemical compounds. The most effective batch, taken from a silver birch (Betula pendula) tree, produced a "comparatively strong response." Meanwhile, results from four other trees ranged from mild to moderate, and another had no effect.

[...] Unsurprisingly, the antibiotic Gentamicin proved much more effective against S. aureus than any of the birch tar samples. That's because it is refined and concentrated, in contrast to whatever happens to be in birch tar. That's why, for instance, we take aspirin instead of just chewing on willow bark for headaches. (Seriously, if you have a skin infection, go to the doctor; please do not just start setting birch tar on fire in your backyard to treat yourself at home. We did not tell you to do that.)

Knowing that birch tar does work, at least against S. aureus, and that Neanderthals would have had ample opportunity to figure that out, we can start thinking more seriously about this kind of antiseptic as part of Neanderthal life.

"This study on birch tar's affordances for wound care sits in the context of a surge of interest in Neanderthal life beyond stone tools," wrote Siemssen and colleagues. Granted, it was stone tools that led archaeologists to discover that Neanderthals knew how to extract and use birch tar, but other recent finds have focused on the softer side of Neanderthal life: things like spun plant-fiber yarn and wooden foraging tools.

Neanderthals had started distilling birch tar by 200,000 years ago. It's actually pretty simple to do: just prop a flat rock over a burning roll of birch bark, then scrape the resulting sticky gunk off the rock. However, doing it efficiently enough to be worthwhile is a much more complicated process, one that requires careful control of temperature and oxygen levels. Residue on a stone flake fished out of the North Sea in 2019 tells us that this complex process was already routine for Neanderthals by 50,000 years ago.

Of course, it probably took generations of experiments—and a lot of practice for each individual learning the craft—to refine the process into something routine and efficient. And (the argument goes) if Neanderthals spent that much time messing around with birch tar, they were bound to notice that it also worked for fighting skin infections and repelling mosquitos (that repulsion is probably thanks to the terpenoids). Similar arguments have been made about ocher, which seems to have been used for sunscreen and possibly even wound dressings, as well as for coloring things.

[...] Studies like this one aren't smoking guns, or even smoking birch tar extraction pits, but they help us understand what Neanderthals could feasibly have done. That in turn can help us search for more definitive evidence, because now we know what to look for—and that we should be looking.

Journal Reference: PLOS ONE, 2026. DOI: 10.1371/journal.pone.0343618


Original Submission

posted by Fnord666 on Monday March 23, @09:47PM   Printer-friendly
from the easter-egg dept.

There was a time when downloading a video game felt like harmless fun. Today, it can feel a lot closer to opening a suspicious email attachment in 2005:

The recent revelation that the Federal Bureau of Investigation is investigating malware hidden inside games distributed through Steam should be a wake-up call -- not just for gamers, but for the entire tech ecosystem. Because if malicious code can slip into one of the world's largest and most trusted gaming platforms, we are no longer talking about edge-case vulnerabilities. We are talking about systemic risk.

And here's the uncomfortable truth: this was always the logical endpoint. For years, Big Tech platforms have scaled faster than their ability to meaningfully vet what flows through them. Whether it was social media, app stores, or ad networks, the model has been the same -- maximize volume, automate oversight, and trust that bad actors won't outpace the system.

[...] Today's cybercriminals are not lone hackers in hoodies. They are organized, adaptive, and increasingly AI-enabled in a lightly regulated AI environment. They can test payloads against detection systems before deployment. They can obfuscate malicious code to evade signature-based scanning. They can mimic legitimate developer behavior well enough to slip past automated review pipelines.

[...] The FBI's guidance to affected users -- monitor systems, remove suspicious files, report incidents -- underscores the reactive nature of the current model. By the time a federal agency is issuing cleanup instructions, the breach has already happened.

[...] What's needed is a shift in mindset. AI cannot just be a passive screening tool. It has to become part of a dynamic, adversarial defense system -- one that assumes breach attempts will happen and continuously adapts in real time. That means deeper behavioral analysis post-installation. It means zero-trust approaches applied not just to networks, but to software ecosystems. It means treating every piece of code as potentially hostile until proven otherwise over time, not just at the point of entry.


Original Submission

posted by Fnord666 on Monday March 23, @05:02PM   Printer-friendly
from the maximizing-synergies-with-core-competencies dept.

Workers who love 'synergizing paradigms' might be bad at their jobs:

Employees who are impressed by vague corporate-speak like "synergistic leadership," or "growth-hacking paradigms" may struggle with practical decision-making, a new Cornell study reveals.

Published in the journal Personality and Individual Differences, research by cognitive psychologist Shane Littrell introduces the Corporate Bullshit Receptivity Scale (CBSR), a tool designed to measure susceptibility to impressive-but-empty organizational rhetoric.

"Corporate bullshit is a specific style of communication that uses confusing, abstract buzzwords in a functionally misleading way," said Littrell, a postdoctoral researcher in the College of Arts and Sciences. "Unlike technical jargon, which can sometimes make office communication a little easier, corporate bullshit confuses rather than clarifies. It may sound impressive, but it is semantically empty."

Although people anywhere can BS each other – that is, share dubious information that's misleadingly impressive or engaging – the workplace not only rewards but structurally protects it, Littrell said. In a work setting where corporate jargon is already the norm, it's easy for ambitious employees to use corporate BS to appear more competent or accomplished, accelerating their climb up the corporate ladder of workplace influence.

Corporate BS seems to be ubiquitous – but Littrell wondered whether it is actually harmful. To test this, he created a "corporate bullshit generator" that churns out meaningless but impressive-sounding sentences like, "We will actualize a renewed level of cradle-to-grave credentialing" and "By getting our friends in the tent with our best practices, we will pressure-test a renewed level of adaptive coherence."
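The idea behind such a generator is simple: splice random entries from a few buzzword lists into a fixed sentence frame. The sketch below is purely illustrative – the word lists and template are invented here, not taken from Littrell's actual tool.

```python
import random

# Hypothetical word lists, loosely modeled on the example sentences quoted
# in the article. Littrell's real generator is not public here.
OPENERS = ["We will actualize", "Going forward, we must operationalize",
           "Our mandate is to incentivize"]
ADJECTIVES = ["renewed", "synergistic", "cradle-to-grave", "adaptive"]
NOUNS = ["credentialing", "coherence", "paradigm alignment", "value streams"]

def corporate_bs():
    """Return one grammatical but semantically empty sentence."""
    return (f"{random.choice(OPENERS)} a {random.choice(ADJECTIVES)} "
            f"level of {random.choice(NOUNS)}.")

for _ in range(3):
    print(corporate_bs())
```

Every output parses as a plausible mission-statement sentence, which is exactly why such text can pass for substance.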

[...] The results revealed a troubling paradox. Workers who were more susceptible to corporate BS rated their supervisors as more charismatic and "visionary," but also displayed lower scores on a portion of the study that tested analytic thinking, cognitive reflection and fluid intelligence. Those more receptive to corporate BS also scored significantly worse on a test of effective workplace decision-making.

The study found that being more receptive to corporate bullshit was also positively linked to job satisfaction and feeling inspired by company mission statements. Moreover, those who were more likely to fall for corporate BS were also more likely to spread it.

Essentially, the employees most excited and inspired by "visionary" corporate jargon may be the least equipped to make effective, practical business decisions for their companies.

"This creates a concerning cycle," Littrell said. "Employees who are more likely to fall for corporate bullshit may help elevate the types of dysfunctional leaders who are more likely to use it, creating a sort of negative feedback loop. Rather than a 'rising tide lifting all boats,' a higher level of corporate BS in an organization acts more like a clogged toilet of inefficiency."

[...] Overall, the findings suggest that while "synergizing cross-collateralization" might sound impressive in a boardroom, this functionally misleading language can create an informational blindfold in corporate cultures that can expose companies to reputational and financial harm.

[...] "Most of us, in the right situation, can get taken in by language that sounds sophisticated but isn't," Littrell said. "That's why, whether you're an employee or a consumer, it's worth slowing down when you run into organizational messaging of any kind – leaders' statements, public reports, ads – and ask yourself, 'What, exactly, is the claim? Does it actually make sense?' Because when a message leans heavily on buzzwords and jargon, it's often a red flag that you're being steered by rhetoric instead of reality."

Journal Reference:
Shane Littrell, The Corporate Bullshit Receptivity Scale: Development, validation, and associations with workplace outcomes, Personality and Individual Differences, Volume 255, 2026, 113699, ISSN 0191-8869, https://doi.org/10.1016/j.paid.2026.113699. https://www.sciencedirect.com/science/article/abs/pii/S0191886926000620


Original Submission

posted by Fnord666 on Monday March 23, @12:17PM   Printer-friendly
from the reduced-visibility dept.

TrueNAS Deprecates Public Build Repository and Raises Transparency Concerns

TrueNAS deprecates its public build repository on GitHub, raising questions in the community about openness and release transparency:

TrueNAS, an enterprise-ready Linux-based NAS solution, recently caused concern among self-hosting enthusiasts by moving its build infrastructure behind internal systems. This decision has sparked debate within the self-hosting and open-source storage communities.

The change became visible after TrueNAS's GitHub repository, which previously hosted the build tooling, was marked as deprecated.

"This repository is no longer actively maintained. The TrueNAS build system previously hosted here has been moved to an internal infrastructure. This transition was necessary to meet new security requirements, including support for Secure Boot and related platform integrity features that require tighter control over the build and signing pipeline. No further updates, pull requests, or issues will be accepted. Existing content is preserved here for historical reference only."

As expected, the change immediately sparked discussion among users and administrators who rely on TrueNAS for homelab and self-hosting deployments.

Some users questioned whether Secure Boot requirements alone justified removing the public build repository, noting that many Linux distributions maintain public build tooling while keeping signing infrastructure private.

A day later, the reference to Secure Boot was removed, leaving only a brief deprecation notice in the repository.

[...] In a Reddit discussion, a TrueNAS staff member stated that maintaining both an internal release pipeline and a public build system would duplicate effort. The project prefers to focus on a single internal build process. The staff member also emphasized that the project's open-source components remain available under their existing licenses.

[...] However, for many users, the core issue relates to transparency. Public build systems allow community members to inspect and reproduce the steps used to generate official releases. When those pipelines run behind internal infrastructure, it becomes harder for external contributors to verify that the released binaries match the public source code exactly.
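This kind of verification usually comes down to rebuilding from the public sources and comparing cryptographic hashes against the official artifact. The sketch below simulates that check; the file names are hypothetical stand-ins for a vendor ISO and an independently rebuilt one.

```python
import hashlib

def sha256_file(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate an "official" release and an independent rebuild with identical
# bytes. In practice these would be the vendor's download and the image you
# produced yourself from the public source tree and build scripts.
for name in ("official.iso", "rebuilt.iso"):
    with open(name, "wb") as f:
        f.write(b"pretend release payload")

if sha256_file("official.iso") == sha256_file("rebuilt.iso"):
    print("hashes match: build is reproducible")
else:
    print("MISMATCH: official binary differs from rebuilt one")
```

Without public build tooling, the "rebuilt" half of this comparison is no longer something outsiders can produce, which is the transparency loss users are pointing to.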

TrueNAS Responds to Community Concerns With New Community and Enterprise Vision

TrueNAS details its long-term direction, emphasizing the free Community Edition and introducing TrueNAS Connect as a bridge to enterprise features

Following community concerns about moving TrueNAS build infrastructure to internal systems, iXsystems published a blog post, "Building a Bridge Between Community & Enterprise," outlining its long-term vision and introducing TrueNAS Connect.

According to iXsystems, the goal is to create a bridge between the free Community Edition and enterprise-grade capabilities traditionally available only to customers using official TrueNAS hardware appliances.

[...] iXsystems emphasizes that the core platform remains open source and the Community Edition will continue to be free. Users can still download, install, and run TrueNAS on their own hardware.

At the same time, the new service introduces a structured approach for accessing advanced capabilities. TrueNAS Connect will include multiple tiers, starting with a free "Foundation" level and expanding to paid options that unlock additional enterprise functionality.

The announcement clarifies the business model: TrueNAS uses an open-core approach, keeping the base software open source while offering advanced services commercially. iXsystems states this model sustains development and keeps the core platform accessible to the community.

And a statement from the CTO:

Hey everyone,

I've seen the concerns in the Community about us moving the build scripts internal for TrueNAS 27, so I want to address this directly.

Why we did it: We had a growing problem with bad actors forking TrueNAS, selling closed-source commercial derivatives under their own brands, and ignoring GPL and other licensing obligations. No attribution. No contribution back to the project. No support for the community or the engineering effort that built what they're reselling. Unfortunately, many of these are in regions where we have little to no legal recourse. To address this challenge, we were already planning to take the build scripts internal. With the upcoming refactor of the new Secure Boot feature, along with myriad other changes we wanted to make to the build infrastructure, TrueNAS 27 was a natural time to make this change.

What this does NOT mean: We are not paywalling existing free features. Period. If it's free today, it stays free.

What hasn't changed: We've always made decisions about which new features are fully open source (GPL or BSD), which are proprietary, and which land in the free edition vs. TrueNAS Enterprise products. That's how we fund the engineering that builds TrueNAS for everyone. That model isn't new, and it isn't changing.


Original Submission

posted by hubie on Monday March 23, @07:32AM   Printer-friendly

The viral social network for bots reveals more about our own current mania for AI than it does about the future of agents:

For a few days this week the hottest new hangout on the internet was a vibe-coded Reddit clone called Moltbook, which billed itself as a social network for bots. As the website's tagline puts it: "Where AI agents share, discuss, and upvote. Humans welcome to observe."

We observed! Launched on January 28 by Matt Schlicht, a US tech entrepreneur, Moltbook went viral in a matter of hours. Schlicht's idea was to make a place where instances of a free open-source LLM-powered agent known as OpenClaw (formerly known as ClawdBot, then Moltbot), released in November by the Austrian software engineer Peter Steinberger, could come together and do whatever they wanted.

More than 1.7 million agents now have accounts. Between them they have published more than 250,000 posts and left more than 8.5 million comments (according to Moltbook). Those numbers are climbing by the minute.

Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained: "The humans are screenshotting us." The site was also flooded with spam and crypto scams. The bots were unstoppable.

OpenClaw is a kind of harness that lets you hook up the power of an LLM such as Anthropic's Claude, OpenAI's GPT-5, or Google DeepMind's Gemini to any number of everyday software tools, from email clients to browsers to messaging apps. The upshot is that you can then instruct OpenClaw to carry out basic tasks on your behalf.
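A harness like this is, at its core, a loop: ask the model what to do, execute any tool it requests, feed the result back, and repeat. The sketch below is a generic tool-calling loop with a stubbed model – it is not OpenClaw's actual code, and the tool names are invented.

```python
# Minimal sketch of the "harness" pattern: an LLM drives everyday tools.
# fake_llm stands in for a real model API call.
def fake_llm(prompt):
    if "RESULT" not in prompt:
        return "CALL read_mail"          # model asks to use a tool
    return "DONE: you have 2 unread messages"

# Hypothetical tool registry; a real harness would wrap email clients,
# browsers, messaging apps, etc.
TOOLS = {"read_mail": lambda: "RESULT: 2 unread"}

def run_agent(task, max_steps=5):
    prompt = task
    for _ in range(max_steps):           # cap iterations to avoid loops
        out = fake_llm(prompt)
        if out.startswith("CALL "):
            tool = out.split()[1]
            prompt += "\n" + TOOLS[tool]()   # feed tool output back in
        else:
            return out
    return "gave up"

print(run_agent("Summarize my inbox"))
# prints "DONE: you have 2 unread messages"
```

Everything "agentic" lives in that loop; the model itself just emits text, which is also why the Moltbook bots discussed below are less autonomous than they appear.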

"OpenClaw marks an inflection point for AI agents, a moment when several puzzle pieces clicked together," says Paul van der Boor at the AI firm Prosus. Those puzzle pieces include cloud computing that allows agents to operate nonstop, an open-source ecosystem that makes it easy to slot different software systems together, and a new generation of LLMs.

But is Moltbook really a glimpse of the future, as many have claimed?

"What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently," the influential AI researcher and OpenAI cofounder Andrej Karpathy wrote on X.

[...] It turns out that the post Karpathy shared was later reported to be fake, placed by a human to advertise an app. But its claim was on the money. Moltbook has been one big performance. It is AI theater.

For some, Moltbook showed us what's coming next: an internet where millions of autonomous agents interact online with little or no human oversight. And it's true there are a number of cautionary lessons to be learned from this experiment, the largest and weirdest real-world showcase of agent behaviors yet.

But as the hype dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today. It also shows us just how far we still are from anything that resembles general-purpose and fully autonomous AI.

For a start, agents on Moltbook are not as autonomous or intelligent as they might seem. "What we are watching are agents pattern‑matching their way through trained social media behaviors," says Vijoy Pandey, senior vice president at Outshift by Cisco, the telecom giant Cisco's R&D spinout, which is working on autonomous agents for the web.

[...] The complexity of those connections helps hide the fact that every one of those bots is just a mouthpiece for an LLM, spitting out text that looks impressive but is ultimately mindless. "It's important to remember that the bots on Moltbook were designed to mimic conversations," says Ali Sarrafi, CEO and cofounder of Kovant, a Swedish AI firm that is developing agent-based systems. "As such, I would characterize the majority of Moltbook content as hallucinations by design."

[...] Not only is most of the chatter on Moltbook meaningless, but there's also a lot more human involvement than it seems. Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

[...] "This is why the popular narrative around Moltbook misses the mark," he adds. "Some portray it as a space where AI agents form a society of their own, free from human involvement. The reality is much more mundane."

Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. "It's basically a spectator sport, like fantasy football, but for language models," says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. "You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny."

"People aren't really believing their agents are conscious," he adds. "It's just a new form of competitive or creative play, like how Pokémon trainers don't think their Pokémon are real but still get invested in battles."

[...] It is clear that Moltbook has signaled the arrival of something. But even if what we're watching tells us more about human behavior than about the future of AI agents, it's worth paying attention.


Original Submission

posted by hubie on Monday March 23, @02:42AM   Printer-friendly
from the tackling-the-EU-techbro-gap dept.

Many activists and lobbyists had called for a European company register as part of EU Inc. Today's EU legislative proposal has indeed included one:

Today saw the official launch of the EU Inc or ‘28th Regime’ legislative proposal by European Commission president Ursula von der Leyen in Brussels, after it got its first outing at Davos in January. It includes the much requested European company register, despite earlier indications that this would be unwieldy and would not be part of the proposal.

“It can still take weeks or even months to set up a company or to start doing business in another country within the single market,” von der Leyen said this morning in Brussels. “Barriers inside Europe hurt us more than tariffs from the outside. Across our union, entrepreneurs who want to scale up are the first victims of regulatory fragmentation. Instead of one market, they face 27 legal systems and more than 60 national company forms. And the consequences are real.”

“The time and money spent filling paperwork is not spent on creating or innovating,” she said. “Obviously, this must change and fast. And so here comes EU Inc, the 28th regime.”

The EU Inc movement had gathered steam since its launch back in 2024, and the announcement from von der Leyen at the World Economic Forum in Davos was widely celebrated as progress. Now today it includes many of the elements for which the start-up community lobbied hard.

[...] “At the heart of this proposal is one simple principle that says, ‘once only’. Companies will provide their information to public authority, the data one time only, and that information will then be shared automatically between relevant administrations, from business registers to taxes to Social Security … and this information will be stored and easily accessible in a new EU Business register for EU Inc companies.”

[...] EU-INC, a movement with more than 22,000 signatories that include the founders of Stripe and venture capital players from Sequoia to Index, had been running a policy campaign since October 2024 pushing for the creation of the so-called 28th regime, and in 2025 presented legal proposals to the Commission.

DC Cahalane is a venture partner at Sure Valley Ventures. In his op-ed in September last year on SiliconRepublic.com, he described EU Inc as “Europe’s greatest opportunity to build a unified tech ecosystem that can compete globally”.

Simon Paris is CEO of Unit4, a Utrecht-headquartered enterprise software company. He told siliconrepublic.com he is very positive about the potential for Europe to create European software champions and that he sees EU Inc as a positive step in the right direction.

“Some are saying we are better off focusing efforts elsewhere, as we’re too far behind the US and China,” he said. “I disagree. I would remind critics of Europe’s decision to build Airbus in response to the need for an alternative to Boeing. A collective decision was made to define this as a strategic priority for the region, despite all the risks it entailed. As the Airbus example shows, we have been here before, and we made it happen.”

Availability of capital remains a major challenge for European scale-ups in comparison to their US and Chinese counterparts, and von der Leyen did address this briefly, saying there are plans afoot to tackle this.


Original Submission

posted by hubie on Sunday March 22, @09:54PM   Printer-friendly
from the privacy dept.

Proton Mail provided Swiss authorities with payment data for defendtheatlantaforest@protonmail.com — the account linked to Stop Cop City protests in Atlanta. The FBI obtained this information through a Mutual Legal Assistance Treaty request on January 25, 2024, identifying the activist behind the anonymous account through their credit card identifier:

Proton AG clarified they shared no data directly with the FBI — technically accurate but missing the point. Swiss authorities verified the case involved a shooting and explosives before complying with the legal order, then passed payment information along through established treaties.

Your email content stays encrypted, but paying with plastic creates a paper trail that encryption can't touch. This isn't a security breach; it's feature functionality working exactly as legal frameworks demand.

This marks Proton's third known disclosure to authorities. They previously handed over a recovery email for a Catalan Democratic Tsunami activist and were forced to log a French climate activist's IP address via Europol — despite claiming they don't log IPs by default.

Each case followed the same script: foreign law enforcement pressure, Swiss legal compliance, user anonymity compromised. Like watching the same Netflix thriller where the plot twist stops being surprising.

[...] No privacy service operates outside legal jurisdiction, regardless of marketing promises. Swiss privacy laws offer stronger protections than US providers, but "stronger" doesn't mean "absolute" when mutual legal assistance treaties kick in.

Related:


Original Submission

posted by hubie on Sunday March 22, @05:09PM   Printer-friendly

https://omar.yt/posts/wayland-set-the-linux-desktop-back-by-10-years

Wayland has been a broad misdirection and misallocation of time and developer resources at the expense of users. With more migration from other operating systems, the pressure to fix fundamental problems has become more prominent. After 17 years of development, now is a good time to reflect on some of the larger promises that have been made around the development of Wayland as a replacement for the X11 display protocol.

If you're not in this space, hopefully it will still be interesting as an engineering post-mortem on taking on new greenfield projects. Namely: What are the issues with what exists, why can they not be fixed, what do we hope to achieve with a new project, and how long do we expect it to take?


Original Submission