


posted by hubie on Sunday June 09, @09:26PM   Printer-friendly
from the Oh-What-a-Dream! dept.

I just found this while browsing Russian Television:

It has pictures: much like art through the millennia, done with today's digital media.

https://www.rt.com/pop-culture/596257-ai-models-beauty-pageant/

So contestants from all over the world can gather and compete for the title of "Miss AI".

Excerpt from RT:
------------------
The event, called 'Miss AI', is being organized by the World AI Creator Awards (WAICA) in collaboration with Fanvue – an OnlyFans-like subscription-based platform that already hosts a number of virtual models, including those who offer adult content.

The digital contestants hoping to secure the Miss AI crown will be judged on their beauty, underlying tech, as well as their social media pull, according to WAICA's official website. The AI creator's "social media clout" will also be assessed based on their engagement numbers with fans, rate of audience growth, and ability to utilize social media platforms such as Instagram.
---------------------

I don't quite know what to make of this. There are so many viewpoints, and I hope to see what others think about it. For one thing, it's an inevitable outcome of combining our technology with artistic expression, something we've done since the ancient cave walls. Soon our digital presence can be tailored to anything we want it to be. Work from home. Zoom calls (with your AI proxy, of course, which could handle as many simultaneous interactions as your server technology allows).


Original Submission

posted by hubie on Sunday June 09, @04:44PM   Printer-friendly
from the cancel-your-plans-and-get-patching dept.

With PoC code available and active Internet scans, speed is of the essence:

A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend starts.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That—combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remotely executing code on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default—has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

"A nasty bug with a very simple exploit—perfect for a Friday afternoon," researchers with security firm WatchTowr wrote.

CVE-2024-4577, as the vulnerability is tracked, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to pass user-supplied input into commands executed by an application, in this case PHP. Exploits allow attackers to bypass CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

"While implementing PHP, the team did not notice the Best-Fit feature of encoding conversion within the Windows operating system," researchers with Devcore, the security firm that discovered CVE-2024-4577, wrote. "This oversight allows unauthenticated attackers to bypass the previous protection of CVE-2012-1823 by specific character sequences. Arbitrary code can be executed on remote PHP servers through the argument injection attack."
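The Best-Fit mechanism described above can be illustrated with a short sketch. This is not Windows code: it simulates, in plain Python, the lossy wide-character-to-ANSI narrowing that the exploit abuses, and the one-entry mapping table is an assumption standing in for Windows' much larger best-fit tables.

```python
# Illustrative simulation of Windows "Best-Fit" narrowing.
# On affected locales, the soft hyphen (U+00AD) is best-fit
# converted to an ASCII hyphen when text is narrowed to the
# ANSI code page.
BEST_FIT = {"\u00ad": "-"}  # stand-in for the real best-fit table

def best_fit_narrow(text: str) -> str:
    """Approximate the lossy wide-to-ANSI conversion."""
    return "".join(BEST_FIT.get(ch, ch) for ch in text)

# A decoded query string like "%ADd allow_url_include=1" thus
# reaches php-cgi as "-d allow_url_include=1" -- a command-line
# option, which is the argument-injection step of the exploit.
query = "\u00add allow_url_include=1"
print(best_fit_narrow(query))  # -d allow_url_include=1
```

The point is that a byte PHP's CGI handling treats as harmless survives as a genuine `-` option once Windows applies its best-fit conversion, which is how the 2012-era protection gets bypassed.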

CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn't set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server. This configuration is set by default in XAMPP for Windows, making the platform vulnerable unless it has been modified.

[...] The vulnerability was discovered by Devcore researcher Orange Tsai, who said: "The bug is incredibly simple, but that's also what makes it interesting."

The Devcore writeup said that the researchers have confirmed that XAMPP is vulnerable when Windows is configured to use the locales for Traditional Chinese, Simplified Chinese, or Japanese. In Windows, a locale is a set of user preference information related to the user's language, environment, and/or cultural conventions. The researchers haven't tested other locales and have urged people using them to perform a comprehensive asset assessment to test their usage scenarios.

[...] XAMPP for Windows had yet to release a fix at the time this post went live. Admins who don't need PHP CGI can turn it off by editing the following Apache HTTP Server configuration file:

C:/xampp/apache/conf/extra/httpd-xampp.conf

Locate the corresponding line:

ScriptAlias /php-cgi/ "C:/xampp/php/"

And comment it out:

# ScriptAlias /php-cgi/ "C:/xampp/php/"
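The edit above can also be scripted. The sketch below demonstrates the change on a local demo copy of the file rather than the real one (on a default install the real file is the `httpd-xampp.conf` path shown above; back up the file and adjust the path for your setup):

```shell
# Demo copy of the relevant line (the real file lives at
# C:/xampp/apache/conf/extra/httpd-xampp.conf on a default install).
conf="httpd-xampp.conf"
printf 'ScriptAlias /php-cgi/ "C:/xampp/php/"\n' > "$conf"

# Comment out the php-cgi ScriptAlias; sed keeps a .bak backup.
sed -i.bak 's|^ScriptAlias /php-cgi/|# &|' "$conf"
cat "$conf"   # -> # ScriptAlias /php-cgi/ "C:/xampp/php/"
```

Restart Apache afterwards so the change takes effect.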

Additional analysis of the vulnerability is available here.


Original Submission

posted by hubie on Sunday June 09, @11:58AM   Printer-friendly
from the easy-1-2-3-steps-assembling-virtual-reality dept.

Interested in a career selling virtual meatballs at IKEA? I guess it's some kind of gimmick between IKEA and Roblox, but it seems somewhat weird: selling virtual products in a virtual world to people. Is this the future of employment?

https://thecoworker.co.uk/


Original Submission

posted by hubie on Sunday June 09, @07:16AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Also reported at: FBI recovers 7,000 LockBit keys, urges ransomware victims to reach out

These decryption keys were uncovered by the FBI after a massive joint operation disrupted LockBit earlier this year, though the gang appears to still be operational.

The US FBI has revealed that it has more than 7,000 decryption keys to help victims of the notorious LockBit ransomware gang.

These decryption keys were recovered by the FBI as a result of a disruptive international law enforcement operation conducted against LockBit earlier this year. The gang provides ransomware-as-a-service to a global network of 'affiliates', giving criminals tools to carry out their own cyberattacks.

In February, the joint operation took down LockBit's data leak website and uncovered a large amount of data about the gang and its activities. Authorities also seized the decryption keys that the FBI is now offering to victims.

In a recent statement, the FBI’s cyber assistant director Bryan Vorndran claimed LockBit was the most deployed ransomware variant in the world by 2022 and that the gang has caused “billions of dollars in damages to victims”.

“We are reaching out to known LockBit victims and encouraging anyone who suspects they were a victim to visit our Internet Crime Complaint Center,” Vorndran said.

[...] Raj Samani, SVP and chief scientist at Rapid7, said the release of these decryption keys is “another kick in the teeth” for the LockBit gang and “a great win for law enforcement”.

“The likes of LockBit survive and thrive on victims paying ransom demands, therefore, it’s great to see the US government be proactive and prevent this by releasing the decryption keys for free,” Samani said.

“Ever since law enforcement took down LockBit’s infrastructure in February 2024, they’ve engaged in PR and damage control in order to show strength and maintain the confidence of affiliates. However, such announcements by the FBI damages this confidence, and hopefully we’ll soon see the end of the LockBit ransomware group.”

Not everyone is so optimistic, however. Ricardo Villadiego, the founder and CEO of cybersecurity firm Lumu, told SiliconRepublic.com recently that gangs such as LockBit are prepared for these potential risks, as evidenced by the fact that the gang was offering its services again in "less than four days".


Original Submission

posted by martyb on Sunday June 09, @02:30AM   Printer-friendly

Editor's note: This article has been *greatly* shortened; it is well worth reading the whole article. --Bytram

----------

This AI-powered "black box" could make surgery safer:

While most algorithms operate near perfectly on their own, Peter Grantcharov explains that the OR black box is still not fully autonomous. For example, it's difficult to capture audio through ceiling mikes and thus get a reliable transcript to document whether every element of the surgical safety checklist was completed; he estimates that this algorithm has a 15% error rate. So before the output from each procedure is finalized, one of the Toronto analysts manually verifies adherence to the questionnaire. "It will require a human in the loop," Peter Grantcharov says, but he gauges that the AI model has made the process of confirming checklist compliance 80% to 90% more efficient. He also emphasizes that the models are constantly being improved.

In all, the OR black box can cost about $100,000 to install, and analytics expenses run $25,000 annually, according to Janet Donovan, an OR nurse who shared with MIT Technology Review an estimate given to staff at Brigham and Women's Faulkner Hospital in Massachusetts. (Peter Grantcharov declined to comment on these numbers, writing in an email: "We don't share specific pricing; however, we can say that it's based on the product mix and the total number of rooms, with inherent volume-based discounting built into our pricing models.")

[...] At some level, the identity protections are only half measures. Before 30-day-old recordings are automatically deleted, Grantcharov acknowledges, hospital administrators can still see the OR number, the time of operation, and the patient's medical record number, so even if OR personnel are technically de-identified, they aren't truly anonymous. The result is a sense that "Big Brother is watching," says Christopher Mantyh, vice chair of clinical operations at Duke University Hospital, which has black boxes in seven ORs. He will draw on aggregate data to talk generally about quality improvement at departmental meetings, but when specific issues arise, like breaks in sterility or a cluster of infections, he will look to the recordings and "go to the surgeons directly."

In many ways, that's what worries Donovan, the Faulkner Hospital nurse. She's not convinced the hospital will protect staff members' identities and is worried that these recordings will be used against them—whether through internal disciplinary actions or in a patient's malpractice suit. In February 2023, she and almost 60 others sent a letter to the hospital's chief of surgery objecting to the black box. She's since filed a grievance with the state, with arbitration proceedings scheduled for October.

If you were having an operation, how much of the operation would you want an AI to do?


Original Submission

posted by janrinok on Saturday June 08, @09:55PM   Printer-friendly
from the I-wonder-what-Betteridge-would-say dept.

Arthur T Knackerbracket has processed the following story:

[Editor's Note: RAG: retrieval-augmented generation]

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a statistical gap-filling phenomenon AI language models produce when they are tasked with reproducing knowledge that wasn’t present in the training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.

[...] “RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process” to help LLMs stick to the facts, according to Noah Giansiracusa, associate professor of mathematics at Bentley University.
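The look-up-then-generate flow Giansiracusa describes can be sketched in a few lines. Everything here is invented for illustration (the toy corpus, the word-overlap scorer, and the prompt template); a real system would use a vector index and an actual LLM call.

```python
# Toy RAG sketch: retrieve relevant documents, then build an augmented
# prompt so the model answers from retrieved facts instead of guessing.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context; the LLM call itself is omitted."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is open Monday through Friday.",
]
print(build_prompt("What is the refund policy?", docs))
```

In a real deployment the corpus would live in a vector store and `retrieve` would be a nearest-neighbour search over embeddings; updating the knowledge base then means adding documents, not retraining the model, which is exactly the cost advantage discussed below.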

[...] Although RAG is now seen as a technique to help fix issues with generative AI, it actually predates ChatGPT. The term was coined in a 2020 academic paper by researchers at Facebook AI Research (FAIR, now Meta AI Research), University College London, and New York University.

As we've mentioned, LLMs struggle with facts. Google’s entry into the generative AI race, Bard, made an embarrassing error on its first public demonstration back in February 2023 about the James Webb Space Telescope. The error wiped around $100 billion off the value of parent company Alphabet. LLMs produce the most statistically likely response based on their training data and don’t understand anything they output, meaning they can present false information that seems accurate if you don't have expert knowledge on a subject.

LLMs also lack up-to-date knowledge and the ability to identify gaps in their knowledge. “When a human tries to answer a question, they can rely on their memory and come up with a response on the fly, or they could do something like Google it or peruse Wikipedia and then try to piece an answer together from what they find there—still filtering that info through their internal knowledge of the matter,” said Giansiracusa.

But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup.

In theory, RAG should make keeping AI models up to date far cheaper and easier. “The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information,” said Peterson. “This reduces LLM development time and cost while enhancing the model’s scalability.”


Original Submission

posted by janrinok on Saturday June 08, @05:08PM   Printer-friendly
from the party-time dept.

What's next for MDMA:

MDMA, sometimes called Molly or ecstasy, has been banned in the United States for more than three decades. Now this potent mind-altering drug is poised to become a badly needed therapy for PTSD.

On June 4, the Food and Drug Administration's advisory committee will meet to discuss the risks and benefits of MDMA therapy. If the committee votes in favor of the drug, it could be approved to treat PTSD this summer. The approval would represent a momentous achievement for proponents of mind-altering drugs, who have been working toward this goal for decades. And it could help pave the way for FDA approval of other illicit drugs like psilocybin. But the details surrounding how these compounds will make the transition from illicit substances to legitimate therapies are still foggy.

[...] However, for drugs that carry a risk of serious side effects, the FDA can add a risk evaluation and mitigation strategy to its approval. For MDMA that might include mandating that the health-care professionals who administer the medication have certain certifications or specialized training, or requiring that the drug be dispensed only in licensed facilities.

For example, Spravato, a nasal spray approved in 2019 for depression that works much like ketamine, is available only at a limited number of health-care facilities and must be taken under the observation of a health-care provider. Having safeguards in place for MDMA makes sense, at least at the outset, says Matt Lamkin, an associate professor at the University of Tulsa College of Law who has been following the field closely: "Given the history, I think it would only take a couple of high-profile bad incidents to potentially set things back."

What mind-altering drug is next in line for FDA approval?

Psilocybin, a.k.a. the active ingredient in magic mushrooms. This summer Compass Pathways will release the first results from one of its phase 3 trials of psilocybin to treat depression. Results from the other trial will come in the middle of 2025, which—if all goes well—puts the company on track to file for approval in the fall or winter of next year. With the FDA review and the DEA rescheduling, "it's still kind of two to three years out," Nath says.

Some states are moving ahead without formal approval. Oregon voters made psilocybin legal in 2020, and the drug is now accessible there at about 20 licensed centers for supervised use. "It's an adult use program that has a therapeutic element," says Ismail Ali, director of policy and advocacy at the Multidisciplinary Association for Psychedelic Studies (MAPS).


Original Submission

posted by janrinok on Saturday June 08, @12:24PM   Printer-friendly

Risky Biz News: The Linux CNA mess you didn't know about:

The Linux Kernel project was made an official CVE Numbering Authority (CNA), with exclusive rights to issue CVE identifiers for the Linux kernel, in February this year.

While initially this looked like good news, almost three months later, this has turned into a complete and utter disaster.

Over the past months, the Linux Kernel team has issued thousands of CVE identifiers, the vast majority covering trivial bug fixes rather than genuine security flaws.

Just in May alone, the Linux team issued over 1,100 CVEs, according to Cisco's Jerry Gamblin—a number that easily beat out professional bug bounty programs/platforms run by the likes of Trend Micro ZDI, Wordfence, and Patchstack.

Ironically, this was a disaster waiting to happen: the Linux Kernel team laid out some weird rules for issuing CVEs right after it received its CNA status.

We say weird because they are quite unique among all CNAs. The Linux kernel team argues that because of the deep layer where the kernel runs, bugs are hard to understand, and there is always a possibility of them becoming a security issue later down the line. Direct quote below:

"Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team."

[...] Instead, the Linux Kernel team appears to have adopted a simpler approach where it puts a CVE on everything and lets the software and infosec community at large confirm that an issue is an authentic security flaw. If it's not, it's on the security and vulnerability management firms to file CVE revocation requests with the precise Linux Kernel team that runs the affected component.

The new Linux CNA rules also prohibit the issuance of CVEs for bugs in EOL Linux kernels, which is also another weird take on security. Just because you don't maintain the code anymore, that doesn't mean attackers won't exploit it and that people wouldn't want to track it.

The Linux team will also refuse to assign CVEs until a patch has been deployed, meaning there will be no CVEs for zero-days or vulnerabilities that may require a longer reporting and patching timeline.

[...] And if this isn't bad enough, the Linux kernel team appears to be backfilling CVEs for fixes to last year's code, generating even more noise for people who use CVEs for legitimate purposes.

[...] Unfortunately, all of this CVE spam could not have happened at a worse time. Just as the Linux Kernel team was getting its CNA status, NIST was slowing down its management of the NVD database—where all CVEs are compiled and enriched.

NIST cited a staff shortage and a sudden rise in the number of reported vulnerabilities—mainly from the IoT space. Having one in every five CVEs be a Linux non-security bug isn't helping NIST at all right now.


Original Submission

posted by janrinok on Saturday June 08, @10:00AM   Printer-friendly

William Anders, the former Apollo 8 astronaut who took the iconic "Earthrise" photo showing the planet as a shadowed blue marble from space in 1968, was killed Friday when the plane he was piloting alone plummeted into the waters off the San Juan Islands in Washington state. He was 90.

This has been reported by multiple sources.

posted by janrinok on Saturday June 08, @07:41AM   Printer-friendly

https://every.to/the-crazy-ones/the-misfit-who-built-the-ibm-pc

In a burnished-oak corridor outside the committee room at IBM's headquarters in August 1980, two engineers pace nervously. Eventually, a door opens. Their boss, Bill Lowe, emerges from the board room next door. Before they can say anything, he smiles and nods. They laugh. They can't quite believe it. It's official. IBM is going to try and build a home computer.

Bill Lowe kicked off this ambitious project, but he wouldn't be the person who would finish it. That role would fall to his successor, a humble, cowboy boot-wearing mid-level executive, out of favor and kicking his heels in the IBM corporate backwater of Boca Raton, Florida. He would take Lowe's project forward, one nobody else in the company wanted. Just 12 months later, on August 15, 1981, a computer would launch that would change the world: the IBM PC.

This is the story of Don Estridge, the man who brought the IBM PC to market and changed business and home computing forever. In just five years he created an IBM division that almost nobody else in the company wanted to exist. By 1983, it had seized 70 percent of the microcomputer market and was valued at over $4 billion ($12 billion today). Under Estridge, IBM's PC division sold over 1 million machines a year, making it the third largest computer manufacturer in the world on its own. This story is based on contemporary accounts in publications such as InfoWorld, PC magazine, Time, and the New York Times, as well as books such as Blue Magic by James Chposky and Ted Leonsis; Big Blues by Paul Carroll; and Fire in the Valley by Michael Swaine and Paul Frieberger.


Original Submission

posted by janrinok on Saturday June 08, @02:47AM   Printer-friendly
from the my-dog-has-no-nose.... dept.

Dogs trained to detect scent may be able to identify significantly lower concentrations of odour molecules than has previously been documented:

A study carried out by the University of Helsinki's DogRisk research group, the University of Eastern Finland and Wise Nose – Scent Discrimination Association in Finland investigated the threshold for scent detection in dogs.

The study revealed that dogs can learn to identify concentrations of eucalyptus hydrolate that are clearly below the detection threshold of sophisticated analytical instruments used today. The concentrations were also far below previously reported levels. Dogs' extraordinary sense of smell can be exploited, for example, in search and rescue operations and in medical detection.

The 15 dogs that participated in the study had different training backgrounds. Some dogs had experience of nose work, which is a hobby and competitive dog sport, while some had been trained to identify diseases, mould or pests.

In the study, the dogs were to differentiate samples containing low concentrations of eucalyptus hydrolate from samples containing only water. The focus was on determining the lowest concentration that the dogs could detect for certain. The study included three different tests where the concentrations of the hydrolate were diluted gradually until the dogs could no longer identify the scent. This determined the threshold for their scent detection ability.

"The dogs' scent detection threshold initially varied from 1:10⁴–1:10²³ but narrowed down to 1:10¹⁷–1:10²¹ after a training period. In other words, the dogs needed 1 to 10 molecules per millilitre of water to detect the right sample. For perspective, a single yeast cell contains 42 million molecules," describes the principal investigator of the study, Anna Hielm-Björkman from the University of Helsinki.

Journal Reference: Turunen, S.; Paavilainen, S.; Vepsäläinen, J.; Hielm-Björkman, A. Scent Detection Threshold of Trained Dogs to Eucalyptus Hydrolat. Animals 2024, 14, 1083. https://doi.org/10.3390/ani14071083


Original Submission

posted by janrinok on Friday June 07, @09:58PM   Printer-friendly
from the but-not-in-a-good-way dept.

Video report in the NYTimes, covered as text by various other outlets, e.g. The Telegraph:

Introduction of high-speed Starlink turns some Brazilian tribesmen into 'lazy addicts' glued to their phones

The indigenous Marubo people, who for hundreds of years have existed in small huts along the Itui River in the Amazon, were connected to the billionaire's satellite network in September.

The community embraced the technology, marvelling at the life-saving ability to call for immediate help when grappling with venomous snake bites as well as being able to remain in contact with faraway relatives.

But, since a group of men arrived at the camp with antennas strapped to their backs to connect the remote tribe of 2,000 people to the internet, there have been some less desirable consequences.

Critics warn tribe members have become "lazy", reclining in hammocks all day glued to their phones to gossip on WhatsApp or chat to strangers on Instagram.

And there have already been reports of young men engaging in aggressive sexual behaviour after being exposed to pornography, Alfredo Marubo, leader of a Marubo association of villages, told The New York Times.

Young men brought up in a culture where kissing in public is seen as scandalous have been sharing explicit videos with one another in group chats, he said, adding: "We're worried young people are going to want to try it."
[...]
Kâipa Marubo, a father of three, said he was concerned about his children playing first-person shooter video games, fearing they might want to mimic the attacks.

Another leader, Enoque Marubo, 40, said the tribe has started limiting the hours members could access the internet because its introduction had "changed the routine so much that it was detrimental".

Members can browse the internet for two hours in the morning and five hours in the afternoon and all day on Sundays.

"In the village, if you don't hunt, fish and plant, you don't eat," Enoque said.

Enoque worked with Brazilian activist Flora Dutra to bring the internet to the tribe.

They contacted American philanthropist Allyson Reneau, who reportedly donated 20 Starlink units to the Marubo tribe.

See also :

Remote Amazon Tribe Finally Gets Internet, Gets Hooked on Porn and Social Media:

I wonder what ads they'll be shown based on their geolocation? Given how fast they learn, it shouldn't take long to figure out that content creation could be more lucrative than just consumption. Does Amazon deliver WebCams and lights in the Amazon forest, tho'?


Original Submission

posted by janrinok on Friday June 07, @03:14PM   Printer-friendly

reuters.com:

The U.S. Justice Department and the Federal Trade Commission have reached a deal that clears the way for potential antitrust investigations into the dominant roles that Microsoft (MSFT.O), OpenAI and Nvidia (NVDA.O) play in the artificial intelligence industry, according to a source familiar with the matter.

The agreement between the two agencies shows regulatory scrutiny is gathering steam amid concerns over concentration in the industries that make up AI. Microsoft and Nvidia not only dominate their industries but are also the world's two biggest companies by market capitalization.

The move to divvy up the industry mirrors a similar agreement between the two agencies in 2019 to divide enforcement against Big Tech, which ultimately saw the FTC bring cases against Meta (META.O) and Amazon (AMZN.O), and the DOJ sue Apple (AAPL.O) and Google (GOOGL.O) for alleged violations. Those cases are ongoing and the companies have denied wrongdoing.

While OpenAI's parent is a nonprofit, Microsoft has invested $13 billion in a for-profit subsidiary, for what would be a 49% stake.

The Justice Department will take the lead in investigating whether Nvidia violated antitrust laws, while the FTC will examine the conduct of OpenAI and Microsoft.

The regulators struck the deal over the past week, and it is expected to be completed in the coming days, the person said.

Nvidia has roughly 80% of the AI chip market, including the custom AI processors made by the cloud computing companies like Google, Microsoft and Amazon.com. That domination helps the company report gross margins between 70% and 80%.


Original Submission

posted by hubie on Friday June 07, @10:35AM   Printer-friendly
from the no-surprise-here dept.

Most downloaded local news app adds disclaimer that it's not always "error-free":

After the most downloaded local news app in the US, NewsBreak, shared an AI-generated story about a fake New Jersey shooting last Christmas Eve, New Jersey police had to post a statement online to reassure troubled citizens that the story was "entirely false," Reuters reported.

"Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the cops' Facebook post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers."

It took NewsBreak—which attracts over 50 million monthly users—four days to remove the fake shooting story, and it apparently wasn't an isolated incident. According to Reuters, NewsBreak's AI tool, which scrapes the web and helps rewrite local news stories, has been used to publish at least 40 misleading or erroneous stories since 2021.

These misleading AI news stories have caused real harm in communities, seven former NewsBreak employees, speaking anonymously due to confidentiality agreements, told Reuters.

Sometimes, the AI gets the little details wrong. One Colorado food bank, Food to Power, had to turn people away after the app posted inaccurate food distribution times.

Other times, the AI wholly fabricates events. A Pennsylvania charity, Harvest912, told Reuters that it had to turn homeless people away when NewsBreak falsely advertised a 24-hour foot-care clinic.

"You are doing HARM by publishing this misinformation—homeless people will walk to these venues to attend a clinic that is not happening," Harvest912 pleaded in an email requesting that NewsBreak take down the story.

NewsBreak told Reuters that all the erroneous articles affecting those two charities were removed but also blamed the charities for supposedly posting inaccurate information on their websites.

"When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content," the company told Reuters.

Dodging accountability is not necessarily a good look, but it's seemingly become a preferred tactic for defenders of AI tools. In defamation suits, OpenAI has repeatedly insisted that users are responsible for publishing harmful ChatGPT outputs, not the company, as one prominent example. According to Reuters, NewsBreak declined to explain why the app "added a disclaimer to its homepage in early March, warning that its content 'may not always be error-free.'"

Reuters found that not only were NewsBreak's articles "not always" error-free, but sometimes the app published local news stories "under fictitious bylines." An Ars review suggests that it's likely that the app is also scraping news stories, perhaps written by AI, that also seem to use fictitious bylines.


Original Submission

posted by hubie on Friday June 07, @05:47AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A multi-institutional team of plant specialists, microbiologists and paleontologists in the Czech Republic and the University of Minnesota, in the U.S., has found evidence of a hot spring oasis during the last ice age in a part of central Europe.

In their study, reported in the journal Science Advances, the group found and analyzed bits of leaves, wood and pollen in the area around a modern freshwater spring.

Environmental scientists have long suggested that temporary hot springs in parts of Europe may have helped trees and other types of plants survive during the last ice age, but there has been little evidence to prove the case.

In this new effort, the research team ventured to freshwater springs in the Vienna Basin looking for evidence of ancient plant life. They suspected an oasis could have existed in the area during the last ice age, as the weight of glaciers sliding down the nearby Alps set off tectonic activity, releasing geothermally heated water from deep within the Earth's crust.

The release of warm water, the researchers reasoned, would have formed an oasis, keeping the ground around the hot springs warm enough for trees and other plants to survive even though they would have been surrounded by ice.

During their search, the research team found bits of wood, pollen and fossilized bits of leaves from trees that should not have been able to survive in the area during the last ice age—yet they were dated to between 19,000 and 26,000 years ago, a span of time that correlated to the last ice age.

[...] The findings strongly support the presence of an oasis during the last ice age—one that helped some types of trees survive despite the cold.

More information: Jan Hošek et al, Hot spring oases in the periglacial desert as the Last Glacial Maximum refugia for temperate trees in Central Europe, Science Advances (2024). DOI: 10.1126/sciadv.ado6611


Original Submission