
What is your favorite keyboard trait?

  • QWERTY
  • AZERTY
  • Silent (sounds)
  • Clicky sounds
  • Thocky sounds
  • The pretty colored lights
  • I use Braille you insensitive clod
  • Other (please specify in comments)


posted by hubie on Tuesday July 02, @10:22PM
from the I'll-come-up-with-a-snappy-Department-later dept.

Arthur T Knackerbracket has processed the following story:

Understanding the reasons behind our procrastination can help us regain productivity.

Procrastination, the intentional yet harmful delay of tasks, manifests in various forms. Sahiti Chebolu from the Max Planck Institute for Biological Cybernetics employs a precise mathematical framework to analyze its different patterns and underlying causes. Her insights could assist in creating personalized strategies to address this issue.

"Why did I not do this when I still had the time?" – Whether it is filing taxes, meeting a deadline at work, or cleaning the apartment before a family visit, most of us have already wondered why we tend to put off certain tasks, even in the face of unpleasant consequences. Why do we make decisions that are harmful to us – against our better knowledge? This is precisely the conundrum of procrastination. Procrastination, the deliberate but ultimately detrimental delaying of tasks, is not only hampering productivity but has also been linked to a host of mental health issues. So it is certainly worth asking why this much talked-about phenomenon has such a grip on us – and what it actually is.

"Procrastination is an umbrella term for different behaviors," says computational neuroscientist Sahiti Chebolu from the Max Planck Institute for Biological Cybernetics. "If we want to understand it, we need to differentiate between its various types." One common pattern is that we defect on our own decisions: we might, for example, set aside an evening for the tax return, but when the time has come we watch a movie instead. Something else is going on when we do not commit to a time in the first place: we might be waiting for the right conditions. The possible patterns of procrastination are myriad: from starting late to abandoning a task halfway through, Chebolu classified them all and identified possible explanations for each: misjudging the time needed or protecting the ego from prospective failure are just two of them.

Can such a classification really help you get stuff done? Chebolu is convinced that a mathematically precise understanding of the mechanism at play is the first step to tackling it. She frames procrastination as a series of temporal decisions. What exactly happens, for example, when we schedule our tax declaration for Friday night but then succumb to the temptations of a streaming service? One way to think of decision-making is that our brain adds up all the rewards and penalties we expect to gain from the alternative behaviors: watching a movie or doing annoying paperwork. Quite naturally, it then picks the course of action that promises to be most pleasant overall.
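To make that framing concrete, here is a toy sketch of such a value comparison in Python, using hyperbolic discounting to shrink delayed rewards. The numbers and the discounting form are illustrative assumptions on our part, not the actual model from Chebolu's paper.

def discounted_value(reward: float, delay: float, k: float = 1.0) -> float:
    """Hyperbolic discounting: a reward `delay` units away is worth reward / (1 + k*delay)."""
    return reward / (1 + k * delay)

# Friday night: the movie pays off immediately; the tax return costs
# effort now but yields a larger reward a week later.
movie = discounted_value(reward=5.0, delay=0.0)
taxes = discounted_value(reward=20.0, delay=7.0) - 4.0  # minus the effort cost paid now

choice = "watch the movie" if movie > taxes else "do the taxes"
print(f"movie={movie:.2f}, taxes={taxes:.2f} -> {choice}")
# movie=5.00, taxes=-1.50 -> the discounted future reward loses, and the agent procrastinates.

In this caricature, nudging the parameters — a smaller discount rate k, or a short-term reward added to the tax option — flips the decision, which is the spirit of the interventions discussed below.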

[...] Chebolu is confident that understanding procrastination as a series of temporal decisions and detecting where and why we usually take a wrong turn can inform interventions: If you discover, for instance, that your brain is a bit too biased towards instant gratification, giving yourself short-term rewards might help. Those who tend to underestimate the time needed for their grunt work could try setting themselves time-bound goals. And if you find yourself abandoning your chores quickly, you might want to avoid distracting environments.

No matter in which category of procrastination you fall (and you almost certainly fall into some of them sometimes): no, you are not just lazy. Recognizing this and forgiving yourself for procrastinating in the past is a good first step towards more productivity.

Reference: "Optimal and sub-optimal temporal decisions can explain procrastination in a real-world task" by Sahiti Chebolu and Peter Dayan, 22 May 2024. https://doi.org/10.31234/osf.io/69zhd

How do you deal with procrastination?


Original Submission

posted by hubie on Tuesday July 02, @05:36PM
from the every-step-you-take-I'll-be-watching-you dept.

Arthur T Knackerbracket has processed the following story:

The New South Wales Crime Commission commenced Project Hakea to investigate the use of tracking and other surveillance devices as an enabler of serious and organized crime in the southeastern Australian state.

The study looked at 5,163 trackers, purchased by 3,147 customers in 4,176 transactions. An extensive data-matching process revealed that 37% of customers were known to NSW police for criminal behavior. Moreover, 25% of customers had a recorded history of domestic and family violence, 15% were known for involvement in serious and organized crime activity, and 6% had a different criminal background.

It was also found that 126 customers were Apprehended Violence Order (AVO) defendants at the time they purchased a tracking device. An AVO is a court order issued to protect an individual who has a reasonable fear of violence or harassment from a specified person. Some customers bought the trackers within days of the AVO coming into force.

The findings state that tracking and other surveillance devices are increasingly used to facilitate organized crime, including murder, kidnapping, and drug trafficking.

[...] The study recommends a change in the law to restrict the sale of tracking devices.

In May, Apple and Google announced that their previously confirmed industry specification for Bluetooth tracking devices was being rolled out to iOS and Android platforms, which should help prevent stalking by alerting users of suspicious Bluetooth trackers.


Original Submission

posted by hubie on Tuesday July 02, @12:52PM
from the artificial-marketing dept.

https://arstechnica.com/information-technology/2024/06/toys-r-us-riles-critics-with-first-ever-ai-generated-commercial-using-sora/

On Monday, Toys "R" Us announced that it had partnered with an ad agency called Native Foreign to create what it calls "the first-ever brand film using OpenAI's new text-to-video tool, Sora." OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys "R" Us founder Charles Lazarus using AI-generated video clips.

"We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that's gaining global attention," wrote Toys "R" Us on its website. "Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys "R" Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s."

Previously on SoylentNews:
Tyler Perry Puts $800 Million Studio Expansion on Hold Because of OpenAI's Sora - 20240225
OpenAI Teases a New Generative Video Model Called Sora - 20240222
Toys 'R' Us Files for Bankruptcy Protection in US - 20170919 (Toys 'R' Us is a "zombie brand" now; the Canadian entity is separate and still exists.)


Original Submission

posted by hubie on Tuesday July 02, @08:10AM

Arthur T Knackerbracket has processed the following story:

A team of anthropologists and biologists from Canada, Poland, and the U.S., working with researchers at the American Museum of Natural History, in New York, has found via meta-analysis of data from prior research efforts that homosexual behavior is far more common in other animals than previously thought. The paper is published in PLOS ONE.

For many years, the biology community has accepted the notion that homosexuality is less common in animals than in humans, despite a lack of research on the topic. In this new effort, the researchers sought to find out if such assumptions are true.

[...] The researchers found that 76% of the studies mentioned observations of homosexual behavior, though they also noted that only 46% had collected data surrounding such behavior—and only 18.5% of those who had mentioned such behavior in their papers had focused their efforts on it to the extent of publishing work with homosexuality as its core topic.

They noted that homosexual behavior observed in other species included mounting, intromission and oral contact—and that researchers who identified as LGBTQ+ were no more or less likely to study the topic than other researchers.

The researchers point to a hesitancy in the biological community to study homosexuality in other species, and thus, little research has been conducted. They further suggest that some of the reluctance has been due to the belief that such behavior is too rare to warrant further study.

More information: Karyn A. Anderson et al, Same-sex sexual behaviour among mammals is widely observed, yet seldomly reported: Evidence from an online expert survey, PLOS ONE (2024). DOI: 10.1371/journal.pone.0304885


Original Submission

posted by hubie on Tuesday July 02, @03:27AM

https://arstechnica.com/gadgets/2024/07/bleeding-subscribers-cable-companies-force-their-way-into-streaming/

It's clear that streaming services are the present and future of video distribution. But that doesn't mean that cable companies are ready to give up on your monthly dollars.

A sign of this is Comcast, the US' second-biggest cable company, debuting a new streaming service today. Comcast already had an offering that let subscribers stream its Xfinity cable live channels and access some titles on demand. NOW TV Latino differs in being a separate, additional streaming service that people can subscribe to independently of Xfinity cable for $10 per month.

However, unlike streaming services like Netflix or Max, you can only subscribe to NOW TV Latino if Xfinity is sold in your area. NOW TV Latino subscriptions include the ability to stream live TV from Spanish-language channels that Xfinity offers, like Sony Cine and ViendoMovies. And because Comcast owns NBCUniversal, people who subscribe to NOW TV Latino get a free subscription to Peacock with commercials, which usually costs $6/month.


Original Submission

posted by janrinok on Monday July 01, @10:42PM
from the patch-firewall-tcpwrap-and-fail2ban dept.

New OpenSSH Vulnerability Could Lead to RCE as Root on Linux Systems:

OpenSSH maintainers have released security updates to contain a critical security flaw that could result in unauthenticated remote code execution with root privileges in glibc-based Linux systems.

The vulnerability has been assigned the CVE identifier CVE-2024-6387. It resides in the OpenSSH server component, also known as sshd, which listens for incoming connections from client applications.

"The vulnerability, which is a signal handler race condition in OpenSSH's server (sshd), allows unauthenticated remote code execution (RCE) as root on glibc-based Linux systems," Bharat Jogi, senior director of the threat research unit at Qualys, said in a disclosure published today. "This race condition affects sshd in its default configuration."

The cybersecurity firm said it identified no fewer than 14 million potentially vulnerable OpenSSH server instances exposed to the internet, adding that the bug is a regression of an already-patched 18-year-old flaw tracked as CVE-2006-5051, reintroduced in October 2020 as part of OpenSSH version 8.5p1.

"Successful exploitation has been demonstrated on 32-bit Linux/glibc systems with [address space layout randomization]," OpenSSH said in an advisory. "Under lab conditions, the attack requires on average 6-8 hours of continuous connections up to the maximum the server will accept."

[...] The net effect of exploiting CVE-2024-6387 is full system compromise and takeover, enabling threat actors to execute arbitrary code with the highest privileges, subvert security mechanisms, steal data, and maintain persistent access.

"A flaw, once fixed, has reappeared in a subsequent software release, typically due to changes or updates that inadvertently reintroduce the issue," Jogi said. "This incident highlights the crucial role of thorough regression testing to prevent the reintroduction of known vulnerabilities into the environment."

While exploitation faces significant roadblocks because of the remote race condition involved, users are recommended to apply the latest patches to secure against potential threats. It is also advisable to limit SSH access through network-based controls and to enforce network segmentation to restrict unauthorized access and lateral movement.
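For a first-pass triage of your own hosts, a quick SSH banner check can flag the version ranges named above (before 4.4p1, and 8.5p1 through 9.7p1). The Python sketch below is our illustration, not a tool from the advisory, and a banner match is only a hint: distributions routinely backport fixes without changing the version string, so treat a hit as a prompt to check your vendor's advisory.

import re
import socket

def ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Read the identification string the SSH server sends on connect."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode("ascii", errors="replace").strip()

def possibly_vulnerable(banner: str) -> bool:
    """Flag OpenSSH versions in the ranges named for CVE-2024-6387."""
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        return False  # not OpenSSH, or an unparseable banner
    version = (int(m.group(1)), int(m.group(2)))
    return version < (4, 4) or (8, 5) <= version <= (9, 7)

if __name__ == "__main__":
    banner = ssh_banner("localhost")  # point at your own hosts only
    verdict = "check vendor advisory" if possibly_vulnerable(banner) else "outside named ranges"
    print(f"{banner} -> {verdict}")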



Original Submission

posted by hubie on Monday July 01, @06:02PM

Arthur T Knackerbracket has processed the following story:

A recent study published in Nature Communications reveals that a massive earthquake 2,500 years ago dramatically shifted the course of one of the world’s largest rivers. This previously undocumented seismic event rerouted the main channel of the Ganges River into present-day, densely populated Bangladesh, an area that continues to be at high risk for significant earthquakes.

Scientists have documented many river-course changes, called avulsions, including some in response to earthquakes. However, “I don’t think we have ever seen such a big one anywhere,” said study coauthor Michael Steckler, a geophysicist at Lamont-Doherty Earth Observatory, which is part of the Columbia Climate School. It could have easily inundated anyone and anything in the wrong place at the wrong time, he said.

[...] Like other rivers that run through major deltas, the Ganges periodically undergoes minor or major course changes without any help from earthquakes. Sediments washed from upstream settle and build up in the channel, until eventually the river bed grows subtly higher than the surrounding flood plain. At some point, the water breaks through and begins constructing a new path for itself. But this does not generally happen all at once—it may take successive floods over years or decades. An earthquake-related avulsion, on the other hand, can occur more or less instantaneously, said Steckler.

[...] Chamberlain and other researchers were exploring this area in 2018 when they came across a freshly dug excavation for a pond that had not yet been filled with water. On one flank, they spotted distinct vertical dikes of light-colored sand cutting up through horizontal layers of mud. This is a well-known feature created by earthquakes: in such watery areas, sustained shaking can pressurize buried layers of sand and inject them upward through overlying mud. The result: literal sand volcanoes, which can erupt at the surface. These features, called seismites, were 30 or 40 centimeters wide here, cutting up through 3 or 4 meters of mud.

Further investigation showed the seismites were oriented in a systematic pattern, suggesting they were all created at the same time. Chemical analyses of sand grains and particles of mud showed that the eruptions and the abandonment and infilling of the channel both took place about 2,500 years ago. Furthermore, there was a similar site some 85 kilometers downstream in the old channel that had filled in with mud at the same time. The authors’ conclusion: This was a big, sudden avulsion triggered by an earthquake, estimated to be magnitude 7 or 8.

The quake could have had one of two possible sources, they say. One is a subduction zone to the south and east, where a huge plate of oceanic crust is shoving itself under Bangladesh, Myanmar, and northeastern India. Or it could have come from giant splay faults at the foot of the Himalayas to the north, which are slowly rising because the Indian subcontinent is slowly colliding with the rest of Asia. A 2016 study led by Steckler shows that these zones are now building stress, and could produce earthquakes comparable to the one 2,500 years ago. The last one of this size occurred in 1762, producing a deadly tsunami that traveled up the river to Dhaka. Another may have occurred around 1140 CE.

[...] The Ganges is not the only river facing such hazards. Others cradled in tectonically active deltas include China’s Yellow River; Myanmar’s Irrawaddy; the Klamath, San Joaquin, and Santa Clara rivers, which flow off the U.S. West Coast; and the Jordan, spanning the borders of Syria, Jordan, the Palestinian West Bank and Israel.

Reference: “Cascading hazards of a major Bengal basin earthquake and abrupt avulsion of the Ganges River” by Elizabeth L. Chamberlain, Steven L. Goodbred, Michael S. Steckler, et al, 17 June 2024, Nature Communications DOI: 10.1038/s41467-024-47786-4


Original Submission

posted by hubie on Monday July 01, @01:17PM

https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never

We're living through one of those moments when millions of people become suddenly and overwhelmingly interested in fair use, one of the subtlest and worst-understood aspects of copyright law. It's not a subject you can master by skimming a Wikipedia article!

I've been talking about fair use with laypeople for more than 20 years. I've met so many people who possess the unshakable, serene confidence of the truly wrong, like the people who think fair use means you can take x words from a book, or y seconds from a song and it will always be fair, while anything more will never be.

Or the people who think that if you violate any of the four factors, your use can't be fair – or the people who think that if you fail all of the four factors, you must be infringing (people, the Supreme Court is calling and they want to tell you about the Betamax!).

You might think that you can never quote a song lyric in a book without infringing copyright, or that you must clear every musical sample. You might be rock solid certain that scraping the web to train an AI is infringing. If you hold those beliefs, you do not understand the "fact intensive" nature of fair use.


Original Submission

posted by janrinok on Monday July 01, @08:33AM

Arthur T Knackerbracket has processed the following story:

https://www.sciencealert.com/icelands-volcanic-eruptions-could-continue-for-decades-study-finds

After almost 800 years of relative dormancy, volcanoes on Iceland's Reykjanes Peninsula are returning to life with renewed ferocity. Eight eruptions have occurred since 2021 and new research suggests the upsurge in volcanic activity stems from a shallow pool of magma just 10 kilometers (6.2 miles) wide and only 9-12 kilometers below the surface.

Alerting authorities to this magma source is critical for the ongoing safety of residents in the region, with researchers claiming the magma pool could feed similarly-sized volcanic eruptions in the area for years or maybe decades more.

"A comparison of [current] eruptions with historical events provides strong evidence that Iceland will have to prepare and be ready for this volcanic episode to continue for some time, possibly even years to decades," says geologist Valentin Troll of Uppsala University in Sweden, who led the study.

Troll and his colleagues used seismic wave data from volcanic eruptions and earthquake 'swarms' to map the subsurface of the Reykjanes Peninsula in southwest Iceland, which is home to most of the country's population.

They found the 2021 eruptions of the Fagradalsfjall volcanic system were fed by a pocket of magma that then oozed along geological lines to Sundhnúkur, where volcanoes have been spewing lava since late 2023.

With both eruption zones expelling lavas with similar geochemical 'fingerprints', the findings suggest a "connected magma plumbing system" joins the two volcanic systems. Historical data indicates this shared magma pool likely formed sometime between 2002 and 2020, was recharged again in 2023, and continues to supply magma from shallow depths to surface fissures and vents via slightly sloped pathways. Melting rock deeper in the mantle replenishes the magma pool, so it may fuel eruptions for decades to come.

"There is a need for an improved understanding of the magma supply system that feeds the ongoing eruptive events," Troll and colleagues write in their published paper.

"Increased eruption frequencies should be expected for the foreseeable future."

Now that the magma pool has been identified, it can be mapped and monitored to prepare communities for what might eventuate.

Repeated evacuations would be an obvious but very necessary disruption to ensure people's safety. Frequent eruptions may also damage key infrastructure such as the geothermal power plants that supply Iceland with electricity and heat, and the experimental carbon sequestration facilities that inject carbon dioxide (CO2) and other gaseous pollutants into porous rocks.

[...] "We don't know how long and how frequently it will continue for the next ten or even hundred years," says study author Ilya Bindeman, a volcanologist at the University of Oregon.

"A pattern will emerge, but nature always has exceptions and irregularities."

The study has been published in Terra Nova.

Journal Reference: DOI: https://onlinelibrary.wiley.com/doi/10.1111/ter.12733


Original Submission

posted by janrinok on Monday July 01, @03:49AM

Chrome will distrust CA certificates from Entrust later this year

A Certification Authority (CA) issues certificates that help guarantee you're visiting a legitimate website. Over the years, Chrome has had to distrust some CAs, and the Google browser is about to do that again with certificates from Entrust.

Over the past six years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports.

Google points to a list of "publicly disclosed incident reports" that highlight a "pattern of concerning behaviors by Entrust that fall short of the [Chrome Root Program Policy requirements], and has eroded confidence in their competence, reliability, and integrity as a publicly-trusted CA Owner."

When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the Internet ecosystem, it is our opinion that Chrome's continued trust in Entrust is no longer justified.

[...] Google's recommendation to website owners is to "transition to a new publicly-trusted CA Owner as soon as reasonably possible" before November 1. Meanwhile, other Google products might take similar actions in the future.

[...] More details of Google's roadmap and a FAQ can be found here.

Google cuts ties with Entrust in Chrome over trust issues

Google is severing its trust in Entrust after what it describes as a protracted period of failures around compliance and general improvements.

Entrust is one of the many certificate authorities (CA) used by Chrome to verify that the websites end users visit are trustworthy. From November 1 in Chrome 127, which recently entered beta, TLS server authentication certificates validating to Entrust or AffirmTrust roots won't be trusted by default.

Google pointed to a series of incident reports over the past few years concerning Entrust, saying they "highlighted a pattern of concerning behaviors" that have ultimately seen the security company fall down in Google's estimations.

The incidents have "eroded confidence in [Entrust's] competence, reliability, and integrity as a publicly trusted CA owner," Google stated in a blog.

It follows a May publication by Mozilla, which compiled a sprawling list of Entrust's certificate issues between March and May this year. In response, and after an initial reply that was greeted with harsh feedback from the Mozilla community, Entrust acknowledged its procedural failures, Mozilla noted, and said it was treating the feedback as a learning opportunity.

It now seems Google hasn't been as accepting of Entrust's apologetic response.

[...] Tim Callan, chief experience officer at Sectigo, said in an email to The Reg that the news serves as a reminder to CAs that they must hold themselves to the standards the industry expects of them.

"CAs have to hold themselves to the highest of standards, not only for the sake of their business but for all the people and businesses that depend on them. With a shorter lifecycle timeline of 90 days looming, and the implications of Quantum Computing also on the horizon, things aren't getting any less complicated.

[...] A spokesperson at Entrust sent a statement to The Register: "The decision by the Chrome Root Program comes as a disappointment to us as a long-term member of the CA/B Forum community. We are committed to the public TLS certificate business and are working on plans to provide continuity to our customers."

A little web scraping shows that there are some pretty big name websites that currently use Entrust certs.
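For a rough audit of your own dependencies, inspecting the issuer of a site's certificate is a quick heuristic. The Python sketch below is our illustration: it reads only the leaf certificate's issuer organization, not the full chain back to an Entrust or AffirmTrust root, so treat the output as a starting point, and the hostnames as placeholders.

import socket
import ssl

def issuer_organization(host: str, port: int = 443) -> str:
    """Return the organizationName of the issuer of `host`'s leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed dict for the leaf certificate
    issuer = dict(pair[0] for pair in cert["issuer"])  # RDN tuples -> dict
    return issuer.get("organizationName", "unknown")

if __name__ == "__main__":
    for host in ["www.entrust.com", "www.google.com"]:  # placeholder hosts
        org = issuer_organization(host)
        note = "  <-- Entrust-issued?" if "Entrust" in org else ""
        print(f"{host}: {org}{note}")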


Original Submission

posted by janrinok on Sunday June 30, @11:04PM
from the in-space-nobody-can-hear-your-data-scream dept.

(1) https://www.cnbc.com/2024/06/27/europe-wants-to-deploy-data-centers-into-space-study-says.html
(2) https://natick.research.microsoft.com/

New data center location -- space. In about a decade or two they want to have data centers in orbit. It's somewhat unclear what the competitive edge of launching your data center into space would be. Wouldn't it make more sense to submerge them in the ocean, which Microsoft has already tried and done (2)?

The total global electricity consumption from data centers could reach more than 1,000 terawatt-hours in 2026 — that's roughly equivalent to the electricity consumption of Japan, according to the International Energy Agency.

ASCEND's space-based data storage facilities would benefit from "infinite energy" captured from the sun.

[...] The facilities that the study explored launching into space would orbit at an altitude of around 1,400 kilometers (869.9 miles) — about three times the altitude of the International Space Station. Dumestier explained that ASCEND would aim to deploy 13 space data center building blocks with a total capacity of 10 megawatts in 2036, in order to achieve the starting point for cloud service commercialization.

Each building block — with a surface area of 6,300 square meters — includes capacity for its own data center service and is launched within one space vehicle, he said.

In order to have a significant impact on the digital sector's energy consumption, the objective is to deploy 1,300 building blocks by 2050 to achieve 1 gigawatt, according to Dumestier.

[...] Michael Winterson, managing director of the European Data Centre Association, acknowledges that a space data center would benefit from increased efficiency from solar power without the interruption of weather patterns — but the center would require significant amounts of rocket fuel to keep it in orbit.

Winterson estimates that even a small 1 megawatt center in low earth orbit would need around 280,000 kilograms of rocket fuel per year at a cost of around $140 million in 2030 — a calculation based on a significant decrease in launch costs, which has yet to take place.
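Taking Winterson's figures at face value, a back-of-the-envelope calculation shows what they imply; the linear scaling to the study's 10 MW (2036) and 1 GW (2050) targets is our assumption, not his.

# Winterson's estimate: a 1 MW center needs ~280,000 kg of propellant
# per year, costing ~$140 million at projected 2030 launch prices.
fuel_kg_per_mw_year = 280_000
cost_usd_per_mw_year = 140_000_000

# Implied launch price: $140M / 280,000 kg = $500 per kg.
print(f"Implied launch price: ${cost_usd_per_mw_year / fuel_kg_per_mw_year:,.0f}/kg")

# Naive linear scaling (our assumption) to the ASCEND milestones.
for capacity_mw in (1, 10, 1000):  # 10 MW: 2036 target; 1,000 MW (1 GW): 2050 goal
    fuel = capacity_mw * fuel_kg_per_mw_year
    cost = capacity_mw * cost_usd_per_mw_year
    print(f"{capacity_mw:>5} MW -> {fuel:>13,} kg/yr, ${cost / 1e9:,.1f}B/yr")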

"There will be specialist services that will be suited to this idea, but it will in no way be a market replacement," said Winterson.

"Applications that might be well served would be very specific, such as military/surveillance, broadcasting, telecommunications and financial trading services. All other services would not competitively run from space," he added in emailed comments.

New work title -- space janitor. I wonder if he will have to attend meetings in the 'office'?


Original Submission

posted by janrinok on Sunday June 30, @06:16PM
from the uncanny-valley dept.

https://arstechnica.com/science/2024/06/researchers-craft-smiling-robot-face-from-living-human-skin-cells/

In a new study, researchers from the University of Tokyo, Harvard University, and the International Research Center for Neurointelligence have unveiled a technique for creating lifelike robotic skin using living human cells. As a proof of concept, the team engineered a small robotic face capable of smiling, covered entirely with a layer of pink living tissue.

[...] Shoji Takeuchi, Michio Kawai, Minghao Nie, and Haruka Oda authored the study, titled "Perforation-type anchors inspired by skin ligament for robotic face covered with living skin," which is due for July publication in Cell Reports Physical Science. We learned of the study from a report published earlier this week by New Scientist.

[...] In their experiments, the researchers used commercially available human cells, purchasing what are called Normal Human Dermal Fibroblasts (NHDFs) and Normal Human Epidermal Keratinocytes (NHEKs) that were isolated from either juvenile foreskin or different skin locations from adult donors by a company called PromoCell GmbH.

[...] While ethical questions inevitably arise from using real human skin cells, the researchers state that their goal is to improve human-robot communication and advance tissue engineering. They hope their techniques will find applications not just in robotics but in fields like reconstructive medicine and drug testing. Instead of using real human test subjects, experimenters could grow artificial skin layers from real cells.

[...] With continued refinement, living robotic skin could create machine coverings that are not just lifelike but literally alive. Eventually, they may even live long enough to see attack ships on fire off the shoulder of Orion. Or watch C-beams glitter in the dark near the Tannhäuser Gate. But we're hoping those moments will not be lost in time—like tears in rain.


Original Submission

posted by janrinok on Sunday June 30, @01:33PM

Microsoft's CEO of AI said that content on the open web can be copied and used to create new content:

Microsoft may have opened a can of worms with recent comments made by the tech giant's CEO of AI Mustafa Suleyman. The CEO spoke with CNBC's Andrew Ross Sorkin at the Aspen Ideas Festival earlier this week. In his remarks, Suleyman claimed that all content shared on the web is available to be used for AI training unless a content producer says otherwise specifically.

"With respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding," said Suleyman.

"There's a separate category where a website or a publisher or a news organization had explicitly said, 'do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts."

[...] Generative AI is one of the hottest topics in tech in 2024. It's also a hot button topic among creators. Some claim that AI trained on other people's work is a form of theft. Others equate training AI on existing work to artists studying at school. Contention often circles around monetizing work that's derivative of other content.

YouTube has reportedly offered "lumps of cash" to train its AI models on music libraries from major record labels. The difference in that situation is that record labels and YouTube will have agreed to terms. Suleyman claims that a company could use any content on the web to train AI, as long as there was not an explicit statement demanding that not be done.

[...] Assuming I've understood Suleyman correctly, the CEO claimed that any content is freeware that anyone can use to make new content, unless the creator says otherwise. I'm not a lawyer, but Suleyman's claims sound a lot like those viral chain messages that get forwarded around Facebook and Instagram saying, "I DO NOT CONSENT TO MY CONTENT BEING USED." I always assumed copyright law was more complicated than a Facebook post.


Original Submission

posted by hubie on Sunday June 30, @08:48AM
from the who-coulda-seen-this-coming? dept.

https://www.oxfordmail.co.uk/news/24413873.ai-exams-found-earn-higher-grades-students/

AI exams found to earn higher grades than students

Exam submissions generated by artificial intelligence (AI) can not only evade detection but also earn higher grades than those submitted by university students, a real-world test has shown.

Last year Russell Group universities, which include Oxford University, pledged to allow ethical use of AI in teaching and assessments, with many others following suit.

The findings come as concerns mount about students submitting AI-generated work as their own, with questions being raised about the academic integrity of universities and other higher education institutions.

It also shows even experienced markers could struggle to spot answers generated by AI, the University of Reading academics said.

Peter Scarfe, an associate professor at Reading's School of Psychology and Clinical Language Sciences said the findings should serve as a "wake-up call" for educational institutions as AI tools such as ChatGPT become more advanced and widespread.

He said: "The data in our study shows it is very difficult to detect AI-generated answers.

"There has been quite a lot of talk about the use of so-called AI detectors, which are also another form of AI but (the scope here) is limited."

For the study, published in the journal Plos One, Prof Scarfe and his team generated answers to exam questions using GPT-4 and submitted these on behalf of 33 fake students.

Exam markers at Reading's School of Psychology and Clinical Language Sciences were unaware of the study.

Journal Reference:
Scarfe P, Watcham K, Clarke A, Roesch E (2024) A real-world test of artificial intelligence infiltration of a university examinations system: A "Turing Test" case study. PLoS ONE 19(6): e0305354. https://doi.org/10.1371/journal.pone.0305354


Original Submission

posted by hubie on Sunday June 30, @03:59AM
from the still-not-doing-badly-though dept.

Arthur T Knackerbracket has processed the following story:

Nvidia has been riding high thanks to AI, the current center of attention in the tech industry. The chipmaker's silicon is among the few kinds of hardware that can provide the processing power needed to run resource-intensive commercial AI models.

Because of this, Nvidia has been sort of a bellwether for the AI industry — rising sky high as an indication of AI's extreme rate of growth.

Over the past week, however, Nvidia has taken a huge tumble on the stock market, and has lost around $500 billion in value.

[...] Nvidia is still doing just fine of course. And it's likely to still be raking in plenty of revenue thanks to AI-related patronage from the likes of Elon Musk, who is reportedly building an Nvidia-based "supercomputer" via his AI company xAI. However, this recent downturn on the stock market might show that investors are sending a message that they're not so bullish on the AI industry's monumental claims of how they will change the world with their technology.

As multiple outlets have reported, AI companies have made big promises, but so far have had very little to show for it when it comes to actual, meaningful change in the industries AI claimed it would soon disrupt. On top of that, studies have found AI to be a massive energy and resource drain, which will certainly give at least some AI backers second thoughts about where the industry is headed.


Original Submission
