
posted by jelizondo on Friday August 15, @10:22PM   Printer-friendly

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people."

[...] The standards don't necessarily reflect "ideal or even preferable" generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply." But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

TFA also contains example prompts, as well as responses that would be considered acceptable or unacceptable and why; presumably these were written by humans to train the AI, which is pretty jarring when you consider that the culture at FB produced such heinous examples of "appropriate" responses...

I know I shouldn't be surprised that a demonstrably horrible company permits/encourages/tacitly endorses demonstrably awful behaviour, but this feels like the grossest thing I've seen on FB in a good while...


Original Submission

posted by jelizondo on Friday August 15, @05:35PM   Printer-friendly

https://phys.org/news/2025-08-adults-ai-views-technology-vary.html

Half of U.S. adults report using at least one "major AI tool," but public attitudes about artificial intelligence regulation remain divided nationwide, according to a new survey.

The 50-state report, published as part of the multi-university Civic Health and Institutions Project (CHIP50), found that views about how and whether to rein in AI tools don't follow typical red-blue state divides. Missouri and Washington, for example, expressed the strongest views about a lack of regulatory oversight, while New York and Tennessee were most worried about government overreach.

But concerns about workplace disruption are nearly universal. Majorities in all 50 states expect AI to impact their jobs within five years, especially in tech-heavy and Sun Belt states such as California, Massachusetts, Texas and Georgia. Meanwhile, regions like the Corn Belt and Rust Belt anticipate less immediate disruption.

John Wihbey, an associate professor of media innovation and technology at Northeastern University and co-author of the study, says the findings provide some insight into the public's view of a technology that has already become part of many Americans' daily life.

"At a time when state-level regulation for AI and public opinion is central to the national debate, this is perhaps the first look at how the states compare on usage, preferences and regulation," Wihbey says.

The researchers used data from a nationally representative online survey of nearly 21,000 respondents, collected from April 10 to June 5. The study homed in on how the general public is "encountering AI in daily life," as well as attitudes toward the emerging technologies.

"It really stood out to us that, in every single state, people expect AI to impact their jobs," Uslu says. "And that expectation is showing up in state legislatures too. The federal government can and should treat these state-level bills and citizens' perceptions as a kind of policy lab: a way to leverage American federalism to ensure safe deployment of AI while also staying globally competitive in the AI race."

The findings also point to deep demographic gaps as it pertains to AI use. Increasingly, AI adoption is led by younger, higher-income adults with college educations, with older, rural and lower-income adults lagging behind.

The study found that among AI tools, ChatGPT stands out, with 65% of Americans recognizing the name and 37% reporting they've used it. Gemini was next at 26%, then Microsoft Copilot at 18%. Notably, actual usage rates lag far behind name recognition across all platforms: 65% of respondents recognize ChatGPT, for example, but only 37% report having used it.

But frequent everyday use remains concentrated among a small slice of users, and awareness of AI consistently outpaces actual use across all platforms, the study says.

The question over how to regulate AI is ultimately a federalism policy debate, Wihbey says—a struggle playing out in real time over who gets to shape and control the technology. He points out that the Trump administration has pushed for a top-down regulatory approach, which he notes is "a little out of step" with conservatives' broader skepticism of federal regulatory power.


"The White House would say the big questions are unbridled innovation, which would allow for AI dominance over adversaries to ensure national security and prosperity, and this notion of 'woke' AI," Wihbey says.

A proposed moratorium on states' ability to regulate AI was included as a provision as part of President Donald Trump's sweeping Big Beautiful Bill before the Senate voted the measure down 99–1. The administration also recently unveiled an AI Action Plan, which identifies over 60 federal policy actions designed to bolster innovation in AI tech.

In the wake of the federal moratorium's defeat, state regulators have begun proposing their own frameworks. States like California and Michigan have introduced bills that would increase transparency requirements, strengthen whistleblower protections and require third-party auditing.

Wihbey notes there've been hundreds of bills under consideration across the country.

"Many of these bills want to set up a commission to study the impact of AI at the state level, and many address issues of bias, and the use of AI tools for hiring, health screening or other areas where bias and functional discrimination could be a result," Wihbey says.

"There's also some real questions about deepfakes, which is a huge issue—especially in the political arena."

"This isn't abstract, and it's no longer just about political campaigns or celebrities," Uslu says. "With Elon Musk's recent promotion of Grok's new Imagine feature for example, anyone can now turn a photo into a video that follows their prompts."

Uslu continues, "On their phone, in under a minute, for free. And this is just the beginning. When these kinds of tools become widely accessible, we need to know how prepared and aware the public is. That's what this kind of research helps us measure."

More information: AI Across America: Attitudes On AI Usage, Job Impact, And Federal Regulation, www.chip50.org/reports/ai-acro ... d-federal-regulation

How many people in our community use AI, and what for? What are its benefits and disadvantages for you?--JR


Original Submission

posted by jelizondo on Friday August 15, @12:51PM   Printer-friendly
from the pick-a-pixel-please dept.

Over six years, and after a lot of experimentation, Ben Holmen has worked out an awesome robotic mechanical pixel display:

Six years ago I had an idea to build a large, inefficient display with a web interface that anyone could interact with. I've enjoyed Danny Rozin's unconventional mirrors over the years, and was inspired by an eInk movie player that ran at 24 frames per hour, which got me thinking about a laborious display that could slowly assemble an image.

I landed on the idea of a 40×25 grid of pixels, turned one by one by a single mechanism. Compared to our modern displays with millions of pixels changing 60 times a second, a wooden display that changes a single pixel 10 times a minute is an incredibly inefficient way to create an image. Conveniently, 40×25 = 1,000 pixels, leading to the name Kilopixel and the six-letter domain name kilopx.com. How do you back down from that? That's the best domain name I've ever owned.

So I got to work. This project has everything: a web app, a physical controller, a custom CNC build, generated gcode, tons of fabrication, 3d modeling, 3d printing, material sourcing - so much to get lost in. It's the most ambitious project I've ever built.

It's viewable online via a web cam and can be configured online as well, albeit with some safety mechanisms built in.
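
For a rough sense of the timescale involved, here is a minimal back-of-the-envelope sketch in Python; the grid size and pixel rate come from the quote above, while the redraw time is a derived estimate, not a figure from the project:

    # Rough timing estimate for the Kilopixel display described above.
    GRID_W, GRID_H = 40, 25    # pixel grid dimensions
    PIXELS = GRID_W * GRID_H   # 1,000 pixels total
    RATE_PER_MIN = 10          # the single mechanism flips ~10 pixels per minute

    minutes_per_redraw = PIXELS / RATE_PER_MIN
    print(f"{PIXELS} pixels at {RATE_PER_MIN}/min -> "
          f"{minutes_per_redraw:.0f} min (~{minutes_per_redraw / 60:.1f} h) per full image")
    # -> 1000 pixels at 10/min -> 100 min (~1.7 h) per full image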

Previously:
(2025) Oh No, Wavy Dave! Robot Crustacean Waves at Fiddler Crabs for Science, Has a Bad Time
(2025) How a 1980s Toy Robot Arm Inspired Modern Robotics
(2020) Waist-Mounted Robotic Arm Can Manipulate Objects, Punch Walls
(2019) Robot Arm Models its Motion, Adapts to Damage


Original Submission

posted by jelizondo on Friday August 15, @08:07AM   Printer-friendly

https://phys.org/news/2025-08-culture-men-intimate-partner-violence.html

Historically, stereotypical ideas of intimate partner violence (IPV) have overlooked or minimized the experiences of male victims. Simultaneously, perspectives of men's experiences with IPV are influenced by country-specific cultural contexts.

A novel study by Denise Hines, professor in the Department of Social Work, published in Partner Abuse, compared the rates at which male victims experience IPV from a partner to acts of IPV they committed themselves in four English-speaking regions: the U.S., Canada, the UK/Ireland, and Australia/Aotearoa New Zealand.

Hines's findings offer key insights into differences and similarities among those countries in their experiences of male IPV victimization:

  • Self-identified male victims reported victimization prevalence rates of 50.0% for sexual IPV and 96.1% for physical IPV. Perpetration rates were estimated at 21.1% for sexual IPV and 54.0% for physical IPV.
  • Male IPV victims from the U.S. reported perpetrating and experiencing significantly more IPV than men from other countries, emphasizing the importance of national context in understanding IPV.
  • Gendered stereotypes that men cannot be victims, embedded in legislation, support resources, and justice systems, prevent male victims from seeking help; individual countries must implement context-specific solutions tailored to the unique needs of their male IPV victim populations.

Hines is working with Fairfax County Domestic and Sexual Violence Services on two projects, focusing on understanding and overcoming barriers to service access for underserved communities in Fairfax County, Virginia.


More information: Denise A. Hines et al, Prevalence of Men's Intimate Partner Violence Victimization and Perpetration Among Two Samples of Male Victims: An International Study of English-Speaking Countries, Partner Abuse (2025). DOI: 10.1891/PA-2024-0003


Original Submission

posted by jelizondo on Friday August 15, @03:20AM   Printer-friendly
from the no-software-patents dept.

At the beginning of last year, Manuel Hoffmann, Frank Nagle, and Yanuo Zhou published a working paper on the Value of Open Source Software [PDF] for comment and discussion only.

The value of a non-pecuniary (free) product is inherently difficult to assess. A pervasive example is open source software (OSS), a global public good that plays a vital role in the economy and is foundational for most technology we use today. However, it is difficult to measure the value of OSS due to its non-pecuniary nature and lack of centralized usage tracking. Therefore, OSS remains largely unaccounted for in economic measures. Although prior studies have estimated the supply-side costs to recreate this software, a lack of data has hampered estimating the much larger demand-side (usage) value created by OSS. Therefore, to understand the complete economic and social value of widely-used OSS, we leverage unique global data from two complementary sources capturing OSS usage by millions of global firms. We first estimate the supply-side value by calculating the cost to recreate the most widely used OSS once. We then calculate the demand-side value based on a replacement value for each firm that uses the software and would need to build it internally if OSS did not exist. We estimate the supply-side value of widely-used OSS is $4.15 billion, but that the demand-side value is much larger at $8.8 trillion. We find that firms would need to spend 3.5 times more on software than they currently do if OSS did not exist. The top six programming languages in our sample comprise 84% of the demand-side value of OSS. Further, 96% of the demand-side value is created by only 5% of OSS developers.
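
The paper's supply/demand split comes down to counting the recreation cost of each package once versus once per firm that uses it. A toy Python sketch of that accounting, with invented numbers purely for illustration (the real estimates come from the paper's firm-level usage data):

    # Toy illustration of supply-side vs demand-side OSS value accounting.
    # All figures below are invented for illustration only.
    packages = {
        # package: (cost_to_recreate_once, number_of_firms_using_it)
        "web_framework": (2_000_000, 40_000),
        "crypto_library": (5_000_000, 90_000),
    }

    supply_side = sum(cost for cost, _ in packages.values())              # build once
    demand_side = sum(cost * firms for cost, firms in packages.values())  # once per firm

    print(f"supply-side: ${supply_side:,}")   # $7,000,000
    print(f"demand-side: ${demand_side:,}")   # $530,000,000,000

The huge gap between the two sums, even in this toy example, is the paper's central point: the cost of writing the software once is tiny compared with what every firm using it would otherwise have to spend.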

The working paper is especially interesting when considered in the context of similar, earlier works such as Ghosh et al in Study on the effect on the development of the information society of European public bodies making their own software available as open source [PDF] published by the European Commission back in 2007. One would think that both sides of the pond would be very interested in this valuable commons and work to not just protect it but cultivate it further, rather than work to saw the legs from under it by advancing software patents instead.

Previously:
(2025) Open Internet Stack: The EU Commission's Vague Plans for Open Source
(2023) The Four Freedoms and The One Obligation of Free Software
(2023) Opinion: FOSS Could be an Unintended Victim of EU Security Crusade
(2021) European Commission's Study on Open Source Software


Original Submission

posted by janrinok on Thursday August 14, @10:37PM   Printer-friendly

From late May to early June of this year, wildfires raged in Canada: the plumes crossed the Atlantic and were observed in Europe.

On the night of 12-13 August, the first of a new generation of weather satellites for EUMETSAT was launched aboard an Ariane 6 rocket.

The satellite, named Metop-SGA1, carries a total of six atmospheric sounding and imaging instrument missions. The payload includes the Infrared Atmospheric Sounding Interferometer – New Generation (IASI-NG), METimage (a visual and infrared imager), the Microwave Sounder (MWS), a Radio Occultation sounder, and the Multi-Viewing, Multi-Channel, Multi-Polarisation Imager (3MI) – the latter being an entirely new instrument designed to enhance the monitoring of aerosols (such as those created by the Canadian wildfires) and cloud properties.

Metop-SGA1 also carries the European Union's Copernicus Sentinel-5 mission, which will supply detailed data on atmospheric composition and trace gases that affect air quality, helping health authorities to monitor air pollution.

One aim of the satellite is to improve weather forecasts from 6 hours before (now-casting) to up to 10 days ahead. Another aim is to further improve climate models. A crucial instrument here is the Microwave Sounder, which will create temperature and humidity profiles across the atmosphere by measuring microwave brightness temperatures at different altitudes, in all weather.

"Instruments on board Metop-SG satellites and other exciting new European missions span a much broader frequency range than we have had so far. By bridging gaps between the microwave and infrared parts of the electromagnetic spectrum, we can build a more complete picture of the Earth's atmosphere, land, water, and ice – data that are essential for enhancing the numerical prediction models behind weather forecasts.

"Lower microwave frequencies penetrate clouds to reveal surface conditions like soil moisture, snow cover, and sea ice – data often inaccessible to infrared and optical sensors, as we live on a very cloudy planet! Higher frequencies can be used to detect tiny ice particles in high-altitude clouds, helping refine how these clouds are represented in weather and climate models. And combined with infrared sounder data, microwave observations can also offer very detailed insights into atmospheric humidity and temperature, the two most important variables in weather forecasts."

The spacecraft's counterpart, Metop-SGB1, will be launched next year with a complementary payload that (amongst others) includes a Microwave Imager that will deliver data relevant for monitoring precipitation, clouds, and surface conditions; an Ice Cloud Imager to observe high-altitude cirrus clouds; and a Scatterometer to gauge ocean surface roughness and estimate wind speed, direction, and soil moisture.

Data generated by the Metop-SG series of weather satellites will be shared with NOAA, as part of the Joint Polar System.


Original Submission

posted by janrinok on Thursday August 14, @05:52PM   Printer-friendly
from the bwwwweeeeeeEEEEEEEEEE-EEEEEEEEEEp dept.

Tom's Hardware Reports:

"AOL, now a Yahoo! property, will end its dial-up internet service, the Public Switched Telephone Network (PSTN)-based internet connectivity service, on September 30, 2025. Its dial-up service has been publicly available for 34 years, and has provided many an internet surfer's first taste of the WWW. AOL will also end its AOL Dialer software and AOL Shield browser."

"In large countries, with regions where traditional PSTN phone lines are still available, but newer internet connectivity options may not be, some might argue that dial-up is still viable. Also, sometimes it is advertised as a backup connectivity option. In the U.S., for instance, the latest government census data indicates approximately a quarter of a million remaining dial-up holdouts."

"Internet old timers might feel some slight pangs of PSTN-based nostalgia. However, the move to always-on, fast, and responsive connectivity - at a fixed price - from ADSL onwards, came with few or no drawbacks compared to dial-up service."

"On performance, remember that the best hobbyist modems would only deliver up to 0.056 Mbps data speeds. ADSL services comfortably moved the performance needle to around 25 Mbps for many users (depending on line quality). In 2025, anyone who wants the best internet performance will usually prefer fiber connectivity, with a fairly typical service offering 500 Mbps data speeds.

Taking the above figures as reasonable averages of the respective eras, we've definitely come a long way since the heydays of dial-up. However, there remain some niche providers in the U.S. and elsewhere, if you don't have any other connection options."
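
To put those line rates side by side, here is a quick Python sketch; the rates are taken from the quote above, and the 100 MB download size is an arbitrary choice for illustration:

    # How long a 100 MB download takes at each era's typical line rate.
    rates_mbps = {"dial-up (56k)": 0.056, "ADSL": 25, "fiber": 500}
    size_megabits = 100 * 8  # 100 MB is roughly 800 megabits

    for name, mbps in rates_mbps.items():
        seconds = size_megabits / mbps
        print(f"{name:>14}: {seconds:10.1f} s (~{seconds / 3600:.2f} h)")
    #  dial-up (56k):    14285.7 s (~3.97 h)
    #           ADSL:       32.0 s (~0.01 h)
    #          fiber:        1.6 s (~0.00 h)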

EarthLink discontinued its dial-up service at the beginning of 2024. Dial-up services still out there include NetZero, MSN, and Juno.


Original Submission

posted by hubie on Thursday August 14, @01:03PM   Printer-friendly

New tests reveal Microsoft Recall still screenshots sensitive data:

Microsoft Recall launched in 2024 as an AI-powered screenshot tool for Copilot+ PCs. The feature captures everything users do on their computers for later searching.

A security researcher quickly found serious vulnerabilities in the original version, where the database stored sensitive information in plain text. Microsoft had to pull Recall from the preview builds of Windows after that.

The company reintroduced Recall a few months down the line with assurances of better security measures, including encryption, virtualization-based security enclaves, and mandatory Windows Hello authentication for access.

However, recent testing by The Register has revealed deeply troubling findings.

Testing found that Recall still captures sensitive data even when filters are enabled. Credit card numbers, passwords, and Social Security details were all recorded in plain view.

Despite Microsoft's assurances, banking information remains vulnerable. Recall screenshots included bank homepages and account balances while correctly blocking routing and account numbers.

Similarly, password protection proved inconsistent across scenarios. Chrome's password manager remained protected, and Recall skipped files explicitly labeled with "username" or "password". Plain text files that listed credentials without those words were captured instead.

Social Security numbers (SSNs) received partial filtering at best. The system blocked digits when prefixed with "My SS#" but captured everything when labeled "Soc:".
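
The hit-and-miss pattern described above is what you would expect from keyword-triggered filtering. Here is a toy Python illustration of that general weakness; this is emphatically not Microsoft's actual filter logic, just a sketch of the approach and its failure mode:

    import re

    # Toy keyword-triggered filter: redact an SSN-shaped number only when a
    # recognized label appears in the text. Hypothetical logic for illustration.
    TRIGGER_LABELS = re.compile(r"ss#|ssn|social security", re.IGNORECASE)
    SSN_SHAPED = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

    def filter_screenshot_text(text: str) -> str:
        if TRIGGER_LABELS.search(text):
            return SSN_SHAPED.sub("[REDACTED]", text)
        return text  # no recognized label, so nothing gets redacted

    print(filter_screenshot_text("My SS#: 123-45-6789"))  # My SS#: [REDACTED]
    print(filter_screenshot_text("Soc: 123-45-6789"))     # captured verbatim

Any filter that keys on labels rather than on the shape of the data itself will miss exactly the kind of relabeled credentials The Register's tests turned up.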

Remote access makes the situation worse. Using TeamViewer, the tester was able to view the complete Recall history from another computer with only a Windows Hello PIN; biometric authentication was bypassed entirely.

And guess what? Microsoft promotes Recall as if it were a fully stable feature that needs no second look, while the feature itself remains creepy and Orwellian at best.

I still think this feature has no place on a computer. But that is how it goes with Big Tech. They shove these kinds of offerings down people's throats whether they want them or not.

Also at: https://archive.ph/PWlUK


Original Submission

posted by hubie on Thursday August 14, @08:22AM   Printer-friendly

The communication platform cited suspicions that AI companies were using the archiving site for AI training:

Reddit has announced that it will be severely limiting the Internet Archive's Wayback Machine's access to the communication platform following its accusation that AI companies have been scraping the website for Reddit data. The platform will only be allowing the Internet Archive to save the home page of its website.

The limits on the Internet Archive's access were set to start "ramping up" on Monday, according to The Verge. Reddit apparently did not name any of the AI companies involved in these website data scrapes.

[...] Some Reddit users pointed out that this move is a far cry from Reddit co-founder Aaron Swartz's philosophy. Swartz committed suicide in the weeks before he was set to stand trial for allegedly breaking into an MIT closet to download the paid JSTOR archive, which hosts thousands of academic journals. He was committed to making online content free for the public.

[...] [Reddit spokesman Tim] Rathschmidt emphasized that the change was made in order to protect users: "Until they're able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content), we're limiting some of their access to Reddit data to protect redditors," he told Return.

However, it has been speculated that this more aggressive move was financially motivated, given the fact that the platform has struck deals in the past with some AI companies but sued others for not paying its fees. Reddit announced a partnership with OpenAI in May 2024 but sued Anthropic in June of this year for not complying with its demands.

Related: Americans, Be Warned: Lessons From Reddit's Chaotic UK Age Verification Rollout


Original Submission

posted by janrinok on Thursday August 14, @03:37AM   Printer-friendly

Debian -- News -- Debian 13 "trixie" released:

Debian 13 trixie released

August 9th, 2025

After 2 years, 1 month, and 30 days of development, the Debian project is proud to present its new stable version 13 (code name trixie).

trixie will be supported for the next 5 years thanks to the combined work of the Debian Security team and the Debian Long Term Support team.

Debian 13 trixie ships with several desktop environments, such as:

  • GNOME 48,
  • KDE Plasma 6.3,
  • LXDE 13,
  • LXQt 2.1.0,
  • Xfce 4.20

This release contains over 14,100 new packages for a total count of 69,830 packages, while over 8,840 packages have been removed as obsolete. 44,326 packages were updated in this release. The overall disk usage for trixie is 403,854,660 kB (403 GB), and is made up of 1,463,291,186 lines of code.

Thanks to our translators who have made the man-pages for trixie available in multiple languages.

The manpages-l10n project has contributed many improved and new translations for manual pages; Romanian and Polish translations in particular are greatly enhanced since bookworm.

All architectures other than i386 now use a 64-bit time_t ABI, supporting dates beyond 2038.

Debian contributors have made significant progress towards ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian's overall statistics for trixie and newer.
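
The 2038 limit that the 64-bit time_t ABI removes is just signed 32-bit overflow of the Unix seconds counter. A quick Python illustration of where the old boundary sits (Python itself is not affected; it simply makes the arithmetic easy to see):

    from datetime import datetime, timezone

    # A signed 32-bit time_t counts seconds since 1970-01-01 UTC
    # and tops out at 2**31 - 1.
    last_32bit_second = 2**31 - 1
    print(datetime.fromtimestamp(last_32bit_second, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00; one second later, 32-bit time_t wraps.
    # A 64-bit time_t pushes the same limit billions of years into the future.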

Debian 13 trixie includes numerous updated software packages (over 63% of all packages from the previous release), such as:

  • Apache 2.4.64
  • Bash 5.2.37
  • BIND DNS Server 9.20
  • Cryptsetup 2.7
  • curl/libcurl 8.14.1
  • Emacs 30.1
  • Exim (default email server) 4.98
  • GNUcash 5.10
  • GNU Compiler Collection 14.2
  • GIMP 3.0.4
  • GnuPG 2.4.7
  • Inkscape 1.4
  • the GNU C Library 2.41
  • LibreOffice 25.2
  • Linux kernel 6.12 LTS series
  • LLVM/Clang toolchain 19 (default), 17 and 18 available
  • MariaDB 11.8
  • Nginx 1.26
  • OpenJDK 21
  • OpenLDAP 2.6.10
  • OpenSSH 10.0p1
  • OpenSSL 3.5
  • Perl 5.40
  • PHP 8.4
  • Postfix 3.10
  • PostgreSQL 17
  • Python 3.13
  • Rustc 1.85
  • Samba 4.22
  • Systemd 257
  • Vim 9.1

Debian 13 trixie is available for the following architectures:

  • 64-bit PC (amd64),
  • 64-bit ARM (arm64),
  • ARM EABI (armel),
  • ARMv7 (EABI hard-float ABI, armhf),
  • 64-bit little-endian PowerPC (ppc64el),
  • 64-bit little-endian RISC-V (riscv64),
  • IBM System z (s390x)

i386 is no longer supported as a regular architecture: there is no official kernel and no Debian installer for i386 systems. The i386 architecture is now only intended to be used on a 64-bit (amd64) CPU. Users running i386 systems should not upgrade to trixie. Instead, Debian recommends either reinstalling them as amd64, where possible, or retiring the hardware.

trixie will be the last release for the armel architecture. See 5.1.3. Last release for armel in the release notes for more information on our ARM EABI support.

As a separate item of news submitted by Anonymous Coward, Debian Hurd 2025 has also been released.

https://distrowatch.com/dwres.php?resource=showheadline&story=20043

While the Debian project is best known for its Linux distribution, branches of Debian also experiment with alternative kernels. There is a port of Debian which runs on the GNU Hurd kernel and it supports approximately 72% of the same software as Debian's Linux distribution. The Debian GNU/Hurd team have released a new snapshot which is built with mostly the same source software as Debian 13. "Debian GNU/Hurd is currently available for the i386 and amd64 architectures with about 72% of the Debian archive, and more to come! 64-bit support is now complete, with the same archive coverage as i386 (actually a bit more since some packages are 64-bit-only).

This 64-bit support is completely using userland disk drivers from NetBSD thanks to the Rump layer. We now use xattr by default for recording translators, allowing to bootstrap seamlessly from other OSes, with mmdebstrap for instance. Rust was ported to GNU/Hurd. Support for USB disk and CD-ROM was added through Rump. Packages are now available for SMP support, which is quite working. The console is now using xkb for keyboard layouts, and supports multiboot-provided framebuffer. Various other support were added (acpi, rtc, apic, hpet, ...)" Download options and documentation can be found through the team's mailing list post.


Original Submission

posted by janrinok on Wednesday August 13, @10:56PM   Printer-friendly
from the not-holding-out-much-hope dept.

The plaintiff says that Microsoft's tactic of "forced obsolescence" is an "attempt to monopolize the generative AI market."

https://www.courthousenews.com/microsoft-sued-for-discontinuing-windows-10-support/
https://archive.ph/evqhf

A Southern California man sued Microsoft on Thursday over the software giant's plan to discontinue support for the old version of its widely used operating system Windows.

Though Windows 11 was launched nearly four years ago, many of its billion or so worldwide users are clinging to the decade-old Windows 10.

In fact, the newer Windows only just recently overtook its predecessor, in July.

According to StatCounter, nearly 43% of Windows users still use the old version on their desktop computers. The bad news for them is that Microsoft is discontinuing its routine support for Windows 10 in nearly two months on Oct. 14.

Not that computers running Windows 10 will completely stop working on that day. But they will no longer receive new features or security updates.

The plaintiff, Lawrence Klein, says in his complaint, filed in San Diego Superior Court, that he owns two laptops, both of which run Windows 10. Both, he says, will become obsolete in October, when Microsoft ends support for Windows 10. [...] Klein says that the end of Windows 10 is part of Microsoft's strategy to force customers to purchase new devices and to "monopolize the generative AI market."

Windows 11 comes with Microsoft's suite of generative artificial intelligence software, including the chatbot Copilot. To run optimally, Microsoft's AI needs a piece of hardware called a neural processing unit, which newer tablets, laptops and desktop computers have — and which the older devices do not.

"With only three months until support ends for Windows 10, it is likely that many millions of users will not buy new devices or pay for extended support," Klein writes in his complaint. "These users — some of whom are businesses storing sensitive consumer data — will be at a heightened risk of a cyberattack or other data security incident, a reality of which Microsoft is well aware."

"In other words, Microsoft's long-term business strategy to secure market dominance will have the effect of jeopardizing data security not only of Microsoft's customers but also of persons who may not use Microsoft's products at all," he adds.

Although the Windows 11 upgrade is free, an estimated 240 million personal computers don't have the right hardware to run the new operating system. And without security updates, they will be increasingly vulnerable to malware and viruses. Those customers will have the option of extended security, which will last until 2028, but at a price: $30 for individuals and $61 per device for businesses, increasing to $244 by the third year.

According to one market analyst writing in 2023, Microsoft's shift away from Windows 10 will lead millions of customers to buy new devices and throw out their old ones, consigning as many as 240 million PCs to the landfill.

"If these were all folded laptops, stacked one on top of another, they would make a pile 600km taller than the moon," the analyst wrote.

Klein is asking a judge to order Microsoft to continue supporting Windows 10, without additional charge, until the number of devices running the older operating system falls below 10% of total Windows users. He says nothing about seeking any money for himself, though the complaint does ask for attorneys' fees.


Original Submission

posted by hubie on Wednesday August 13, @06:10PM   Printer-friendly

Java-like move could land those expecting free trial with a new bill:

Oracle has introduced new licensing terms for VirtualBox, the general-purpose virtualization software for x86_64 hardware, that some users may see as hidden within the fine print.

An eagle-eyed licensing consultant in Germany has spotted that licensing terms for downloads from the VirtualBox website have changed, effectively ending the opportunity for a free three-month trial once the user downloads the software.

Bernhard Halbetel, who works for advisory firm DBConcepts, has pointed out that anyone who has VirtualBox 7.1 or later might be liable for a licensing charge under the updated terms and conditions, even if they are not using the software.

"Before the change, Oracle would email those who downloaded the VirtualBox Extension Pack and say, 'Thank you for downloading, this is a commercial license, and now we have to talk about your license fees.' And the user could just say, 'We downloaded only for evaluation, and we de-installed it a couple of months ago, and therefore we don't need to pay your fee.' And Oracle has to go away," he told The Register.

"Now they changed in the licensing that the evaluation is not part of the Personal Use and Evaluation License (PUEL) anymore... so if you download it, then you are trapped, because then you have to pay the fee," Halbetel said. He warned users who have downloaded VirtualBox version 7.1 or later not to ignore such emails from Oracle.

However, users can still get a free evaluation if they get the download from elsewhere. Those who check the Licensing FAQ will find the free evaluation version is available from Oracle Software Delivery Cloud, which requires a login, so users need to sign up.

Eric Guyer, founding partner at Remend, an Oracle and SAP advisory and consultancy firm, said there is no difference in the Extension Pack code and no requirement for license keys in the new download. "This is surely bad for customers as there is less contractual ambiguity when Oracle pursues companies based on the download activity it tracks."

Craig Guarente, founder and CEO of Palisade Compliance, said it was a sign that Oracle had started soft auditing its customers in a similar fashion to its Java playbook.

"They track downloads, make accusations, get people worried, try to force them to prove a negative, and drive sales through fear. Having said that, Palisade clients are in compliance and haven't paid a penny to Oracle. It is not a big money maker for Oracle. Just another example of how they treat customers," he said.


Original Submission

posted by hubie on Wednesday August 13, @01:22PM   Printer-friendly
from the well-that-is-worrying,or-so-my-watch-says dept.

Academic study suggests devices cannot differentiate between someone being overworked and being excited:

They are supposed to monitor you throughout the working day and help make sure that life is not getting on top of you.

But a study has concluded that smartwatches cannot accurately measure your stress levels – and may think you are overworked when really you are just excited.

Researchers found almost no relationship between the stress levels reported by the smartwatch and the levels that participants said they experienced. However, recorded fatigue levels had a very slight association with the smartwatch data, while sleep had a stronger correlation.

Eiko Fried, an author of the study, said the correlation between the smartwatch and self-reported stress scores was "basically zero".

He added: "This is no surprise to us given that the watch measures heart rate and heart rate doesn't have that much to do with the emotion you're experiencing – it also goes up for sexual arousal or joyful experiences."

[...] Fried said although there was a lot of academic work looking for physiological signals that can act as proxies for emotional states, most were not precise enough. This is because there is an overlap between positive and negative feelings – for example, hair standing on end can signal anxiety as well as excitement.

Fried, an associate professor in the department of clinical psychology at Leiden University in the Netherlands, and his team tracked stress, fatigue and sleep for three months in 800 young adults wearing Garmin vivosmart 4 watches. Participants were asked to report four times a day on how stressed, fatigued or sleepy they were feeling, before the self-reports were cross-referenced with the watch data.
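
That cross-referencing step is, at its core, a correlation between two paired series: each self-report lined up with the watch's stress score for the same window. A minimal Python sketch of the computation, using invented toy data rather than anything from the study:

    from statistics import correlation  # Python 3.10+

    # Toy paired observations: self-reported stress (0-100) for eight windows,
    # and the watch's stress score for the same windows. Invented numbers.
    self_reported = [20, 65, 40, 80, 30, 55, 70, 25]
    watch_scores = [48, 52, 47, 50, 55, 46, 51, 49]  # barely tracks the reports

    r = correlation(self_reported, watch_scores)
    print(f"Pearson r = {r:.2f}")  # close to zero, echoing the study's finding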

And the results, published in the Journal of Psychopathology and Clinical Science, found that none of the participants saw the stress scores on their watches meet the baseline for significant change when they recorded feeling stressed. And for a quarter of participants, their smartwatch told them they were stressed or unstressed when they self-reported feeling the opposite.

[...] The research is intended to feed into an early warning system for depression, in which wearable tech users receive data that will help them receive preventive treatments before an episode begins.

So far, there are promising signs that lower activity levels could be a predictor, though Fried has been unable to identify whether this is because of exercise's protective effect against depression or because people feel less energetic as their mental state deteriorates. "Wearable data can offer valuable insights into people's emotions and experiences, but it's crucial to understand its potential and limitations," said Margarita Panayiotou, a researcher at the University of Manchester, after reading the study.

"This research helps clarify what such data can reliably reveal and makes an important contribution to ongoing discussions about the role of technology in understanding wellbeing. It's important to remember that wearable data does not necessarily represent objective truth and should be interpreted alongside broader context, including individuals' perceptions and lived experiences."

Journal Reference: Siepe, B. S., Tutunji, R., Rieble, C. L., et al. (2025). Associations between ecological momentary assessment and passive sensor data in a large student sample. Journal of Psychopathology and Clinical Science. Advance online publication. https://doi.org/10.1037/abn0001013


Original Submission

posted by hubie on Wednesday August 13, @08:34AM   Printer-friendly

Small clouds out as VMware again changes partner program:

VMware has advised partners its current channel program will end, and it seems that smaller players won't be invited back.

[...] This is the second major shakeup for VMware partners in eighteen months, after the Broadcom business unit's January 2024 decision to terminate members that operated VMware-powered clouds that ran on fewer than 3,500 processor cores.

That change caused great unease. Axed service providers could not secure licenses to run VMware-powered clouds, leaving them with hardware they could not legally use for its intended purpose. Customers of axed partners faced forced migrations.

VMware responded to community concerns by creating a "white label program" that allowed small cloud operators – now known as "secondary partners" – to acquire licenses from the "primary partner" that remained in its channel.

The white label program will soon be history, meaning many VMware users will need to find a new home.

[...] The VMware ecosystem now has good reason to fear Broadcom is capricious, because just last March the company hailed its revised partner program as ideal for customers and partners alike.

By changing its partner program twice within 18 months, Broadcom will therefore anger and disappoint many customers by forcing them to make a costly and complex cloud migration.

Partners that made the cut a year ago and have now been ejected will likely be furious – and with good cause because they will have invested in VMware practices that may soon be dust.

[...] Broadcom points to growing VMware revenue as evidence its approach is working.

Acquisitions are seldom quick or clean. While Broadcom can point to improved software and product development prowess, this one has been painful for VMware customers who surely now deserve a period of calm and predictability, even if that's not the best outcome for Broadcom shareholders.


Original Submission

posted by hubie on Wednesday August 13, @03:51AM   Printer-friendly

Experts working to benchmark resource use of AI models say new version's enhanced capabilities come at a steep cost:

In mid-2023, if a user asked OpenAI's ChatGPT for a recipe for artichoke pasta or instructions on how to make a ritual offering to the ancient Canaanite deity Moloch, its response might have taken – very roughly – 2 watt-hours, or about as much electricity as an incandescent bulb consumes in 2 minutes.

OpenAI released a model on Thursday that will underpin the popular chatbot – GPT-5. Ask that version of the AI for an artichoke recipe, and the same amount of pasta-related text could take several times – even 20 times – that amount of energy, experts say.

As it rolled out GPT-5, the company highlighted the model's breakthrough capabilities: its ability to create websites, answer PhD-level science questions, and reason through difficult problems.

But experts who have spent the past years working to benchmark the energy and resource usage of AI models say those new powers come at a cost: a response from GPT-5 may take a significantly larger amount of energy than a response from previous versions of ChatGPT.

OpenAI, like most of its competitors, has released no official information on the power usage of its models since GPT-3, which came out in 2020. Sam Altman, its CEO, tossed out some numbers on ChatGPT's resource consumption on his blog this June. However, these figures, 0.34 watt-hours and 0.000085 gallons of water per query, do not refer to a specific model and have no supporting documentation.

"A more complex model like GPT-5 consumes more power both during training and during inference. It's also targeted at long thinking ... I can safely say that it's going to consume a lot more power than GPT-4," said Rakesh Kumar, a professor at the University of Illinois, currently working on the energy consumption of computation and AI models.

The day GPT-5 was released, researchers at the University of Rhode Island's AI lab found that the model can use up to 40 watt-hours of electricity to generate a medium-length response of about 1,000 tokens, which are the building blocks of text for an AI model and are approximately equivalent to words.

[...] As large as these numbers are, researchers in the field say they align with their broad expectations for GPT-5's energy consumption, given that GPT-5 is believed to be several times larger than OpenAI's previous models. OpenAI has not released the parameter counts – which determine a model's size – for any of its models since GPT-3, which had 175bn parameters.

[...] In order to calculate an AI model's resource consumption, the group at the University of Rhode Island multiplied the average time that model takes to respond to a query – be it for a pasta recipe or an offering to Moloch – by the model's average power draw during its operation.
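
In sketch form, that estimation procedure is a one-line multiplication once the two averages are in hand. The Python snippet below uses illustrative assumptions, not the group's published figures, though the product lands at the 40 Wh upper bound quoted above:

    # Energy-per-query estimate, as described above:
    #   energy (Wh) = average response time (hours) * average power draw (watts)
    # Both inputs below are hypothetical, for illustration only.
    avg_response_seconds = 18.0    # assumed average time for a long reply
    avg_power_draw_watts = 8000.0  # assumed draw of the serving hardware slice

    energy_wh = (avg_response_seconds / 3600) * avg_power_draw_watts
    print(f"~{energy_wh:.0f} Wh per query")  # ~40 Wh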

Estimating a model's power draw was "a lot of work", said Abdeltawab Hendawi, a professor of data science at the University of Rhode Island. The group struggled to find information on how different models are deployed within data centers. Their final paper contains estimates for which chips are used for a given model, and how different queries are parceled out between different chips in a datacenter.

Altman's June blog post confirmed their findings. The figure he gave for ChatGPT's energy consumption per query, 0.34 watt-hours per query, closely matches what the group found for GPT-4o.

Hendawi, Jegham and others in their group said that their findings underscored the need for more transparency from AI companies as they release ever-larger models.

"It's more critical than ever to address AI's true environmental cost," said Marwan Abdelatti, a professor at URI. "We call on OpenAI and other developers to use this moment to commit to full transparency by publicly disclosing GPT-5's environmental impact."


Original Submission