We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, Cloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)


posted by jelizondo on Sunday September 07, @09:26PM

This Unlikely Chemical Could Be a Powerful Weapon Against Climate Change:

New research provides a framework for carbon capture driven by photochemistry, a potentially cheaper and less energy-intensive alternative to leading technologies.

Year after year, humans pump more carbon dioxide (CO2) into the atmosphere than nature can remove, fueling global warming. As the need to mitigate climate change becomes increasingly urgent, scientists are developing ways to actively remove CO2 from the atmosphere in addition to cutting emissions.

One of the biggest hurdles to scaling current carbon capture technologies is the vast amount of energy they consume, but what if there was an alternative that uses an abundant, cheap power source? A team of researchers at Harvard University recently took a major step toward that goal. Their technique, outlined in a Nature Chemistry study published August 13, harnesses sunlight to efficiently trap CO2.

They're not talking about slapping solar panels on direct air capture systems that run on heat and electricity. This approach is based on specially designed molecules that use light to change their chemical state and reversibly trap CO2.

The methodology the researchers developed is a significant departure from leading direct air capture technologies. These systems tend to rely on chemical solvents or porous sorbents that readily bond to CO2, pulling it out of the air. But the ability to reuse those materials—and harness the carbon for practical use—requires a huge input of energy (usually heat) to release the trapped carbon into a container.

"If you want a practical way to pull carbon dioxide out of the atmosphere and then release it into a tank where you can use it, you need the solution—or whatever medium you're going to use—to be able to both capture and release. That's the key," co-author Richard Liu, an assistant professor of chemistry and chemical biology at Harvard, told Gizmodo.

"Our innovation here is that we began thinking about whether you could use light directly to do that," he explained.

To that end, Liu's team synthesized organic molecules called "fluorenyl photobases" that do exactly that. When exposed to sunlight, they rapidly release hydroxide ions that capture CO2 from ambient air by chemically binding to it. In the absence of light, the reaction reverses, releasing the trapped CO2 and reverting the photobase back to its original state.

Through a series of experiments, the researchers determined that the most effective fluorenyl photobase for CO2 capture was PBMeOH. This molecule showed no CO2 capture in the dark but the highest capture rate when exposed to light. What's more, testing showed that a PBMeOH-based carbon capture system is stable and can complete many cycles with minimal loss of efficiency.

"They only fade about 1% per cycle, so you could imagine only replenishing every 100 cycles," Liu explained.

This work demonstrates a reversible system for carbon capture that relies solely on sunlight as the direct energy input, highlighting photobases as a promising alternative to traditional sorbents.

The results are encouraging, but Liu and his colleagues will need to clear several hurdles before they can turn their framework into real-world technologies. They're already working to address several challenges, such as the engineering aspects of how the system will expose these compounds to light and dark.

While the "best" approach is still unknown, photochemical systems present some key advantages over existing technologies, Liu said. Exploring new ways to remove CO2 from the atmosphere is more urgent than ever before. "Because we can't get rid of every source in the short term, carbon capture from the atmosphere—and from point sources, especially—is going to be an important part of the solution."


Original Submission

posted by jelizondo on Sunday September 07, @04:47PM

What the hell is going on right now?:

Engineers are burning out. Orgs expect their senior engineering staff to be able to review and contribute to "vibe-coded" features that don't work. My personal observation is that the best engineers are highly enthusiastic about helping newer team members contribute and learn.

Instead of their comments being taken to heart, reflected on, and used as learning opportunities, hapless young coders are using the feedback as simply the next prompt in their "AI" masterpiece. I personally have witnessed and heard first-hand accounts where it was incredibly obvious a junior engineer was (ab)using LLM tools.

In a recent company town-hall, I watched as a team of junior engineers demoed their latest work. I couldn't tell you what exactly it did, or even what it was supposed to do - it didn't seem like they themselves understood. However, at a large enough organization, it's not about what you do, it's about what people think you do. Championing their "success", a senior manager goaded them into bragging about their use of "AI" tools, to which they responded "This is four thousand lines of code written by Claude". Applause all around.

I was asked to add a small improvement to an existing feature. After reviewing the code, I noticed a junior engineer was the most recent to work on that feature. As I always do, I reached out to let them know what I'd be doing and to see if they had any insight that would be useful to me. Armed with the Github commit URL, I asked for context around their recent change. I can't know for sure, but I'd be willing to put money down that my exact question and the commit were fed directly into an LLM which was then copy and pasted back to me. I'm not sure why, but I felt violated. It felt wrong.

A friend recently confided in me that he's been on a team of at least 5 others that have been involved in reviewing a heavily vibe-coded PR over the past month. A month. Reviewing slop produced by an LLM. What are the cost savings of paying ChatGPT $20 a month and then having a literal team of engineers try and review and merge the code?

Another friend commiserated about the difficulty of trying to help an engineer contribute at work. "I review the code, ask for changes, and then they immediately hit me with another round of AI slop."

Here's the thing - we want to help. We want to build good things. Things that work well, that make people's lives easier. We want to teach people how to do software engineering! Any engineer is standing entirely on the shoulders of their mentors and managers who've invested time and energy into them and their careers. But what good is that investment if it's simply copy-pasted into the latest "model" that "is literally half a step from artificial general intelligence"? Should we instead focus our time and energy into training the models and eliminate the juniors altogether?

What a sad, dark world that would be.

Here's an experiment for you: stop using "AI". Try it for a day. For a week. For a month.

Recently, I completely reset my computer. I like to do it from time to time. As part of that process I prune out any software that I no longer use. I've been paying for Claude Pro for about 6 months. But slowly, I've felt like it's just a huge waste of time. Even if I have to do a few independent internet searches and read through a few dozen stack overflow and doc pages, my own conclusion is so much more reliable and accurate than anything an LLM could ever spit out.

So what good are these tools? Do they have any value whatsoever?

Objectively, it would seem the answer is no. But at least they make a lot of money, right?

Is anyone making money on AI right now? I see a pipeline that looks like this:

  • "AI" is applied to some specific, existing area, and a company spins up around it because it's so much more "efficient"
  • AI company gets funding from venture capitalists
  • AI company gives funding to AI service providers such as OpenAI in the form of paying for usage credits
  • AI company evaporates

This isn't necessarily all that different than the existing VC pipeline, but the difference is that not even OpenAI is making money right now. I believe this is because the technology is inherently flawed and cannot scale to meet the demand. It simply consumes too much electricity to ever be economically viable, not to mention the serious environmental concerns.

We can say our prayers that Moore's Law will come back from the dead and save us. We can say our prayers that the heat death of the universe will be sufficiently prolonged in order for every human to become a billionaire. We can also take an honestly not even hard look at reality and realize this is a scam.

The emperor is wearing no clothes.


Original Submission

posted by jelizondo on Sunday September 07, @11:56AM
from the Come-to-the-Dark-Side-we-have-cookies dept.

France fines Google, SHEIN for undercooked cookie policies:

France's data protection authority levied massive fines against Google and SHEIN for dropping cookies on customers without securing their permission, and also whacked Google for showing ads in its email service.

The Commission nationale de l'informatique et des libertés (CNIL) announced [source in French] the fines on Wednesday, and explained that it found Google broke local laws: the signup process for new accounts encouraged users to approve the use of cookies tied to advertising services, but didn't inform them that accepting those cookies was a condition for using Google's services.

The CNIL found Google's cookie advice meant locals created 74 million accounts under circumstances that breached French law, and 53 million people therefore saw ads in the "Promotions" and "Social" tabs of their email accounts.

The regulator fined Google LLC €200 million ($233 million) and levied a €125 million ($145 million) fine against Google Ireland for its role in the mess.

CNIL ordered [source in French] Chinese e-tailer SHEIN to pay €150 million ($175 million) for not properly securing permission before dropping cookies on 12 million people residing in France who visited shein.com.

The regulator found the Chinese site didn't properly explain how it used cookies, and that opting out had no effect: even if users clicked "Reject All," Shein sent more cookies anyway and kept reading those already present.

CNIL's notice points out that in recent years it's found and punished many similar misuses of cookies, suggesting Shein should have understood its obligations.

Shein intends to appeal the decision. Google is reviewing it.

Diplomats are probably also poring over the CNIL's decision, given US president Donald Trump's recent warning that he could impose tariffs on nations which dare to regulate US tech companies.

Trump seems to regard other sovereign nations' digital regulations as illegitimate if they cost American companies money or slow their growth. CNIL argues that Google and Shein harmed local consumers by breaching their privacy.

Another reason for Trump's ire is that he feels regulators don't target Chinese companies. CNIL has proved him wrong on this occasion.


Original Submission

posted by jelizondo on Sunday September 07, @07:11AM

Nvidia's next-gen AI chip could double the price of H20 if China export is approved:

The Nvidia B30A, which the company is developing as its next-generation replacement for the China-market H20 AI chip, is expected to cost double the price of the earlier model. Reuters reports that the H20 currently sells between $10,000 and $12,000, meaning the B30A will likely be priced from $20,000 to $25,000. Despite this massive increase, many Chinese companies, like TikTok owner ByteDance, are keen on getting their hands on Nvidia's latest chips, with some sources saying that these chips are considered great deals.

It's estimated that the B30A will be six times more powerful than the H20, despite being a watered-down version of the full-fat B300 AI chip. Nvidia has reportedly already developed the B30A, but it's still waiting for approval from the U.S. government to proceed with the marketing and sale of the Blackwell-based GPU. In the meantime, many Chinese tech firms are still buying H20 chips despite Beijing's guidance to stop buying them.
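
Taking the reported price ranges and the "six times more powerful" estimate at face value, a quick back-of-the-envelope comparison (illustrative only, not from the report) shows why buyers might still see the pricier chip as a good deal:

    # Rough value comparison using the mid-points of the reported price
    # ranges and the claimed 6x performance gain. Illustrative only.
    h20_price = (10_000 + 12_000) / 2      # USD
    b30a_price = (20_000 + 25_000) / 2     # USD
    perf_ratio = 6.0
    value_ratio = perf_ratio / (b30a_price / h20_price)
    print(f"B30A: ~{value_ratio:.1f}x the performance per dollar of the H20")  # ~2.9x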

U.S. President Donald Trump has previously banned the sales of H20 chips, resulting in a $5.5-billion write-off for Nvidia. However, he reversed course in mid-July 2025, allowing the company to resume deliveries to Chinese customers. But instead of a blanket provision allowing it to sell the H20 to anyone, the U.S. instead started issuing export licenses to Nvidia, allowing it to ship its products.

Because of this, there's currently a massive license backlog at the U.S. Department of Commerce — the worst it has experienced in 30 years — turning the H20 taps into a trickle. The White House is also still ironing out the 15% deal that AMD and Nvidia signed, so that the companies can market their products to China. We've also seen reports that Nvidia is asking its suppliers to wind down H20-related production, with the company only telling Tom's Hardware, "We constantly manage our supply chain to address market conditions."

So, with all these delays, red tape, and rumors, many of Nvidia's Chinese customers are getting apprehensive and want assurance that their H20 orders will be delivered. This shows that there is still massive demand for these powerful chips, turning them into a geopolitical tool to be wielded at the negotiating table by both the East and the West.


Original Submission

posted by jelizondo on Sunday September 07, @02:27AM

Porsche's New Cayenne Will Charge Itself Like No Other EV:

Those who closely follow electric cars will have heard whispers of wireless charging for a while now. And if you're not an EV aficionado, you've probably wondered why it hasn't happened. Well, that's all about to change. Porsche announced on Thursday that it's rolling out wireless charging on the upcoming all-electric Cayenne later this year.

The goal is to put an end to wrangling thick and bulky charging cables. Instead, Porsche is stepping in as the first electric car maker to offer wireless charging that's actually going into production.

Porsche's inductive charging system delivers up to 11kW with around 90% efficiency, which is on par with traditional wired AC charging. But unlike most EV solutions that involve a jungle of wall-mounted boxes, Porsche's setup requires just one unassuming floor plate in your garage or driveway. Given that Porsche says roughly 75% of electric charging happens at home, it's not hard to see the appeal.
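
As a rough illustration of what 11kW at roughly 90% efficiency means in practice, here is a simple estimate (the pack size and charge window are our assumptions, not Porsche figures):

    # Simple charge-time estimate. Pack size and charge window are
    # hypothetical; only the 11 kW power and ~90% efficiency figures
    # come from the article.
    pack_kwh = 100.0          # assumed usable battery capacity
    charge_fraction = 0.6     # e.g. topping up from 20% to 80%
    power_kw = 11.0
    efficiency = 0.90
    hours = (pack_kwh * charge_fraction) / (power_kw * efficiency)
    print(f"~{hours:.1f} hours to add {pack_kwh * charge_fraction:.0f} kWh")  # ~6.1 h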

This one-box system does away with the wall box and bulky control units, making the process look effortless. Just park your Cayenne Electric over the slab, and you're good to go. The car even lowers itself slightly to align with the plate -- which makes charging as efficient as possible.

Many startups have tried and failed to make wireless charging for EVs happen over the years, said Antuan Goodwin, CNET's senior cars writer. "Challenges that have kept the tech from widespread adoption include: fragile hardware (it will be run over by drivers), alignment issues, energy losses that make it significantly slower than plugging in or excessive/dangerous heat generation from sending high amperage over air."

Porsche thinks it's managed to overcome these roadblocks. The system works via a transmitter coil embedded in the base plate and a corresponding receiver in the vehicle's underbody, sandwiched between the front wheels. It transfers energy using a magnetic field over a gap of a few centimeters, and it has all the safety features you'd expect: motion sensors, object detection and a big red pause button.

The Cayenne Electric, which will be the first to offer this tech, is due for its release later this year. As for the floor plate, it will go on sale in 2026 through Porsche Centres and online. Pricing hasn't been detailed yet, but expect it to land at the premium end.


Original Submission

posted by janrinok on Saturday September 06, @09:42PM

Bill Gates' 48-year-old Microsoft 6502 BASIC goes open source:

Microsoft has released 'BASIC for 6502 Microprocessor - Version 1.1' on GitHub, under the MIT license. Now anyone is free to download, modify, share, and even resell source code originally crafted by Bill Gates. This is a hugely significant code release, as close derivatives of this BASIC ended up at the heart of several iconic computers, including the best-selling computer of all time, the Commodore 64.

The Microsoft Blog provides a potted history of its BASIC, sharing some important facts. Microsoft BASIC was the firm's first product, and started out as a BASIC language interpreter for the Intel 8080, written by Bill Gates and Paul Allen for the Altair 8800, in 1975.

What we are seeing shared on Github under the MIT license is the BASIC interpreter code ported by Bill Gates and Ric Weiland to the MOS 6502 Microprocessor (hence the name). This was released in 1976.

Something fun to note is the commit date on the m6502.asm file and its related markdown documents: July 27, 1978, well before Git was even created. That's an easy trick to pull off; all you need to do is amend the commit and pass in a backdated date.
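
For the curious, here is a minimal sketch of how that backdating works (assuming you run it inside a Git repository that already has a commit; Python is used here simply to drive the standard Git command and environment variables):

    # Minimal sketch of backdating the most recent commit. GIT_AUTHOR_DATE /
    # GIT_COMMITTER_DATE and `git commit --amend --date` are standard Git
    # features; the date below matches the one shown on m6502.asm.
    import os
    import subprocess

    backdate = "1978-07-27T00:00:00"
    env = os.environ.copy()
    env["GIT_AUTHOR_DATE"] = backdate
    env["GIT_COMMITTER_DATE"] = backdate

    # Rewrites only the timestamps of the latest commit, keeping its message.
    subprocess.run(
        ["git", "commit", "--amend", "--no-edit", f"--date={backdate}"],
        env=env,
        check=True,
    )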

Importantly for widespread adoption, and to fuel what would become Microsoft's signature business model, this MOS 6502 assembly code formed the foundation of BASIC interpreters that shipped with the Apple II, Commodore PET, VIC-20 and C64.

Notably, Commodore licensed this 6502 port of Microsoft BASIC for a flat fee of $25,000. On the surface this doesn't sound stellar in terms of Microsoft revenue generation but, as the firm says, the decision put Microsoft software in front of millions of new programmers, who would make their first tentative coding steps by typing:

10 PRINT "HELLO"
20 GOTO 10
RUN

The 1.1 release on GitHub specifically supports the Apple II, Commodore PET, Ohio Scientific (OSI), the MOS Technology KIM-1, and PDP-10 Simulation systems. Microsoft notes that 1.1 includes "fixes to the garbage collector identified by Commodore and jointly implemented in 1978 by Commodore engineer John Feagans and Bill Gates, when Feagans traveled to Microsoft's Bellevue offices."

In total, the release shares 6,955 lines of assembly language code for anyone who is interested to peruse and play with. Microsoft characterizes this BASIC interpreter as one of the most historically significant pieces of software from the early personal computer era.

Microsoft says its BASIC for 6502 Microprocessor - Version 1.1 source code release, which comes with a clear, modern license, builds on its earlier release of GW-BASIC which first shipped in the original IBM PC's ROM, evolved into QBASIC, and later into Visual Basic.


Original Submission

posted by janrinok on Saturday September 06, @04:59PM
from the chump-change dept.

Jury orders Google to pay $425 million for unlawfully tracking millions of users:

A federal jury in San Francisco has ruled that Google must pay $425 million for unlawfully tracking millions of users who believed they had disabled data collection on their accounts. The verdict concludes a trial in which plaintiffs argued that Google violated its own privacy assurances through the Web & App Activity setting, collecting information from mobile devices over an eight-year period.

The Web & App Activity feature is a central component of Google's privacy controls, designed to let users manage whether their searches, location history, and interactions with Google services or partner websites and apps are stored.

When enabled, the setting can retain details such as search history, activity in Google apps, general location based on device or IP address, and browsing performed while signed into Chrome or Android devices. Users have the option to disable the feature, which, according to Google's privacy documentation, should prevent this data from being added to their accounts.

However, the trial revealed that Google continued collecting data even for users who had disabled tracking. Through partnerships with major third-party apps including Uber, Venmo, Instagram, and Facebook, the company's analytics services gathered activity and usage data independently of the user's Web & App Activity settings.

Plaintiffs alleged that this enabled Google to amass vast amounts of behavioral information across its ecosystem even after users supposedly opted out.

The jury found Google liable for two of the three privacy-related claims but rejected the assertion that the company acted maliciously, declining to award additional punitive damages beyond the initial $425 million. The plaintiffs had originally sought damages exceeding $31 billion in the class-action lawsuit.

Google defended its practices, stating that it plans to contest the verdict and emphasizing that its technology gives users control over their personal data. The company argued that the jury had misunderstood how its products function and stressed that when users turn off personalization, their choices are respected.

During the proceedings, Google asserted that the data collected was nonpersonal, pseudonymous, and stored in secure, encrypted environments not linked to Google accounts or individual identities. Despite these assurances, the scope of data collection – particularly the records captured through third-party applications – played a key role in the jury's decision.

Judge Richard Seeborg certified the case to include approximately 98 million Google users and more than 174 million devices. The ruling adds to Google's ongoing legal challenges concerning user privacy.

Earlier in 2025, the company agreed to pay nearly $1.4 billion to settle claims with the state of Texas over alleged privacy violations. In another settlement reached in April 2024, Google agreed to destroy billions of records linked to users' private browsing activities, following accusations that it tracked individuals while using Incognito mode and similar features.

Google has announced that it will pursue an appeal.


Original Submission

posted by janrinok on Saturday September 06, @12:13PM

China likely to land on Moon before US does again:

A former NASA administrator has told the US Senate Commerce Committee that it is "highly unlikely" the US will return humans to the Moon before a Chinese taikonaut plants a flag on the lunar surface.

The problem, according to Jim Bridenstine, is the architecture NASA selected to return to the Moon, and in particular the choice of SpaceX's Starship to land humans on the regolith.

Bridenstine waved away the issues with NASA's Space Launch System (SLS) – the massive rocket intended to launch humans to the Moon – noting that "it has been expensive, it had overruns, but it's behind us."

Of the Orion capsule, which will be used to transport the crew from Earth and back again, Bridenstine said: "The Orion crew capsule is not only usable today, but ultimately the cost is going down because more and more of it is reusable every time we use the Orion crew capsule. Those two elements are in good shape."

What isn't in good shape is the architecture, including the choice of SpaceX's Starship. The former administrator listed the issues. First, there was the task of getting the Human Landing System (HLS) variant of Starship to the Moon, which would require an unknown number of launches from Earth to refuel it. "By the way," said Bridenstine, "that whole in-space refueling thing has never been tested either."

Then there is human-rating the HLS variant, a process that Bridenstine noted "hasn't even started yet." He continued, noting more issues with NASA's lunar architecture. How long could the HLS variant of Starship loiter in orbit around the Moon before the crew arrived? Was putting a crew on the surface of the Moon with no means of returning to the Orion spacecraft for seven days acceptable?

"The biggest decision in the history of NASA – at least since I've been paying attention – happened in the absence of a NASA administrator, and that decision was instead of buying a moonlander, we're going to buy a big rocket."

Biggest decision? Maybe. Maybe not. Sending Apollo 8 around the Moon on the first crewed launch of a Saturn V would have to be up there, but Bridenstine's passion is undeniable.

"This is an architecture that no NASA administrator that I'm aware of would have selected had they had the choice. But it was a decision that was made in the absence of a NASA administrator. It's a problem. It needs to be solved."

The decision was taken in 2021. As well as SpaceX, Jeff Bezos' Blue Origin threw its hat into the ring alongside US-based Dynetics. At the time, NASA believed that choosing a single partner would reduce costs. Blue Origin later sued NASA over the contract award.

Neither SpaceX nor Blue Origin was present at the hearing. Bridenstine also did not offer a solution to the problem.


Original Submission

posted by janrinok on Saturday September 06, @07:29AM

New hollow-core fiber outperforms glass, pushing data closer to light speed:

A Microsoft-backed research team has set a new benchmark for optical fiber performance, developing a hollow-core cable that posts the lowest optical loss ever recorded in the industry, according to findings published in Nature Photonics. The milestone comes after years of effort by Lumenisity, a spinout from the University of Southampton's Optoelectronics Research Center, now operating under Microsoft's wing following an acquisition in 2022.

This novel fiber, utilizing a design known as double-nested antiresonant nodeless fiber (DNANF), exhibits an attenuation of just 0.091 dB/km at the 1,550-nm wavelength. For comparison, today's best silica fibers bottom out at roughly 0.14 dB/km – a figure that has remained relatively unchanged since the 1980s.

Francesco Poletti, who co-authored the research paper and helped invent the design, told The Register that the development ranks as "one of the most noteworthy improvements in waveguided optical technology for the past 40 years."

The main draw behind hollow-core fiber is the medium: while standard optical fiber guides photons through solid glass, limiting signal speed to just under 200 million meters per second, the hollow variant lets light travel through air, which increases velocity closer to the maximum of 300 million meters per second.

More speed means lower latency – an especially important metric for moving data between cloud datacenters and accelerating mobile network performance. However, earlier hollow-core designs suffered high energy loss (over 1 dB/km), restricting practical use to short and specialized links.
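
To see why the medium matters for latency, here is a quick illustrative calculation based on the propagation speeds quoted above (the 1,000 km link length is our assumption):

    # One-way propagation delay over an assumed 1,000 km link: light in
    # solid silica travels at just under 200,000 km/s, while light in a
    # hollow (air) core gets close to the ~300,000 km/s vacuum limit.
    distance_km = 1_000
    v_glass_km_s = 200_000
    v_air_km_s = 299_000
    t_glass_ms = distance_km / v_glass_km_s * 1_000
    t_air_ms = distance_km / v_air_km_s * 1_000
    print(f"glass: {t_glass_ms:.2f} ms, hollow core: {t_air_ms:.2f} ms, "
          f"saved: {t_glass_ms - t_air_ms:.2f} ms one-way")  # ~1.7 ms saved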

The DNANF fiber counters this by using concentric, micron-thin glass tubes acting as tiny mirrors, bouncing light through an air core while suppressing unwanted transmission modes. Tests conducted on kilometer-scale spools confirmed attenuation below the critical 0.1 dB/km mark using multiple measurement methods, including optical time-domain reflectometry.

The researchers report that loss remained under 0.2 dB/km across a 66-terahertz spectral band – significantly more bandwidth and flexibility than conventional fibers can offer. They also measured chromatic dispersion at levels seven times lower than those of legacy architectures, which could simplify transceiver designs and reduce network energy costs. "So you need to amplify less," Poletti explained. "It can lead to greener networks if this is how you want to exploit it."
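 
The attenuation figures translate directly into how far a signal can travel before it needs amplification. A simple comparison (the 20 dB span budget is an assumed figure, not from the paper):

    # Reach per amplifier span for an assumed 20 dB loss budget, comparing
    # the best conventional silica fiber with the reported DNANF result.
    budget_db = 20.0
    for name, loss_db_per_km in (("best silica (0.14 dB/km)", 0.14),
                                 ("DNANF (0.091 dB/km)", 0.091)):
        print(f"{name}: ~{budget_db / loss_db_per_km:.0f} km per span")
    # best silica: ~143 km; DNANF: ~220 km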

Microsoft's acquisition of Lumenisity in 2022 signaled its intention to move hollow-core technology from academic circles into real-world infrastructure. Initial iterations achieved losses of 2.5 dB/km – respectable but uncompetitive with legacy glass fiber. Through ongoing development, Microsoft confirmed to Network World that approximately 1,200 kilometers of the new fiber are already carrying live cloud traffic. At the 2024 Ignite conference, CEO Satya Nadella announced plans to deploy 15,000 kilometers of DNANF fiber across Azure's global backbone within two years, aiming to support the explosive growth in AI workloads.

While the technology promises transmission speeds up to 45 percent faster than current solid-core options and holds the potential for five to ten times wider bandwidth, certain hurdles remain. Scaling production will require new manufacturing tools, and the fiber must undergo international standardization processes before entering mass-market use – a milestone that could take around five years. Still, the latest results mark the first time fiber that carries light through air has outpaced and outperformed the glass it was engineered to replace – a watershed moment for telecom and cloud infrastructure.


Original Submission

posted by hubie on Saturday September 06, @02:45AM

New law emboldens complaints against digital content rentals labeled as purchases:

Words have meaning. Proper word selection is integral to strong communication, whether it's about relaying one's feelings to another or explaining the terms of a deal, agreement, or transaction.

Language can be confusing, but typically when something is available to "buy," ownership of that good or access to that service is offered in exchange for money. That's not really the case, though, when it comes to digital content.

Often, streaming services like Amazon Prime Video offer customers the options to "rent" digital content for a few days or to "buy" it. Some might think that picking "buy" means that they can view the content indefinitely. But these purchases are really just long-term licenses to watch the content for as long as the streaming service has the right to distribute it—which could be for years, months, or days after the transaction.

A lawsuit [PDF] recently filed against Prime Video challenges this practice and accuses the streaming service of misleading customers by labeling long-term rentals as purchases. The conclusion of the case could have implications for how streaming services frame digital content.

[...] Despite the dismissal of similar litigation years ago, Reingold's complaint stands a better chance due to a California law that took effect in January banning the selling of a "digital good to a purchaser with the terms 'buy,' 'purchase,' or any other term which a reasonable person would understand to confer an unrestricted ownership interest in the digital good, or alongside an option for a time-limited rental."

There are some instances where the law allows digital content providers to use words like "buy." One example is if, at the time of transaction, the seller receives acknowledgement from the customer that the customer is receiving a license to access the digital content; that they received a complete list of the license's conditions; and that they know that access to the digital content may be "unilaterally revoked."

A seller can also use words like "buy" if it provides to the customer ahead of the transaction a statement that "states in plain language that 'buying' or 'purchasing' the digital good is a license," as well as online access to terms and conditions, the law states.

The California legislation helps strengthen the lawsuit filed by Reingold, a California resident. The case is likely to hinge on whether or not fine print and lengthy terms of use are appropriate and sufficient communication.

[...] Streaming is now the most popular way to watch TV, yet many are unaware of what they're buying. As Reingold's lawsuit points out, paying for content in the streaming era is different from buying media from physical stores. Physical media gives you control over your ability to watch something for years, but you also had to retrieve the media from a store (or website) and maintain that physical copy, as well as the necessary hardware and/or software for playing it. Streaming services can rip purchased content from customers in bulk, but they also offer access to a much broader library that's instantly watchable with technology most people already have (like a TV and Internet).

We can debate the best approach to distributing media. What's clearer is the failure of digital content providers to ensure that customers fully understand they're paying for access to content, and that this access could be revoked at any time.


Original Submission

posted by janrinok on Friday September 05, @09:57PM

AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex:

A UK government department's three-month trial of Microsoft's M365 Copilot has revealed no discernible gain in productivity – speeding up some tasks yet making others slower due to lower quality outputs.

The Department for Business and Trade received 1,000 licenses for use between October and December 2024, with the majority of these allocated to volunteers and 30 percent to randomly selected participants. Some 300 of these people consented to their data being analyzed.

An evaluation of time savings, quality assurance, and productivity was then calculated in the assessment.

Overall, 72 percent of users were satisfied or very satisfied with their digital assistant and voiced disappointment when the test ended. However, the reality of productivity gains was more nuanced than Microsoft's marketing materials might suggest.

Around two-thirds of the employees in the trial used M365 Copilot at least once a week, and 30 percent used it at least once a day – which doesn't sound like great value for money.

In the UK, commercial prices range from £4.90 per user per month to £18.10, depending on business plan. This means that across a government department, those expenses could quickly mount up.
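
As a rough illustration of how those per-seat prices scale (assuming, purely for illustration, that list prices applied to all 1,000 pilot licenses for the full three-month trial):

    # Illustrative licensing cost for the pilot: 1,000 licenses over the
    # three-month trial at the low and high ends of the published per-user
    # monthly prices. Assumes list pricing with no volume discounts.
    licenses = 1_000
    months = 3
    for label, gbp_per_user_month in (("low tier", 4.90), ("high tier", 18.10)):
        total = licenses * months * gbp_per_user_month
        print(f"{label}: £{total:,.0f} over the pilot")
    # low tier: £14,700; high tier: £54,300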

According to the M365 Copilot monitoring dashboard made available in the trial, an average of 72 M365 Copilot actions were taken per user.

"Based on there being 63 working days during the pilot, this is an average of 1.14 M365 Copilot actions taken per user per day," the study says. Word, Teams, and Outlook were the most used, and Loop and OneNote usage rates were described as "very low," less than 1 percent and 3 percent per day, respectively.

"PowerPoint and Excel were slightly more popular; both experienced peak activity of 7 percent of license holders using M365 Copilot in a single day within those applications," the study states.

The three most popular tasks involved transcribing or summarizing a meeting, writing an email, and summarizing written comms. These also had the highest satisfaction levels, we're told.

Participants were asked to record the time taken for each task with M365 Copilot compared to colleagues not involved in the trial. The assessment report adds: "Observed task sessions showed that M365 Copilot users produced summaries of reports and wrote emails faster and to a higher quality and accuracy than non-users. Time savings observed for writing emails were extremely small.

"However, M365 Copilot users completed Excel data analysis more slowly and to a worse quality and accuracy than non-users, conflicting time savings reported in the diary study for data analysis.

"PowerPoint slides [were] over 7 minutes faster on average, but to a worse quality and accuracy than non-users." This means corrective action was required.

A cross-section of participants was asked questions in an interview – qualitative findings – and they claimed routine admin tasks could be carried out with greater efficiency with M365 Copilot, letting them "redirect time towards tasks seen as more strategic or of higher value, while others reported using these time savings to attend training sessions or take a lunchtime walk."

Nevertheless, M365 Copilot did not necessarily make them more productive, the assessment found. This is something Microsoft has worked on with customers to quantify the benefits and justify the greater expense of a license for M365 Copilot.

"We did not find robust evidence to suggest that time savings are leading to improved productivity," the report says. "However, this was not a key aim of the evaluation and therefore limited data was collected to identify if time savings have led to productivity gains."

And hallucinations? 22 percent of the Department for Business and Trade guinea pigs that responded to the assessors said they did identify hallucinations, 43 percent did not, and 11 percent were unsure.

Users reported mixed experiences with colleagues' attitudes, with some teams embracing their AI-augmented workers while others turned decidedly frosty. Line managers' views appeared to significantly influence adoption rates, proving that office politics remain refreshingly human.

The department is still crunching numbers on environmental costs and value for money, suggesting the full reckoning of AI's corporate invasion remains some way off. An MIT survey published last month, for example, found that 95 percent of companies that had collectively sunk $35-40 billion into generative AI had little to show for it.

For now, it seems M365 Copilot excels at the mundane while stumbling over the complex – an apt summary of GenAI in 2024.


Original Submission

posted by janrinok on Friday September 05, @05:13PM

'Doomer science fiction': Nvidia blasts proposed US bill that would force it to give American buyers 'first option' in AI GPU purchases before selling chips to other countries, including allies:

"The AI Diffusion Rule was a self-defeating policy, based on doomer science fiction, and should not be revived. Our sales to customers worldwide do not deprive U.S. customers of anything — and in fact expand the market for many U.S. businesses and industries. The pundits feeding fake news to Congress about chip supply are attempting to overturn President Trump's AI Action Plan and surrender America's chance to lead in AI and computing worldwide."

Original Article

The U.S. Senate on Tuesday unveiled a preliminary version of the annual defense policy package that includes a requirement for American developers of AI processors to prioritize domestic orders for high-performance AI chips before supplying them to overseas buyers, and explicitly calls to deny exports of the highest-end AI GPUs. The legislators call their initiative the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2025 (GAIN AI Act), and their goal is to ensure that American 'small businesses, start-ups, and universities' can lay their hands on the latest AI GPUs from AMD, Nvidia, etc., before clients in other countries. However, if the bill becomes law, it will hit American companies hard.

"Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade," said Brad Carson, president of Americans for Responsible Innovation (ARI). "Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth. As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security."

The GAIN AI Act demands that developers of AI processors, such as AMD or Nvidia, give U.S. buyers the first opportunity to purchase advanced AI hardware before selling to foreign nations, including allied nations like European countries or the U.K. as well as adversaries like China. To do so, the Act proposes to establish export controls on all 'advanced' GPUs (more on this later) to be shipped outside of the U.S. and to deny export licenses for the 'most powerful chips.'

To get an export license, the exporter must certify that certain conditions are met:

  • U.S. customers were given the first option to buy (a right of first refusal);
  • There is no backlog of pending U.S. orders;
  • The intended export will not cause stock delays or reduce manufacturing capacity for U.S. purchasers;
  • Pricing or contract terms being offered do not favor foreign recipients over U.S. customers;
  • The export will not be used by foreign entities to undercut U.S. competitors outside of their domestic market.

If one of the certifications is missing, the export request must be denied, according to the proposal.

What is perhaps no less important about the act is that it sets precise criteria for what U.S. legislators consider an 'advanced integrated circuit,' or advanced AI GPU. To qualify as 'advanced,' a processor has to meet any one of the following criteria (a minimal check is sketched in code after the list):

  • Offers a total processing performance (TPP) score of 2400 or higher, where TPP is the listed processing power in TFLOPS (or TOPS) multiplied by the bit length of the operation (e.g., 8, 16, 32, or 64 bits), without sparsity. Processors with a TPP of 4800 or higher are considered too powerful to be exported regardless of destination country.
  • Offers a performance density (PD) metric of over 3.2, where PD is calculated by dividing TPP by the die area measured in square millimeters.
  • Has DRAM bandwidth of over 1.4 TB/s, interconnect bandwidth of over 1.1 TB/s, or combined DRAM and interconnect bandwidth of over 1.7 TB/s.
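
Here is a minimal sketch (our reading of the criteria above, not the bill's text) of that "advanced" test; the example GPU numbers are rough public spec figures used purely for illustration:

    # Sketch of the GAIN AI Act's "advanced integrated circuit" test as
    # summarized above. Example figures are approximate public specs.

    def is_advanced(tflops_dense: float, op_bits: int, die_area_mm2: float,
                    dram_bw_tbs: float, interconnect_bw_tbs: float) -> bool:
        """Return True if any one of the listed thresholds is met."""
        tpp = tflops_dense * op_bits          # total processing performance
        pd = tpp / die_area_mm2               # performance density
        return (
            tpp >= 2400
            or pd > 3.2
            or dram_bw_tbs > 1.4
            or interconnect_bw_tbs > 1.1
            or (dram_bw_tbs + interconnect_bw_tbs) > 1.7
        )

    def export_banned(tflops_dense: float, op_bits: int) -> bool:
        """TPP of 4800 or higher: too powerful to export to any destination."""
        return tflops_dense * op_bits >= 4800

    # Illustrative check with approximate H100 FP16 figures (~989 dense
    # TFLOPS, ~814 mm^2 die, ~3.35 TB/s HBM, ~0.9 TB/s NVLink):
    # TPP ~= 989 * 16 ~= 15,800, so it trips both thresholds.
    print(is_advanced(989, 16, 814, 3.35, 0.9), export_banned(989, 16))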

Essentially, the legislators plan to export-control all fairly sophisticated processors, including Nvidia's HGX H20 (because of high memory bandwidth) or L2 PCIe (because of high performance density), which are now about two years old. As a result, if the proposed bill is passed into law and signed by the President, it will again restrict sales of both AMD's Instinct MI308 and Nvidia's HGX H20 to all customers outside of the U.S. Furthermore, GPUs with a TPP of 4800 or higher will be prohibited from export, so Nvidia will be unable to sell its H100 and more advanced GPUs outside of the U.S., as even the H100 has a TPP score of 16,000 (the B300 has a TPP score of 60,000).

Coincidentally, Nvidia on Wednesday issued a statement claiming that shipments of its H20 to customers in China does not affect its ability to serve clients in the U.S.

"The rumor that H20 reduced our supply of either H100/H200 or Blackwell is also categorically false — selling H20 has no impact on our ability to supply other NVIDIA products," the statement reads.


Original Submission

posted by hubie on Friday September 05, @12:28PM

Ice generates electricity when it gets stressed in a very specific way, new research suggests:

Don't mess with ice. When it's stressed, ice can get seriously sparky.

Scientists have discovered that ordinary ice—the same substance found in iced coffee or the frosty sprinkle on mountaintops—is imbued with remarkable electromechanical properties. Ice is flexoelectric, so when it's bent, stretched, or twisted, it can generate electricity, according to a Nature Physics paper published August 27. What's more, ice's peculiar electric properties appear to change with temperature, leading researchers to wonder what else it's hiding.

[...] An unsolved mystery in molecular chemistry is why the structure of ice prevents it from being piezoelectric. By piezoelectricity, scientists refer to the generation of an electric charge when mechanical stress changes a solid's overall polarity, or electric dipole moment.

The water molecules that make up an ice crystal are polarized. But when these individual molecules organize into a hexagonal crystal, the geometric arrangement randomly orients the dipoles of these water molecules. As a result, the final system can't generate any piezoelectricity.

However, it's well known that ice can naturally generate electricity, an example being how lightning strikes emerge from the collisions between charged ice particles. Because ice doesn't appear to be piezoelectric, scientists were confused as to how the ice particles became charged in the first place.

"Despite the ongoing interest and large body of knowledge on ice, new phases and anomalous properties continue to be discovered," the researchers noted in the paper, adding that this unsatisfactory knowledge gap suggests "our understanding of this ubiquitous material is incomplete."

[...] For the experiment, they placed a slab of ice between two electrodes while simultaneously confirming that any electricity produced wasn't piezoelectric. To their excitement, bending the ice slab created an electric charge, and at all temperatures, too. What they didn't expect, however, was a thin ferroelectric layer that formed at the ice slab surface below -171.4 degrees Fahrenheit (-113 degrees Celsius).

"This means that the ice surface can develop a natural electric polarization, which can be reversed when an external electric field is applied—similar to how the poles of a magnet can be flipped," Wen explained in a statement.

Surprisingly, "ice may have not just one way to generate electricity but two: ferroelectricity at very low temperatures and flexoelectricity at higher temperatures all the way to 0 [degrees C]," Wen added.

The finding is both useful and informative, the researchers said. First, the "flip" between flexoelectricity and ferroelectricity puts ice "on par with electroceramic materials such as titanium dioxide, which are currently used in advanced technologies like sensors and capacitors," they noted.

Perhaps more apparent is the finding's connection to natural phenomena, namely thunderstorms. According to the paper, the electric potential generated from flexoelectricity in the experiment closely matched that of the energy produced by colliding ice particles. At the very least, it would make sense for flexoelectricity to be partly involved in how ice particles interact inside thunderclouds.

"With this new knowledge of ice, we will revisit ice-related processes in nature to find if there is any other profound consequence of ice flexoelectricity that has been overlooked all the way," Wen told Gizmodo.

Both conclusions will need further scrutiny, the researchers admitted. Nevertheless, the findings offer illuminating new insight into something as common as ice—and demonstrate how much there's still to be learned about our world.

Journal: "Flexoelectricity and surface ferroelectricity of water ice," X. Wen et al., Nature Physics


Original Submission

posted by hubie on Friday September 05, @07:47AM

Google's penalty for being a search monopoly does not include selling Chrome:

Google has avoided the worst-case scenario in the pivotal search antitrust case brought by the US Department of Justice. DC District Court Judge Amit Mehta has ruled that Google doesn't have to give up the Chrome browser to mitigate its illegal monopoly in online search. The court will only require a handful of modest behavioral remedies, forcing Google to release some search data to competitors and limit its ability to make exclusive distribution deals.

More than a year ago, the Department of Justice (DOJ) secured a major victory when Google was found to have violated the Sherman Antitrust Act. The remedy phase took place earlier this year, with the DOJ calling for Google to divest the market-leading Chrome browser. That was the most notable element of the government's proposed remedies, but it also wanted to explore a spin-off of Android, force Google to share search technology, and severely limit the distribution deals Google is permitted to sign.

Mehta has decided on a much narrower set of remedies. While there will be some changes to search distribution, Google gets to hold onto Chrome. The government contended that Google's dominance in Chrome was key to its search lock-in, but Google claimed no other company could hope to operate Chrome and Chromium like it does. Mehta has decided that Google's use of Chrome as a vehicle for search is not illegal in itself, though. "Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints," the ruling reads.

Google's proposed remedies were, unsurprisingly, much more modest. Google fully opposed the government's Chrome penalties, but it was willing to accept some limits to its search deals and allow Android OEMs to choose app preloads. That's essentially what Mehta has ruled. Under the court's ruling, Google will still be permitted to pay for search placement—those multi-billion-dollar arrangements with Apple and Mozilla can continue. However, Google cannot require any of its partners to distribute Search, Chrome, Google Assistant, or Gemini. That means Google cannot, for example, make access to the Play Store contingent on bundling its other apps on phones.

There is one spot of good news for Google's competitors. The court has ruled that Google will have to provide some search index data and user metrics to "qualified competitors." This could help alternative search engines improve their service despite Google's significant data lead.

While this ruling is a pretty clear win for Google, it still technically lost the case. Google isn't going to just accept the "monopolist" label, though. The company previously said it planned to appeal the case, and now it has that option.

The court's remedies are supposed to be enforced by a technical committee, which will oversee the company's operations for six years. The order says that the group must be set up within 60 days. However, Google will most likely ask to pause the order while it pursues an appeal. It did something similar with the Google Play case brought by Epic Games, but it just lost that appeal.

With the high likelihood of an appeal, it's possible Google won't have to make any changes for years—if ever. If the company chooses, it could take the case to the US Supreme Court. If a higher court overturns the verdict, Google could go back to business as usual, avoiding even the very narrow remedies chosen by Mehta.

For a slightly different viewpoint see also A let-off or tougher than it looks? What the Google monopoly ruling means [JR]


Original Submission

posted by hubie on Friday September 05, @03:03AM

But TSMC vows to continue making chips on the mainland:

The U.S. has decided to revoke its special allowance for TSMC to export advanced chipmaking tools from the U.S. to its Fab 16 in Nanjing, China, by the end of the year. The decision will force the company's American suppliers to get individual government approvals for future shipments. If approvals are not granted on time, this could affect the plant's operations.

Until now, TSMC benefited from a general approval system — enabled by its validated end-user (VEU) status with the U.S. government — that allowed routine shipments of tools produced by American companies like Applied Materials, KLA, and LAM Research without delay. Once the rule change takes effect, any covered tool, spare part, or chemical sent to the site will need to pass a separate U.S. export review, which will be made with a presumption of denial.

"TSMC has received notification from the U.S. government that our VEU authorization for TSMC Nanjing will be revoked effective December 31, 2025," a statement by TSMC sent to Tom's Hardware reads. "While we are evaluating the situation and taking appropriate measures, including communicating with the U.S. government, we remain fully committed to ensuring the uninterrupted operation of TSMC Nanjing."

TSMC currently operates two fabs in China: a 200-mm Fab 10 in Shanghai, and a 300-mm Fab 16 in Nanjing. The 200-mm fab produces chips on legacy process technologies (such as 150nm and less advanced) and flies below the U.S. government's radar. By contrast, the 300-mm facility makes a variety of chips (e.g., automotive chips, 5G RF components, consumer SoCs, etc.) on TSMC's 12nm FinFET, 16nm FinFET, and 28nm-class production nodes; logic technologies of 16nm and below are restricted by the U.S. government even though they debuted about 10 years ago.

[...] One of the ways for TSMC to keep its Fab 16 in Nanjing running without U.S. equipment is to replace some of the tools it imports from the U.S. with similar equipment produced in China. However, it is unclear whether it is possible, particularly for lithography.

[...] In a normal situation, TSMC would likely resist such disruption, especially for legacy nodes meant to be cost-effective, but it may be forced to switch at least some of its tools even though it cannot fully replace American and European tools at its Fab 16 in China.

[...] If TSMC is forced to halt or drastically reduce output at its Nanjing Fab 16, the ripple effects would be favorable to Chinese foundries like SMIC and Hua Hong, as China-based customers will have to reallocate their production to SMIC (which offers 14nm and 28nm) or Hua Hong (which has a 28nm node), boosting their utilization and balance sheets (assuming, of course, they have enough capacity).

Furthermore, a forced TSMC slowdown in China would validate the People's Republic's push for semiconductor self-sufficiency, which could mean increased subsidies for chipmakers and tool makers.


Original Submission