

posted by janrinok on Saturday September 06, @09:42PM

Bill Gates' 48-year-old Microsoft 6502 BASIC goes open source:

Microsoft has released 'BASIC for 6502 Microprocessor - Version 1.1' on GitHub, under the MIT license. Now anyone is free to download, modify, share, and even resell source code originally crafted by Bill Gates. This is a hugely significant code release, as close derivatives of this BASIC ended up at the heart of several iconic computers, including the best-selling computer of all time, the Commodore 64.

The Microsoft Blog provides a potted history of its BASIC, sharing some important facts. Microsoft BASIC was the firm's first product, and started out as a BASIC language interpreter for the Intel 8080, written by Bill Gates and Paul Allen for the Altair 8800, in 1975.

What we are seeing shared on GitHub under the MIT license is the BASIC interpreter code ported by Bill Gates and Ric Weiland to the MOS 6502 microprocessor (hence the name). This port was released in 1976.

Something fun to note is the commit date on the m6502.asm file and its related markdown documents: July 27, 1978, well before Git was even created. Backdating a commit is an easily done task: amend the commit with a backdated author date (git commit --amend --date=...) and set the GIT_COMMITTER_DATE environment variable to match.

Importantly for widespread adoption, and to fuel what would become Microsoft's signature business model, this MOS 6502 assembly code formed the foundation of BASIC interpreters that shipped with the Apple II, Commodore PET, VIC-20 and C64.

Notably, Commodore licensed this 6502 port of Microsoft BASIC for a flat fee of $25,000. On the surface this doesn't sound stellar in terms of Microsoft revenue generation but, as the firm says, the decision put Microsoft software in front of millions of new programmers, who would make their first tentative coding steps by typing:

10 PRINT "HELLO"
20 GOTO 10
RUN

The 1.1 release on GitHub specifically supports the Apple II, Commodore PET, Ohio Scientific (OSI), the MOS Technology KIM-1, and PDP-10 Simulation systems. Microsoft notes that 1.1 includes "fixes to the garbage collector identified by Commodore and jointly implemented in 1978 by Commodore engineer John Feagans and Bill Gates, when Feagans traveled to Microsoft's Bellevue offices."

In total, the release shares 6,955 lines of assembly language code for anyone who is interested to peruse and play with. Microsoft characterizes this BASIC interpreter as one of the most historically significant pieces of software from the early personal computer era.

Microsoft says its BASIC for 6502 Microprocessor - Version 1.1 source code release, which comes with a clear, modern license, builds on its earlier release of GW-BASIC, the BASIC line that first shipped in the original IBM PC's ROM, evolved into QBASIC, and later became Visual Basic.


Original Submission

posted by janrinok on Saturday September 06, @04:59PM
from the chump-change dept.

Jury orders Google to pay $425 million for unlawfully tracking millions of users:

A federal jury in San Francisco has ordered Google to pay $425 million for unlawfully tracking millions of users who believed they had disabled data collection on their accounts. The verdict concludes a trial in which plaintiffs argued that Google violated its own privacy assurances through the Web & App Activity setting, collecting information from mobile devices over an eight-year period.

The Web & App Activity feature is a central component of Google's privacy controls, designed to let users manage whether their searches, location history, and interactions with Google services or partner websites and apps are stored.

When enabled, the setting can retain details such as search history, activity in Google apps, general location based on device or IP address, and browsing performed while signed into Chrome or Android devices. Users have the option to disable the feature, which, according to Google's privacy documentation, should prevent this data from being added to their accounts.

However, the trial revealed that Google continued collecting data even for users who had disabled tracking. Through partnerships with major third-party apps including Uber, Venmo, Instagram, and Facebook, the company's analytics services gathered activity and usage data independently of the user's Web & App Activity settings.

Plaintiffs alleged that this enabled Google to amass vast amounts of behavioral information across its ecosystem even after users supposedly opted out.

The jury found Google liable for two of the three privacy-related claims but rejected the assertion that the company acted maliciously, declining to award additional punitive damages beyond the initial $425 million. The plaintiffs had originally sought damages exceeding $31 billion in the class-action lawsuit.

Google defended its practices, stating that it plans to contest the verdict and emphasizing that its technology gives users control over their personal data. The company argued that the jury had misunderstood how its products function and stressed that when users turn off personalization, their choices are respected.

During the proceedings, Google asserted that the data collected was nonpersonal, pseudonymous, and stored in secure, encrypted environments not linked to Google accounts or individual identities. Despite these assurances, the scope of data collection – particularly the records captured through third-party applications – played a key role in the jury's decision.

Judge Richard Seeborg certified the class to include approximately 98 million Google users, accounting for over 174 million devices. The ruling adds to Google's ongoing legal challenges concerning user privacy.

Earlier in 2025, the company agreed to pay nearly $1.4 billion to settle claims with the state of Texas over alleged privacy violations. In another settlement reached in April 2024, Google agreed to destroy billions of records linked to users' private browsing activities, following accusations that it tracked individuals while using Incognito mode and similar features.

Google has announced that it will pursue an appeal.


Original Submission

posted by janrinok on Saturday September 06, @12:13PM

China likely to land on Moon before US does again:

A former NASA administrator has told the US Senate Commerce Committee that it is "highly unlikely" the US will return humans to the Moon before a Chinese taikonaut plants a flag on the lunar surface.

The problem, according to Jim Bridenstine, is the architecture NASA selected to return to the Moon, and in particular the choice of SpaceX's Starship to land humans on the regolith.

Bridenstine waved away the issues with NASA's Space Launch System (SLS) – the massive rocket intended to launch humans to the Moon – noting that "it has been expensive, it had overruns, but it's behind us."

Of the Orion capsule, which will be used to transport the crew from Earth and back again, Bridenstine said: "The Orion crew capsule is not only usable today, but ultimately the cost is going down because more and more of it is reusable every time we use the Orion crew capsule. Those two elements are in good shape."

What isn't in good shape is the architecture, including the choice of SpaceX's Starship. The former administrator listed the issues. First, there was the task of getting the Human Landing System (HLS) variant of Starship to the Moon, which would require an unknown number of launches from Earth to refuel it. "By the way," said Bridenstine, "that whole in-space refueling thing has never been tested either."

Then there is human-rating the HLS variant, a process that Bridenstine noted "hasn't even started yet." He continued, noting more issues with NASA's lunar architecture. How long could the HLS variant of Starship loiter in orbit around the Moon before the crew arrived? Was putting a crew on the surface of the Moon with no means of returning to the Orion spacecraft for seven days acceptable?

"The biggest decision in the history of NASA – at least since I've been paying attention – happened in the absence of a NASA administrator, and that decision was instead of buying a moonlander, we're going to buy a big rocket."

Biggest decision? Maybe. Maybe not. Sending Apollo 8 around the Moon on the first crewed launch of a Saturn V would have to be up there, but Bridenstine's passion is undeniable.

"This is an architecture that no NASA administrator that I'm aware of would have selected had they had the choice. But it was a decision that was made in the absence of a NASA administrator. It's a problem. It needs to be solved."

The decision was taken in 2021. As well as SpaceX, Jeff Bezos' Blue Origin threw its hat into the ring alongside US-based Dynetics. At the time, NASA believed that choosing a single partner would reduce costs. Blue Origin later sued NASA over the contract award.

Neither SpaceX nor Blue Origin was present at the hearing. Bridenstine also did not offer a solution to the problem.


Original Submission

posted by janrinok on Saturday September 06, @07:29AM

New hollow-core fiber outperforms glass, pushing data closer to light speed:

A Microsoft-backed research team has set a new benchmark for optical fiber performance, developing a hollow-core cable that posts the lowest optical loss ever recorded in the industry, according to findings published in Nature Photonics. The milestone comes after years of effort by Lumenisity, a spinout from the University of Southampton's Optoelectronics Research Center, now operating under Microsoft's wing following an acquisition in 2022.

This novel fiber, utilizing a design known as double-nested antiresonant nodeless fiber (DNANF), exhibits an attenuation of just 0.091 dB/km at the 1,550-nm wavelength. For comparison, today's best silica fibers bottom out at roughly 0.14 dB/km – a figure that has remained relatively unchanged since the 1980s.

Francesco Poletti, who co-authored the research paper and helped invent the design, told The Register that the development ranks as "one of the most noteworthy improvements in waveguided optical technology for the past 40 years."

The main draw behind hollow-core fiber is the medium: while standard optical fiber guides photons through solid glass, limiting signal speed to just under 200 million meters per second, the hollow variant lets light travel through air, which increases velocity closer to the maximum of 300 million meters per second.

More speed means lower latency – an especially important metric for moving data between cloud datacenters and accelerating mobile network performance. However, earlier hollow-core designs suffered high energy loss (over 1 dB/km), restricting practical use to short and specialized links.
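
To make those figures concrete, here is a minimal back-of-the-envelope sketch using the speeds and attenuation values quoted above; the 1,000 km link length is an assumption for illustration, not a figure from the paper:

    # Rough comparison of glass vs. air-guided fiber over an assumed link
    LINK_KM = 1000

    for medium, meters_per_sec in [("solid glass", 2.0e8), ("hollow core (air)", 3.0e8)]:
        latency_ms = LINK_KM * 1000 / meters_per_sec * 1000
        print(f"{medium}: one-way latency ~ {latency_ms:.1f} ms")

    # Fraction of launch power remaining: P_out / P_in = 10 ** (-loss_dB / 10)
    for fiber, db_per_km in [("DNANF hollow core", 0.091), ("best silica", 0.14)]:
        loss_db = db_per_km * LINK_KM
        print(f"{fiber}: {loss_db:.0f} dB total loss, "
              f"{10 ** (-loss_db / 10):.1e} of input power remains")

On those assumptions, the air-guided link cuts one-way latency from 5 ms to about 3.3 ms, and its roughly 49 dB lower total loss is what stretches out the spacing between amplifiers.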

The DNANF fiber counters this by using concentric, micron-thin glass tubes acting as tiny mirrors, bouncing light through an air core while suppressing unwanted transmission modes. Tests conducted on kilometer-scale spools confirmed attenuation below the critical 0.1 dB/km mark using multiple measurement methods, including optical time-domain reflectometry.

The researchers report that loss remained under 0.2 dB/km across a 66-terahertz spectral band – significantly more bandwidth and flexibility than conventional fibers can offer. They also measured chromatic dispersion at levels seven times lower than those of legacy architectures, which could simplify transceiver designs and reduce network energy costs. "So you need to amplify less," Poletti explained. "It can lead to greener networks if this is how you want to exploit it."

Microsoft's acquisition of Lumenisity in 2022 signaled its intention to move hollow-core technology from academic circles into real-world infrastructure. Initial iterations achieved losses of 2.5 dB/km – respectable but uncompetitive with legacy glass fiber. Through ongoing development, Microsoft confirmed to Network World that approximately 1,200 kilometers of the new fiber are already carrying live cloud traffic. At the 2024 Ignite conference, CEO Satya Nadella announced plans to deploy 15,000 kilometers of DNANF fiber across Azure's global backbone within two years, aiming to support the explosive growth in AI workloads.

While the technology promises transmission speeds up to 45 percent faster than current solid-core options and holds the potential for five to ten times wider bandwidth, certain hurdles remain. Scaling production will require new manufacturing tools, and the fiber must undergo international standardization processes before entering mass-market use – a milestone that could take around five years. Still, the latest results mark the first time fiber that carries light through air has outpaced and outperformed the glass it was engineered to replace – a watershed moment for telecom and cloud infrastructure.


Original Submission

posted by hubie on Saturday September 06, @02:45AM

New law emboldens complaints against digital content rentals labeled as purchases:

Words have meaning. Proper word selection is integral to strong communication, whether it's about relaying one's feelings to another or explaining the terms of a deal, agreement, or transaction.

Language can be confusing, but typically when something is available to "buy," ownership of that good or access to that service is offered in exchange for money. That's not really the case, though, when it comes to digital content.

Often, streaming services like Amazon Prime Video offer customers the options to "rent" digital content for a few days or to "buy" it. Some might think that picking "buy" means that they can view the content indefinitely. But these purchases are really just long-term licenses to watch the content for as long as the streaming service has the right to distribute it—which could be for years, months, or days after the transaction.

A lawsuit [PDF] recently filed against Prime Video challenges this practice and accuses the streaming service of misleading customers by labeling long-term rentals as purchases. The conclusion of the case could have implications for how streaming services frame digital content.

[...] Despite the dismissal of similar litigation years ago, Reingold's complaint stands a better chance due to a California law that took effect in January banning the selling of a "digital good to a purchaser with the terms 'buy,' 'purchase,' or any other term which a reasonable person would understand to confer an unrestricted ownership interest in the digital good, or alongside an option for a time-limited rental."

There are some instances where the law allows digital content providers to use words like "buy." One example is if, at the time of transaction, the seller receives acknowledgement from the customer that the customer is receiving a license to access the digital content; that they received a complete list of the license's conditions; and that they know that access to the digital content may be "unilaterally revoked."

A seller can also use words like "buy" if it provides to the customer ahead of the transaction a statement that "states in plain language that 'buying' or 'purchasing' the digital good is a license," as well as online access to terms and conditions, the law states.

The California legislation helps strengthen the lawsuit filed by Reingold, a California resident. The case is likely to hinge on whether or not fine print and lengthy terms of use are appropriate and sufficient communication.

[...] Streaming is now the most popular way to watch TV, yet many are unaware of what they're buying. As Reingold's lawsuit points out, paying for content in the streaming era is different from buying media from physical stores. Physical media nets control over your ability to watch stuff for years. But you also had to retrieve the media from a store (or website) and maintain that physical copy, as well as the necessary hardware and/or software for playing it. Streaming services can rip purchased content from customers in bulk, but they also offer access to a much broader library that's instantly watchable with technology most people already have (like a TV and Internet).

We can debate the best approach to distributing media. What's clearer is the failure of digital content providers to ensure that customers fully understand they're paying for access to content, and that this access could be revoked at any time.


Original Submission

posted by janrinok on Friday September 05, @09:57PM

AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex:

A UK government department's three-month trial of Microsoft's M365 Copilot has revealed no discernible gain in productivity – speeding up some tasks yet making others slower due to lower quality outputs.

The Department for Business and Trade received 1,000 licenses for use between October and December 2024, with the majority of these allocated to volunteers and 30 percent to randomly selected participants. Some 300 of these people consented to their data being analyzed.

The assessment then evaluated time savings, output quality, and productivity.

Overall, 72 percent of users were satisfied or very satisfied with their digital assistant and voiced disappointment when the test ended. However, the reality of productivity gains was more nuanced than Microsoft's marketing materials might suggest.

Around two-thirds of the employees in the trial used M365 Copilot at least once a week, and 30 percent used it at least once a day – which doesn't sound like great value for money.

In the UK, commercial prices range from £4.90 per user per month to £18.10, depending on business plan. This means that across a government department, those expenses could quickly mount up.

According to the M365 Copilot monitoring dashboard made available in the trial, an average of 72 M365 Copilot actions were taken per user.

"Based on there being 63 working days during the pilot, this is an average of 1.14 M365 Copilot actions taken per user per day," the study says. Word, Teams, and Outlook were the most used, and Loop and OneNote usage rates were described as "very low," less than 1 percent and 3 percent per day, respectively.

"PowerPoint and Excel were slightly more popular; both experienced peak activity of 7 percent of license holders using M365 Copilot in a single day within those applications," the study states.

The three most popular tasks involved transcribing or summarizing a meeting, writing an email, and summarizing written comms. These also had the highest satisfaction levels, we're told.

Participants were asked to record the time taken for each task with M365 Copilot compared to colleagues not involved in the trial. The assessment report adds: "Observed task sessions showed that M365 Copilot users produced summaries of reports and wrote emails faster and to a higher quality and accuracy than non-users. Time savings observed for writing emails were extremely small.

"However, M365 Copilot users completed Excel data analysis more slowly and to a worse quality and accuracy than non-users, conflicting time savings reported in the diary study for data analysis.

"PowerPoint slides [were] over 7 minutes faster on average, but to a worse quality and accuracy than non-users." This means corrective action was required.

A cross-section of participants was asked questions in an interview – qualitative findings – and they claimed routine admin tasks could be carried out with greater efficiency with M365 Copilot, letting them "redirect time towards tasks seen as more strategic or of higher value, while others reported using these time savings to attend training sessions or take a lunchtime walk."

Nevertheless, M365 Copilot did not necessarily make them more productive, the assessment found. Quantifying such benefits is something Microsoft has been working on with customers to justify the greater expense of an M365 Copilot license.

"We did not find robust evidence to suggest that time savings are leading to improved productivity," the report says. "However, this was not a key aim of the evaluation and therefore limited data was collected to identify if time savings have led to productivity gains."

And hallucinations? 22 percent of the Department for Business and Trade guinea pigs that responded to the assessors said they did identify hallucinations, 43 percent did not, and 11 percent were unsure.

Users reported mixed experiences with colleagues' attitudes, with some teams embracing their AI-augmented workers while others turned decidedly frosty. Line managers' views appeared to significantly influence adoption rates, proving that office politics remain refreshingly human.

The department is still crunching numbers on environmental costs and value for money, suggesting the full reckoning of AI's corporate invasion remains some way off. An MIT survey published last month, for example, found that 95 percent of companies that had collectively sunk $35-40 billion into generative AI had little to show for it.

For now, it seems M365 Copilot excels at the mundane while stumbling over the complex – an apt summary of GenAI in 2024.


Original Submission

posted by janrinok on Friday September 05, @05:13PM

'Doomer science fiction': Nvidia blasts proposed US bill that would force it to give American buyers 'first option' in AI GPU purchases before selling chips to other countries, including allies:

"The AI Diffusion Rule was a self-defeating policy, based on doomer science fiction, and should not be revived. Our sales to customers worldwide do not deprive U.S. customers of anything — and in fact expand the market for many U.S. businesses and industries. The pundits feeding fake news to Congress about chip supply are attempting to overturn President Trump's AI Action Plan and surrender America's chance to lead in AI and computing worldwide."

Original Article

The U.S. Senate on Tuesday unveiled a preliminary version of the annual defense policy package that includes a requirement for American developers of AI processors to prioritize domestic orders for high-performance AI processors before supplying overseas buyers, and it explicitly calls for denying exports of the highest-end AI GPUs. The legislators call their initiative the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2025 (GAIN AI Act), and their goal is to ensure that American 'small businesses, start-ups, and universities' can lay their hands on the latest AI GPUs from AMD, Nvidia, and others before clients in other countries. However, if the bill becomes law, it will hit American companies hard.

"Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade," said Brad Carson, president of Americans for Responsible Innovation (ARI). "Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth. As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security."

The GAIN AI Act requires developers of AI processors, such as AMD and Nvidia, to give U.S. buyers the first opportunity to purchase advanced AI hardware before selling to foreign nations, including allies like European countries and the U.K. as well as adversaries like China. To do so, the Act proposes to establish export controls on all 'advanced' GPUs (more on this later) shipped outside of the U.S. and to deny export licenses for the 'most powerful chips.'

To get an export license, the exporter must certify certain conditions:

  • U.S. customers were given the right of first refusal;
  • There is no backlog of pending U.S. orders;
  • The intended export will not cause stock delays or reduce manufacturing capacity for U.S. purchasers;
  • Pricing or contract terms being offered do not favor foreign recipients over U.S. customers;
  • The export will not be used by foreign entities to undercut U.S. competitors outside of their domestic market.

If one of the certifications is missing, the export request must be denied, according to the proposal.

What is perhaps no less important about the act is that it sets precise criteria for what U.S. legislators consider an 'advanced integrated circuit,' or advanced AI GPU. To qualify as 'advanced', a processor must meet any one of the following criteria (a minimal sketch of these tests in code follows the list):

  • Offers a total processing performance (TPP) score of 2400 or higher, where TPP is processing performance in TFLOPS (or TOPS) multiplied by the bit length of the operation (e.g., 8/16/32/64 bits), without sparsity. Processors with a TPP of 4800 or higher are considered too powerful to be exported regardless of destination country.
  • Offers a performance density (PD) of over 3.2, where PD is TPP divided by the die area measured in square millimeters.
  • Has DRAM bandwidth of over 1.4 TB/s, interconnect bandwidth of over 1.1 TB/s, or combined DRAM and interconnect bandwidth of over 1.7 TB/s.
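
For concreteness, here is a minimal Python sketch of those three tests; the helper names are hypothetical, and the example figures are approximate public H100 specs rather than numbers from the bill:

    # Illustrative sketch of the bill's 'advanced' test as described above
    def tpp(tflops: float, bits: int) -> float:
        # Total processing performance: dense TFLOPS/TOPS times operation bit length
        return tflops * bits

    def is_advanced(tflops, bits, die_mm2, dram_tb_s, interconnect_tb_s):
        score = tpp(tflops, bits)
        density = score / die_mm2  # performance density (PD)
        return (score >= 2400 or density > 3.2
                or dram_tb_s > 1.4 or interconnect_tb_s > 1.1
                or dram_tb_s + interconnect_tb_s > 1.7)

    # Nvidia H100: ~1,000 dense FP16 TFLOPS -> TPP ~ 16,000, far above both the
    # 2,400 'advanced' threshold and the 4,800 export-denial threshold
    print(tpp(1000, 16))                                                        # 16000.0
    print(is_advanced(1000, 16, die_mm2=814, dram_tb_s=3.35, interconnect_tb_s=0.9))  # True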

Essentially, the legislators plan to export-control all fairly sophisticated processors, including Nvidia's HGX H20 (because of its high memory bandwidth) and L2 PCIe (because of its high performance density), which are now about two years old. As a result, if the proposed bill is passed into law and signed by the President, it will again restrict sales of AMD's Instinct MI308 and Nvidia's HGX H20 to all customers outside of the U.S. Furthermore, GPUs with a TPP of 4800 or higher will be prohibited from export, so Nvidia will be unable to sell its H100 and more advanced GPUs outside of the U.S., as even the H100 has a TPP score of 16,000 (the B300 has a TPP score of 60,000).

Coincidentally, Nvidia on Wednesday issued a statement claiming that shipments of its H20 to customers in China do not affect its ability to serve clients in the U.S.

"The rumor that H20 reduced our supply of either H100/H200 or Blackwell is also categorically false — selling H20 has no impact on our ability to supply other NVIDIA products," the statement reads.


Original Submission

posted by hubie on Friday September 05, @12:28PM

Ice generates electricity when it gets stressed in a very specific way, new research suggests:

Don't mess with ice. When it's stressed, ice can get seriously sparky.

Scientists have discovered that ordinary ice—the same substance found in iced coffee or the frosty sprinkle on mountaintops—is imbued with remarkable electromechanical properties. Ice is flexoelectric, so when it's bent, stretched, or twisted, it can generate electricity, according to a Nature Physics paper published August 27. What's more, ice's peculiar electric properties appear to change with temperature, leading researchers to wonder what else it's hiding.

[...] An unsolved mystery in molecular chemistry is why the structure of ice prevents it from being piezoelectric. By piezoelectricity, scientists refer to the generation of an electric charge when mechanical stress changes a solid's overall polarity, or electric dipole moment.

The water molecules that make up an ice crystal are polarized. But when these individual molecules organize into a hexagonal crystal, the geometric arrangement randomly orients the dipoles of these water molecules. As a result, the final system can't generate any piezoelectricity.

However, it's well known that ice can naturally generate electricity, an example being how lightning strikes emerge from the collisions between charged ice particles. Because ice doesn't appear to be piezoelectric, scientists were confused as to how the ice particles became charged in the first place.

"Despite the ongoing interest and large body of knowledge on ice, new phases and anomalous properties continue to be discovered," the researchers noted in the paper, adding that this unsatisfactory knowledge gap suggests "our understanding of this ubiquitous material is incomplete."

[...] For the experiment, they placed a slab of ice between two electrodes while simultaneously confirming that any electricity produced wasn't piezoelectric. To their excitement, bending the ice slab created an electric charge, and at all temperatures, too. What they didn't expect, however, was a thin ferroelectric layer that formed at the ice slab surface below -171.4 degrees Fahrenheit (-113 degrees Celsius).

"This means that the ice surface can develop a natural electric polarization, which can be reversed when an external electric field is applied—similar to how the poles of a magnet can be flipped," Wen explained in a statement.

Surprisingly, "ice may have not just one way to generate electricity but two: ferroelectricity at very low temperatures and flexoelectricity at higher temperatures all the way to 0 [degrees C]," Wen added.

The finding is both useful and informative, the researchers said. First, the "flip" between flexoelectricity and ferroelectricity puts ice "on par with electroceramic materials such as titanium dioxide, which are currently used in advanced technologies like sensors and capacitors," they noted.

Perhaps more apparent is the finding's connection to natural phenomena, namely thunderstorms. According to the paper, the electric potential generated from flexoelectricity in the experiment closely matched that of the energy produced by colliding ice particles. At the very least, it would make sense for flexoelectricity to be partly involved in how ice particles interact inside thunderclouds.

"With this new knowledge of ice, we will revisit ice-related processes in nature to find if there is any other profound consequence of ice flexoelectricity that has been overlooked all the way," Wen told Gizmodo.

Both conclusions will need further scrutiny, the researchers admitted. Nevertheless, the findings offer illuminating new insight into something as common as ice—and demonstrate how much there's still to be learned about our world.

Journal Reference: Wen, X. et al. Flexoelectricity and surface ferroelectricity of water ice. Nature Physics (2025).


Original Submission

posted by hubie on Friday September 05, @07:47AM

Google's penalty for being a search monopoly does not include selling Chrome:

Google has avoided the worst-case scenario in the pivotal search antitrust case brought by the US Department of Justice. DC District Court Judge Amit Mehta has ruled that Google doesn't have to give up the Chrome browser to mitigate its illegal monopoly in online search. The court will only require a handful of modest behavioral remedies, forcing Google to release some search data to competitors and limit its ability to make exclusive distribution deals.

More than a year ago, the Department of Justice (DOJ) secured a major victory when Google was found to have violated the Sherman Antitrust Act. The remedy phase took place earlier this year, with the DOJ calling for Google to divest the market-leading Chrome browser. That was the most notable element of the government's proposed remedies, but it also wanted to explore a spin-off of Android, force Google to share search technology, and severely limit the distribution deals Google is permitted to sign.

Mehta has decided on a much narrower set of remedies. While there will be some changes to search distribution, Google gets to hold onto Chrome. The government contended that Google's dominance in Chrome was key to its search lock-in, but Google claimed no other company could hope to operate Chrome and Chromium like it does. Mehta has decided that Google's use of Chrome as a vehicle for search is not illegal in itself, though. "Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints," the ruling reads.

Google's proposed remedies were, unsurprisingly, much more modest. Google fully opposed the government's Chrome penalties, but it was willing to accept some limits to its search deals and allow Android OEMs to choose app preloads. That's essentially what Mehta has ruled. Under the court's ruling, Google will still be permitted to pay for search placement—those multi-billion-dollar arrangements with Apple and Mozilla can continue. However, Google cannot require any of its partners to distribute Search, Chrome, Google Assistant, or Gemini. That means Google cannot, for example, make access to the Play Store contingent on bundling its other apps on phones.

There is one spot of good news for Google's competitors. The court has ruled that Google will have to provide some search index data and user metrics to "qualified competitors." This could help alternative search engines improve their service despite Google's significant data lead.

While this ruling is a pretty clear win for Google, it still technically lost the case. Google isn't going to just accept the "monopolist" label, though. The company previously said it planned to appeal the case, and now it has that option.

The court's remedies are supposed to be enforced by a technical committee, which will oversee the company's operations for six years. The order says that the group must be set up within 60 days. However, Google will most likely ask to pause the order while it pursues an appeal. It did something similar with the Google Play case brought by Epic Games, but it just lost that appeal.

With the high likelihood of an appeal, it's possible Google won't have to make any changes for years—if ever. If the company chooses, it could take the case to the US Supreme Court. If a higher court overturns the verdict, Google could go back to business as usual, avoiding even the very narrow remedies chosen by Mehta.

For a slightly different viewpoint see also A let-off or tougher than it looks? What the Google monopoly ruling means [JR]


Original Submission

posted by hubie on Friday September 05, @03:03AM

But TSMC vows to continue making chips on the mainland:

The U.S. has decided to revoke its special allowance for TSMC to export advanced chipmaking tools from the U.S. to its Fab 16 in Nanjing, China, by the end of the year. The decision will force the company's American suppliers to get individual government approvals for future shipments. If approvals are not granted on time, this could affect the plant's operations.

Until now, TSMC benefited from a general approval system — enabled by its validated end-user (VEU) status with the U.S. government — that allowed routine shipments of tools produced by American companies like Applied Materials, KLA, and Lam Research without delay. Once the rule change takes effect, any covered tool, spare part, or chemical sent to the site will need to pass a separate U.S. export review, which will be made with a presumption of denial.

"TSMC has received notification from the U.S. government that our VEU authorization for TSMC Nanjing will be revoked effective December 31, 2025," a statement by TSMC sent to Tom's Hardware reads. "While we are evaluating the situation and taking appropriate measures, including communicating with the U.S. government, we remain fully committed to ensuring the uninterrupted operation of TSMC Nanjing."

TSMC currently operates two fabs in China: a 200-mm Fab 10 in Shanghai, and a 300-mm Fab 16 in Nanjing. The 200-mm fab produces chips on legacy process technologies (such as 150nm and less advanced) and flies below the U.S. government's radar. By contrast, the 300-mm semiconductor production facility makes a variety of chips (e.g., automotive chips, 5G RF components, consumer SoCs, etc.) on TSMC's 12nm FinFET, 16nm FinFET, and 28nm-class production nodes. Logic technologies of 16nm and below are restricted by the U.S. government even though they debuted about 10 years ago.

[...] One of the ways for TSMC to keep its Fab 16 in Nanjing running without U.S. equipment is to replace some of the tools it imports from the U.S. with similar equipment produced in China. However, it is unclear whether it is possible, particularly for lithography.

[...] In a normal situation, TSMC would likely resist such disruption, especially for legacy nodes meant to be cost-effective, but it may be forced to switch at least some of its tools even though it cannot fully replace American and European tools at its Fab 16 in China.

[...] If TSMC is forced to halt or drastically reduce output at its Nanjing Fab 16, the ripple effects would be favorable to Chinese foundries like SMIC and Hua Hong, as China-based customers will have to reallocate their production to SMIC (which offers 14nm and 28nm) or Hua Hong (which has a 28nm node), boosting their utilization and balance sheets (assuming, of course, they have enough capacity).

Furthermore, a forced TSMC slowdown in China will validate the People's Republic's push for semiconductor self-sufficiency, which could mean increased subsidies for chipmakers and tool makers.


Original Submission

posted by hubie on Thursday September 04, @10:16PM

Think twice before you put anything in a form—even if it looks legit:

One of the lesser-known apps in the Google Drive online suite is Google Forms. It's an easy, intuitive way to create a web form for other people to enter information into. You can use it for employee surveys, for organizing social gatherings, for giving people a way to contact you, and much more. But Google Forms can also be used for malicious purposes.

These forms can be created in minutes, with clean and clear formatting, official-looking images and video, and—most importantly of all—a genuine Google Docs URL that your web browser will see no problem with. Scammers can then use these authentic-looking forms to ask for payment details or login information.

It's a type of scam that continues to spread, with Google itself issuing a warning about the issue in February. Students and staff at Stanford University were among those targeted with a Google Forms link that asked for login details for the academic portal there, and the attack beat standard email malware protection.

These scams can take a variety of guises, but they'll typically start with a phishing email that will try to trick you into believing it's an official and genuine communication. It might be designed to look like it's from a colleague, an administrator, or someone from a reputable organization.

[...] Even worse, the instigating email might actually come from a legitimate email address, if someone in your social circle, family, or office has had their account hijacked. In this case you wouldn't be able to run the usual checks on the sender identity and email address, because everything would look genuine—though the wording and style would be off.

This email (or perhaps a direct message on social media) will be used to deliver a Google Forms link, which is the second half of the scam. This form will most often be set up to look genuine, and may be trying to spoof a recognized site like your place of work or your bank. The form might prompt you for sensitive information, offer up a link to malware, or feature a phone number or email address to lead you into further trouble.

[...] The same set of common sense measures is usually enough to keep yourself safe against most scams, including this one. Be wary of any unexpected communications, like unusual requests from friends or password reset processes you haven't triggered yourself. If you're unsure, check with the sender of the email (be it your bank or your boss) by calling them, rather than relying on what's said in the email.

In general, you shouldn't be entering any login information or payment details into a Google Forms document (it will start with docs.google.com in your browser's address bar). These forms may look reasonably well presented, but they'll lack any advanced formatting or layouts, and will feature Submit and Clear form buttons at the bottom.

Google Forms should also have either a "never submit passwords" or "content is neither created nor endorsed by Google" message at the bottom, depending on how the form has been set up. These are all signs you can look for, even if the link to the form appears to have come from a trusted source. And if you're being asked for important information, then get in touch with that source directly.

All forms created through Google Forms have a Report button at the bottom you can use if you think you've spotted a scam. If you've already submitted information before realizing what's going on, the standard safeguarding measures apply: Change your passwords as soon as you can, and notify the operator of any account that may have been compromised.

Even just knowing that this kind of scam exists is a step toward being better protected. As always, keeping your mobile and desktop software up-to-date helps too. This won't necessarily flag a suspect form, for the reasons we've already mentioned, but it should mean that any malicious links you're directed to are recognized.


Original Submission

posted by hubie on Thursday September 04, @05:29PM
from the vendor-lock-in dept.

Andrew Eikum has updated his blog post on passkeys. The revised title, Passkeys are incompatible with open-source software (was: "Passkey marketing is lying to you"), says it all.

Update: After reading more of the spec authors’ comments on open-source Passkey implementations, I cannot support this tech. In addition to what I covered at the bottom of this blog post, I found more instances where the spec authors have expressed positions that are incompatible with open-source software and user freedom:

When required, the authenticator must perform user verification (PIN, biometric, or some other unlock mechanism). If this is not possible, the authenticator should not handle the request.

This implementation is not spec compliant and has the potential to be blocked by relying parties.

Then you should require its use when passkeys are enabled … [You may be blocked because] you have a passkey provider that is known to not be spec compliant.

I suspect we’ll see [biometrics] required by regulation in some geo-regions.

I’ll leave the rest of the blog post as it was below, but I no longer think Passkeys are an acceptable technology. The spec authors’ statements, refusal to have a public discussion about the issues, and Passkey’s marketing, have all shown this tech is intended to support lock-in to proprietary software. While open source implementations are allowed for now, attestation provides a backdoor to lock the protocol down only to blessed implementations.

So long as the Passkey spec provides the attestation anti-feature, Passkeys are not an acceptable authentication mechanism. As a result, I’ve deleted the Passkeys I set up below in order to avoid increasing their adoption statistics.

Passkeys are cryptographic credentials marketed as operating through locally executed programs to provide authentication for remote systems and services. They are sometimes additionally tied to biometrics or hardware tokens. The jury is still out as to whether they actually improve security, or will merely continue as another vehicle for vendor lock-in. It's looking more like the latter.

Previously:
(2024) Why Passwords Still Rock


Original Submission

posted by jelizondo on Thursday September 04, @12:44PM
from the If-it-bleeds-it-leads dept.

People of a certain age remember when artificial blood was all the rage. Only problem was, its recipients kept dying at a pesky rate. But now the Department of Defense is taking it seriously.

Blood runs through every human body. And yet there's still not enough of it. For one, not enough people are donating it. But it's also really hard to store, and it takes very special conditions to keep it healthy.

But there's a potential solution: an artificial version that wouldn't need to be treated quite as gently or refrigerated. The Department of Defense recently granted $46 million to the group responsible for the development of a synthetic blood called ErythroMer.

"If this synthetic blood substitute works, it could be absolutely game-changing because it can be freeze-dried, it can be reconstituted on demand, and it's universal," journalist Nicola Twilley says. It would save many lives: As she reported for the New Yorker, 30,000 preventable deaths occur each year because people didn't get blood in time.

See also: Japanese Scientists Develop Artificial Blood


Original Submission

posted by jelizondo on Thursday September 04, @07:57AM
from the I-love-mystery-shoppers dept.

TechSpot published an interesting article about NVIDIA revenue:

While Nvidia's soaring revenues continue to command attention, its heavy reliance on a small group of clients introduces both opportunity and uncertainty. Market observers remain watchful for greater clarity around customer composition and future cloud spending, as these factors increasingly shape forecasts for the chipmaker's next phase of growth.

A new financial filing from Nvidia revealed that just two customers were responsible for 39 percent of the company's revenue during its July quarter – a concentration that is drawing renewed scrutiny from analysts and investors alike. According to documents submitted to the Securities and Exchange Commission, "Customer A" accounted for 23 percent of Nvidia's total revenue, while "Customer B" represented 16 percent.

This level of revenue concentration is significantly higher than in the same period one year ago, when Nvidia's top two customers contributed 14 percent and 11 percent, respectively.

Nvidia routinely discloses its leading customers on a quarterly basis. However, these latest numbers have prompted a fresh discussion about whether Nvidia's growth trajectory is heavily dependent on a small group of enormous buyers, particularly large cloud service providers.


Original Submission

posted by jelizondo on Thursday September 04, @03:14AM
from the that's-just-what-they-want-you-to-believe dept.

People who believe in conspiracy theories process information differently at the neural level:

A new brain imaging study published in Scientific Reports provides evidence that conspiracy beliefs are linked to distinct patterns of brain activity when people evaluate information. The research indicates that people who score high on conspiracy belief scales tend to engage different cognitive systems when reading conspiracy-related statements compared to factual ones. These individuals relied more heavily on regions associated with subjective value and belief uncertainty.

"Our motivation for this study came from a striking gap in the literature. While conspiracy theories have a profound impact on society—from shaping political engagement to influencing health decisions—we still know very little about how the brain processes such beliefs," said study author Shuguang Zhao of the Research Center of Journalism and Social Development at Renmin University of China.

"Previous research has focused on general belief processing, but the specific neural mechanisms that sustain conspiracy thinking remained unclear. The COVID-19 pandemic made this gap even more urgent. During this global crisis, conspiracy theories about the virus—whether it was deliberately created, or that vaccines were part of hidden agendas—spread rapidly across digital platforms."

"Such crises tend to heighten uncertainty and fear, creating fertile ground for conspiracy narratives," Zhao continued. "Observing how these beliefs flourished in real time underscored the importance of understanding not just what people believe, but how their brains evaluate and sustain these beliefs when confronted with threatening or ambiguous information."

[...] Inside the MRI scanner, participants were shown 72 statements formatted to resemble posts from Weibo, a popular Chinese social media platform. Half of the statements reflected conspiracy theories, and the other half presented factual information from state media sources. Each post was presented both visually and through audio recordings to control reading speed. After each statement, participants rated how much they believed the information using a seven-point scale ranging from "not at all" to "completely."

The researchers used high-resolution brain imaging to record blood oxygen level-dependent (BOLD) signals throughout the task. This method allows scientists to infer which brain regions are more active based on changes in blood flow. They then analyzed the data to compare brain activity during the evaluation of conspiracy versus factual information across the two belief groups.

At the behavioral level, both high- and low-belief participants were more likely to believe factual statements than conspiratorial ones. However, high-belief individuals were significantly more likely to endorse conspiracy-related content compared to their low-belief counterparts. Notably, the groups did not differ significantly in their belief ratings for factual information, suggesting that the belief gap was specific to conspiracy-related content.

"One surprising finding was that individuals with high conspiracy beliefs did not differ from low-belief individuals when evaluating factual information," Zhao told psyPost. "Their bias only appeared with conspiracy-related content. This suggests that conspiracy beliefs are not about being generally gullible, but rather reflect a selective way of processing information that aligns with conspiratorial worldviews."

[...] "The most important takeaway is that conspiracy beliefs are not simply a matter of being more gullible or less intelligent," Zhao said. "Instead, our results show that people with high levels of conspiracy belief process information through different neural pathways compared to those with low levels."

"At the behavioral level, we found that high-belief individuals were more likely to endorse conspiracy-related statements, but they evaluated factual information just as accurately as low belief individuals. This suggests that conspiracy thinking is a selective bias—it shapes how people respond to conspiratorial content, rather than causing a general inability to recognize facts."

[...] But the study, like all research, includes some limitations. The sample size was relatively small, and all participants were native Chinese speakers recruited from a university setting. This raises questions about how well the findings would apply to other cultural contexts, particularly in societies with different media landscapes or conspiracy traditions. Additionally, the use of Weibo-style posts added ecological validity within China, but results may differ in other social media formats or in more interactive communication settings.

"Our next step is to move beyond conspiracy beliefs and examine how people process misinformation more broadly, especially in the context of rapidly advancing AI," Zhao explained. "As AI systems are increasingly capable of generating large volumes of content, including plausible but misleading explanations, it is crucial to understand how such information affects trust, reasoning, and decision-making."

"In the long run, our goal is to uncover the cognitive and neural mechanisms that make people vulnerable to AI-driven misinformation, and to explore how labeling, explanation style, or source cues might mitigate these risks. By doing so, we hope to contribute to strategies that help individuals navigate an information environment where truth and falsehood are becoming harder to distinguish."

Journal Reference: Zhao, S., Wang, T. & Xiong, B. Neural correlates of conspiracy beliefs during information evaluation. Sci Rep 15, 18375 (2025). https://doi.org/10.1038/s41598-025-03723-z


Original Submission