

posted by janrinok on Friday September 05, @09:57PM   Printer-friendly

AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex:

A UK government department's three-month trial of Microsoft's M365 Copilot has revealed no discernible gain in productivity – speeding up some tasks yet making others slower due to lower quality outputs.

The Department for Business and Trade received 1,000 licenses for use between October and December 2024, with the majority of these allocated to volunteers and 30 percent to randomly selected participants. Some 300 of these people consented to their data being analyzed.

The assessment then evaluated time savings, quality assurance, and productivity.

Overall, 72 percent of users were satisfied or very satisfied with their digital assistant and voiced disappointment when the test ended. However, the reality of productivity gains was more nuanced than Microsoft's marketing materials might suggest.

Around two-thirds of the employees in the trial used M365 Copilot at least once a week, and 30 percent used it at least once a day – which doesn't sound like great value for money.

In the UK, commercial prices range from £4.90 per user per month to £18.10, depending on business plan. This means that across a government department, those expenses could quickly mount up.

According to the M365 Copilot monitoring dashboard made available in the trial, an average of 72 M365 Copilot actions were taken per user.

"Based on there being 63 working days during the pilot, this is an average of 1.14 M365 Copilot actions taken per user per day," the study says. Word, Teams, and Outlook were the most used, and Loop and OneNote usage rates were described as "very low," less than 1 percent and 3 percent per day, respectively.
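The study's usage figure, and the cost question it raises, can be checked with quick arithmetic. A minimal sketch (the license count and UK list prices are the ones quoted above; the monthly totals are illustrative, not figures from the study):

```python
# Reproduce the trial's per-day usage figure and sketch the license cost range.
actions_per_user = 72      # average M365 Copilot actions per user over the pilot
working_days = 63          # working days during the pilot

per_day = actions_per_user / working_days
print(f"{per_day:.2f} M365 Copilot actions per user per day")  # 1.14

# Illustrative monthly cost across a 1,000-license department,
# at the quoted UK commercial prices per user per month.
licenses = 1000
low, high = 4.90, 18.10
print(f"Monthly license cost range: GBP {licenses * low:,.0f} to GBP {licenses * high:,.0f}")
```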

"PowerPoint and Excel were slightly more popular; both experienced peak activity of 7 percent of license holders using M365 Copilot in a single day within those applications," the study states.

The three most popular tasks involved transcribing or summarizing a meeting, writing an email, and summarizing written comms. These also had the highest satisfaction levels, we're told.

Participants were asked to record the time taken for each task with M365 Copilot compared to colleagues not involved in the trial. The assessment report adds: "Observed task sessions showed that M365 Copilot users produced summaries of reports and wrote emails faster and to a higher quality and accuracy than non-users. Time savings observed for writing emails were extremely small.

"However, M365 Copilot users completed Excel data analysis more slowly and to a worse quality and accuracy than non-users, conflicting with time savings reported in the diary study for data analysis.

"PowerPoint slides [were] over 7 minutes faster on average, but to a worse quality and accuracy than non-users." This means corrective action was required.

A cross-section of participants was asked questions in an interview – qualitative findings – and they claimed routine admin tasks could be carried out with greater efficiency with M365 Copilot, letting them "redirect time towards tasks seen as more strategic or of higher value, while others reported using these time savings to attend training sessions or take a lunchtime walk."

Nevertheless, M365 Copilot did not necessarily make them more productive, the assessment found. Microsoft has worked with customers to quantify such benefits and justify the greater expense of an M365 Copilot license.

"We did not find robust evidence to suggest that time savings are leading to improved productivity," the report says. "However, this was not a key aim of the evaluation and therefore limited data was collected to identify if time savings have led to productivity gains."

And hallucinations? 22 percent of the Department for Business and Trade guinea pigs that responded to the assessors said they did identify hallucinations, 43 percent did not, and 11 percent were unsure.

Users reported mixed experiences with colleagues' attitudes, with some teams embracing their AI-augmented workers while others turned decidedly frosty. Line managers' views appeared to significantly influence adoption rates, proving that office politics remain refreshingly human.

The department is still crunching numbers on environmental costs and value for money, suggesting the full reckoning of AI's corporate invasion remains some way off. An MIT survey published last month, for example, found that 95 percent of companies that had collectively sunk $35-40 billion into generative AI had little to show for it.

For now, it seems M365 Copilot excels at the mundane while stumbling over the complex – an apt summary of GenAI in 2024.


Original Submission

posted by janrinok on Friday September 05, @05:13PM   Printer-friendly

'Doomer science fiction': Nvidia blasts proposed US bill that would force it to give American buyers 'first option' in AI GPU purchases before selling chips to other countries, including allies:

"The AI Diffusion Rule was a self-defeating policy, based on doomer science fiction, and should not be revived. Our sales to customers worldwide do not deprive U.S. customers of anything — and in fact expand the market for many U.S. businesses and industries. The pundits feeding fake news to Congress about chip supply are attempting to overturn President Trump's AI Action Plan and surrender America's chance to lead in AI and computing worldwide."

Original Article

The U.S. Senate on Tuesday unveiled a preliminary version of the annual defense policy package that includes a requirement for American developers of AI processors to prioritize domestic orders for high-performance AI processors before supplying them to overseas buyers, and it explicitly calls to deny exports of the highest-end AI GPUs. The legislators call their initiative the Guaranteeing Access and Innovation for National Artificial Intelligence Act of 2025 (GAIN AI Act), and their goal is to ensure that American 'small businesses, start-ups, and universities' can lay their hands on the latest AI GPUs from AMD, Nvidia, and others before clients in other countries. However, if the bill becomes law, it will hit American companies hard.

"Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade," said Brad Carson, president of Americans for Responsible Innovation (ARI). "Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth. As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security."

The GAIN AI Act requires developers of AI processors, such as AMD and Nvidia, to give U.S. buyers the first opportunity to purchase advanced AI hardware before selling to foreign nations, including allies like European countries and the U.K. as well as adversaries like China. To do so, the Act proposes to establish export controls on all 'advanced' GPUs (more on this later) shipped outside of the U.S. and to deny export licenses for the 'most powerful chips.'

To get the export license, the exporter must certify that certain conditions are met:

  • U.S. customers received a right to decline first;
  • There is no backlog of pending U.S. orders;
  • The intended export will not cause stock delays or reduce manufacturing capacity for U.S. purchasers;
  • Pricing or contract terms being offered do not favor foreign recipients over U.S. customers;
  • The export will not be used by foreign entities to undercut U.S. competitors outside of their domestic market.

If one of the certifications is missing, the export request must be denied, according to the proposal.

What is perhaps no less important about the act is that it sets precise criteria for what U.S. legislators consider an 'advanced integrated circuit,' or advanced AI GPU. To qualify as 'advanced,' a processor must meet any one of the following criteria:

  • Offers a total processing performance (TPP) score of 2400 or higher, calculated as listed processing power (TFLOPS or TOPS) multiplied by the bit length of the operation (e.g., 8/16/32/64 bits), without sparsity. Processors with a TPP of 4800 or higher are considered too powerful to be exported regardless of destination country.
  • Offers a performance density (PD) metric of over 3.2. PD is calculated by dividing TPP by the die area measured in square millimeters.
  • Has DRAM bandwidth of over 1.4 TB/s, interconnect bandwidth of over 1.1 TB/s, or combined DRAM and interconnect bandwidth of over 1.7 TB/s.
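The criteria above compose as an "any one of" test. Here is a minimal sketch of that logic; the H100 input (~1,000 FP16 TFLOPS without sparsity) is an assumed illustrative figure, not text from the bill:

```python
# Sketch of the GAIN AI Act's 'advanced chip' test as summarized above.
def tpp(tflops_without_sparsity: float, bit_length: int) -> float:
    """Total processing performance: TFLOPS/TOPS times operation bit length."""
    return tflops_without_sparsity * bit_length

def is_advanced(tpp_score: float, perf_density: float,
                dram_bw_tbs: float, interconnect_bw_tbs: float) -> bool:
    """A chip qualifies as 'advanced' if it meets ANY one of the criteria."""
    return (tpp_score >= 2400
            or perf_density > 3.2
            or dram_bw_tbs > 1.4
            or interconnect_bw_tbs > 1.1
            or dram_bw_tbs + interconnect_bw_tbs > 1.7)

EXPORT_BAN_TPP = 4800  # at or above this, export is denied to any destination

# Assumed H100-class figure: ~1,000 FP16 TFLOPS without sparsity.
h100_tpp = tpp(1000, 16)
print(h100_tpp)                    # 16000, matching the score cited in the article
print(h100_tpp >= EXPORT_BAN_TPP)  # True: export prohibited outright
```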

Essentially, the legislators plan to export-control all fairly sophisticated processors, including Nvidia's HGX H20 (because of high memory bandwidth) or L2 PCIe (because of high performance density), which are now about two years old. As a result, if the proposed bill is passed and signed by the President, it will again restrict sales of AMD's Instinct MI308 and Nvidia's HGX H20 to all customers outside of the U.S. Furthermore, GPUs with a TPP of 4800 or higher will be prohibited from export, so Nvidia will be unable to sell its H100 and more advanced GPUs outside the U.S., as even the H100 has a TPP score of 16,000 (the B300's is 60,000).

Coincidentally, Nvidia on Wednesday issued a statement claiming that shipments of its H20 to customers in China do not affect its ability to serve clients in the U.S.

"The rumor that H20 reduced our supply of either H100/H200 or Blackwell is also categorically false — selling H20 has no impact on our ability to supply other NVIDIA products," the statement reads.


Original Submission

posted by hubie on Friday September 05, @12:28PM   Printer-friendly

Ice generates electricity when it gets stressed in a very specific way, new research suggests:

Don't mess with ice. When it's stressed, ice can get seriously sparky.

Scientists have discovered that ordinary ice—the same substance found in iced coffee or the frosty sprinkle on mountaintops—is imbued with remarkable electromechanical properties. Ice is flexoelectric, so when it's bent, stretched, or twisted, it can generate electricity, according to a Nature Physics paper published August 27. What's more, ice's peculiar electric properties appear to change with temperature, leading researchers to wonder what else it's hiding.

[...] An unsolved mystery in molecular chemistry is why the structure of ice prevents it from being piezoelectric. By piezoelectricity, scientists refer to the generation of an electric charge when mechanical stress changes a solid's overall polarity, or electric dipole moment.

The water molecules that make up an ice crystal are polarized. But when these individual molecules organize into a hexagonal crystal, the geometric arrangement randomly orients the dipoles of these water molecules. As a result, the final system can't generate any piezoelectricity.
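The contrast can be written compactly in standard continuum notation (the tensor symbols here are the textbook definitions, not values from the paper): piezoelectric polarization responds to stress itself, while flexoelectric polarization responds to strain gradients, which do not cancel even when the lattice geometry averages the piezoelectric response to zero:

```latex
P_i^{\text{piezo}} = d_{ijk}\,\sigma_{jk},
\qquad
P_i^{\text{flexo}} = \mu_{ijkl}\,\frac{\partial \varepsilon_{jk}}{\partial x_l}
```

where the piezoelectric tensor $d_{ijk}$ vanishes for ordinary hexagonal ice, but the flexoelectric tensor $\mu_{ijkl}$ need not, which is why bending (a strain gradient) can still generate charge.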

However, it's well known that ice can naturally generate electricity, an example being how lightning strikes emerge from the collisions between charged ice particles. Because ice doesn't appear to be piezoelectric, scientists were confused as to how the ice particles became charged in the first place.

"Despite the ongoing interest and large body of knowledge on ice, new phases and anomalous properties continue to be discovered," the researchers noted in the paper, adding that this unsatisfactory knowledge gap suggests "our understanding of this ubiquitous material is incomplete."

[...] For the experiment, they placed a slab of ice between two electrodes while simultaneously confirming that any electricity produced wasn't piezoelectric. To their excitement, bending the ice slab created an electric charge, and at all temperatures, too. What they didn't expect, however, was a thin ferroelectric layer that formed at the ice slab surface below -171.4 degrees Fahrenheit (-113 degrees Celsius).

"This means that the ice surface can develop a natural electric polarization, which can be reversed when an external electric field is applied—similar to how the poles of a magnet can be flipped," Wen explained in a statement.

Surprisingly, "ice may have not just one way to generate electricity but two: ferroelectricity at very low temperatures and flexoelectricity at higher temperatures all the way to 0 [degrees C]," Wen added.

The finding is both useful and informative, the researchers said. First, the "flip" between flexoelectricity and ferroelectricity puts ice "on par with electroceramic materials such as titanium dioxide, which are currently used in advanced technologies like sensors and capacitors," they noted.

Perhaps more apparent is the finding's connection to natural phenomena, namely thunderstorms. According to the paper, the electric potential generated from flexoelectricity in the experiment closely matched that of the energy produced by colliding ice particles. At the very least, it would make sense for flexoelectricity to be partly involved in how ice particles interact inside thunderclouds.

"With this new knowledge of ice, we will revisit ice-related processes in nature to find if there is any other profound consequence of ice flexoelectricity that has been overlooked all the way," Wen told Gizmodo.

Both conclusions will need further scrutiny, the researchers admitted. Nevertheless, the findings offer illuminating new insight into something as common as ice—and demonstrate how much there's still to be learned about our world.

Journal Reference: X. Wen et al., "Flexoelectricity and surface ferroelectricity of water ice", Nature Physics.


Original Submission

posted by hubie on Friday September 05, @07:47AM   Printer-friendly

Google's penalty for being a search monopoly does not include selling Chrome:

Google has avoided the worst-case scenario in the pivotal search antitrust case brought by the US Department of Justice. DC District Court Judge Amit Mehta has ruled that Google doesn't have to give up the Chrome browser to mitigate its illegal monopoly in online search. The court will only require a handful of modest behavioral remedies, forcing Google to release some search data to competitors and limit its ability to make exclusive distribution deals.

More than a year ago, the Department of Justice (DOJ) secured a major victory when Google was found to have violated the Sherman Antitrust Act. The remedy phase took place earlier this year, with the DOJ calling for Google to divest the market-leading Chrome browser. That was the most notable element of the government's proposed remedies, but it also wanted to explore a spin-off of Android, force Google to share search technology, and severely limit the distribution deals Google is permitted to sign.

Mehta has decided on a much narrower set of remedies. While there will be some changes to search distribution, Google gets to hold onto Chrome. The government contended that Google's dominance in Chrome was key to its search lock-in, but Google claimed no other company could hope to operate Chrome and Chromium like it does. Mehta has decided that Google's use of Chrome as a vehicle for search is not illegal in itself, though. "Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints," the ruling reads.

Google's proposed remedies were, unsurprisingly, much more modest. Google fully opposed the government's Chrome penalties, but it was willing to accept some limits to its search deals and allow Android OEMs to choose app preloads. That's essentially what Mehta has ruled. Under the court's ruling, Google will still be permitted to pay for search placement—those multi-billion-dollar arrangements with Apple and Mozilla can continue. However, Google cannot require any of its partners to distribute Search, Chrome, Google Assistant, or Gemini. That means Google cannot, for example, make access to the Play Store contingent on bundling its other apps on phones.

There is one spot of good news for Google's competitors. The court has ruled that Google will have to provide some search index data and user metrics to "qualified competitors." This could help alternative search engines improve their service despite Google's significant data lead.

While this ruling is a pretty clear win for Google, it still technically lost the case. Google isn't going to just accept the "monopolist" label, though. The company previously said it planned to appeal the case, and now it has that option.

The court's remedies are supposed to be enforced by a technical committee, which will oversee the company's operations for six years. The order says that the group must be set up within 60 days. However, Google will most likely ask to pause the order while it pursues an appeal. It did something similar with the Google Play case brought by Epic Games, but it just lost that appeal.

With the high likelihood of an appeal, it's possible Google won't have to make any changes for years—if ever. If the company chooses, it could take the case to the US Supreme Court. If a higher court overturns the verdict, Google could go back to business as usual, avoiding even the very narrow remedies chosen by Mehta.

For a slightly different viewpoint, see also: A let-off or tougher than it looks? What the Google monopoly ruling means [JR]


Original Submission

posted by hubie on Friday September 05, @03:03AM   Printer-friendly

But TSMC vows to continue making chips on the mainland:

The U.S. has decided to revoke its special allowance for TSMC to export advanced chipmaking tools from the U.S. to its Fab 16 in Nanjing, China, by the end of the year. The decision will force the company's American suppliers to get individual government approvals for future shipments. If approvals are not granted on time, this could affect the plant's operations.

Until now, TSMC benefited from a general approval system — enabled by its validated end-user (VEU) status with the U.S. government — that allowed routine shipments of tools produced by American companies like Applied Materials, KLA, and LAM Research without delay. Once the rule change takes effect, any covered tool, spare part, or chemical sent to the site will need to pass a separate U.S. export review, which will be made with a presumption of denial.

"TSMC has received notification from the U.S. government that our VEU authorization for TSMC Nanjing will be revoked effective December 31, 2025," a statement by TSMC sent to Tom's Hardware reads. "While we are evaluating the situation and taking appropriate measures, including communicating with the U.S. government, we remain fully committed to ensuring the uninterrupted operation of TSMC Nanjing."

TSMC currently operates two fabs in China: a 200-mm Fab 10 in Shanghai, and a 300-mm Fab 16 in Nanjing. The 200-mm fab produces chips on legacy process technologies (150nm and less advanced) and flies below the U.S. government's radar. By contrast, the 300-mm facility makes a variety of chips (e.g., automotive chips, 5G RF components, consumer SoCs) on TSMC's 12nm FinFET, 16nm FinFET, and 28nm-class production nodes. Logic technologies of 16nm and below are restricted by the U.S. government even though they debuted about 10 years ago.

[...] One of the ways for TSMC to keep its Fab 16 in Nanjing running without U.S. equipment is to replace some of the tools it imports from the U.S. with similar equipment produced in China. However, it is unclear whether it is possible, particularly for lithography.

[...] In a normal situation, TSMC would likely resist such disruption, especially for legacy nodes meant to be cost-effective, but it may be forced to switch at least some of its tools even despite the fact that it cannot fully replace American and European tools at its Fab 16 in China.

[...] If TSMC is forced to halt or drastically reduce output at its Nanjing Fab 16, the ripple effects would favor Chinese foundries like SMIC and Hua Hong, as China-based customers would have to reallocate their production to SMIC (which offers 14nm and 28nm) or Hua Hong (which has a 28nm node), boosting their utilization and balance sheets (assuming, of course, they have enough capacity).

Furthermore, a forced TSMC slowdown in China will validate the People's Republic's push for semiconductor self-sufficiency, which could mean increased subsidies for chipmakers and tool makers.


Original Submission

posted by hubie on Thursday September 04, @10:16PM   Printer-friendly

Think twice before you put anything in a form—even if it looks legit:

One of the lesser-known apps in the Google Drive online suite is Google Forms. It's an easy, intuitive way to create a web form for other people to enter information into. You can use it for employee surveys, for organizing social gatherings, for giving people a way to contact you, and much more. But Google Forms can also be used for malicious purposes.

These forms can be created in minutes, with clean and clear formatting, official-looking images and video, and—most importantly of all—a genuine Google Docs URL that your web browser will see no problem with. Scammers can then use these authentic-looking forms to ask for payment details or login information.

It's a type of scam that continues to spread, with Google itself issuing a warning about the issue in February. Students and staff at Stanford University were among those targeted with a Google Forms link that asked for login details for the academic portal there, and the attack beat standard email malware protection.

These scams can take a variety of guises, but they'll typically start with a phishing email that will try to trick you into believing it's an official and genuine communication. It might be designed to look like it's from a colleague, an administrator, or someone from a reputable organization.

[...] Even worse, the instigating email might actually come from a legitimate email address, if someone in your social circle, family, or office has had their account hijacked. In this case you wouldn't be able to run the usual checks on the sender identity and email address, because everything would look genuine—though the wording and style would be off.

This email (or perhaps a direct message on social media) will be used to deliver a Google Forms link, which is the second half of the scam. This form will most often be set up to look genuine, and may be trying to spoof a recognized site like your place of work or your bank. The form might prompt you for sensitive information, offer up a link to malware, or feature a phone number or email address to lead you into further trouble.

[...] The same set of common-sense measures is usually enough to keep yourself safe against most scams, including this one. Be wary of any unexpected communications, like unusual requests from friends or password reset processes you haven't triggered yourself. If you're unsure, check with the sender of the email (be it your bank or your boss) by calling them, rather than relying on what's said in the email.

In general, you shouldn't be entering any login information or payment details into a Google Forms document (it will start with docs.google.com in your browser's address bar). These forms may look reasonably well presented, but they'll lack any advanced formatting or layouts, and will feature Submit and Clear form buttons at the bottom.
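The address-bar check described above can be sketched in a few lines. Note the article's caveat: a genuine docs.google.com host is exactly what makes these scams convincing, so this check identifies a real Google Form but says nothing about the form's intent:

```python
from urllib.parse import urlparse

def is_google_forms_host(url: str) -> bool:
    """True if the URL is served from docs.google.com, i.e., a real Google
    Form. Per the article, a real form asking for passwords or payment
    details is itself the warning sign; this is not a safety guarantee."""
    return (urlparse(url).hostname or "") == "docs.google.com"

print(is_google_forms_host("https://docs.google.com/forms/d/e/xyz/viewform"))  # True
print(is_google_forms_host("https://docs.google.com.evil.example/form"))       # False
```

The second case shows why an exact hostname comparison matters: a lookalike domain that merely *starts with* docs.google.com would pass a naive prefix check.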

Google Forms should also have either a "never submit passwords" or "content is neither created nor endorsed by Google" message at the bottom, depending on how the form has been set up. These are all signs you can look for, even if the link to the form appears to have come from a trusted source. And if you're being asked for important information, then get in touch with that source directly.

All forms created through Google Forms have a Report button at the bottom you can use if you think you've spotted a scam. If you've already submitted information before realizing what's going on, the standard safeguarding measures apply: Change your passwords as soon as you can, and tell the operator of any account that may have been compromised that you might need help.

Even just knowing that this kind of scam exists is a step toward being better protected. As always, keeping your mobile and desktop software up-to-date helps too. This won't necessarily flag a suspect form, for the reasons we've already mentioned, but it should mean that any malicious links you're directed to are recognized.


Original Submission

posted by hubie on Thursday September 04, @05:29PM   Printer-friendly
from the vendor-lock-in dept.

Andrew Eikum has updated his blog post on passkeys. The revised title, Passkeys are incompatible with open-source software (was: "Passkey marketing is lying to you"), says it all.

Update: After reading more of the spec authors’ comments on open-source Passkey implementations, I cannot support this tech. In addition to what I covered at the bottom of this blog post, I found more instances where the spec authors have expressed positions that are incompatible with open-source software and user freedom:

When required, the authenticator must perform user verification (PIN, biometric, or some other unlock mechanism). If this is not possible, the authenticator should not handle the request.

This implementation is not spec compliant and has the potential to be blocked by relying parties.

Then you should require its use when passkeys are enabled … [You may be blocked because] you have a passkey provider that is known to not be spec compliant.

I suspect we’ll see [biometrics] required by regulation in some geo-regions.

I’ll leave the rest of the blog post as it was below, but I no longer think Passkeys are an acceptable technology. The spec authors’ statements, refusal to have a public discussion about the issues, and Passkey’s marketing, have all shown this tech is intended to support lock-in to proprietary software. While open source implementations are allowed for now, attestation provides a backdoor to lock the protocol down only to blessed implementations.

So long as the Passkey spec provides the attestation anti-feature, Passkeys are not an acceptable authentication mechanism. As a result, I’ve deleted the Passkeys I set up below in order to avoid increasing their adoption statistics.

Passkeys are cryptographic credentials marketed as operating through locally executed programs to provide authentication for remote systems and services. They are sometimes additionally tied to biometrics or hardware tokens. The jury is still out as to whether they actually improve security, or will merely continue as another vehicle for vendor lock-in. It's looking more like the latter.
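The attestation concern can be made concrete with a toy sketch. This is not real WebAuthn code, and the AAGUIDs below are invented placeholders; a real relying party would verify a signed attestation statement rather than a bare string. But the gatekeeping logic it illustrates is exactly the lock-in risk the post describes:

```python
# Toy illustration of attestation-based gatekeeping: with attestation, a
# relying party can refuse any authenticator not on its allowlist.
# AAGUIDs here are invented placeholders, not real vendor identifiers.
BLESSED_AAGUIDS = {
    "11111111-1111-1111-1111-111111111111",  # hypothetical "blessed" vendor A
    "22222222-2222-2222-2222-222222222222",  # hypothetical "blessed" vendor B
}

def accept_registration(attested_aaguid: str) -> bool:
    """An open-source authenticator that honestly attests an unlisted
    AAGUID is simply refused at registration time."""
    return attested_aaguid in BLESSED_AAGUIDS

print(accept_registration("11111111-1111-1111-1111-111111111111"))  # True
print(accept_registration("33333333-3333-3333-3333-333333333333"))  # False
```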

Previously:
(2024) Why Passwords Still Rock


Original Submission

posted by jelizondo on Thursday September 04, @12:44PM   Printer-friendly
from the If-it-bleeds-it-leads dept.

People of a certain age remember when artificial blood was all the rage. The only problem was, its recipients kept dying at a pesky rate. But now the Department of Defense is taking it seriously.

Blood runs through every human body. And yet there's still not enough of it. For one, not enough people are donating it. But it's also really hard to store, and it takes very special conditions to keep it healthy.

But there's a potential solution: an artificial version that wouldn't need to be treated quite as gently or refrigerated. The Department of Defense recently granted $46 million to the group responsible for the development of a synthetic blood called ErythroMer.

"If this synthetic blood substitute works, it could be absolutely game-changing because it can be freeze-dried, it can be reconstituted on demand, and it's universal," journalist Nicola Twilley says. It would save many lives: As she reported for the New Yorker, 30,000 preventable deaths occur each year because people didn't get blood in time.

See also: Japanese Scientists Develop Artificial Blood


Original Submission

posted by jelizondo on Thursday September 04, @07:57AM   Printer-friendly
from the I-love-mystery-shoppers dept.

TechSpot published an interesting article about NVIDIA revenue:

While Nvidia's soaring revenues continue to command attention, its heavy reliance on a small group of clients introduces both opportunity and uncertainty. Market observers remain watchful for greater clarity around customer composition and future cloud spending, as these factors increasingly shape forecasts for the chipmaker's next phase of growth.

A new financial filing from Nvidia revealed that just two customers were responsible for 39 percent of the company's revenue during its July quarter – a concentration that is drawing renewed scrutiny from analysts and investors alike. According to documents submitted to the Securities and Exchange Commission, "Customer A" accounted for 23 percent of Nvidia's total revenue, while "Customer B" represented 16 percent.

This level of revenue concentration is significantly higher than in the same period one year ago, when Nvidia's top two customers contributed 14 percent and 11 percent, respectively.
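The year-over-year shift is easy to verify from the filing's figures:

```python
# Concentration of Nvidia revenue in its top two customers, per the SEC filing.
customer_a, customer_b = 23, 16   # percent of revenue, July quarter
year_ago_top_two = 14 + 11        # top-two share in the same period a year ago

top_two = customer_a + customer_b
print(top_two)                      # 39 percent of revenue
print(top_two - year_ago_top_two)   # 14-point increase year over year
```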

Nvidia routinely discloses its leading customers on a quarterly basis. However, these latest numbers have prompted a fresh discussion about whether Nvidia's growth trajectory is heavily dependent on a small group of enormous buyers, particularly large cloud service providers.


Original Submission

posted by jelizondo on Thursday September 04, @03:14AM   Printer-friendly
from the that's-just-what-they-want-you-to-believe dept.

People who believe in conspiracy theories process information differently at the neural level:

A new brain imaging study published in Scientific Reports provides evidence that conspiracy beliefs are linked to distinct patterns of brain activity when people evaluate information. The research indicates that people who score high on conspiracy belief scales tend to engage different cognitive systems when reading conspiracy-related statements compared to factual ones. These individuals relied more heavily on regions associated with subjective value and belief uncertainty.

"Our motivation for this study came from a striking gap in the literature. While conspiracy theories have a profound impact on society—from shaping political engagement to influencing health decisions—we still know very little about how the brain processes such beliefs," said study author Shuguang Zhao of the Research Center of Journalism and Social Development at Renmin University of China.

"Previous research has focused on general belief processing, but the specific neural mechanisms that sustain conspiracy thinking remained unclear. The COVID-19 pandemic made this gap even more urgent. During this global crisis, conspiracy theories about the virus—whether it was deliberately created, or that vaccines were part of hidden agendas—spread rapidly across digital platforms."

"Such crises tend to heighten uncertainty and fear, creating fertile ground for conspiracy narratives," Zhao continued. "Observing how these beliefs flourished in real time underscored the importance of understanding not just what people believe, but how their brains evaluate and sustain these beliefs when confronted with threatening or ambiguous information."

[...] Inside the MRI scanner, participants were shown 72 statements formatted to resemble posts from Weibo, a popular Chinese social media platform. Half of the statements reflected conspiracy theories, and the other half presented factual information from state media sources. Each post was presented both visually and through audio recordings to control reading speed. After each statement, participants rated how much they believed the information using a seven-point scale ranging from "not at all" to "completely."

The researchers used high-resolution brain imaging to record blood oxygen level-dependent (BOLD) signals throughout the task. This method allows scientists to infer which brain regions are more active based on changes in blood flow. They then analyzed the data to compare brain activity during the evaluation of conspiracy versus factual information across the two belief groups.

At the behavioral level, both high- and low-belief participants were more likely to believe factual statements than conspiratorial ones. However, high-belief individuals were significantly more likely to endorse conspiracy-related content compared to their low-belief counterparts. Notably, the groups did not differ significantly in their belief ratings for factual information, suggesting that the belief gap was specific to conspiracy-related content.

"One surprising finding was that individuals with high conspiracy beliefs did not differ from low-belief individuals when evaluating factual information," Zhao told PsyPost. "Their bias only appeared with conspiracy-related content. This suggests that conspiracy beliefs are not about being generally gullible, but rather reflect a selective way of processing information that aligns with conspiratorial worldviews."

[...] "The most important takeaway is that conspiracy beliefs are not simply a matter of being more gullible or less intelligent," Zhao said. "Instead, our results show that people with high levels of conspiracy belief process information through different neural pathways compared to those with low levels."

"At the behavioral level, we found that high-belief individuals were more likely to endorse conspiracy-related statements, but they evaluated factual information just as accurately as low-belief individuals. This suggests that conspiracy thinking is a selective bias—it shapes how people respond to conspiratorial content, rather than causing a general inability to recognize facts."

[...] But the study, like all research, includes some limitations. The sample size was relatively small, and all participants were native Chinese speakers recruited from a university setting. This raises questions about how well the findings would apply to other cultural contexts, particularly in societies with different media landscapes or conspiracy traditions. Additionally, the use of Weibo-style posts added ecological validity within China, but results may differ in other social media formats or in more interactive communication settings.

"Our next step is to move beyond conspiracy beliefs and examine how people process misinformation more broadly, especially in the context of rapidly advancing AI," Zhao explained. "As AI systems are increasingly capable of generating large volumes of content, including plausible but misleading explanations, it is crucial to understand how such information affects trust, reasoning, and decision-making."

"In the long run, our goal is to uncover the cognitive and neural mechanisms that make people vulnerable to AI-driven misinformation, and to explore how labeling, explanation style, or source cues might mitigate these risks. By doing so, we hope to contribute to strategies that help individuals navigate an information environment where truth and falsehood are becoming harder to distinguish."

Journal Reference: Zhao, S., Wang, T. & Xiong, B. Neural correlates of conspiracy beliefs during information evaluation. Sci Rep 15, 18375 (2025). https://doi.org/10.1038/s41598-025-03723-z


Original Submission

posted by janrinok on Wednesday September 03, @10:27PM   Printer-friendly

CRLite: Fast, private, and comprehensive certificate revocation checking in Firefox:

Firefox is now the first and only browser to deploy fast and comprehensive certificate revocation checking that does not reveal your browsing activity to anyone (not even to Mozilla).

Tens of millions of TLS server certificates are issued each day to secure communications between browsers and websites. These certificates are the cornerstones of ubiquitous encryption and a key part of our vision for the web. While a certificate can be valid for up to 398 days, it can also be revoked at any point in its lifetime. A revoked certificate poses a serious security risk and should not be trusted to authenticate a server.

Identifying a revoked certificate is difficult because information needs to flow from the certificate's issuer out to each browser. There are basically two ways to handle this. The browser either needs to ask an authority in real time about each certificate that it encounters, or it needs to maintain a frequently-updated list of revoked certificates. Firefox's new mechanism, CRLite, has made the latter strategy feasible for the first time.

With CRLite, Firefox periodically downloads a compact encoding of the set of all revoked certificates that appear in Certificate Transparency logs. Firefox stores this encoding locally, updates it every 12 hours, and queries it privately every time a new TLS connection is created.
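The compact encoding is built from Bloom-filter-style set structures (Mozilla's design actually uses a multi-level cascade; see the linked source for details). As a rough illustration of the core idea, a space-efficient set that answers "is this certificate revoked?" with no false negatives, here is a minimal single-level Bloom filter sketch. The serial numbers and parameters are hypothetical, and the real CRLite format differs:

```python
import hashlib

class BloomFilter:
    """Probabilistic set: no false negatives, rare false positives."""
    def __init__(self, num_bits=4096, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from salted digests (illustrative only;
        # the real encoding uses a different construction).
        for i in range(self.num_hashes):
            digest = hashlib.md5(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Hypothetical revoked certificate serials go into the local filter
revoked = BloomFilter()
for serial in [b"cert:1001", b"cert:2002", b"cert:3003"]:
    revoked.add(serial)
```

A single filter admits occasional false positives; the cascade design layers further filters over those collisions so that lookups come out exact for certificates known from Certificate Transparency logs.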

You may have heard that revocation is broken or that revocation doesn't work. For a long time, the web was stuck with bad tradeoffs between security, privacy, and reliability in this space. That's no longer the case. We enabled CRLite for all Firefox desktop (Windows, Linux, macOS) users starting in Firefox 137, and we have seen that it makes revocation checking functional, reliable, and performant. We are hopeful that we can replicate our success in other, more constrained, environments as well.

There are lots more details in the linked source, but remember that this is a Mozilla document.


Original Submission

posted by janrinok on Wednesday September 03, @05:44PM   Printer-friendly

French provider seizes on Redmond's admission that US law could override local protections:

European cloud provider OVHcloud has long warned about the risks of relying on foreign tech giants for critical infrastructure – especially when it comes to data sovereignty.

Those warnings seemed to gain fresh credibility in June, when Microsoft admitted it could not guarantee that customer data would remain protected from US government access requests.

"They finally told the truth!" says OVHcloud Chief Legal Officer Solange Viegas Dos Reis. "It's not a surprise," she shrugs, "we already knew that." However, "this reply from Microsoft brought kind of a shock for customers, because they suddenly discover that what they have been taught for a while. 'Oh guys, don't worry, it will not apply to you. Don't worry.' It's false! Because, indeed, the data can be communicated."

Anton Carniaux, director of public and legal affairs at Microsoft France, made the admission during a hearing [source in French] in the country. In answer to whether he could guarantee that data on French citizens could not be transmitted to the US government without the explicit agreement of the French authorities, Carniaux replied: "No, I can't guarantee it," but added that the scenario had "never happened before."

[...] The sovereignty problem, however, is difficult to solve. Almost every vendor and commentator appears to have a different idea of what it means. "One of the issues we have is that, as there is no legal definition of sovereignty, everyone has their own idea of what sovereignty is," Viegas Dos Reis says. "It's becoming quite a marketing concept for some."

She states that there are three key concepts: data sovereignty, technical sovereignty, and operational sovereignty.

Data sovereignty is the simplest to define. It involves compliance with the laws where the data resides, rather than the laws of other countries. It also covers the freedom of choice regarding where that data is stored. Additionally, it involves ethics, such as not training LLMs on the data. Finally, it involves keeping the data secure.

"Technical sovereignty," says Viegas Dos Reis, "is about being able, through ensuring interoperability, you can move your data from one provider to another." Data might be stored with one cloud provider, but processed by another.

"So interoperability, reversibility, it's about the control of the infrastructure – datacenters, of course – but telecommunications network as well. It's about the control of the choice of the provider you have with the supply chain you have.

"So you control your supply chain, and that means that you control the risk. When you have a risk in one part of the supply chain, you must be able to change it to adapt."

And finally, there is operational sovereignty. Who will have access to the data? It is not difficult to imagine support personnel looking at screens of data in another country to diagnose an issue and inadvertently blow a hole in the most carefully made sovereignty plans.

[...] Concerns about the dominance of cloud hyperscalers are not new. However, worries about competition in the era of AI and fears surrounding the unpredictability of the US regime have led many customers – not just in Europe – to take a long, hard look at their dependencies.

"The sovereignty pitch starts rising in a lot of countries," says Viegas Dos Reis, "because there is this fear of, 'OK, if I'm not digitally sovereign, I expose myself as a country, as a company, and as an individual as well. I expose myself to pressure from a third party.

[...] That said, Viegas Dos Reis acknowledges that a migration from the hyperscalers would be "a very long and complex project." After all, it can be costly to leave a hyperscaler, and the services of one provider are not necessarily matched by another.

That said, Viegas Dos Reis notes that a slow migration does appear to be underway, where companies are considering which workloads need to be where. Some can stay in the public cloud. Some might be on-premises. Others might opt for a European cloud provider.

"Each company should have a clear strategy on the management of its data and of its dependencies, and each company should map the data, map the needs," says Viegas Dos Reis.

"And depending on this mapping, they will say, 'OK, with this kind of data, no problem. I can put it in a cloud that is not immune to a territorial regulation, but another kind of data. Oh, my God, if this data falls into the hands of a foreign government or a competitor, I will have big, big problems.'"


Original Submission

posted by janrinok on Wednesday September 03, @01:01PM   Printer-friendly

Psypost has published a very interesting article titled Fascinating new psychology research shows how music shapes imagination:

During the COVID-19 pandemic, many people reported turning to music not just for entertainment, but for comfort, support, and even companionship. Now, a new study published in Scientific Reports provides evidence that this sense of "music as company" may be more than a metaphor. Researchers found that music listening can shape mental imagery by increasing the presence of social themes in people's imagined scenes.

The idea that music offers social comfort has been widely reported in surveys and interviews, especially during periods of isolation such as pandemic lockdowns. Listeners often say they use music "to keep them company" or to ease feelings of loneliness. But the extent to which music genuinely prompts social thinking—rather than simply modulating mood—has been unclear. Most prior research has focused on how music affects memory, emotion, or passive mind-wandering. Few studies have examined how music shapes the content of intentional mental imagery, particularly whether it elicits social scenes or interactions.

This distinction is important because directed mental imagery is used in various clinical and therapeutic settings. Techniques such as imagery rescripting or exposure therapy rely on a person's ability to vividly imagine scenarios. If music can reliably shift the content of such imagery toward social themes, it might offer new ways to enhance therapeutic outcomes or support individuals struggling with loneliness.

"There have been many reports of people listening to music to 'keep them company,'" said study author Steffen A. Herff, a Horizon Fellow and leader of the Sydney, Music, Mind, and Body lab at the Sydney Conservatorium of Music at the University of Sydney. "The number of these reports was particularly high during the pandemic isolation periods. But whether this is just a figure of speech, or an actual empirically observable effect of music on social thought was previously unclear, despite its great applicational implications."

To explore this, the researchers designed two experiments involving over 600 participants. In the first experiment, participants were asked to perform a directed imagery task. They watched a brief video clip showing a solitary figure beginning a journey toward a distant mountain, and were then instructed to close their eyes and imagine how the journey continued. During this 90-second imagination phase, they either heard no sound or listened to folk music in Spanish, Italian, or Swedish.

Each song was played in both vocal and instrumental versions, and participants were either fluent or non-fluent in the language of the lyrics. This allowed the researchers to test whether comprehension or vocal presence mattered for the effect. Across the three language groups, 600 participants took part, split evenly between native and non-native speakers of each language.

After each imagination trial, participants described what they imagined and rated aspects such as vividness and emotional tone. These descriptions were then analyzed using a topic modeling technique called Latent Dirichlet Allocation, which allowed the researchers to identify recurring themes across participants' narratives.
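Latent Dirichlet Allocation models each description as a mixture of topics, with each topic a probability distribution over words, and fitting it requires a full probabilistic model. As a much simpler stand-in that conveys the intuition (recurring theme words flag a topic such as social interaction), here is a minimal keyword-based scorer. The keyword list and example sentences are hypothetical, not taken from the study:

```python
from collections import Counter

# Hypothetical "social interaction" vocabulary; the study's themes were
# inferred by LDA from the data, not hand-picked like this.
SOCIAL_WORDS = {"friend", "people", "talk", "together", "meet", "crowd"}

def social_theme_score(description: str) -> float:
    """Fraction of a description's words that belong to the social theme."""
    words = description.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    social = sum(n for w, n in counts.items() if w in SOCIAL_WORDS)
    return social / len(words)

print(social_theme_score("i walked alone toward the mountain"))   # 0.0
print(social_theme_score("people gather to talk and sing together"))
```

Comparing such scores between music and silence conditions mirrors, in miniature, the kind of contrast the researchers drew from the LDA topics.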

The researchers found strong evidence that music had an impact on the characteristics of mental imagery. Compared to silence, music consistently led to more vivid mental scenes, more positive emotional tone, and greater perceived time and distance traveled in the imagined journey.

"The effect of induced social interactions into imagination was much stronger than we originally anticipated," Herff told PsyPost. "The probability of imagination to contain social interactions in our experiment is more than three times higher when participants listen to music, compared to silence."

"Music's ability to increase social imagination works even if you don't understand the lyrics of the song, for example because it is in a different language," Herff said. "In fact, it even works if there are no lyrics at all! Together, this tells us that it's not simply a question of hearing the human voice that is driving this."

One exception occurred with an Italian folk song describing a communal grape harvest, where understanding the lyrics amplified the effect—highlighting how specific lyrical content can enhance music's social influence under certain conditions.

In a second experiment, the researchers used a stable diffusion model to generate images based on participants' written descriptions of their mental imagery. These visualizations allowed for a more intuitive grasp of the differences between imagery during music and silence.

A new group of 60 participants then viewed pairs of these images—one generated from a music condition, the other from silence—and tried to guess which image was imagined while listening to music. Half of these participants completed the task in silence, while the other half listened to the same music that the original participant had heard.

"Interestingly, when a new group of participants was provided with representations of what the initial participants imagined during silence and during music, they could tell which content was previously imagined during music listening, and which was imagined during silence, but only if the new participants also listened to the music," Herff told PsyPost. "This tells us that there is a 'theory of mind' when it comes to music-evoked mental imagery. In other words, you can imagine what someone else might imagine when listening to music."

"I believe our findings provide support for an intuition about imagination, music, and their interaction, that many who explore the topic already have, no matter if they approach it from an empirical, artistic, or philosophical perspective. But where previously we had to rely on our intuition, we now have something more tangible to build upon."

"Ideally, we would have tested a much larger and more diverse set of music, in particular non-western music, and for each of them, included an expert familiar with that given music and culture," Herff noted. "However, further increasing the stimulus set and number of recruited participants would have made this already logistically challenging endeavour unfeasible. But that is certainly something we have our eyes on for the future."

It also remains unclear what specific musical features drive the effect. Is it melody, rhythm, tempo, or cultural associations that make a piece of music more likely to elicit social thought? Answering these questions would help refine music-based interventions in clinical and therapeutic settings.

"At the same time, we are working closely together with the music community to understand the insights and intuitions on how to use music to shape listeners' imagination that already exists in these experts. We hope that our research can contribute to clinical (e.g., cognitive behaviour therapies that use mental imagery techniques), recreational (e.g., roleplay), and artistic applications (e.g., new compositions)."

Journal Reference: Solitary silence and social sounds: music can influence mental imagery, inducing thoughts of social interactions


Original Submission

posted by jelizondo on Wednesday September 03, @10:26AM   Printer-friendly
from the fat-for-thought dept.

We fed people a milkshake with 130g of fat to see what it did to their brains – here's what we learned:

A greasy takeaway may seem like an innocent Friday night indulgence. But our recent research suggests even a single high-fat meal could impair blood flow to the brain, potentially increasing the risk of stroke and dementia.

Dietary fat is an important part of our diet. It provides us with a concentrated source of energy, transports vitamins and, when stored in the body, protects our organs and helps keep us warm. The two main types of fat that we consume are saturated and unsaturated (monounsaturated and polyunsaturated), which are differentiated by their chemical composition.

But these fats have different effects on our body. For example, it is well established that eating a meal that is high in saturated fat, such as that self-indulgent Friday night takeaway pizza, can be bad for our blood vessels and heart health. And these effects are not simply confined to the heart.

The brain has limited energy stores, which means it is heavily reliant on a continuous supply of blood delivering oxygen and glucose to maintain normal function.

One of the ways the body maintains this supply is through a process known as "dynamic cerebral autoregulation". This process ensures that blood flow to the brain remains stable despite everyday changes in blood pressure, such as standing up and exercising. It's like having shock absorbers that help keep our brains cool under pressure.

But when this process is impaired, those swings in blood pressure become harder to manage. That can mean brief episodes of too little or too much blood reaching the brain. Over time, this increases the risk of developing conditions like stroke and dementia.

After eating a meal high in saturated fat, levels of fat in the blood rise and peak after around four hours. At the same time, blood vessels become stiffer and lose their ability to relax and expand. This restricts blood flow around the body. But little is known about what happens to the brain during this time and how well its blood supply is protected.

[...] Our findings confirmed previous research that has shown that a high-fat meal impairs the ability of the blood vessels linked to heart health to open in both young and old participants. These impairments reduced the brain's ability to buffer changes in blood pressure. This was more pronounced (by about 10%) in the older adults, suggesting that older brains may be more vulnerable to the effects of the meal.

Although we didn't directly test for the long-term effects of a high-fat meal on mental functioning in this study, we have previously shown that such a meal increases free radicals (unstable, cell-damaging molecules) and decreases nitric oxide (molecules that help blood vessels relax and open up to transport oxygen and glucose around the body).

This may explain the reduced blood flow regulation we observed in our recent study.

[...] This has important clinical implications. While an occasional takeaway is unlikely to cause harm on its own, our results suggest that even one fatty meal has an immediate effect on the body.

Our study highlights the importance of consuming a diet that is low in saturated fat to protect not only our heart health, but also our brain health. This is particularly important for older adults whose brains appear to be more vulnerable to the effects of such a meal and are already at increased risk of stroke and neurodegenerative diseases.

[...] Public health guidance recommends swapping saturated fats for polyunsaturated ones. These are found in foods like oily fish, walnuts and seeds, which are associated with better heart and brain health over the long term. But we don't yet know how the brain responds to a single meal that is high in polyunsaturated fat.

Nor do we know how the female brain responds to a high-fat meal. This is a crucial gap in our knowledge since women face a greater risk of stroke and dementia in later life compared to men.

Our study offers a timely reminder that diet doesn't just shape our long-term health. It also affects our body and brain in real time. And as we're learning, when it comes to protecting brain health, every meal may count.

Journal Reference: Post-prandial hyperlipidaemia impairs systemic vascular function and dynamic cerebral autoregulation in young and old male adults


Original Submission

posted by janrinok on Wednesday September 03, @08:17AM   Printer-friendly

China turns on giant neutrino detector:

More than a decade after construction began, China has commenced operation of what it claims is the world's most sensitive neutrino detector.

Neutrinos are subatomic particles that have no charge and therefore pass through most matter without leaving any sign of their passing. Physics can't fully explain neutrinos, so scientists are interested in observing them more often to learn more about how they behave.

The Jiangmen Underground Neutrino Experiment (JUNO) is buried 700 meters under a mountain and features a 20,000-tonne "liquid scintillator detector" that China's Academy of Science says is "housed at the center of a 44-meter-deep water pool." There's also a 35.4-meter-diameter acrylic sphere supported by a 41.1-meter-diameter stainless steel truss. All that stuff is surrounded by more than 45,000 photo-multiplier tubes (PMTs).

The latter devices are super-sensitive light detectors. A liquid scintillator is a fluid that, when exposed to ionizing radiation, produces light. At JUNO, the liquid is 99.7 percent alkylbenzene, an ingredient found in detergents and refrigerants.

JUNO's designers hope that any neutrinos that pass through its giant tank bonk a hydrogen atom and produce just enough light that the detector array of PMTs can record their passing, producing data scientists can use to learn more about the particles.

Where will JUNO find neutrinos to observe? The answer lies in the facility's location – a few tens of kilometers away from two nuclear power plants whose reactors produce neutrinos as a byproduct of fission.

The Chinese Academy of Science's Journal of High Energy Physics says [source in Chinese] trials of JUNO succeeded, suggesting it will be able to help scientists understand why some neutrinos are heavier than others so we can begin to classify the different types of the particle – a key goal for the facility. The Journal also reports that scientists from Japan, the United States, Europe, India, and South Korea are either already using JUNO or plan experiments at the facility.

China likes to share some of its scientific achievements. When it retrieved moon rocks, it made sure to show them off and allow some foreign labs to examine them. It looks like Beijing is keen to ensure the world appreciates JUNO's efforts to help us all appreciate neutrinos.


Original Submission