
posted by Fnord666 on Saturday June 14, @03:42PM   Printer-friendly
from the carbon-footprint dept.

The company's chairman insists that going all-in on electric cars is wrong:

Akio Toyoda is a man who speaks his mind. He's been saying for years that forcing everyone to buy EVs isn't the way forward. Toyota's chairman is adamant that the transition can't be rushed and that going all-in on electric vehicles would have massive repercussions across the automotive industry. He believes millions of jobs throughout the supply chain could be at risk if the combustion engine is phased out too quickly. On the environmental front, Toyoda maintains that EVs are still much dirtier than hybrids.

The grandson of Toyota founder Kiichiro Toyoda claims the company has sold around 27 million hybrids since launching the first-generation Prius in 1997. According to him, those hybrids have had the same carbon footprint as nine million fully electric vehicles when adding battery and vehicle production into the equation.

Toyoda argues that a single EV is as dirty as three hybrids. However, while it's true that producing EVs and their batteries creates more carbon emissions than building gas cars, over their life cycles, EVs are responsible for far fewer overall emissions.

From InsideEVs:

The biggest anti-EV argument stems from the emissions generated during the mining, refining and processing of the raw materials used in high-voltage batteries. EV batteries use materials such as lithium, cobalt and nickel that require hazardous, water-intensive mining processes.

So when an EV rolls off a production line, it's already born "dirtier" than the average gas or hybrid vehicle, for now. It comes with a bigger "carbon debt," a term that researchers use to calculate the emissions vehicles gather before even hitting the road.

A research paper published in the scientific journal IOP Science says that gas and hybrid vehicles create six to nine metric tons of carbon dioxide emissions in their manufacturing, depending on the vehicle segment. EVs, on the other hand, generate 11 to 14 metric tons of CO2 emissions before going into the hands of customers.

But that's only part of the story. Once EVs hit the road, they begin paying off that carbon debt and their overall "emissions" start decreasing. Hybrids and gas vehicles, on the other hand, head in the opposite direction, growing their carbon emissions over time. After a certain number of miles, an EV can potentially clear that debt entirely.

How long that takes, exactly, can depend on who you ask. A 2023 Argonne National Laboratory study found that it can take an electric car 19,500 miles to mitigate the emissions made during manufacturing. That's less than two years of typical American driving, according to FactCheck.org. Another study in the journal Nature put that number higher, with carbon reductions beginning around 28,000 miles. Either way, considering how long Americans keep their cars, EVs become the far cleaner option over time.
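
As a rough illustration of how that payoff works, here is a back-of-the-envelope sketch in Python. The manufacturing figures come from the ranges quoted above; the per-mile emission rates are assumptions that depend heavily on the local grid mix and the gas car's fuel economy, so treat the result as a ballpark, not a study.

# Back-of-the-envelope "carbon debt" payoff. Figures are illustrative
# assumptions drawn from the ranges in the article, not from any one study.
EV_MANUFACTURING_T = 12.5       # metric tons CO2 (article range: 11-14 t)
GAS_MANUFACTURING_T = 7.5       # metric tons CO2 (article range: 6-9 t)

# Assumed operating emissions per mile (kg CO2/mile); these vary a lot with
# the electricity grid and the gas car's fuel economy.
EV_PER_MILE_KG = 0.10
GAS_PER_MILE_KG = 0.40

debt_kg = (EV_MANUFACTURING_T - GAS_MANUFACTURING_T) * 1000
breakeven_miles = debt_kg / (GAS_PER_MILE_KG - EV_PER_MILE_KG)
print(f"EV clears its extra manufacturing debt after ~{breakeven_miles:,.0f} miles")
# With these assumptions: roughly 16,700 miles, in the same ballpark as the
# 19,500-28,000 mile figures cited above.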


Original Submission

posted by hubie on Saturday June 14, @10:57AM   Printer-friendly
from the feeling-it-in-your-cheese dept.

Arthur T Knackerbracket has processed the following story:

By affecting cows’ diets, climate change can affect cheese’s nutritional value and sensory traits such as taste, color and texture. This is true at least for Cantal — a firm, unpasteurized cheese from the Auvergne region in central France, researchers report February 20 in the Journal of Dairy Science.

Cows in this region typically graze on local grass. But as climate change causes more severe droughts, some dairy producers are shifting to other feedstocks for their cows, such as corn, to adapt. “Farmers are looking for feed with better yields than grass or that are more resilient to droughts,” but they also want to know how dietary changes affect their products, says animal scientist Matthieu Bouchon.

For almost five months in 2021, Bouchon and colleagues at France’s National Research Institute for Agriculture, Food and Environment tested 40 dairy cows from two different breeds — simulating a drought and supplementing grass with other fodder, largely corn, in varying amounts.

[...] They found that a corn-based diet did not affect milk yield and even led to an estimated reduction in the greenhouse gas methane coming from cows’ belching. But grass-fed cows’ cheese was richer and more savory than that from cows mostly or exclusively fed corn. Grass-based diets also yielded cheese with more heart-healthy omega-3 fatty acids and higher counts of probiotic lactic acid bacteria. The authors suggest that to maintain cheese quality, producers should include fresh vegetation in cows’ fodder when it is based on corn.

Experts not involved with the study point out that warming climates impact cattle physiology as well as feed quality. “Cows produce heat to digest food — so if they are already feeling hot, they’ll eat less to lower their temperature,” says Marina Danes, a dairy scientist at the Federal University of Lavras in Brazil.

[...] “The problem with the study is they increased the starch levels in the feed,” says Marcus Vinícius Couto, technical coordinator at the Central Cooperative of Rural Producers, an association of agricultural producers in Belo Horizonte. Starch is a challenge to digest for the first and largest compartment of a cow’s stomach — the rumen — where food ferments and plant fibers get broken down.

“We’re using feed with controlled starch levels,” as well as fat, hay and cottonseed fibers, to improve the milk’s composition, Couto says.

French producers will possibly need different strategies to fit their environment and cow breeds. But Bouchon is certain of one thing: “If climate change progresses the way it’s going, we’ll feel it in our cheese.”

Journal Reference: M. Bouchon et al. Adaptation strategies to manage summer forage shortages improve animal performance and better maintain milk and cheese quality in grass- versus corn-based dairy systems. Journal of Dairy Science, Volume 108, Issue 5. Published online February 20, 2025. doi: 10.3168/jds.2024-25730


Original Submission

posted by hubie on Saturday June 14, @06:12AM   Printer-friendly

Mistral releases a vibe coding client, Mistral Code:

French AI startup Mistral is releasing its own "vibe coding" client, Mistral Code, to compete with incumbents like Windsurf, Anysphere's Cursor, and GitHub Copilot.

Mistral Code, a fork of the open source project Continue, is an AI-powered coding assistant that bundles Mistral's models, an "in-IDE" assistant, local deployment options, and enterprise tools into a single package. A private beta is available as of Wednesday for JetBrains development platforms and Microsoft's VS Code.

"Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped, on-prem GPUs," Mistral wrote in a blog post provided to TechCrunch.

AI programming assistants are growing increasingly popular. While they still struggle to produce quality code, their promise to boost coding productivity is pushing companies and developers to adopt them rapidly. One recent poll found that 76% of developers had used or were planning to use AI tools in their development processes last year.

Mistral Code is said to be powered by a combination of in-house models including Codestral (for code autocomplete), Codestral Embed (for code search and retrieval), Devstral (for "agentic" coding tasks), and Mistral Medium (for chat assistance). The client supports more than 80 programming languages and a number of third-party plug-ins, and can reason over things like files, terminal outputs, and issues, the company said.

[...] Mistral said it plans to continue making improvements to Mistral Code and contribute at least a portion of those upgrades to the Continue open source project.

Another day, another whatever these things are.


Original Submission

posted by hubie on Saturday June 14, @01:25AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

But it could just be a coincidence...

Just days after a new Washington crackdown on semiconductor design software exports to China, which banned companies like Synopsys from offering their services to clients in the country, access to some vital services appears to have been quietly restored. Notably, the turnabout comes within days of a high-level phone call between President Trump and Xi Jinping, according to Digitimes.

Digitimes reports that following the call, which took place on June 5, there has been a shift in the semiconductor market pertaining to the software used by companies in semiconductor design. Notably, several local Chinese IC design engineers and companies have reported that access to Synopsys' SolvNetPlus platform and Cadence's Support Portal has now been restored.

The report notes that it's unclear at this stage whether the change marks an isolated dispensation for certain clients or a broader relaxing of tension and restrictions between China and the US.

At the end of May, Synopsys paused its sales and services offerings in China and suspended its financial guidance after receiving a letter from the Bureau of Industry and Security (BIS) of the U.S. Department of Commerce. The letter reportedly disclosed "new export restrictions related to China," and further reports claimed Synopsys had told staff to halt services and sales in the country and stop taking new orders to comply.

The EDA ban was expected to hit Chinese companies, notably Xiaomi and Lenovo, hard. Specifically, Chinese companies rely on American software for the manufacture and production of more advanced semiconductors like those used in AI processing. Reports at the time indicated that while China did have some homegrown EDA capacity, it was only "usable" on 7nm nodes and older, a weighty concern for future production.

These concerns appear to have been short-lived, however. Digitimes reports that following the phone call on June 5 (it is unclear if EDA access was specifically discussed), multiple Chinese IC firms reported successfully logging into SolvNetPlus with no issues.

Digitimes cites industry analysts who speculate that the call may have prompted a softer approach from the U.S. in regard to technology export restrictions and a gradual restoration of some services.


Original Submission

posted by hubie on Friday June 13, @08:41PM   Printer-friendly
from the Going-For-DOGE-puns dept.

The House Committee on Oversight and Government Reform recently (June 5) held a hearing on, ahem, Artificial Intelligence, and its usage within the federal government.

We stand at the dawn of an intelligent age, a transformative period rivaling the industrial and nuclear eras, where AI—the new electricity, the engine of global change—is redrawing the very architecture of global power. It is clear that the nation that masters and fully adopts this foundational technology will not only lead but also write the rules for this new epoch. The breathtaking adoption of AI, exemplified by ChatGPT's rapid rise, underscores that for the United States, widespread federal adoption and deployment are not merely options but a strategic imperative essential for national competitiveness, national security, and effective governance.

(First witness, Mr. Yll Bajraktari, Competitive Studies Project.)

Today, AI is fundamentally transforming how work gets done across America's $30 trillion economy. AI solves a universal problem for public and private entities by transforming employee experience, providing instant support, reducing the toil of manual and tedious tasks, and allowing employees to focus on activities and jobs that provide significantly more value to the organization, leading to more efficient and effective organizations.

(Second witness, Mr. Bhavin Shah, Moveworks.)

AI has evolved dramatically in just a few years and today Generative AI holds enormous promise in radically improving the delivery of government services. The meteoric rise of the newest form of Generative AI— Agentic AI— offers the alluring opportunity to use AI for task automation, not just generating on-demand content, like ChatGPT and its rival chatbots. With these rapid developments, the government stands to realize massive cost savings and enormous gains in effectiveness in scores of programs while at the same time preserving the integrity of taxpayer dollars.

(Third witness, Ms. Linda Miller, TrackLight.)

Proposals to regulate AI systems are proliferating rapidly with over 1,000 AI-related bills already introduced just five months into 2025. The vast majority of these are state bills and many of them propose a very top-down, bureaucratic approach to preemptively constraining algorithmic systems. As these mandates expand they will significantly raise the cost of deploying advanced AI systems because complicated, confusing compliance regimes would hamstring developers, especially smaller ones.

Such a restrictive, overlapping regulatory regime would represent a reversal of the policy formula that helped America become the global leader in personal computing, digital technologies, and the internet.

(Fourth witness, Mr. Adam Thierer, R Street Institute.)

Then they made the mistake of calling their final witness, a man named Bruce Schneier. I'll leave you the pleasure of reading the full 31 pages of his testimony here, but I'd like to finish with a couple of money quotes of his, as cited in El Reg:

"You all need to assume that adversaries have copies of all the data DOGE has exfiltrated and has established access into all the networks that DOGE has removed security controls from ... DOGE's affiliates have spread across government ... They are still exfiltrating massive US databases, processing them with AI and offering them to private companies such as Palantir. These actions are causing irreparable harm to the security of our country and the safety of everyone, including everyone in this room, regardless of political affiliation."

Oddly enough, Mr. Schneier was the only witness not quoted, or even mentioned, in the wrap-up of that hearing. Maybe that wrap-up was AI generated?


Original Submission

posted by hubie on Friday June 13, @03:58PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

United has switched off Starlink service on its United Express regional aircraft following reports of radio interference. According to The Points Guy, Starlink connectivity has been turned off across its fleet "out of an abundance of caution," a move the carrier confirmed in a statement.

As noted by the report, United has installed Starlink on nearly two dozen Embraer E175 aircraft. United announced the rollout on March 7, outlining plans to fit 40+ regional aircraft each month beginning in May through the end of 2025. The installation takes around 8 hours per aircraft, and United eventually plans to roll out Starlink to its entire fleet.

TPG reports that United has received reports of radio interference caused by Starlink, affecting the VHF antennas pilots use to contact air traffic control. As such, the aforementioned E175 aircraft carrying Starlink have been operating offline for the past few days, including a flight Tom's Hardware took on Monday, June 9.

United has issued a statement to TPG noting "Starlink is now installed on about two dozen United regional aircraft. United and Starlink teams are working together to address a small number of reports of static interference during the operation of the Wi-Fi system." United says this is "fairly common" with any new airline Wi-Fi provider, and says it expects the service to be back up and running "soon."

TPG reports that United and Starlink have already identified a solution and are rolling out the fix to affected aircraft. Allegedly, one-third of the affected planes have had the fix applied and are now operating with Starlink restored, with the remaining planes set for reconnection once they've had the fix applied.


Original Submission

posted by hubie on Friday June 13, @11:13AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A researcher has exposed a flaw in Google's authentication systems, opening it to a brute-force attack that left users' mobile numbers up for grabs.

The security hole, discovered by a white-hat hacker operating under the handle Brutecat, left the phone numbers of any Google user who'd logged in open to exposure. The issue was a code slip that allowed brute-force attacks against accounts, potentially enabling SIM-swapping attacks.

"This Google exploit I disclosed just requires the email address of the victim and you can get the phone number tied to the account," Brutecat told The Register.

Brutecat found that Google's account recovery process provided partial phone number hints, which could be exploited. By using cloud services and a Google Looker Studio account, the attacker was able to bypass security systems and launch a brute-force attack.

They explained in the post that "after looking through random Google products, I found out that I could create a Looker Studio document, transfer ownership of it to the victim, and the victim's display name would leak on the home page, with 0 interaction required from the victim."

The researcher also found an old-school username recovery form that worked without Javascript, which allowed them to check if a recovery email or phone number was associated with a specific display name using 2 HTTP requests.

After this, they could go "through forgot password flow for that email and get the masked phone."

Finally, a brute-forcing tool they developed, dubbed gpb, would take the display name and masked phone and unmask the full phone number, using real-time libphonenumber validation to filter out invalid numbers before querying Google's API.
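
To get a feel for that filtering step, here is a minimal sketch that expands a masked number and keeps only candidates the phonenumbers package (the Python port of libphonenumber) accepts as valid. This is not Brutecat's gpb tool; the mask format ("X" = unknown digit) and the US region are assumptions made for illustration.

# Minimal sketch of masked-number candidate generation plus libphonenumber
# validation. Mask format and region are assumptions, not the real tool.
import itertools
import phonenumbers

def candidates(mask, region="US"):
    unknown = mask.count("X")
    for digits in itertools.product("0123456789", repeat=unknown):
        number = mask
        for d in digits:
            number = number.replace("X", d, 1)   # fill in one unknown digit
        try:
            parsed = phonenumbers.parse(number, region)
        except phonenumbers.NumberParseException:
            continue
        if phonenumbers.is_valid_number(parsed):  # drop impossible numbers
            yield phonenumbers.format_number(parsed,
                                             phonenumbers.PhoneNumberFormat.E164)

# Tiny demo mask with only two unknown digits; a real masked hint leaves far
# more digits unknown, which is why pruning invalid candidates matters.
for n in itertools.islice(candidates("+1 415 XX5 0198"), 5):
    print(n)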

[...] Surprisingly, Google didn't consider this a serious flaw, awarding Brutecat $5,000 under its bug bounty scheme.

"Google was pretty receptive and promptly patched the bug," the researcher said. "By depreciating the whole form compared to my other disclosures, this was done much more quickly. That being said, the bounty is pretty low when taking into account the impact of this bug."


Original Submission

posted by janrinok on Friday June 13, @06:29AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

New imagery encompassing nearly 800,000 galaxies.

The Cosmic Evolution Survey (COSMOS) has just released the “largest look ever into the deep universe.” Even more importantly, it has made the data publicly available and accessible “in an easily searchable format.” Possibly the star attraction from this massive 1.5TB of James Webb Space Telescope (JWST) data is the interactive viewer, where you can gawp at stunning space imagery encompassing nearly 800,000 galaxies. At the same site, you can find the complete set of NIRCam and MIRI mosaics and tiles, plus a full photometric catalog.

The COSMOS-Web program is a NASA-backed project with the support of scientists from the University of California, Santa Barbara (UCSB), and Rochester Institute of Technology (RIT). With this significant data release, the public at large is getting access to the largest view deep into the universe they will have ever seen.

According to the press release announcement, the published survey maps 0.54 square degrees of the sky, or "about the area of three full moons," with NIRCam (near-infrared imaging), and a 0.2 square degree area with MIRI (mid-infrared imaging).

To help Joe Public make sense of this 1.5TB data deluge, COSMOS-Web has thoughtfully provided a full aperture and model-based photometric catalog. Using this reference, those interested can observe “photometry, structural measurements, redshifts, and physical parameters for nearly 800,000 galaxies.” More excitingly for amateur astrophysics enthusiasts, the researchers claim that the new JWST imaging, combined with previous COSMOS data, “opens many unexplored scientific avenues.”

Before you head on over to the linked resources, it might be useful to familiarize yourself with some of the terms and units used by COSMOS-Web. If we want to look more closely at the JWST NIRCam mosaics, for example, you will see that the newly surveyed area is mapped into 20 zones with reference codes. Each of the mosaics is available in four NIRCam filters (F115W, F150W, F277W, F444W). In terms of scale, mosaics are available in both 30mas and 60mas. ‘Mas’ is short for milliarcseconds, a unit of angular measurement commonly used in astronomy.

Both mosaics (created by stitching together multiple tiles), and tiles (individual images, as captured by the telescope) are available for download and study. For example, a single 30mas pixel scale mosaic from NIRCam might require a download of up to 174GB, while the individual tiles are a ‘mere’ 7-10GB (compressed). You would also need specialized astronomical software to open these FITS data maps, but there are many options available, including some free and open-source software.
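
One such free option is the astropy Python package. Below is a minimal sketch of opening a downloaded mosaic; the file name is a placeholder, not an actual COSMOS-Web tile name.

# Minimal sketch: inspecting a downloaded FITS mosaic with the free,
# open-source astropy package. The file name is a placeholder.
from astropy.io import fits
from astropy.wcs import WCS

with fits.open("cosmos_web_nircam_f444w_30mas_tile.fits") as hdul:
    hdul.info()                       # list the HDUs in the file
    data = hdul[0].data               # image array (may live in hdul[1] instead)
    header = hdul[0].header
    wcs = WCS(header)                 # pixel <-> sky coordinate mapping
    print("image shape:", None if data is None else data.shape)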

The COSMOS project has made use of most of the major telescopes on Earth and in space. It began with its use of the Hubble Space Telescope to cover what has now become known as the COSMOS field, a 2-square-degree field which appears to cover approximately 2 million galaxies. The initial Hubble survey took 640 orbits of the Earth. Ultimately, it is hoped that the research team will be able to study the formation and evolution of galaxies across cosmic time.


Original Submission

posted by janrinok on Friday June 13, @01:43AM   Printer-friendly

Study shows making hydrogen with soda cans and seawater is scalable and sustainable:

An MIT study shows that making hydrogen with aluminum soda cans and seawater is both scalable and sustainable.

Hydrogen has the potential to be a climate-friendly fuel since it doesn't release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a "green" fuel over its entire life cycle.

A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

Now, the researchers have carried out a "cradle-to-grave" life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions that is associated with conventional hydrogen production.

In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.

Now the question is how to avoid a Hindenburg every now and then.


Original Submission

posted by janrinok on Thursday June 12, @09:01PM   Printer-friendly
from the sad-king dept.

ChatGPT might have many strengths and claims of "intelligence". But in a recent game of chess it was utterly wrecked (their word, not mine) by an Atari 2600 and its simple little chess program. So all the might of ChatGPT applied to chess, wrecked by the scrappy little game console that is almost 50 years old.

So there are things that ChatGPT apparently shouldn't do. Like playing chess. If anything, this might show its absolute lack of critical thinking or thinking ahead. Instead, it's a regurgitation engine for text blobs. I guess you just conjure up a good game of chess from the Internet and apply it ...

The matchup seems almost comical when you consider the hardware involved. The Atari 2600 was powered by a MOS Technology 6507 processor running at just 1.19 MHz. To put that in perspective, your smartphone is literally thousands of times more powerful. The chess engine in Atari Chess only thinks one to two moves ahead – a far cry from the sophisticated AI systems we're used to today.
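
For readers wondering what "thinks one to two moves ahead" means in practice, here is a generic depth-limited minimax sketch in Python. It is only an illustration of the idea, not the actual Atari chess engine.

# Generic depth-limited minimax: with depth=2 the engine considers only its
# own move and one reply, roughly the look-ahead horizon described above.
def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(position)          # list of legal moves in this position
    if depth == 0 or not moves:
        return evaluate(position), None    # score the position at the horizon
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, evaluate, legal_moves, apply_move)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move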

The most telling part? ChatGPT was playing on the beginner difficulty level. This wasn't even the game's hardest setting – it was designed for people just learning to play chess.

https://www.theregister.com/2025/06/09/atari_vs_chatgpt_chess/
https://futurism.com/atari-beats-chatgpt-chess
https://techstory.in/chatgpt-absolutely-wrecked-by-atari-2600-in-beginner-chess/


Original Submission

posted by janrinok on Thursday June 12, @04:16PM   Printer-friendly
from the organic-data dept.

UNFI, North America's largest grocery distributor, halted deliveries after a cyberattack disrupted operations for 30,000 retail locations:

United Natural Foods Inc. (UNFI), North America's largest grocery distributor and the primary supplier for Whole Foods Market, has been forced to halt deliveries and take systems offline after a crippling cyberattack. The breach, discovered in early June, has disrupted operations across its network of 30,000 retail locations, raising alarms about the vulnerability of the nation's food supply chain to digital threats.

The Rhode Island-based company confirmed in a June 9 regulatory filing that unauthorized access to its IT systems triggered emergency protocols, including shutting down critical infrastructure. "The incident has caused, and is expected to continue to cause, temporary disruptions to the Company's business operations," UNFI stated, adding that it is working with law enforcement and cybersecurity experts to restore functionality.

UNFI's outage has left grocery retailers scrambling. Steve Schwartz, director of sales for New York's Morton Williams chain, told The New York Post, "It's bringing the company to a standstill with no orders generated and no orders coming in." The chain relies on UNFI for staples like dairy products and bottled waters, forcing it to seek alternative suppliers. Smaller businesses, like bakeries dependent on UNFI deliveries, face even steeper challenges.

[...] UNFI insists it has implemented "temporary workarounds" to mitigate customer disruptions, but the timeline for full recovery remains unclear. The company's stock fell 8.5% following the announcement, reflecting investor unease.

Also at CNN, TechCrunch and Bloomberg.


Original Submission

posted by janrinok on Thursday June 12, @11:31AM   Printer-friendly
from the Linguistics dept.

From https://www.maginative.com/article/with-dolphingemma-google-is-trying-to-decode-dolphin-language-using-ai/

Google, in collaboration with Georgia Tech and the Wild Dolphin Project, has announced DolphinGemma, an AI model designed to analyze and generate dolphin vocalizations. With about 400 million parameters, the model is compact enough to run on Google Pixel phones used in ocean fieldwork, allowing researchers to process dolphin sounds in real-time.

DolphinGemma builds on Google's lightweight Gemma model family, optimized for on-device use. It was trained on an extensive, labeled dataset collected over four decades by the Wild Dolphin Project — the longest-running underwater dolphin research initiative. These audio and video records capture generations of Atlantic spotted dolphins in their natural habitat, complete with behavioral context and individual dolphin identities.

The goal is ambitious: to detect the structure and potential meaning in dolphin sounds — including signature whistles used between mothers and calves, or the aggressive "squawks" exchanged during disputes. DolphinGemma functions like a language model for dolphins, predicting likely vocalizations based on prior sequences, helping researchers uncover patterns and hidden rules in their communication.
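
A toy illustration of that "predict the next vocalization from the ones before it" idea follows; it is nothing like DolphinGemma's real architecture, and the vocalization labels are made up.

# Toy next-vocalization predictor: a bigram model over symbolic labels.
from collections import Counter, defaultdict

def train_bigrams(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1            # how often nxt follows prev
    return counts

def most_likely_next(counts, prev):
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

# Hypothetical labeled recordings of vocalization types.
recordings = [
    ["signature_whistle", "click_train", "squawk"],
    ["signature_whistle", "click_train", "squawk"],
    ["signature_whistle", "buzz"],
]
model = train_bigrams(recordings)
print(most_likely_next(model, "signature_whistle"))  # -> "click_train"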

And here's the DolphinGemma site.

Will this LLM generate AI spam for dolphins? And is there any way we can know what it's saying?

Additional discussion on the matter at The Guardian: We're close to translating animal languages – what happens then?


Original Submission

Processed by jelizondo

posted by hubie on Thursday June 12, @06:45AM   Printer-friendly

https://www.righto.com/2017/10/the-xerox-alto-smalltalk-and-rewriting.html

We succeeded in running the Smalltalk-76 language on our vintage Xerox Alto; this blog post gives a quick overview of the Smalltalk environment. One unusual feature of Smalltalk is you can view and modify the system's code while the system is running. I demonstrate this by modifying the scrollbar code on a running system.
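
A loose modern analogy, for readers who have never used Smalltalk: in Python you can redefine a method while the program is running and live objects pick up the change immediately. Smalltalk makes that kind of live editing a first-class, whole-system workflow rather than a trick.

# Loose Python analogy (not Smalltalk): change behavior on a live object.
class Scrollbar:
    def scroll(self, lines):
        print(f"scrolling {lines} line(s)")

bar = Scrollbar()
bar.scroll(1)                        # old behavior

def faster_scroll(self, lines):      # edit the "running" code...
    print(f"scrolling {lines * 5} line(s) (patched)")

Scrollbar.scroll = faster_scroll     # ...and the live instance uses it at once
bar.scroll(1)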

Smalltalk is a highly-influential programming language and environment that introduced the term "object-oriented programming" and was the ancestor of modern object-oriented languages. The Alto's Smalltalk environment is also notable for its creation of the graphical user interface with the desktop metaphor, icons, scrollbars, overlapping windows, popup menus and so forth. When Steve Jobs famously visited Xerox PARC, the Smalltalk GUI inspired him on how the Lisa and Macintosh should work.


Original Submission

posted by hubie on Thursday June 12, @01:56AM   Printer-friendly
from the Stand-Up-For-Science dept.

We regard as "scientific" a method based on deep analysis of facts, theories, and views, presupposing unprejudiced, unfearing open discussion and conclusions.

(Andrei Sakharov, Thoughts on Peace, Progress and Intellectual Freedom, 1968.)

At the time of writing, a couple hundred scientists at the National Institutes of Health (NIH) have signed a letter of dissent towards their management, dubbed the Bethesda Declaration. It opens thus:

Dear Dr. Bhattacharya,

For staff across the National Institutes of Health (NIH), we dissent to Administration policies that undermine the NIH mission, waste public resources, and harm the health of Americans and people across the globe. Keeping NIH at the forefront of biomedical research requires our stalwart commitment to continuous improvement. But the life-and-death nature of our work demands that changes be thoughtful and vetted. We are compelled to speak up when our leadership prioritizes political momentum over human safety and faithful stewardship of public resources.

You too can sign the letter, along with the 2,331 scientists and IT specialists who have already done so, here.

Since January 20, the new administration has cancelled 2,100 NIH research grants totalling around $9.5bn, plus $2.6bn in contracts.


Original Submission

posted by hubie on Wednesday June 11, @09:11PM   Printer-friendly

New Way to Covertly Track Android Users

Researchers have discovered a new way to covertly track Android users. Both Meta and Yandex were using it, but have suddenly stopped now that they have been caught.

The details are interesting, and worth reading in detail:

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it's investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking, implemented in the Meta Pixel and Yandex Metrica trackers, allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they're off-limits for every other site.
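
The identifiers were carried over the device's loopback interface, which neither the app sandbox nor the browser's partitioning covers: a page's script can open a connection to 127.0.0.1, and any native app listening on that port receives the data. A minimal sketch of the receiving side (illustrative only; the port number and payload format are assumptions, not the actual Meta Pixel or Yandex Metrica implementation):

# Illustrative only: a native app listening on a loopback port can receive
# identifiers sent by JavaScript running in the browser, because loopback
# traffic is not isolated by Android's app sandbox or the browser's storage
# partitioning. The port and payload format here are assumptions.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 12580))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        web_identifier = conn.recv(4096).decode(errors="replace")
        # The app can now tie this ephemeral web ID to its own persistent,
        # logged-in user identity.
        print("received from browser:", web_identifier)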

-- Links in article:

https://localmess.github.io/
https://www.facebook.com/business/tools/meta-pixel/
https://ads.yandex/metrica
https://source.android.com/docs/security/app-sandbox
https://developer.mozilla.org/en-US/docs/Web/Privacy/Guides/State_Partitioning
https://privacysandbox.google.com/cookies/storage-partitioning
https://www.washingtonpost.com/technology/2025/06/06/meta-privacy-facebook-instagram/

-- See Also:

- Meta and Yandex are de-anonymizing Android users' web browsing identifiers
https://arstechnica.com/security/2025/06/meta-and-yandex-are-de-anonymizing-android-users-web-browsing-identifiers/


Original Submission