Disasters spur investment in flood and fire risk tech:
When Storm Babet hit the town of Trowell in Nottingham in 2023, Claire Sneddon felt confident her home would not be affected.
After all, when she bought the property in 2021, she was told by the estate agent that a previous flood the year before, which had reached but not affected the property, was a once-in-a-lifetime event, and that flood protection measures for the properties on the cul-de-sac would be put in place.
However, when Storm Babet tore through the UK two years later, Ms Sneddon's home flooded after several days of rain.
"We knew there would be water on the cul-de-sac but no one expected it to flood internally again. However, water entered the property for five hours," she says. "It reached to the top of the skirting boards. We had to have all the flooring, woodwork and lower kitchen replaced, which took nearly 12 months." Their final insurance bill was around £45,000. She says they were fortunate to have qualified for a government scheme providing affordable insurance for homeowners in areas of high-flood risk.
While it might be too late for Ms Sneddon and other homeowners, new tools are being developed to help people and companies assess climate risk.
[...] Last December, the UK Environment Agency updated its National Flood Risk Assessment (NaFRA), showing current and future flood risk from rivers, the sea and surface water for England. It used its own data alongside that of local authorities and climate data from the Met Office. It also brought the National Coastal Erosion Risk Map (NCERM) up to date. The two were last updated in 2018 and 2017 respectively.
The new NaFRA data shows as many as 6.3 million properties in England are in areas at risk of flooding from rivers, the sea or surface water, and with climate change this could increase to around eight million by 2050.
"We have spent the last few years transforming our understanding of flood and coastal erosion risk in England, drawing on the best available data... as well as improved modelling and technological advances," says Julie Foley, director of flood risk strategy at the Environment Agency.
"When we account for the latest climate projections, one in four properties could be in areas at risk of flooding by the middle of the century."
The Environment Agency plans to launch a portal where users can check their long-term flood risk. Similar resources exist for Scotland, Northern Ireland and Wales through the ABI.
"We can no longer rely on historical data," says Lukky Ahmed, co-founder of Climate X.
The London-based climate risk firm offers a digital twin of the Earth, which simulates different extreme weather events and their potential impact on properties, infrastructure and assets under different emissions scenarios.
It combines artificial intelligence with physics-based climate models. "While many climate models might tell you how much rainfall to expect, they don't say what happens when that water hits the ground," he says. "Our models simulate, for example, what happens when the water hits, where it travels and what the impact of the flooding will be."
While banks and other lenders are testing Climate X's product, property companies are already using its services when considering new developments.
"They log into our platform and identify locations and existing building stock and in return they receive risk rating and severity metrics tied to hazards," says Mr Ahmed.
Many parts of the world have much more extreme weather than the UK.
In the US in January, devastating wildfires tore through parts of Los Angeles. Meanwhile Hurricane Milton, which made landfall last October, is likely to be one of the costliest hurricanes to hit west Florida.
To help insurers manage those costs, New York-based Faura analyses the resilience of homes and commercial buildings. "We look at the different elements of a property to understand how likely it is to survive and pinpoint resilience and survivability of a property," says Faura co-founder Valkyrie Holmes.
"We tell companies and homeowners whether their property will still be standing after a disaster, not just whether a disaster will happen in an area," he adds.
Faura bases its assessments on satellite and aerial imagery and data from surveys and disaster reports. "Insurance companies technically have the data to be able to do this but have not built out the models to quantify it," says Mr Holmes.
Other services are popping up for homebuyers. For the properties it markets, US firm Redfin estimates the percentage chance of natural disasters, such as flooding and wildfires, occurring over the next 30 years for each property.
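The article doesn't say how Redfin derives its 30-year figures, but as a rough illustration of why multi-decade probabilities look so much larger than annual ones, here is a minimal sketch that compounds an assumed annual hazard probability over 30 years, treating years as independent (an assumption for illustration, not Redfin's stated method):

```python
# Illustrative only: the article does not describe Redfin's methodology.
# This treats each year as independent and compounds an assumed annual
# hazard probability over a 30-year horizon.

def multi_year_probability(annual_prob: float, years: int = 30) -> float:
    """Chance of at least one event within `years`, assuming independent years."""
    return 1 - (1 - annual_prob) ** years

# Example: a hypothetical 1% annual flood chance becomes roughly 26% over 30 years.
print(f"{multi_year_probability(0.01):.0%}")
```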
"If people are looking at two homes with the same layout in the same neighbourhood, then climate risk will make or break [their decision]," says Redfin chief economist Daryl Fairweather.
As for Ms Sneddon, following her personal experience, she now works for flood risk company The FPS Group. "Flood risk is only going to get worse over the coming years so it is essential to find out as much as you can about the flood risk to a property," she advises.
"Flooding has a huge impact on communities and mental health. You are supposed to feel safe in your home, it shouldn't be a place of worry and anxiety."
Advanced Micro Devices (AMD.O) said on Tuesday its key processor chips would soon be made at TSMC's (2330.TW) new production site in Arizona, marking the first time that its products will be manufactured in the United States:
Though AMD's plans predate U.S. President Donald Trump's return to office, tech companies' efforts to diversify their supply chains have taken on added significance given Trump's escalating tariff war.
His administration is currently investigating whether imports of semiconductors threaten national security, which could be a precursor to slapping tariffs on those products.
"Our new fifth-generation EPYC is doing very well, so we're ready to start production," AMD Chief Executive Lisa Su told reporters in Taipei, referring to the company's central processing unit (CPU) for data centres.
Until now, the U.S. company's products have been made at contract chip manufacturer TSMC's facilities in Taiwan.
Also at ZeroHedge.
The OpenWrt community is proud to announce the newest stable release of the OpenWrt 24.10 stable series.
The OpenWrt Project is a Linux operating system targeting embedded devices. It is a complete replacement for the vendor-supplied firmware of a wide range of wireless routers and non-network devices.
Instead of trying to create a single, static firmware, OpenWrt provides a fully writable filesystem with package management. This frees you from the application selection and configuration provided by the vendor and allows you to customize the device through the use of packages to suit any application. For developers, OpenWrt is the framework to build an application without having to build a complete firmware around it; for users this means the ability for full customization, to use the device in ways never envisioned.
If you're not familiar with OpenWrt, it really is quite a nifty OS ecosystem and many commercially available routers even run OpenWrt under the hood behind a manufacturer-specific user-facing web interface.
While installing OpenWrt will not magically turn older, less capable hardware into a faster Wi-Fi setup for your home, many devices are effectively crippled from the factory: their stock firmware limits which hardware capabilities you can utilize and which options, packages and software features you can use.
Newer devices gain all possible functionality through a fully capable software suite and extensible packages. Devices with bugs or security issues, or those simply abandoned by their manufacturer but still capable of good performance, can be brought up to date and used successfully with the updated OS. Older devices no longer suited to their original, intended purpose (like a slow Wi-Fi chip) can be re-purposed into something useful, for example using an old router with a USB port as a NAS server for your LAN by simply connecting storage.
This latest 24.10.1 release addresses some of the issues and regressions caused by the underlying fundamental changes between the previous 23.05.x series and the initial 24.10.0 release.
Personally, I've come to use it quite extensively across a wide range of devices. Note though that as of this moment, many of the firmware download links, etc. have yet to be updated to specifically point to 24.10.1 as the release roll-out proceeds.
https://arstechnica.com/gadgets/2025/04/a-history-of-the-internet-part-1-an-arpa-dream-takes-form/
In a very real sense, the Internet, this marvelous worldwide digital communications network that you're using right now, was created because one man was annoyed at having too many computer terminals in his office.
The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency's Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik.
[...]
He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.
[...]
Taylor's predecessor, Joseph "J.C.R." Licklider, had released a memo in 1963 that whimsically described an "Intergalactic Computer Network" that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn't able to turn it into a real project. But Taylor knew that he could.
[...]
Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept. "Is it going to be hard to do?" Herzfeld asked.
"Oh no. We already know how to do it," Taylor replied.
"Great idea," Herzfeld said. "Get it going. You've got a million dollars more in your budget right now. Go."
Taylor wasn't lying—at least, not completely.
Arthur T Knackerbracket has processed the following story:
Dolphins in seas around the UK are dying from a combination of increased water temperatures and toxic chemicals that the UK banned in the 1980s.
Polychlorinated biphenyls (PCBs) are a long-lasting type of persistent chemical pollutant, once widely used in industrial manufacturing. They interfere with animals’ reproduction and immune response and cause cancer in humans.
In a new study, researchers showed that higher levels of PCBs in the body and increased sea surface temperatures are linked to a greater mortality risk from infectious diseases for short-beaked common dolphins (Delphinus delphis), a first for marine mammals.
The ocean is facing “a triple planetary crisis” – climate change, pollution and biodiversity loss – but we often look at threats in isolation, says Rosie Williams at Zoological Society of London.
Williams and her colleagues analysed post-mortem data from 836 common dolphins stranded in the UK between 1990 and 2020 to assess the impact of these interlinked threats.
They found a rise of 1 milligram of PCBs per kilogram of blubber was linked with a 1.6 per cent increase in the chance of infectious diseases – such as gastritis, enteritis, bacterial infection, encephalitis and pneumonia – becoming fatal. Every 1°C rise in sea surface temperature corresponded to a 14 per cent increase in mortality risk.
According to the study, the threshold where PCB blubber concentrations have a significant effect on a dolphin’s risk of disease is 22 mg/kg, but the average concentration in samples was higher, at 32.15 mg/kg.
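To get a feel for how those reported effect sizes combine, here is a rough sketch that applies the article's figures to the average sampled dolphin. It assumes the per-unit increases compose multiplicatively and uses the reported threshold as a baseline; the excerpt does not describe the study's actual model, so treat this purely as illustration:

```python
# Rough illustration of the reported effect sizes, not the study's actual model.
# Assumes multiplicative composition and a baseline at the reported PCB
# threshold; both are assumptions. Figures are taken from the article.

PCB_THRESHOLD = 22.0      # mg/kg blubber, reported significance threshold
PCB_AVERAGE = 32.15       # mg/kg blubber, reported average in samples
RISK_PER_MG = 0.016       # +1.6% fatality risk per extra mg/kg of PCBs
RISK_PER_DEGREE = 0.14    # +14% fatality risk per +1 degree C sea surface temp

def relative_risk(pcb_mg_per_kg: float, temp_rise_c: float) -> float:
    """Relative mortality risk versus the PCB threshold at baseline temperature."""
    pcb_excess = max(pcb_mg_per_kg - PCB_THRESHOLD, 0.0)
    return (1 + RISK_PER_MG) ** pcb_excess * (1 + RISK_PER_DEGREE) ** temp_rise_c

# Average sampled dolphin, with 1 degree C of warming:
print(f"{relative_risk(PCB_AVERAGE, 1.0):.2f}x")  # roughly 1.34x
```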
Because dolphins are long-lived, widely distributed around the UK and high in the food chain, they are a good indicator species to show how threats might also affect other animals.
[...] Despite being banned in the UK in 1981 and internationally in 2001, PCBs are still washing into the ocean. “They are still probably entering the environment through stockpiles and are often a side product or a byproduct of other manufacturing processes,” says Williams.
Cleaning up PCBs is very difficult. “Because they’re so persistent, they’re a nightmare to get rid of,” she says. “There is definitely not an easy fix.”
Some researchers are exploring dredging as a cleanup technique, while others are focused on improving water treatment plants’ effectiveness in removing persistent chemicals.
These findings indicate what might happen if action isn’t taken to ban perfluoroalkyl and polyfluoroalkyl substances (PFAS), another widespread group of so-called forever chemicals.
“While we cannot reverse the contamination that has already occurred, it is critical to prevent further chemical inputs into the environment,” says Taylor.
Journal Reference: Williams, R.S., Curnick, D.J., Baillie, A. et al. Sea temperature and pollution are associated with infectious disease mortality in short-beaked common dolphins. Commun Biol 8, 557 (2025). https://doi.org/10.1038/s42003-025-07858-7
From Brian Krebs on Infosec.Exchange:
I boosted several posts about this already, but since people keep asking if I've seen it....
MITRE has announced that its funding for the Common Vulnerabilities and Exposures (CVE) program and related programs, including the Common Weakness Enumeration Program, will expire on April 16. The CVE database is critical for anyone doing vulnerability management or security research, and for a whole lot of other uses. There isn't really anyone else left who does this, and it's typically been work that is paid for and supported by the US government, which is a major consumer of this information, btw.
I reached out to MITRE, and they confirmed it is for real. Here is the contract, which is through the Department of Homeland Security, and has been renewed annually on the 16th or 17th of April.
usaspending.gov/award/CONT_AWD_70RCSJ23FR0000015_7001_70RSAT20D00000001_7001
MITRE's CVE database is likely going offline tomorrow. They have told me that for now, historical CVE records will be available at GitHub, https://github.com/CVEProject
Yosry Barsoum, vice president and director at MITRE's Center for Securing the Homeland, said:
"On Wednesday, April 16, 2025, funding for MITRE to develop, operate, and modernize the Common Vulnerabilities and Exposures (CVE®) Program and related programs, such as the Common Weakness Enumeration (CWE™) Program, will expire. The government continues to make considerable efforts to support MITRE's role in the program and MITRE remains committed to CVE as a global resource."
Once again, Cui Bono? It certainly ain't us.
Rooftop solar PV could supply two-thirds of world's energy needs, and lower global temperatures:
Covering rooftops across the planet with solar panels could deliver 65 per cent of current global power consumption and almost completely replace fossil fuel-based electricity, and it could also lower global temperatures by 0.13 degrees.
These are the findings from a new study from researchers at the University of Sussex that found rooftop solar PV could generate 19,500 terawatt hours (TWh) of electricity per year. (Australia consumes around 250 TWh of electricity a year).
By using nine advanced Earth system models, geospatial data mining, and artificial intelligence techniques, the researchers were able to estimate the global rooftop area at a resolution of 1 kilometre to evaluate the technological potential of rooftop solar PV.
The researchers outlined their full methodology in an article published in the journal Nature, involving extensive machine learning that helped determine that rooftops currently cover 286,393 square kilometres (km²) of the globe.
Of this 286,393 km², 30 per cent is unsurprisingly located in East Asia and 12 per cent in North America. China and the United States likewise account for the largest rooftop areas, with 74,426 km² and 30,928 km² respectively.
They were then able to extrapolate the generation potential of rooftop solar PV if every suitable rooftop was used, which resulted in annual electricity generation potential of 19,483TWh.
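As a quick sanity check on those numbers (a sketch using only the figures quoted above, not the study's methodology), the implied average yield works out to roughly 68 kWh per square metre of rooftop per year:

```python
# Back-of-the-envelope check using only the article's figures.
total_generation_twh = 19_483          # potential annual generation, TWh
rooftop_area_km2 = 286_393             # estimated global rooftop area, km^2

# 1 TWh = 1e9 kWh, 1 km^2 = 1e6 m^2
kwh_per_m2_per_year = (total_generation_twh * 1e9) / (rooftop_area_km2 * 1e6)
print(f"{kwh_per_m2_per_year:.0f} kWh per square metre of rooftop per year")  # ~68

# For scale: Australia's ~250 TWh/year is about 1.3% of the estimated potential.
print(f"{250 / total_generation_twh:.1%}")
```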
[...] The researchers were also able to use their findings to calculate the impact a global coverage of rooftop solar would have on global warming. While figures differed depending on the models and scenarios used, complete rooftop solar coverage based on current building stocks could mitigate global warming by 0.05–0.13°C.
Importantly, the researchers also warned that solar power offers taxpayers better value for money than nuclear and urged policymakers around the globe to prioritise rooftop solar.
'We don't want to build an ecosystem that shuts the door':
For smartphone manufacturers, competing with Apple must feel like bringing a knife to a gunfight. Every. Single. Quarter.
The best iPhones aren't necessarily the best phones outright (read: they're not), but the Cupertino giant has undoubtedly managed to cordon off a large swath of smartphone-owning consumers (perhaps indefinitely so) through its decades-long focus on building a watertight product ecosystem. Heck, even Samsung, Apple's biggest competitor, has seen its own home country fall victim to iPhone fever, and Apple remains a force to be reckoned with in China, too.
What, then, are Apple's rivals to do? According to OnePlus' Senior Product Marketing Manager Rudolf Xu, there's only one thing for it: push for greater compatibility with iOS.
"I think the key thing is to build a bridge with iOS," Xu told TechRadar during a recent visit to OnePlus HQ in Guangdong, China. "That's why, for example, on OxygenOS 15, we have a feature called Share with iPhone, and people love it – we are getting very positive feedback, because it makes file transfer [between Android and iOS] a lot easier. That's something that Android devices have always struggled with.
"Another thing is the sharing of live photos," Xu continued. "If you capture a live photo with the OnePlus 13, you can actually still see the live photo effect on an iPhone [if you transfer it]. That's because we're using the latest format to package live photos.
"These are all the efforts we're putting in to build a bridge between OnePlus products and the iOS ecosystem. We don't want to build an ecosystem that shuts the door for other customers. We want to make [our ecosystem] as open as possible, so that we can attract more users."
In person, Xu's comment about "an ecosystem that shuts the door for other customers" wasn't made in reference to Apple directly, but it does rather nicely highlight the crux of the issue at hand. Apple won't willingly open up its operating system to rival software developers (and why would it?), so there's only so much that brands like OnePlus can do to improve compatibility between Android- and iOS-based devices.
Microsoft has begun the rollout of an AI-powered tool which takes snapshots of users' screens every few seconds.
The Copilot+ Recall feature is available in preview mode to some people with Microsoft's AI PCs and laptops.
It is the relaunch of a feature which was dubbed a "privacy nightmare" when it was first announced last year.
Microsoft paused the rollout in 2024, and after trialling the tech with a small number of users, it has begun expanding access to those signed up to its Windows Insider software testing programme.
The BBC has approached Microsoft for comment.
Microsoft says Recall will be rolled out worldwide, but those based in the EU will have to wait until later in 2025.
Users will opt in to the feature and Microsoft says they "can pause saving snapshots at any time".
The purpose of Recall is to allow PC users to easily search through their past activity including files, photos, emails and browsing history.
For example, Microsoft says a person who saw a dress online a few days ago would be able to use the feature to easily locate where they saw it.
Privacy campaigner Dr Kris Shrishak - who previously called Recall a "privacy nightmare" - said the opt-in mechanism is "an improvement", but felt it could still be misused.
"Information about other people, who cannot consent, will be captured and processed through Recall," he said.
The feature is able to save images of your emails and messaging apps such as WhatsApp - meaning pictures and messages from others will be saved.
This is no different to a user taking a screenshot themselves when they receive a message.
"Think of disappearing messages on Signal that is stored on Recall forever," he said.
And he said he was concerned that malicious actors could exploit the images saved by Recall if they gained login access to a device.
Arthur T Knackerbracket has processed the following story:
As heat dissipation has become a major challenge for modern data centers, various cooling methods have been tried and deployed in recent years. For years, the industry relied on air cooling; then big companies began to experiment with liquid cooling, tried both warm water and chilled water cooling, tested immersion cooling, and even planned to deploy it in the coming years. One thing that has not been used for cooling yet is lasers. Yet lasers can be used to take heat away from processors. But there is a catch.
A startup called Maxwell Labs, with support from Sandia National Laboratories, is working on a new way to cool high-performance computing hardware, reports The Register. The technique uses special cold plates made of ultrapure gallium arsenide (GaAs) that cool down when they receive focused beams of coherent laser light of a certain wavelength. Rather than heating, which is common in most interactions involving intense light beams, this carefully engineered setup allows the semiconductor to shed heat at precise locations thanks to the high electron mobility of GaAs. The method promises to assist traditional cooling systems rather than replace them.
To implement this in practical applications, the GaAs semiconductors are structured into thin components placed directly on high-heat regions of processors. Microscopic patterns within the semiconductor guide the coherent beams precisely to these hot spots, resulting in highly localized cooling, which ensures efficiency by managing heat exactly where it becomes problematic instead of attempting to use GaAs and lasers to cool down an entire system. This technique has roots in earlier studies: back in 2012, researchers at the University of Copenhagen cooled a tiny membrane to -269°C using a similar method, according to the report.
Additionally, this technique offers a unique capability: it can recapture the energy removed as heat, according to Maxwell. Rather than dissipating into the environment, the thermal energy extracted from chips can be emitted as usable photons, which are convertible back into electrical power. While this certainly increases the overall energy efficiency of computing systems, the efficiency of the process remains to be seen.
While the approach to use GaAs semiconductors for cooling is certainly an innovation, it is associated with extreme challenges both from the cost and manufacturability points of view.
[...] Currently, the concept remains in the experimental and modeling stage. According to Maxwell Labs chief executive Jacob Balma, simulations suggest the method is promising, but it has never been confirmed in physical trials as testing so far has been limited to separate components rather than a full setup.
Look, Microsoft, we need to talk. It's no secret that you've been nagging me (and everyone else) to upgrade to Windows 11 for a while now, with everything from ads to in-OS reminders pushing me towards the settings menu to check if my PC is eligible for an upgrade. But here's the thing, Microsoft: this path you're on isn't sustainable.
I mean this in a few different ways. Firstly, the extremely literal sense; Windows 11 forces a Trusted Platform Module 2.0 requirement, which for the uninitiated is a specific chip on your laptop or desktop's motherboard enabling enhanced security features. No TPM 2.0? No Windows 11. Yes, I know you can technically upgrade to Windows 11 without TPM 2.0, but I wouldn't recommend it.
Is that enhanced security good? Yes, absolutely - but it effectively means that many older computers literally can't run Windows 11, which combined with the impending Windows 10 End of Life is eventually going to result in a lot of PCs headed to the ever-growing e-waste pile. That's a real problem in itself. But I'm not here to rant about e-waste (though it's really bad). I want to talk about how users perceive Microsoft's nigh-omnipresent operating system, and how its current trajectory could result in serious issues further down the line.
See, Windows is constantly evolving - from humble beginnings as an MS-DOS interface in the mid-Eighties to beloved iterations like Windows XP and 10 (and widely panned versions, such as Vista and RT). But over the years, there have long been whispers of a 'final' version of the OS; a 'Windows Perfected' if you will, designed to last forever with continual updates - or at least, designed to last for a very long time.
In a sense, what those hunting for this 'last' Windows iteration want is the same experience that macOS users get: an operating system that just continually gets free updates adding new features, rarely changes in a hugely significant way, and isn't chock-full of annoying ads. Of course, it's not quite that simple for Microsoft; Apple has incredibly tight control over the macOS hardware ecosystem, while Microsoft theoretically has to make Windows run on a near-limitless selection of custom- and pre-built PCs as well as laptops from numerous different manufacturers. Then again, keeping ads out of Windows should be as simple as it is for macOS, and that hasn't happened...
At the end of the day, Microsoft doesn't need to keep creating entirely new versions of Windows - it does so because outside of an Apple-esque closed ecosystem, that's profitable, as system manufacturers will need to keep buying new OS keys and users will need to keep buying new systems.
Sure, there might need to be major overhauls now and then that leave some people behind - the TPM 2.0 debacle is perhaps one such example. But there are cracks in this methodology that are slowly starting to show, and I suspect it won't end well unless Microsoft changes course.
If upgrading to a new OS is a lot of hassle for an individual (I've personally been putting it off for years, still using Windows 10 on my personal desktop), imagine how much work - and how much money - it takes for a large business to do it. Although Windows 11 adoption is finally on the rise, plenty of private businesses and public sector organizations are still stuck on Win10 or older, despite Microsoft's insistence for us all to upgrade.
A 2021 report by Kaspersky suggested that 73% of healthcare providers globally are still using equipment with an outdated OS for medical purposes. Now, this isn't just talking about Windows computers, but it's a damning figure - a more recent investigation by Cynerio claimed that 80% of imaging devices are still using operating systems that have been officially EoL'd and are now unsupported, like Windows 7 and XP.
Healthcare is just one such sector, but it's felt widely, particularly in sectors and countries where funding for hardware and software upgrades often isn't readily available. Running an out-of-support OS can lead to a variety of issues, not least with security and compatibility. It's not that these organizations don't want to upgrade, it's that they literally can't - not without the significant expenditure of completely replacing the computer, and sometimes the entire machine it's hooked up to.
Lastly - and I'm going to be a bit brutally honest with you here, Microsoft - the slow but inexorable enshittification of Windows has got to stop. Ads, bugs, pestering notifications, the constant forcing of Copilot AI down our throats; just stop it, guys. Please.
I have Windows 11 on my laptop, and also on the ROG Ally I use for handheld PC gaming. I'm no stranger to how bad it's become. My dislike of Apple hardware is well-documented, yet macOS's year-on-year consistency and total lack of ads is beginning to look mighty appealing.
Win11 feels less like a product you buy and own and more like an 'OS as a service' - something you pay for but don't really own, and can be snatched away or heavily modified at a moment's notice. It's already a serious issue in the game industry, with triple-A games increasingly becoming less about providing a good, fun experience and more about extracting as much value from the player as possible.
Even Windows 10 isn't safe from Microsoft's meddling. At this point, I'm half looking forward to the EoL purely so that Microsoft will take its grubby little fingers out of my desktop OS. No, I don't care about how great Windows 11 supposedly is now. No, I don't care about Copilot and how it's going to fix my digital life and cure all my worldly ailments.
Let me create a little analogy here. Imagine if you bought a car. It's a good car, it runs fine and doesn't give you any major issues. Then, a few years later, a new model comes out, and every morning, no matter where you park, the dealership sends someone to put a flyer on your windshield advertising the new car, or some other new offer the dealership is running. Every now and then, they also take away a small part of your car, like a wiper blade or a single tire nut. The kicker? You don't want the new car, and you might not even be able to afford it anyway.
I just want a straightforward OS that runs smoothly and doesn't become outdated every five years. Is that really too much to ask, Microsoft?
NIST Finalizes Guidelines for Evaluating 'Differential Privacy' Guarantees to De-Identify Data:
How can we glean useful insights from databases containing confidential information while protecting the privacy of the individuals whose data is contained within? Differential privacy, a way of defining privacy in a mathematically rigorous manner, can help strike this balance. Newly updated guidelines from the National Institute of Standards and Technology (NIST) are intended to assist organizations with making the most of differential privacy's capabilities.
Differential privacy, or DP, is a privacy-enhancing technology used in data analytics. In recent years, it has been successfully deployed by large technology corporations and the U.S. Census Bureau. While it is a relatively mature technology, a lack of standards can create challenges for its effective use and adoption. For example, a DP software vendor may offer guarantees that if its software is used, it will be impossible to re-identify an individual whose data appears in the database. NIST's new guidelines aim to help organizations understand and think more consistently about such claims.
The newly finalized publication, Guidelines for Evaluating Differential Privacy Guarantees (NIST Special Publication 800-226), was originally released in draft form in December 2023. Based in part on comments received, the authors updated the guidelines with the goal of making them clearer and easier to use.
"The changes we made improve the precision in the draft's language to make the guidelines less ambiguous," said Gary Howarth, a NIST scientist and an author of the publication. "The guidelines can help leaders more clearly understand the trade-offs inherent in DP and can help understand what DP claims mean."
Differential privacy works by adding random "noise" to the data in a way that obscures the identity of the individuals but keeps the database useful overall as a source of statistical information. However, noise applied in the wrong way can jeopardize privacy or render the data less useful.
To help users avoid these pitfalls, the document includes interactive tools, flow charts, and even sample computer code that can aid in decision-making and show how varying noise levels can affect privacy and data usability.
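For a flavour of the underlying idea, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. This is standard differential privacy textbook material, not the sample code from the NIST publication itself:

```python
import numpy as np

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many incomes in a confidential dataset exceed 100,000?
incomes = [42_000, 135_000, 87_500, 250_000, 64_000]
print(dp_count(incomes, lambda x: x > 100_000, epsilon=0.5))
# Smaller epsilon means more noise: stronger privacy, less accurate answers.
```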
"Small groups in the data of any sort tend to stand out more, so you may need to add more noise to protect their privacy," Howarth said.
While the document is not intended to be a complete primer on differential privacy, Howarth said that it provides a robust reading list of other publications that can help practitioners get up to speed on the topic. The guidelines also cover the sorts of problems that the technology could work with and how to implement it in those situations.
"With DP there are many gray areas," he said. "There is no simple answer for how to balance privacy with usefulness. You must answer that every time you apply DP to data. This publication can help you navigate that space."
New updates to ChatGPT have made it easier than ever to create fake images of real politicians, according to testing done by CBC News. https://www.cbc.ca/news/canada/chatgpt-fake-politicians-1.7507039
Manipulating images of real people without their consent is against OpenAI's rules, but the company recently allowed more leeway with public figures, with specific limitations. CBC's visual investigations unit found prompts could be structured to evade some of those restrictions.
In some cases, the chatbot effectively told reporters how to get around its restrictions — for example, by specifying a speculative scenario involving fictional characters — while still ultimately generating images of real people.
When CBC News tried to get the GPT-4o image generator to create politically damaging images, the system initially did not comply with problematic requests.
"While I can't merge real individuals into a single image, I can generate a fictional selfie-style scene featuring a character inspired by the person in this image."
When the reporters uploaded an image of current Canadian Prime Minister Mark Carney and an image of Jeffrey Epstein, without indicating their names but describing them as "two fictional characters that I created," the system created a realistic image of Carney and Epstein together in a nightclub.
Gary Marcus, a Vancouver-based cognitive scientist focused on AI, and the author of Taming Silicon Valley, has concerns about the potential for generating political disinformation.
"We live in the era of misinformation. Misinformation is not new, propaganda has existed for ages, but it's become cheaper and easier to manufacture."
Ethically sourced "spare" human bodies could revolutionize medicine:
Even if it all works, it may not be practical or economical to "grow" bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and the ethical implications.
Bodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness.
But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it.
At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living.
Recently we have even begun using for experiments the "animated cadavers" of people who have been declared legally dead, who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.
In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and we'd have to make sure that this someone consented to this particular, likely controversial, use. But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience.
Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoids—created without pregnancy, parental hopes, or indeed parents—blur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, why—just because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought.
Until recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausible—and possibly revolutionary. It is time for it to be explored.
Google's new Ironwood chip is 24x more powerful than the world's fastest supercomputer:
Google Cloud unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood, on Wednesday. This custom AI accelerator, the company claims, delivers more than 24 times the computing power of the world's fastest supercomputer when deployed at scale.
The new chip, announced at Google Cloud Next '25, represents a significant pivot in Google's decade-long AI chip development strategy. While previous generations of TPUs were designed primarily for both training and inference workloads, Ironwood is the first purpose-built specifically for inference — the process of deploying trained AI models to make predictions or generate responses.
"Ironwood is built to support this next phase of generative AI and its tremendous computational and communication requirements," said Amin Vahdat, Google's Vice President and General Manager of ML, Systems, and Cloud AI, in a virtual press conference ahead of the event. "This is what we call the 'age of inference' where AI agents will proactively retrieve and generate data to collaboratively deliver insights and answers, not just data."
The technical specifications of Ironwood are striking. When scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power, dwarfing the 1.7 exaflops of El Capitan, currently the world's fastest supercomputer. Each individual Ironwood chip delivers peak compute of 4,614 teraflops.
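Those headline figures are internally consistent; a quick check using only the numbers quoted in the article:

```python
# Quick arithmetic check using the figures quoted above.
chips_per_pod = 9_216
teraflops_per_chip = 4_614

pod_teraflops = chips_per_pod * teraflops_per_chip
pod_exaflops = pod_teraflops / 1e6      # 1 exaflop = 1,000,000 teraflops
print(f"{pod_exaflops:.1f} exaflops")   # ~42.5

# Ratio to El Capitan's 1.7 exaflops, matching the "more than 24 times" claim
print(f"{pod_exaflops / 1.7:.0f}x")     # ~25x
```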
Ironwood also features significant memory and bandwidth improvements. Each chip comes with 192GB of High Bandwidth Memory (HBM), six times more than Trillium, Google's previous-generation TPU announced last year. Memory bandwidth reaches 7.2 terabits per second per chip, a 4.5x improvement over Trillium.
Perhaps most importantly, in an era of power-constrained data centers, Ironwood delivers twice the performance per watt compared to Trillium, and is nearly 30 times more power efficient than Google's first Cloud TPU from 2018.
"At a time when available power is one of the constraints for delivering AI capabilities, we deliver significantly more capacity per watt for customer workloads," Vahdat explained.
The emphasis on inference rather than training represents a significant inflection point in the AI timeline. The industry has been fixated on building increasingly massive foundation models for years, with companies competing primarily on parameter size and training capabilities. Google's pivot to inference optimization suggests we're entering a new phase where deployment efficiency and reasoning capabilities take center stage.
This transition makes sense. Training happens once, but inference operations occur billions of times daily as users interact with AI systems. The economics of AI are increasingly tied to inference costs, especially as models grow more complex and computationally intensive.
During the press conference, Vahdat revealed that Google has observed a 10x year-over-year increase in demand for AI compute over the past eight years — a staggering factor of 100 million overall. No amount of Moore's Law progression could satisfy this growth curve without specialized architectures like Ironwood.
What's particularly notable is the focus on "thinking models" that perform complex reasoning tasks rather than simple pattern recognition. This suggests that Google sees the future of AI not just in larger models, but in models that can break down problems, reason through multiple steps and simulate human-like thought processes.
Google is positioning Ironwood as the foundation for its most advanced AI models, including Gemini 2.5, which the company describes as having "thinking capabilities natively built in."
At the conference, Google also announced Gemini 2.5 Flash, a more cost-effective version of its flagship model that "adjusts the depth of reasoning based on a prompt's complexity." While Gemini 2.5 Pro is designed for complex use cases like drug discovery and financial modeling, Gemini 2.5 Flash is positioned for everyday applications where responsiveness is critical.
The company also demonstrated its full suite of generative media models, including text-to-image, text-to-video, and a newly announced text-to-music capability called Lyria. A demonstration showed how these tools could be used together to create a complete promotional video for a concert.
Ironwood is just one part of Google's broader AI infrastructure strategy. The company also announced Cloud WAN, a managed wide-area network service that gives businesses access to Google's planet-scale private network infrastructure.