Photo of Great Salt Lake, taken in 2020, shows how the rail causeway built in 1959 has divided the lake into two bodies with very different chemistries. On the right is the lake's North Arm, which has no tributaries other than what flows through an opening in the causeway from the South Arm, and consequently has much higher salinity. The red tint comes from halophilic bacteria and archaea that thrive there.
Over the past 8,000 years, Utah's Great Salt Lake has been sensitive to changes in climate and water inflow. Now, new sediment isotope data indicate that human activity over the past 200 years has pushed the lake into a biogeochemical state not seen for at least 2,000 years.
A University of Utah geoscientist applied isotope analysis to sediments recovered from the lake's bed to characterize changes to the lake and its surrounding watershed back to the time the lake took its current shape from the vast freshwater Lake Bonneville that once covered much of northern Utah.
"Lakes are great integrators. They're a point of focus for water, for sediments, and also for carbon and nutrients," said Gabriel Bowen, a professor and chairman of the Department of Geology & Geophysics. "We can go to lakes like this and look at their sediments and they tell us a lot about the surrounding landscape."
Sedimentary records provide context for ongoing changes in terminal saline lakes, which support fragile, yet vital ecosystems, and may help define targets for their management, according to Bowen's new study, published last month in Geophysical Research Letters.
This research helps fill critical gaps in the lake's geological and hydrological records, coming at a time when the drought-depleted level of the terminal body has been hovering near its historic low.
"We have all these great observations, so much monitoring, so much information and interest in what's happening today. We also have a legacy of people looking at the huge changes in the lake that happened over tens of thousands and hundreds of thousands of years," Bowen said. "What we've been missing is the scale in the middle."
That is the span of time after Lake Bonneville receded to become Great Salt Lake, running up through the first arrival of white settlers in Utah.
By analyzing oxygen and carbon isotopes preserved in lake sediments, the study reconstructs the lake's water and carbon budgets through time. Two distinct, human-driven shifts stand out:
- Mid-19th century – Coinciding with Mormon settlement in 1847, irrigation rapidly greened the landscape around the lake, increasing the flow of organic matter into the lake and altering its carbon cycle.
- Mid-20th century – Construction of the railroad causeway in 1959 disrupted water flow between the lake's north and south arms, which turned Gilbert Bay from a terminal lake to an open one that partially drained into Gunnison Bay, altering the salinity and water balance to values rarely seen in thousands of years.
The new study examines two sets of sediment cores extracted from the bed of Great Salt Lake, each representing different timescales. The top 10 meters of the first core, drilled in 2000 south of Fremont Island, contain sediments washed into the lake over the past 8,000 years.
The other samples, recovered by the U.S. Geological Survey, represent only the upper 30 centimeters of sediments, deposited in the last few hundred years.
"The first gives us a look at what was happening for the 8,000 years before the settlers showed up here," Bowen said. "The second are these shallower cores that allow us to see how the lake changed after the arrival of the settlers."
Bowen subjected these lakebed sediments at varying depths to an analysis that determines isotope ratios of carbon and oxygen, shedding light on the landscape surrounding the lake and the water in the lake at varying points in the past.
"The carbon tells us about the biogeochemistry, about how the carbon cycles through the lake, and that's affected by things like weathering of rocks that bring carbon to the lake and the vegetation in the watershed, which also contributes carbon that dissolves into the water and flows to the lake," he said.
Bowen's analysis documented a sharp change in carbon, indicating profound changes that coincided with the arrival of Mormon pioneers in the Salt Lake Valley, where they introduced irrigated agriculture to support a rapidly growing community.
"We see a big shift in the carbon isotopes, and it shifts from values that are more indicative of rock weathering, carbon coming into the lake from dissolving limestone, toward more organic sources, more vegetation sources," Bowen said.
The new carbon balance after settlement was unprecedented during the 8,000 years of record following the demise of Lake Bonneville.
Next, Bowen's oxygen isotope analysis reconstructed the lake's water balance over time.
"Essentially, it tells us about the balance of evaporation and water inflow into the lake. As the lake is expanding, the oxygen isotope ratio goes down. As the lake shrinks, it goes up, basically telling us about the rate of change of the lake volume. We see little fluctuations, but nothing major until we get to 1959."
That's the year Union Pacific built a 20-mile causeway to replace a historic rail trestle, dividing the lake's North Arm, which has no tributaries, from its South Arm, also known as Gilbert Bay, which receives inflow from three rivers. Water flows through a gap in the causeway into the North Arm, rendering the South Arm an open system.
"We changed the hydrology of the lake fundamentally and gave it an outflow. We see that really clearly in the oxygen isotopes, which start behaving in a different way," he said. Counterintuitively, the impact of this change was to make Gilbert Bay waters fresher than they would have been otherwise, buying time to deal with falling lake levels and increasing salinity due to other causes.
"If we look at the longer time scale, 8,000 years, the lake has mostly been pinned at a high evaporation state. It's been essentially in a shrinking, consolidating state throughout that time. And that only reversed when we put in the causeway."
Journal Reference:
Multi-millennial context for post-colonial hydroecological change in Great Salt Lake.
The future of typography is uncertain:
Monotype is keen for you to know what AI might do in typography. As one of the largest type design companies in the world, Monotype owns Helvetica, Futura, and Gill Sans — among 250,000 other fonts. In the typography giant's 2025 Re:Vision trends report, published in February, Monotype devotes an entire chapter to how AI will result in a reactive typography that will "leverage emotional and psychological data" to tailor itself to the reader. It might bring text into focus when you look at it and soften when your gaze drifts. It could shift typefaces depending on the time of day and light level. It could even adapt to reading speeds and emphasize the important portions of online text for greater engagement. AI, the report suggests, will make type accessible through "intelligent agents and chatbots" and let anyone generate typography regardless of training or design proficiency. How that will be deployed isn't certain, possibly as part of proprietarily trained apps. Indeed, how any of this will work remains nebulous.
Monotype isn't alone in this kind of speculation. Typographers are keeping a close eye on AI as designers start to adopt tools like Midjourney for ideation and Replit for coding, and explore the potential of GPTs in their workflow. All over the art and design space, creatives are joining the ongoing gold rush to find a use case for AI in type design. This search continues both speculatively and, in some places, adversarially, as creatives push back against the idea that creativity itself is the bottleneck that we need to optimize out of the process.
That idea of optimization echoes where we were a hundred years ago. In the early 20th century, creatives came together at the Deutscher Werkbund (German alliance of craftspeople) to debate the implications of Europe's rapid industrialization for art and typography. Some of those artists rejected the idea of mass production and what it offered artists, while others went all in, leading to the founding of the Bauhaus.
The latter posed multiple vague questions about what the industrialization of typography might mean, with few real ideas of how those questions might be answered. Will typography remain on the page, or will it take advantage of advances in radio to be both text and sound? Could we develop a universal typeface applicable to any and all contexts? In the end, those experiments amounted to little and the questions were set aside; the real advances came in the efficiency of both manufacturing and the design process. Monotype might be reopening those old questions, but it is still realistic about AI in the near future.
[...] But the broader possibilities, Nix says, are endless, and that's what makes being a typographer now so exciting. "I think that at either end of the parentheses of AI are human beings who are looking for novel solutions to problems to use their skills as designers," he says. "You don't get these opportunities many times in the course of one's life, to see a radical shift in the way technology plays within not only your industry, but a lot of industries."
Not everyone is sold. For Zeynep Akay, creative director at typeface design studio Dalton Maag, the results simply aren't there to justify getting too excited. That's not to say Dalton Maag rejects AI; the assistive potential of AI is significant. Dalton Maag is exploring using AI to mitigate the repetitive tasks of type design that slow down creativity, like building kern tables, writing OpenType features, and diagnosing font issues. But many designers remain tempered about the prospect of relinquishing creative control to generative AI.
"It's almost as if we are being gaslighted into believing our lives, or our professions, or our creative skills are ephemeral," Akay says. She is yet to see how its generative applications promise a better creative future. "It's a future in which, arguably, all human intellectual undertaking is shed over time, and handed over to AI — and what we gain in return isn't altogether clear," she adds.
[...] That shift to digital type was the result of a clear and discernible need to improve typographic workflow from setting type by hand to something more immediate, Akay says. In the current space, however, we've arrived at the paintbrush before knowing how the canvas appears. As powerful as AI could be, where in our workflow it should be deployed is yet to be understood — if it should be deployed at all, given the less-than-stellar results we're seeing in the broader spectrum of generative AI. That lack of direction makes her wonder whether a better analog isn't the dot-com bubble of the late 1990s.
[...] Both Nix and Akay agree a similar crash around AI might actually be beneficial in pushing some of those venture capitalist interests out of AI. For Nix, however, just because its practical need isn't immediately obvious doesn't mean it's not there or, at least, won't become apparent soon. Nix suggests that it may well be beyond the bounds of our current field of vision.
[...] Though that remains speculation. We are simply so early in this that the only AI tools we can actually demonstrate are font identification tools like WhatTheFont and related ideas like TypeMixer.xyz. It's not possible to accurately comprehend what such nascent technology will do based solely on what it does now — it's like trying to understand a four-dimensional shape. "What was defined as type in 1965 is radically different from what we define as type in 2025," Nix adds. "We're primed to know that those things are possible to change, and that they will change. But it's hard at this stage to sort of see how much of our current workflows we preserve, how much of our current understanding and definition of typography we preserve."

But as we explore, it's important not to get caught up in the spectacle of what it looks like AI can do. It may seem romantic to those who have already committed to AI at all costs, but Akay suggests this isn't just about mechanics, that creativity is valuable "because it isn't easy or fast, but rather because it is traditionally the result of work, consideration, and risk." We cannot put the toothpaste back in the tube, but, she adds, in an uncertain future and workflow, "that doesn't mean that it's built on firm, impartial foundations, nor does it mean we have to be reckless in the present."
NASA Challenge Winners Cook Up New Industry Developments - NASA:
NASA invests in technologies that have the potential to revolutionize space exploration, including the way astronauts live in space. Through the Deep Space Food Challenge, NASA, in partnership with CSA (Canadian Space Agency), sought novel food production systems that could provide long-duration human space exploration missions with safe, nutritious, and tasty food. Three winners selected last summer are now taking their technology to new heights – figuratively and literally – through commercial partnerships.
Interstellar Lab of Merritt Island, Florida, won the challenge's $750,000 grand prize for its food production system NuCLEUS (Nutritional Closed-Loop Eco-Unit System), by demonstrating an autonomous operation growing microgreens, vegetables, and mushrooms, as well as sustaining insects for use in an astronaut's diet. To address the requirements of the NASA challenge, NuCLEUS includes an irrigation system that sustains crop growth with minimal human intervention. This end-to-end system supplies fresh ingredients to support astronauts' health and happiness, with an eye toward what the future of dining on deep space missions to Mars and the Moon may look like.
Since the close of the challenge, Interstellar Lab has partnered with aerospace company Vast to integrate a spinoff of NuCLEUS, called Eden 1.0, on Haven-1, a planned commercial space station. Eden 1.0 is a plant growth unit designed to conduct research on plants in a microgravity environment using functions directly stemming from NuCLEUS.
"The NASA Deep Space Food Challenge was a pivotal catalyst for Interstellar Lab, driving us to refine our NuCLEUS system and directly shaping the development of Eden 1.0, setting the stage for breakthroughs in plant growth research to sustain life both in space and on Earth," said Barbara Belvisi, founder and CEO of Interstellar Lab.
Team SATED (Safe Appliance, Tidy, Efficient & Delicious) of Boulder, Colorado, earned a $250,000 second prize for its namesake appliance, which creates an artificial gravitational force that presses food ingredients against its heated inner surface for cooking. The technology was developed by Jim Sears, who entered the contest as a one-person team and has since founded the small business SATED Space LLC.
At the challenge finale event, the technology was introduced to the team of world-renowned chef and restaurant owner, José Andrés. The SATED technology is undergoing testing with the José Andrés Group, which could add to existing space food recipes that include lemon cake, pizza, and quiche. The SATED team also is exploring partnerships to expand the list of ingredients compatible with the appliance, such as synthetic cooking oils safe for space.
Delicious food was a top priority in the Deep Space Food Challenge. Sears noted the importance of food that is more than mere sustenance. "When extremely high performance is required, and the situations are demanding, tough, and lonely, the thing that pulls it all together and makes people operate at their best is eating fresh cooked food in community."
Team Nolux, formed from faculty members, graduate students, and undergraduate students from the University of California, Riverside, also won a $250,000 second prize for its artificial photosynthesis system. The Nolux system – whose name means "no light" – grows plant- and fungal-based foods in a dark chamber, using acetate to chemically stimulate photosynthesis without light, a capability that could prove valuable in space, where access to sunlight is limited.
Some members of the Nolux team are now commercializing select aspects of the technology developed during the challenge. These efforts are being pursued through a newly incorporated company focused on refining the technology and exploring market applications.
A competition inspired by NASA's Deep Space Food Challenge will open this fall.
Stay tuned for more information: https://www.nasa.gov/prizes-challenges-and-crowdsourcing/centennial-challenges/
Radio Waves Can Strengthen Sense of Smell - Neuroscience News:
Our sense of smell is more important than we often realize. It helps us enjoy food, detect danger like smoke or gas leaks, and even affects memory and emotion.
Many people — especially after COVID-19, aging, or brain injury — suffer from a loss of smell. However, there are very few effective treatments, and those that exist often use strong scents or medicines that cause discomfort in patients.
In a study published this week in APL Bioengineering, by AIP Publishing, researchers from Hanyang University and Kwangwoon University in South Korea introduced a simple and painless way to improve our sense of smell using radio waves.
Unlike traditional aroma-based therapy, which indirectly treats smell loss by exposing the patient to chemicals, radio waves can directly target the part of our brain responsible for smell, without causing pain.
"The method is completely noninvasive — no surgery or chemicals needed — and safe, as it does not overheat the skin or cause discomfort," author Yonwoong Jang said.
In the study, the team asked volunteers with a healthy sense of smell to sit while a small radio antenna was placed near, but not touching, their forehead. For five minutes, this antenna gently sent out radio waves to reach the smell-related nerves deep in the brain.
Before and after the short treatment, the authors tested how well the volunteers could smell very faint odors, like diluted alcohol or fruit scents, using pen-shaped odor dispensers called Sniffin' Sticks. They also recorded the volunteers' brain signals to see how active their smell nerves were.
The team found that their method improved subjects' sense of smell for over a week after just one treatment.
"This study represents the first time that a person's sense of smell has been improved using radio waves without any physical contact or chemicals, and the first attempt to explore radio frequency stimulation as a potential therapy for neurological conditions," Jang said.
The results of the current study, which focused on people with a normal sense of smell, could help professionals such as perfumers, chefs, or coffee tasters, who need to distinguish aromatic subtleties. The method could also be used to preserve or even enhance the sense of smell.
As an important next step, the team plans to conduct a similar study on individuals with olfactory dysfunction, such as anosmia (complete loss of smell) or hyposmia (reduced sense of smell).
"This will help us determine whether the treatment can truly benefit those who need it most," Jang said.
Journal Reference: Junsoo Bok, Eun-Seong Kim, Juchan Ha, et al., Non-contact radiofrequency stimulation to the olfactory nerve of human subjects [OPEN], APL Bioeng. 9, 036112 (2025) https://doi.org/10.1063/5.0275613
Creative Commons has become an official UNESCO NGO partner. UNESCO is the United Nations agency that advances international cooperation in the fields of education, science, culture, and communication.
This new, formal status is an important recognition of the synergies between our two organizations and of our shared commitment to openness as a means to benefit everyone worldwide. As an official NGO partner, Creative Commons (CC) will now have the opportunity to contribute to UNESCO’s program and to interact with other official partner NGOs with common goals. In particular, we look forward to:
- Participating in UNESCO meetings and consultations on various subjects core to CC's mission. This will give us a seat at the table to advocate for the communities we serve and share our expertise on openness, the commons, and access to knowledge.
- Participating in UNESCO's governing bodies in an observer capacity. This will enable us to deliver official statements on matters within our sphere of expertise and contribute to determining UNESCO's policies and main lines of work, including its programs and budget.
- Taking part in consultations about UNESCO's strategy and program and being involved in UNESCO's programming cycle. This will give us opportunities to communicate our views and suggestions on proposals by the Director-General.
Previously:
(2022) New UNESCO Flagship Report Calls for Reinventing Education
Commerce Secretary Howard Lutnick delivered major news on Friday, confirming that the United States has finalized an investment deal with Intel, securing a 10% ownership stake in the semiconductor powerhouse. This development marks a significant step in bolstering America's position in global technology amid ongoing concerns about supply chain vulnerabilities and competition from abroad:
The agreement stems from negotiations tied to the 2022 CHIPS and Science Act, which aimed to revitalize domestic chip production. Under the terms, the U.S. gains a nonvoting equity position in Intel in return for federal funding support.
While specific financial details remain under wraps, the move aligns with efforts to ensure taxpayer dollars yield tangible returns for national interests. Intel, for its part, has committed billions to constructing advanced manufacturing facilities in Ohio, with full operations expected by 2030. This follows an $8 billion grant finalized last fall to accelerate those projects.
[...] Critics from the left may decry increased government involvement in private enterprise, but proponents argue it's essential for safeguarding national security in an era of geopolitical tensions. As Lutnick noted, the pact benefits both Intel and the public, positioning the U.S. to lead in semiconductors—a sector vital for everything from consumer electronics to defense systems.
This deal could set a precedent for future public-private partnerships, ensuring that American ingenuity drives global progress while keeping strategic assets firmly under domestic control. With operations ramping up in the coming years, the long-term impacts on the economy and technology landscape will be worth watching closely.
Intel press release. Also at Politico, Newsweek and NBC News.
Previously: Trump Administration Considering US Government Purchase of Stake in Intel
- "Quiet cracking" is the new workplace phenomenon sweeping offices. As AI looms over jobs and promotions stall, workers' mental health is quietly fraying. For employers, it has resulted in a staggering $438 billion loss in global productivity in the past year alone. But not all hope is lost. A career expert tells Fortune there are ways for managers and employees to course-correct.
Workers are down in the dumps about a lack of career growth opportunities and emptying offices as companies slash staffers to make way for AI, all while being put under constant pressure to do more with less.
Scared of speaking out and putting their neck on the line in a dire job climate, staff are silently but massively disengaging with their employers: Welcome to "quiet cracking."
The latest workplace phenomenon sees staff showing up and doing their job but mentally and emotionally struggling. About 54% of employees report feeling unhappy at work, with the frequency ranging from occasionally to constantly, according to a 2025 report from TalentLMS.
"The telltale signs of quiet cracking are very similar to burnout. You may notice yourself lacking motivation and enthusiasm for your work, and you may be feeling useless, or even angry and irritable," Martin Poduška, editor in chief and career writer for Kickresume, tells Fortune. "These are all common indicators of quiet cracking, and they gradually get worse over time."
Unlike "quiet quitting," this decline in productivity from workers isn't intentional. Instead, it's caused by feeling worn down and unappreciated by their employers. And oftentimes, as with burnout, they don't even register it creeping up on them until it's too late. But feeling unable to quit in protest because of the current job market, it's left them ultimately stuck and unhappy in their roles.
A fleet of unhappy workers may sound easy to spot, but the problem is sneaking up on workplaces without much course correction.
Last year, the proportion of engaged employees globally dropped from 23% to 21%—a similar dip in enthusiasm seen during the COVID-19 lockdown—costing the world economy about $438 billion in lost productivity, according to a 2025 report from Gallup.
Quiet cracking isn't only creating a bad culture for employees to work in, but the trend is also hitting businesses hard. It's imperative that bosses seize the moment to develop an engagement strategy before the problem festers into a ticking time bomb. And employees can also make adjustments to better advocate for their own career happiness.
"It isn't obvious when quiet cracking happens," Poduška explains. "You may be starting to quietly crack right now, but you wouldn't know as this type of burnout takes some time for others, and even you, to notice."
The current state of the workplace may sound bleak, but not all hope is lost. A career expert tells Fortune there are ways to spot fissures in company culture before employees are fully down in the dumps, and managers need to be on guard.
"If you've noticed an employee becoming more and more disengaged with their work, it may be best to schedule a time where you can discuss how they feel," Poduška says. "Setting them new tasks, providing new learning opportunities, and simply having an honest conversation could steer things back in the right direction."
A good boss can make or break company culture. Among employees who experience quiet cracking, 47% say their managers do not listen to their concerns, according to the TalentLMS study. But by simply sparking a conversation on the issue, supervisors can get staffers back on track to be happy at work. Alongside having an honest conversation, managers should also show interest in the development of their direct reports. Training workers can help show that the company is interested in their career advancement; about 62% of staffers who aren't quiet cracking receive training, compared to 44% of those who frequently or constantly experience the feeling.
"When employee training is prioritized, it signals care, investment, and belief in people's potential," the TalentLMS report notes. "It fuels motivation, builds capability, and creates a culture where people want to contribute—and stay. Training isn't just about skill-building; it's an antidote to disengagement. A catalyst for connection."
Managers aren't the only ones who can fight workplace disengagement; employees can also take steps to combat their own unhappiness.
"How can quiet cracking be avoided? For staff, finding out the root cause of your unhappiness might be the key to stop quiet cracking in its tracks," Poduška explains. "If you feel like there are no opportunities for progression with your role, you may find it worthwhile to talk to your manager about a development plan. This can give you something to work toward, which may help combat boredom and spark your motivation."
However, not every company is going to be invested in developing their workers, even if they voice the need for it. In that case, Poduška advises that staffers take a hard look at the business they work for. He recommends that employees question if their jobs feel sustainable and if they feel adequately supported by their teams. If not, a new employer—or even career—could be the answer.
"Another way to stop quiet cracking is to change things up. You could ask yourself if the role you're currently in is right for you," Poduška says. "A total career pivot may be the answer to quiet cracking in some cases, or for others, a switch into another department might be the best solution. Some, however, may just need something new and fresh to work on."
South Australia experienced a state-wide blackout in 2016 due to a severe storm that damaged critical electricity transmission infrastructure and left 850,000 customers without power. Most electricity supplies were restored within eight hours, but it was a major event and prompted a multi-agency response involving emergency services and the Australian Defence Force.
[...] Historically, Australia has been heavily reliant on gas and coal generator units for system restart after a blackout, but those units are quickly reaching their end-of-life.
The grid has also changed significantly in the last decade alone, and today's electricity network looks very different, with large commercial wind and solar farms making up a higher percentage of Australia's generation mix every year.
Sorrell's work looks at how power systems can be restarted using large-scale, grid-forming batteries storing power from wind and solar sources as the primary restart source. While he recognises restarting the grid is not something most renewable plants were intentionally designed for in the first place, he remains confident in their ability.
"We're 100 per cent moving in a direction where large-scale batteries are going to feature prominently, if not be the primary black starter of the grid after major blackouts," Sorrell said.
During the South Australian blackout, severe weather damaged powerlines and subsequently nearly all wind turbines across the state shut down in quick succession. This was caused by a protection setting unknown to operators. Losing the turbines caused a massive energy imbalance, and with far too much load for the generation available the system collapsed. Within seconds, the whole state lost power.
"It's not because it's wrong for those protection devices to be there. They're there for very good reasons," Sorrell said.
"What the problem tends to be, and what was the case in South Australia, was that despite being compliant with existing standards, these particular settings were not present in the models that the manufacturers provided."
This meant the equipment was not being correctly represented, either in technical standards or in the simulation models that power system operators need, especially in understanding extreme circumstances.
Sorrell said there has since been a concerted effort across the industry to implement new standards in modelling so that they accurately represent the equipment in the field and their performance.
"Australia is a world-leader for setting modelling and performance standards," he said.
In his latest System Restoration and Black Start report, Sorrell used these next-generation computer models and simulations to explore how large-scale batteries, wind and solar can actively participate in system restart.
Traditionally, it has been thought that large coal or gas generators have more capability and that large amounts of wind and solar in Australia will make our networks less stable.
CSIRO Power Systems Researcher, Dr Thomas Brinsmead, said one of the more interesting outcomes from the latest report is that this is not necessarily the case when it comes to restarting after a blackout.
"The capability of batteries with grid-forming inverter technology is better at supporting system restart than traditional black-start generators in many respects," Thomas said.
The report found that grid-forming battery technology was capable of energising far larger areas of the network than an equivalent synchronous generator, be it gas, coal or hydro.
A synchronous generator is a type of electrical machine used to convert mechanical energy into electrical energy. It's called 'synchronous' because its rotor rotates at the same speed as the magnetic field in the stator – this means it's perfectly in sync with the frequency of the electricity being produced.
Grid-forming batteries use smart inverters that mimic the behaviour of traditional generators – such as coal or gas turbines – but without the fuel-burning. The inverters work by converting direct current (DC) from renewable energy sources into controlled alternating current (AC) to supply power to the grid. These can also be used to help restart the grid after blackouts.
"What we consistently found, and I was genuinely surprised by these results, was that grid-forming batteries outperformed the synchronous generators in almost all areas," Sorrell said.
A big challenge with inverter-based technology is that it is current limited. This means that the amount of energy it injects into the system must be tightly controlled, otherwise the transistors within it will fail. Synchronous generators are not current limited to the same extent. They're capable of injecting immense amounts of current into the system as and when required.
"We originally subscribed to the idea that the best practice for re-energising a transformer during restart was to maintain a strong system capable of supplying that massive inrush of current to get it going," Sorrell said.
"But what we found was the current-limited nature of grid-forming inverters might actually be helping in these circumstances. Because those inverters, they're ramping to their maximum current and then they're staying at that level for longer, resulting in these transformers being gradually re-energised over a second or more."
This inherently gradual re-energisation from the inverters allowed large transformers to be re-energised without tripping network protection mechanisms more reliably than traditional rotating machine restart sources such as coal, gas or hydro. The current-limited nature of the inverters, although generally seen as a drawback of the technology, may be beneficial in this situation.
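The transformer re-energisation behaviour described here can be pictured with a rough numerical sketch: a synchronous machine meets the inrush demand almost immediately at a very high peak current, while a current-limited grid-forming inverter ramps to its ceiling and holds there, energising the core more gradually and with a far lower peak. The per-unit numbers, ramp rate, and flux threshold below are made-up illustrative values, not figures from the CSIRO report.

    # Rough sketch (made-up per-unit values) of the contrast described above.
    def synchronous_capability(t):
        # A synchronous machine can briefly supply several times its rating,
        # meeting the transformer's magnetising inrush almost instantly.
        return 6.0

    def inverter_capability(t, limit=1.2, ramp_rate=6.0):
        # A grid-forming inverter ramps to its current ceiling and holds it,
        # so the core is energised gradually rather than all at once.
        return min(limit, ramp_rate * t)

    def energise(capability, required_flux=1.0, dt=0.001):
        """Integrate supplied current (a crude flux proxy) until the core is energised."""
        flux, t, peak = 0.0, 0.0, 0.0
        while flux < required_flux:
            current = capability(t)
            peak = max(peak, current)
            flux += current * dt
            t += dt
        return peak, t

    for label, capability in [("synchronous", synchronous_capability),
                              ("grid-forming", inverter_capability)]:
        peak, t = energise(capability)
        print(f"{label:>13}: peak current {peak:.1f} p.u., energised in {t:.2f} s")

In this simplified picture the inverter takes a little longer to energise the core but never demands the huge current spike that can trip network protection, which is the qualitative benefit the report describes.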
However, a grid with large amounts of solar, especially on rooftops, is not all good news when it comes to system restarts.
It's important to have a steady load during system restart, especially in residential areas that rely heavily on electricity.
However, researchers discovered that during the early stages of system restart, the use of large-scale grid-forming batteries as the primary source may cause rooftop solar to become unstable. This happened at lower penetration levels when compared to the use of traditional restart sources.
These studies concluded that although batteries are more flexible than coal, gas, or even hydro generators in accommodating changing load, they don't initially provide the same system strength to rooftop solar.
Thomas said that there is presently still a need for black-start generators to be available in the National Electricity Market (NEM) as the primary source for restarts.
"We don't like blackouts to happen, but we want to be very confident that when they do, we are able to get things started again as soon as possible," he said.
However, the work continues to build confidence that the same restoration function can and will eventually be performed by newer technologies.
Given the published retirement schedule of synchronous machine-based generation in the NEM, approximately 2 GVA of new grid-forming technology will be required by 2028 to maintain network restoration capability equivalent to today's.
This is considerably lower than the capacity of synchronous machine-based generation being retired. This could be viewed as already recognising the greater capability of grid-forming inverters to restore network elements without activating protection mechanisms.
During the next stage of the system restoration work, energy system experts will investigate how new renewable energy zones – which include solar and wind farms – throughout the country can play an active role in system restoration. They will engage a transmission provider to devise a realistic test plan template for grid-forming batteries to restart a system. A successful test of a battery restarting a portion of the network in Australia is not far away.
"The industry is learning so quickly," Sorrell said.
"From the inception of distributed electricity to when renewables came on board, we had 100 years. The world had 100 years to get electricity right. Meanwhile, we've had just two decades to go from the idea of large-scale wind and solar to getting it fully functional in the grid."
The Stanford Report has an interesting article on a brain interface:
Neurosurgery Assistant Professor Frank Willett, PhD, and his teammates are using brain-computer interfaces, or BCIs, to help people whose paralysis renders them unable to speak clearly.
The brain's motor cortex contains regions that control movement – including the muscular movements that produce speech. A BCI uses tiny arrays of microelectrodes (each array is smaller than a baby aspirin), surgically implanted in the brain's surface layer, to record neural activity patterns directly from the brain. These signals are then fed via a cable hookup to a computer algorithm that translates them into actions such as speech or computer cursor movement.
To decode the neural activity picked up by the arrays into words the patient wants to say, the researchers use machine learning to train the computer to recognize repeatable patterns of neural activity associated with each "phoneme" – the tiniest units of speech – then stitch the phonemes into sentences.
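As a rough illustration of what "classify windows of neural activity into phonemes, then stitch them together" can look like, here is a deliberately simplified sketch. The synthetic features, logistic-regression classifier, and four-phoneme vocabulary are placeholders chosen for illustration only; they are not the Stanford team's actual decoder.

    # Simplified phoneme-decoding sketch with synthetic data (placeholders only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    PHONEMES = ["HH", "EH", "L", "OW"]       # toy vocabulary ("hello")
    N_CHANNELS, N_WINDOWS = 64, 400          # electrodes x training windows
    rng = np.random.default_rng(0)

    # Fake training data: each window of neural features is labelled with the
    # phoneme being attempted at that moment, plus a distinct pattern per phoneme.
    y_train = rng.integers(0, len(PHONEMES), size=N_WINDOWS)
    patterns = rng.normal(size=(len(PHONEMES), N_CHANNELS))
    X_train = patterns[y_train] + rng.normal(size=(N_WINDOWS, N_CHANNELS))

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def decode_utterance(neural_windows):
        """Classify each window into a phoneme, then stitch the phonemes together."""
        labels = clf.predict(neural_windows)
        decoded = [PHONEMES[labels[0]]]
        for lab in labels[1:]:
            if PHONEMES[lab] != decoded[-1]:   # collapse consecutive repeats
                decoded.append(PHONEMES[lab])
        return "-".join(decoded)

    # Simulate a short attempted utterance and decode it.
    test_windows = patterns[[0, 0, 1, 2, 2, 3]] + rng.normal(scale=0.5, size=(6, N_CHANNELS))
    print(decode_utterance(test_windows))    # likely prints: HH-EH-L-OW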
Willett and his colleagues have previously demonstrated that, when people with paralysis try to make speaking or handwriting movements (even though they cannot, because their throat, lip, tongue and cheek muscles or the nerve connections to them are too weak), a BCI can pick up the resulting brain signals and translate them into words with high accuracy.
Recently, the scientists took another important step: They investigated brain signals related to "inner speech," or language-based but silent, unuttered thought.
Willett is the senior author, and postdoctoral scholar Erin Kunz, PhD, and graduate student Benyamin Meschede-Krasa are the co-lead authors of a new study about this exploration, published Aug. 14 in Cell.
Willett said:"Inner speech (also called 'inner monologue' or self-talk) is the imagination of speech in your mind – imagining the sounds of speech, the feeling of speaking, or both. We wanted to know whether a BCI could work based only on neural activity evoked by imagined speech, as opposed to attempts to physically produce speech. For people with paralysis, attempting to speak can be slow and fatiguing, and if the paralysis is partial, it can produce distracting sounds and breath control difficulties."
"We studied four people with severe speech and motor impairments who had microelectrode arrays placed in motor areas of their brain. We found that inner speech evoked clear and robust patterns of activity in these brain regions. These patterns appeared to be a similar, but smaller, version of the activity patterns evoked by attempted speech. We found that we could decode these signals well enough to demonstrate a proof of principle, although still not as well as we could with attempted speech. This gives us hope that future systems could restore fluent, rapid, and comfortable speech to people with paralysis via inner speech alone."
"The existence of inner speech in motor regions of the brain raises the possibility that it could accidentally 'leak out'; in other words, a BCI could end up decoding something the user intended only to think, not to say aloud. While this might cause errors in current BCI systems designed to decode attempted speech, BCIs do not yet have the resolution and fidelity needed to accurately decode rapid, unconstrained inner speech, so this would probably just result in garbled output. Nevertheless, we're proactively addressing the possibility of accidental inner speech decoding, and we've come up with several promising solutions."
"For current-generation BCIs, which are designed to decode neural activity evoked by attempts to physically produce speech, we demonstrated in our study a new way to train the BCI to more effectively ignore inner speech, preventing it from accidentally being picked up by the BCI. For next-generation BCIs that are intended to decode inner speech directly – which could enable higher speeds and greater comfort – we demonstrated a password-protection system that prevents any inner speech from being decoded unless the user first imagines the password (for example, a rare phrase that wouldn't otherwise be accidentally imagined, such as "Orange you glad I didn't say banana"). Both of these methods were extremely effective at preventing unintended inner speech from leaking out."
"Improved hardware will enable more neurons to be recorded and will be fully implantable and wireless, increasing BCIs' accuracy, reliability, and ease of use. Several companies are working on the hardware part, which we expect to become available within the next few years. To improve the accuracy of inner speech decoding, we are also interested in exploring brain regions outside of the motor cortex, which might contain higher-fidelity information about imagined speech – for example, regions traditionally associated with language or with hearing."
Once the system works, we could have forcible installation with no password to make you "spill the beans"...
They're cheap and grew up with AI ... so you're firing them why?
Amazon Web Services CEO Matt Garman has suggested that firing junior workers because AI can do their jobs is "the dumbest thing I've ever heard."
Garman made that remark in conversation [YouTube 51:35 -- JE] with AI investor Matthew Berman, during which he talked up AWS's Kiro AI-assisted coding tool and said he's encountered business leaders who think AI tools "can replace all of our junior people in our company."
That notion led to the "dumbest thing I've ever heard" quote, followed by a justification that junior staff are "probably the least expensive employees you have" and also the most engaged with AI tools.
"How's that going to work when ten years in the future you have no one that has learned anything," he asked. "My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have."
Naturally he thinks AI – and Kiro, natch – can help with that education.
Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.
"It's a silly metric," he said, because while organizations can use AI to write "infinitely more lines of code" it could be bad code.
"Often times fewer lines of code is way better than more lines of code," he observed. "So I'm never really sure why that's the exciting metric that people like to brag about."
That said, he's seen data that suggests over 80 percent of AWS's developers use AI in some way.
"Sometimes it's writing unit tests, sometimes it's helping write documentation, sometimes it's writing code, sometimes it's kind of an agentic workflow" in which developers collaborate with AI agents.
Garman said usage of AI tools by AWS developers increases every week.
The CEO also offered some career advice for the AI age, suggesting that kids these days need to learn how to learn – and not just learn specific skills.
"I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?"
Garman thinks that approach is necessary because technological development is now so rapid it's no longer sensible to expect that studying narrow skills can sustain a career for 30 years. He wants educators to instead teach "how do you think and how do you decompose problems", and thinks kids who acquire those skills will thrive.
Webb discovers a new moon orbiting Uranus:
Using NASA's James Webb Space Telescope, a team led by the Southwest Research Institute (SwRI) has identified a previously unknown moon orbiting Uranus, expanding the planet's known satellite family to 29. The detection was made during a Webb observation on Feb. 2, 2025.
"This object was spotted in a series of ten 40-minute long-exposure images captured by the Near-Infrared Camera (NIRCam)," said Maryame El Moutamid, a lead scientist in SwRI's Solar System Science and Exploration Division based in Boulder, Colorado. "It's a small moon but a significant discovery, which is something that even NASA's Voyager 2 spacecraft didn't see during its flyby nearly 40 years ago."
The newly discovered moon is estimated to be just six miles (10 kilometers) in diameter, assuming it has a similar reflectivity (albedo) to Uranus's other small satellites. That tiny size likely rendered it invisible to Voyager 2 and other telescopes.
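The "assuming a similar albedo" caveat matters because, for an unresolved point of light, brightness constrains only the product of size and reflectivity. A standard conversion from absolute magnitude H and geometric albedo p to diameter is D = (1329 km / sqrt(p)) * 10^(-H/5); the magnitude in the sketch below is a placeholder chosen only to show how the inferred size shifts with the assumed albedo.

    # How an albedo assumption sets the size estimate (H value is a placeholder).
    import math

    def diameter_km(abs_magnitude, albedo):
        """Diameter of an unresolved body from absolute magnitude and geometric albedo."""
        return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

    H_placeholder = 13.7                      # hypothetical, for illustration only
    for albedo in (0.04, 0.07, 0.10):         # illustrative range for dark small satellites
        d = diameter_km(H_placeholder, albedo)
        print(f"albedo {albedo:.2f} -> diameter ~{d:.0f} km")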
"No other planet has as many small inner moons as Uranus, and their complex inter-relationships with the rings hint at a chaotic history that blurs the boundary between a ring system and a system of moons," said Matthew Tiscareno of the SETI Institute in Mountain View, California, a member of the research team.
The new moon is the 14th member of the intricate system of small moons orbiting inward of the largest moons, Miranda, Ariel, Umbriel, Titania, and Oberon. (All the moons of Uranus are named after characters from Shakespeare and Alexander Pope.)
A name for the newly found moon will need to be approved by the International Astronomical Union (IAU), the leading authority in assigning official names and designations to astronomical objects.
"Looking forward, the discovery of this moon underscores how modern astronomy continues to build upon the legacy of missions like Voyager 2, which flew past Uranus on Jan. 24, 1986, and gave humanity its first close-up look at this mysterious world. Now, nearly four decades later, the James Webb Space Telescope is pushing that frontier even farther."
It's been more than 200 years since the United States issued a letter of marque allowing privateers to attack the vessels of foreign nations, but those letters may return to empower cyber operators if a bill introduced in Congress actually manages to pass.
Arizona Republican David Schweikert introduced the Scam Farms Marque and Reprisal Authorization Act of 2025 in the House of Representatives last week. If signed into law, it would give the US President a lot of leeway in issuing letters of marque to create an armada of internet privateers.
Letters of marque were popular in the Age of Sail, with many eventual pirates getting their starts as privateers working for the US, UK, France, Spain, and other naval powers of the era. The US last issued letters of marque during the War of 1812, giving privateers the right to wage war against British vessels.
According to Schweikert's office's statement on the bill published today, Congress still has the authority to allow for the issuance of letters of marque and reprisal, and that's exactly what he thinks we should do to tackle the cyberthreats posed by foreign countries.
"Our current tools are failing to keep pace," Schweikert said. "This legislation allows us to effectively engage these criminals and bring accountability and restitution to the digital battlefield by leveraging the same constitutional mechanism that once helped secure our nation's maritime interests."
Schweikert called attention to growing cybercrime losses, like the $16.6 billion US citizens lost to scams last year, the highest in 25 years of record keeping, as evidence that our current ideas haven't solved the problem.
The text of the bill gives the President the authority to issue letters of marque and reprisal against anyone they determine "is a member of a criminal enterprise or any conspirator associated with an enterprise involved in cybercrime who is responsible for an act of aggression against the United States."
That includes foreign governments.
The bill doesn't limit the number of cyber privateers the President could commission, either, with the size of such a force only restricted to what they judge is required "to employ all means reasonably necessary to seize outside the geographic boundaries of the United States and its territories the person and property of any individual or foreign government" involved in hostile cyber activities.
As was the case with ocean-going privateers, Trump's cyber warriors would be authorized to "recover stolen assets, prevent future attacks, and defend critical infrastructure," Schweikert's office said. Age of Sail privateers were often allowed to keep their seized assets, though that isn't mentioned in the bill.
Either way, Schweikert said, "Americans deserve protection from digital predators who exploit outdated laws and hide in foreign jurisdictions."
Passage of the bill is far from a sure thing, of course. Foreign governments facing a force of government-sanctioned American hackers might not greet the matter too kindly, either.
[...] Giving marque to US ethical hackers, Shedd added, would help the US protect not only US citizens, but those being abused abroad, too. Whether Age of Sail tactics are worth pursuing in the Age of Cyber is now up to Congress to decide.
https://phys.org/news/2025-08-styling-hair-products-billions-nanoparticles.html
A Purdue research team led by Nusrat Jung, an assistant professor in the Lyles School of Civil and Construction Engineering, and her Ph.D. student Jianghui Liu, found that a 10–20-minute heat-based hair care routine exposes a person to upward of 10 billion nanoparticles that are directly deposited into their lungs. These particles can lead to serious health risks such as respiratory stress, lung inflammation and cognitive decline.
The team's findings are published in Environmental Science & Technology.
"This is really quite concerning," Jung said. "The number of nanoparticles inhaled from using typical, store-bought hair-care products was far greater than we ever anticipated."
Until this study, Jung said, no real-time measurements of nanoparticle formation during heat-based hair styling had been conducted in full-scale residential settings. Their research addresses this gap by examining temporal changes in indoor nanoparticle number concentrations and size distributions during realistic heat-based hair styling routines.
"By providing a detailed characterization of indoor nanoparticle emissions during these personal care routines, our research lays the groundwork for future investigations into their impact on indoor atmospheric chemistry and inhalation toxicity," Jung said. "Studies of this kind have not been done before, so until now, the public has had little understanding of the potential health risks posed by their everyday hair care routines."
What makes these hair care products so harmful, Liu said, is their combination with large amounts of heat from styling appliances such as curling irons and straighteners. When heated above 300 degrees Fahrenheit, the chemicals are not only rapidly released into the air but also lead to the formation of substantial numbers of new airborne nanoparticles.
"Atmospheric nanoparticle formation was especially responsive to these heat applications," Liu said. "Heat is the main driver—cyclic siloxanes and other low-volatility ingredients volatilize, nucleate and grow into new nanoparticles, most of them smaller than 100 nanometers."
In a study Jung published in 2023, her team found that heat significantly increased emissions of volatile chemicals such as decamethylcyclopentasiloxane (aka D5 siloxane) from hair care routines. D5 siloxane in particular was identified as a compound of concern when inhaled.
"When we first studied the emissions from hair care products during heat surges, we focused on the volatile chemicals that were released, and what we found was already quite concerning," Jung said. "But when we took an even closer look with aerosol instrumentation typically used to measure tailpipe exhaust, we discovered that these chemicals were generating bursts of anywhere from 10,000 to 100,000 nanoparticles per cubic centimeter."
Jung said that D5 siloxane is an organosilicon compound and is often listed first or second in the ingredient lists of many hair care products, indicating it can be among the most abundant ingredients. It has become a common ingredient over the past few decades in many personal care products due to its low surface tension, inertness, high thermal stability and smooth texture.
According to the European Chemicals Agency, D5 siloxane is classified as "very persistent, very bioaccumulative." And while the test results on laboratory animals are already concerning, Jung said, there is little information on its impact on humans. The chemical has already been restricted in wash-off cosmetic products in the European Union for this reason.
"D5 siloxane has been found to lead to adverse effects on the respiratory tract, liver and nervous system of laboratory animals," Jung said previously. However, under high heat, cyclic siloxanes and other hair care product ingredients can volatilize and contribute to the formation of large numbers of airborne nanoparticles that deposit efficiently throughout the respiratory system. These secondary emissions and exposures remain far less characterized than the primary chemical emissions.
"And now it appears that the airborne hazards of these products—particularly 'leave-on' formulations designed to be heat-resistant, such as hair sprays, creams and gels—are even greater than we expected," Liu said.
According to the report, respiratory tract deposition modeling indicated that more than 10 billion nanoparticles could deposit in the respiratory system during a single hair styling session, with the highest dose occurring in the pulmonary region—the deepest part of the lungs. Their findings identified heat-based hair styling as a significant indoor source of airborne nanoparticles and highlight previously underestimated inhalation exposure risks.
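The reported dose is consistent with a simple back-of-envelope estimate based on the concentration range quoted earlier (10,000 to 100,000 particles per cubic centimeter) and a 10-20 minute session. The breathing rate and deposition fraction in the sketch below are assumed typical values, not parameters from the study's deposition model.

    # Back-of-envelope deposited-dose estimate (breathing rate and deposition
    # fraction are generic assumptions, not the study's model parameters).
    BREATHING_RATE_M3_PER_H = 0.54    # light activity, assumed
    DEPOSITION_FRACTION = 0.5         # rough retained fraction for sub-100 nm particles, assumed

    def deposited_particles(conc_per_cm3, minutes):
        conc_per_m3 = conc_per_cm3 * 1e6                      # 1 m^3 = 1e6 cm^3
        inhaled_volume_m3 = BREATHING_RATE_M3_PER_H * minutes / 60.0
        return conc_per_m3 * inhaled_volume_m3 * DEPOSITION_FRACTION

    for conc in (1e4, 1e5):
        for minutes in (10, 20):
            dose = deposited_particles(conc, minutes)
            print(f"{conc:8.0f} /cm^3 for {minutes:2d} min -> ~{dose:.1e} particles deposited")

At the upper end of the quoted concentration range this crude estimate lands near 10^10 particles, the same order of magnitude as the study's modeled figure.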
As for how to avoid putting oneself at risk of inhaling mixtures of airborne nanoparticles and volatile chemicals, Jung and Liu said the best course of action is simply to avoid using such products—particularly in combination with heating devices. If that is not possible, Jung recommends reducing exposure by using bathroom exhaust fans for better room ventilation.
"If you must use hair care products, limit their use and ensure the space is well ventilated," Liu said. "Even without heating appliances, better ventilation can reduce exposure to volatile chemicals, such as D5 siloxane, in these products."
To more fully capture the complete nanoparticle formation and growth process, Jung said future studies should integrate nano-mobility particle sizing instruments capable of detecting particles down to a single nanometer. The chemical composition of these particles should also be evaluated.
"By addressing these research gaps, future studies can provide a more holistic understanding of the emissions and exposures associated with heat-based hair styling, contributing to improved indoor air pollution assessments and mitigation strategies," Jung said.
Jung and Liu's experimental research was conducted in a residential architectural engineering laboratory that Jung designed: the Purdue zero Energy Design Guidance for Engineers (zEDGE) tiny house.
The zEDGE lab is a mechanically ventilated, single-zone residential building with a conditioned interior. A state-of-the-art high-resolution electrical low-pressure impactor (HR-ELPI+) from Jung's laboratory was used to measure airborne nanoparticles in indoor air in real time, second by second. In parallel, a proton transfer reaction time-of-flight mass spectrometer (PTR-TOF-MS) was used to monitor volatile chemicals in real time.
The hair care routine emission experiments were conducted during a measurement campaign in zEDGE over a period of several months, including three experiment types: realistic hair care experiments that replicate actual hair care routines in the home environment, hot plate emission experiments that explore the relationship between the temperature of the hair care tools and nanoparticle formation, and surface area emission experiments that investigate how hair surface area impacts nanoparticle emissions during hair care events.
For the realistic hair care routine emission experiments, participants were asked to bring their own hair care products and hair styling tools to replicate their routines in zEDGE. Prior to each experiment, the participants were instructed to separate their hair into four sections. The hair length of each participant was categorized as long hair (below the shoulder) or short hair (above the shoulder). The sequence of each experiment consisted of four periods, to replicate a real-life routine.
After hair styling, the participants had two minutes to collect the tools and leave zEDGE; this was followed by a 60-minute concentration decay period in which zEDGE was unoccupied, and the HR-ELPI+ monitored the decay in indoor nanoparticle concentrations. The experiments and subsequent analysis focused on the formation of nanoparticles and resulting exposure during and after active hair care routine periods.
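An unoccupied decay period of this kind lends itself to a simple first-order loss analysis: if ventilation and deposition dominate once styling stops, the indoor concentration follows C(t) = C0 * exp(-L * t), and the combined loss rate L can be estimated from a log-linear fit to the measured time series. The sketch below shows that calculation on synthetic data with assumed parameters; it is not the study's analysis code.

```python
# Minimal sketch of the kind of analysis a 60-minute decay period allows:
# estimating a first-order particle loss rate L from C(t) = C0 * exp(-L * t).
# The time series below is synthetic; the study used measured HR-ELPI+ data.
import numpy as np

rng = np.random.default_rng(0)
t_hours = np.arange(0, 61) / 60.0          # one sample per minute for 60 minutes
true_loss_per_hour = 2.5                   # assumed ventilation + deposition loss rate
c0 = 1e5                                   # assumed initial concentration (particles/cm^3)
conc = c0 * np.exp(-true_loss_per_hour * t_hours)
conc *= rng.lognormal(mean=0.0, sigma=0.05, size=t_hours.size)  # measurement noise

# Log-linear least-squares fit: ln C(t) = ln C0 - L * t
slope, intercept = np.polyfit(t_hours, np.log(conc), 1)
print(f"estimated loss rate: {-slope:.2f} per hour (true value: {true_loss_per_hour})")
```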
More information: Jianghui Liu et al, Indoor Nanoparticle Emissions and Exposures during Heat-Based Hair Styling Activities, Environmental Science & Technology (2025). DOI: 10.1021/acs.est.4c14384
New research ferments the perfect recipe for fine chocolate flavour - University of Nottingham:
Researchers have identified key factors that influence the flavour of chocolate during the cocoa bean fermentation process, a discovery that could offer chocolate producers a powerful tool to craft consistently high-quality, flavour-rich chocolate.
Scientists from the University of Nottingham's School of Biosciences examined how cacao bean temperature, pH, and microbial communities interact during fermentation and how these factors shape chocolate flavour. The team identified key microbial species and metabolic traits associated with fine-flavour chocolate and found that both abiotic factors (such as temperature and pH) and biotic factors (the microbial communities) are strong, consistent indicators of flavour development. The study has been published today in Nature Microbiology.
The quality and flavour of chocolate begin with the cacao bean, which is profoundly influenced by both pre- and post-harvest factors. Among these, fermentation is the first, and one of the most critical, steps after harvest. It lays the foundation for aroma development, flavour complexity, and the reduction of bitterness in the final chocolate product.
Dr David Gopaulchan, the first author of the paper, from the School of Biosciences explains: "Fermentation is a natural, microbe-driven process that typically takes place directly on cocoa farms, where harvested beans are piled in boxes, heaps, or baskets. In these settings, naturally occurring bacteria and fungi from the surrounding environment break down the beans, producing key chemical compounds that underpin chocolate's final taste and aroma. However, this spontaneous fermentation is largely uncontrolled. Farmers have little influence over which microbes dominate or how the fermentation process unfolds. As a result, fermentation, and thus the flavour and quality of the beans, varies widely between harvests, farms, regions, and countries."
The researchers wanted to find out whether this unstable, natural process could be replicated and controlled in the lab. Working with Colombian farmers during the fermentation process, they identified the factors that influence flavour. They were then able to use this knowledge to create a lab fermentation process and to develop a defined microbial community, a curated mix of bacteria and fungi, capable of replicating the key chemical and sensory outcomes of traditional fermentations. This synthetic community successfully mimicked the dynamics of on-farm fermentations and produced chocolate with the same fine-flavour characteristics.
Dr David Gopaulchan adds: "The discoveries we have made are really important for helping chocolate producers consistently maximise their cocoa crops, as we have shown they can rely on measurable markers such as specific pH, temperature, and microbial dynamics to reliably predict and achieve consistent flavour outcomes."
This research signals a shift from spontaneous, uncontrolled fermentations to a standardised, science-driven process. Just as starter cultures revolutionised beer and cheese production, cocoa fermentation is poised for its own transformation, powered by microbes, guided by data, and tailored for flavour excellence. By effectively domesticating the fermentation process, this work lays the foundation for a new era in chocolate production, where defined starter cultures can standardise fermentation, unlock novel flavour possibilities, and elevate chocolate quality on a global scale.
Journal Reference: Gopaulchan, D., Moore, C., Ali, N. et al. A defined microbial community reproduces attributes of fine flavour chocolate fermentation. Nat Microbiol (2025). https://doi.org/10.1038/s41564-025-02077-6
https://arstechnica.com/tech-policy/2025/08/t-mobile-claimed-selling-location-data-without-consent-is-legal-judges-disagree/
https://archive.ph/LBtay
A federal appeals court rejected T-Mobile's attempt to overturn $92 million in fines for selling customer location information to third-party firms.
The Federal Communications Commission last year fined T-Mobile, AT&T, and Verizon, saying the carriers illegally shared access to customers' location information without consent and did not take reasonable measures to protect that sensitive data against unauthorized disclosure. The fines relate to sharing of real-time location data that was revealed in 2018, but it took years for the FCC to finalize the penalties.
The three carriers appealed the rulings in three different courts, and the first major decision was handed down Friday. A three-judge panel at the US Court of Appeals for the District of Columbia Circuit ruled unanimously against T-Mobile and its subsidiary Sprint.
"Every cell phone is a tracking device," the ruling begins. "To receive service, a cell phone must periodically connect with the nearest tower in a wireless carrier's network. Each time it does, it sends the carrier a record of the phone's location and, by extension, the location of the customer who owns it. Over time, this information becomes an exhaustive history of a customer's whereabouts and 'provides an intimate window into [that] person's life.'"
Until 2019, T-Mobile and Sprint sold customer location information (CLI) to location information aggregators LocationSmart and Zumigo.
The carriers did not verify whether buyers obtained customer consent, the ruling said. "Several bad actors abused Sprint and T-Mobile's programs to illicitly access CLI without the customers' knowledge, let alone consent. And even after Sprint and T-Mobile became aware of those abuses, they continued to sell CLI for some time without adopting new safeguards," judges wrote.
Rather than deny the allegations, the carriers claimed that selling the data didn't violate the law, arguing that the FCC overstepped its authority. But the appeals court panel decided that the FCC acted properly:
[...] T-Mobile told Ars today that it is "currently reviewing the court's action" but did not provide further comment. The carrier could seek an en banc review before the full slate of the appeals court's judges, or ask the Supreme Court to review the case. Meanwhile, AT&T is challenging its fine in the 5th Circuit appeals court while Verizon is challenging in the 2nd Circuit.
[...] The carriers also argued that the device-location information, which is "passively generated when a mobile device pings cell towers to support both voice and data services," does not qualify as Customer Proprietary Network Information (CPNI) under the law. The carriers said the law "covers information relating to the 'location... of use' of a telecommunications service," and claimed that only call location information fits that description.
Judges faulted T-Mobile and Sprint for relying on "strained interpretations" of the statute. "We begin with the text. The Communications Act refers to the 'location... of use' of a telecommunications service, not the location of a voice call... Recall that cell phones connect periodically to cell towers, and that is what enables the devices to send and receive calls at any moment," the ruling said. In the judges' view, "a customer 'uses' a telecommunications service whenever his or her device connects to the carrier's network for the purpose of being able to send and receive calls. And the statute therefore does not narrow 'location... of use' to times when the customer is actively on a voice call."
Judges also weren't persuaded by the argument that the fines were too large. "The Carriers note that the Commission previously had imposed such large fines only in cases involving fraud or intentional efforts to mislead consumers, and they are guilty of neither form of misconduct," the ruling said. "The Commission reasonably explained, however, that the Carriers' conduct was 'egregious': Even after the Securus breach exposed Sprint and T-Mobile's safeguards as inadequate, both carriers continued to sell access to CLI under a broken system."