

posted by jelizondo on Tuesday August 26, @07:33PM

Kelp forests in Marine Protected Areas are more resilient to marine heatwaves:

Using four decades of satellite images, University of California, Los Angeles (UCLA) researchers have looked at the impacts Marine Protected Areas (MPAs) are having on kelp forests along the coast of California.

They found that although the overall effect of MPAs on kelp forest cover was modest, the benefits became clear in the aftermath of marine heatwaves in 2014-2016, when kelp forests within MPAs were able to recover more quickly, particularly in southern California.

"We found that kelp forests inside MPAs showed better recovery after a major climate disturbance compared to similar unprotected areas." Explained Emelly Ortiz-Villa, lead author of the study and a PhD researcher at UCLA Department of Geography.

"Places where fishing is restricted and important predators like lobsters and sheephead are protected saw stronger kelp regrowth. This suggests that MPAs can support ecosystem resilience to climate events like marine heatwaves."

Professor Rick Stafford, Chair of the British Ecological Society Policy Committee, who was not involved in the study, said: "It's great to see these results and they clearly show that local action to protect biodiversity and ecosystem function can help prevent changes caused by global pressures such as climate change.

"However, it also demonstrates the need for effective MPAs. In this study, all the MPAs examined regulated fishing activity, and this is not the case for many sites which are designated as MPAs worldwide – including many in the UK."

Kelp forests are found around coastlines all over the world, particularly in cool, temperate waters such as the Pacific coast of North America, the UK, South Africa, and Australia.

These complex ecosystems are havens for marine wildlife, including commercially important fish, and are one of the most productive habitats on Earth. They're also efficient in capturing carbon and protect coastlines by buffering against wave energy.

However, kelp forests across the west coast of North America have declined in recent years due to pressures such as marine heatwaves, made more frequent and intense by climate change, and predation from increasing numbers of sea urchins, which have benefitted from population collapses of the sea stars that prey on them.

Kyle Cavanaugh, a senior author of the study and professor in the UCLA Department of Geography and Institute of the Environment and Sustainability said: "Kelp forests are facing many threats, including ocean warming, overgrazing, and pollution. These forests can be remarkably resilient to individual stressors, but multi-stressor situations can overwhelm their capacity to recover. By mitigating certain stressors, MPAs can help enhance the resilience of kelp."

MPAs are designated areas of the ocean where human activity is limited to support ecosystems and the species living there. However, protections vary widely and while some areas are no-take zones, others have few restrictions or lack comprehensive management and enforcement. Many even allow destructive practices like bottom trawling.

Effective MPAs form a key part of the Kunming-Montreal Global Biodiversity Framework, agreed at COP15 in 2022, which commits nations to protecting at least 30% of oceans and land by 2030.

"Our findings can inform decisions about where to establish new MPAs or implement other spatial protection measures." said Kyle Cavanaugh. "MPAs will be most effective when located in areas that are inherently more resilient to ocean warming, such as regions with localized upwelling or kelp populations with higher thermal tolerance."

Emelly Villa added: "Our findings suggest that kelp forests could be a useful indicator for tracking the ecological health and climate resilience of protected areas and should be included in long-term monitoring strategies."

To understand the effects MPAs were having on kelp, the researchers used satellite data from 1984-2022 to compare kelp forests inside and outside of 54 MPAs along the California coast.

By matching each MPA with a reference site with similar environmental conditions, they were able to test whether MPAs helped kelp forests resist loss or recover from the extreme marine heatwaves that took place in the North Pacific between 2014 and 2016.
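
A minimal sketch (in Python) of the kind of paired MPA-versus-reference comparison described above; the site names, kelp-cover values, and recovery metric are purely illustrative and are not the study's data or code.

    # Illustrative sketch of a paired MPA-vs-reference recovery comparison.
    # Site names, values, and the recovery metric are hypothetical.
    import statistics

    # Fraction of pre-heatwave kelp canopy recovered a few years after the
    # 2014-2016 marine heatwave, for each MPA and its matched reference site.
    paired_recovery = {
        # site: (recovery inside MPA, recovery at matched reference site)
        "MPA_A": (0.85, 0.60),
        "MPA_B": (0.70, 0.72),
        "MPA_C": (0.95, 0.55),
    }

    differences = [inside - reference for inside, reference in paired_recovery.values()]
    mean_advantage = statistics.mean(differences)
    outperforming = sum(d > 0 for d in differences)

    print(f"Mean recovery advantage inside MPAs: {mean_advantage:+.2f}")
    print(f"MPAs outperforming their reference site: {outperforming}/{len(differences)}")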

The researchers warn that while their findings show that MPAs can help kelp recovery after marine heatwaves, the effect was highly variable depending on location.

"On average, kelp within MPAs showed greater recovery than in the reference sites. However, not all MPAs outperformed their corresponding reference sites, suggesting that additional factors are also play a role in determining resilience." said Kyle Cavanaugh.

The researchers say that future work could look to identify these factors to better understand where and when MPAs are most effective at enhancing kelp resilience.

Journal Reference:
Marine protected areas enhance climate resilience to severe marine heatwaves for kelp forests


Original Submission

posted by jelizondo on Tuesday August 26, @02:49PM
from the moar-power-eh dept.

While Canadians flocked to purchase gas-powered vehicles over the summer, electric vehicle sales continued to nosedive, according to new data from Statistics Canada:

Electric vehicle sales dropped 35.2 per cent in June compared to last year. Zero-emission vehicles comprised only 7.9 per cent of total new motor vehicles sold that month, with 14,090 entering the market.

Meanwhile, 177,313 new motor vehicles were sold in Canada in June, up 6.2 per cent from June 2024.
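
The quoted 7.9 per cent share follows directly from those two figures; a quick arithmetic check in Python:

    # Quick consistency check of the sales figures quoted above.
    zev_sales = 14_090      # zero-emission vehicles sold in June 2025
    total_sales = 177_313   # all new motor vehicles sold in June 2025

    share = zev_sales / total_sales * 100
    print(f"ZEV share of new-vehicle sales: {share:.1f}%")  # ~7.9%, matching the report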

"In dollar terms, sales increased 3.1 per cent during the same period. In June 2025, there were more new motor vehicles sold in every province compared with the same period in 2024," reads the Statistics Canada data.

"Sales of new passenger cars increased 19.5 per cent in June 2025, marking the first gain in this subsector since November 2024. In June 2025, sales of new trucks (+4.3 per cent) were also higher than one year earlier."

Despite dwindling sales, the Carney government remains committed to its electric vehicle mandate, which requires 60 per cent of all vehicles sold to be ZEVs by 2030 and 100 per cent by 2035, effectively banning the sale of new gas-powered vehicles.


Original Submission

posted by janrinok on Tuesday August 26, @10:02AM

Does 3I/ATLAS Generate Its Own Light?:

The best image we have so far of the new interstellar object, 3I/ATLAS, was obtained by the Hubble Space Telescope on July 21, 2025. The image shows a glow of light, likely from a coma, ahead of the motion of 3I/ATLAS towards the Sun. There is no evidence for a bright cometary tail in the opposite direction. This glow was interpreted as evaporation of dust from the Sun-facing side of 3I/ATLAS.

Figure 3 of the analysis paper (accessible here) [PDF] shows a steep surface brightness profile of the glow with a projected power-law slope of -3, which implies a three-dimensional emissivity profile with a radial power-law slope of -4. Such a slope is steeper than observed in solar system comets. Together with my brilliant colleague, Eric Keto, we realized that the observed slope of -4 is consistent with an alternative model in which the dust outflow around 3I/ATLAS is illuminated by a central source. This model naturally accounts for the steep brightness profile, since the outflow density slope of -2 is accompanied by the radial decline of the illuminating radiation flux with an additional declining slope of -2.
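
As a quick numerical check of that geometric relation (not taken from the paper), integrating a spherically symmetric emissivity falling as r⁻⁴ along the line of sight does indeed give a projected surface-brightness profile falling as R⁻³:

    # Numerical check: projecting a 3D emissivity ~ r^-4 along the line of
    # sight yields a surface-brightness profile ~ R^-3, i.e. the projected
    # slope is one power shallower than the three-dimensional slope.
    import numpy as np
    from scipy.integrate import quad

    def surface_brightness(R, slope=-4.0):
        # Sigma(R) = integral of (R^2 + z^2)^(slope/2) dz along the sight line.
        integrand = lambda z: (R**2 + z**2) ** (slope / 2.0)
        value, _ = quad(integrand, 0.0, np.inf)
        return 2.0 * value  # both sides of the plane of the sky

    radii = np.array([1.0, 2.0, 4.0, 8.0])
    sigma = np.array([surface_brightness(R) for R in radii])

    # Fit the power-law slope in log-log space; expect approximately -3.
    projected_slope = np.polyfit(np.log(radii), np.log(sigma), 1)[0]
    print(f"Projected slope: {projected_slope:.2f}")  # ~ -3.00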

If 3I/ATLAS generates its own light, then it could be much smaller than expected from a model in which it reflects sunlight. The reflection model requires a diameter of up to 20 kilometers, which is untenable given that the limited reservoir of rocky material in interstellar space can only deliver such a giant rock once per 10,000 years or longer (see the calculation in my paper here).

Last night, we held the annual soccer cup match between the faculty and the students at Harvard's Institute for Theory & Computation, for which I serve as director. Although I scored 2 goals for the faculty team, the students won 3 to 2. Disappointed by the outcome, I focused on 3I/ATLAS as soon as I woke up the following morning.

First, I calculated that the luminosity of 3I/ATLAS needs to be of order 10 gigawatt. Second, I realized that the steep brightness profile around 3I/ATLAS implies that the nucleus dominates the observed light. This must hold irrespective of the origin of the light. In other words, the nucleus dominates over the emission from the glow around it.

The illumination by sunlight cannot explain the steep 1/R⁴ profile of scattered light, where R is the radial distance from the nucleus. This is because a steady dust outflow develops a 1/R² profile which scatters sunlight within the same emissivity profile. Sunlight would dominate the illumination in this model because a rocky nucleus would reflect only a small fraction of the solar intensity from a much smaller area than the 10,000-kilometer region resolved in the Hubble Space Telescope image. Another possibility for the steep brightness profile is that the scattering halo is made of icy particles that get evaporated as they move towards the Sun from the warm Sun-facing side of 3I/ATLAS. This would explain why there is no tail of these scattering particles. The required evaporation time must be of order 10 minutes but it is unclear whether this would lead to the observed 1/R⁴ brightness profile.

The simplest interpretation is that the nucleus of 3I/ATLAS produces most of the light. I calculated that the nucleus cannot be a thermal emitter with an effective surface temperature below 1000 degrees Kelvin or else its peak emission wavelength would have been longer than 3 micrometers with an exponential cutoff at shorter wavelengths, incompatible with the data. At higher effective temperatures, the required luminosity of 3I/ATLAS can be obtained from a source diameter smaller than 100 meters. A compact bright emitter would make 3I/ATLAS of comparable size to the previous interstellar objects 1I/`Oumuamua or 2I/Borisov, making more sense than the 20-kilometer size inferred in the model where it reflects sunlight.
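
These numbers can be roughly checked with Wien's displacement law and the Stefan-Boltzmann law, assuming a simple spherical blackbody (a simplification made here for illustration, not a detail from the essay):

    # Rough blackbody check of the figures quoted above (spherical emitter assumed).
    import math

    SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
    WIEN_B = 2.898e-3    # Wien displacement constant, m K
    LUMINOSITY = 1e10    # ~10 gigawatts, as estimated above

    for temperature in (1000.0, 2000.0):
        peak_um = WIEN_B / temperature * 1e6
        # L = 4 pi r^2 sigma T^4  =>  r = sqrt(L / (4 pi sigma T^4))
        radius = math.sqrt(LUMINOSITY / (4.0 * math.pi * SIGMA * temperature**4))
        print(f"T = {temperature:.0f} K: peak emission ~{peak_um:.1f} um, "
              f"required diameter ~{2.0 * radius:.0f} m")
    # At 1000 K the peak sits near 3 micrometers; at higher temperatures the
    # required emitting diameter falls below ~100 meters, as stated above.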

I first calculated that a primordial black hole with a Hawking temperature of 1,000 degrees Kelvin would produce only 20 nanowatts of power, clearly insufficient to power 3I/ATLAS. A natural nuclear source could be a rare fragment from the core of a nearby supernova that is rich in radioactive material. This possibility is highly unlikely, given the scarce reservoir of radioactive elements in interstellar space.
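
The black-hole figure can likewise be checked against the standard Hawking temperature and evaporation-power formulas; this back-of-envelope sketch reproduces the tens-of-nanowatts scale quoted above:

    # Back-of-envelope check of the primordial black hole figure, using the
    # standard Hawking temperature and (photon-only) evaporation power.
    import math

    HBAR = 1.0546e-34   # J s
    C = 2.998e8         # m/s
    G = 6.674e-11       # m^3 kg^-1 s^-2
    K_B = 1.381e-23     # J/K

    T_HAWKING = 1000.0  # kelvin, as in the text

    # T_H = hbar c^3 / (8 pi G M k_B)  =>  M = hbar c^3 / (8 pi G k_B T_H)
    mass = HBAR * C**3 / (8.0 * math.pi * G * K_B * T_HAWKING)

    # Hawking luminosity: P = hbar c^6 / (15360 pi G^2 M^2)
    power = HBAR * C**6 / (15360.0 * math.pi * G**2 * mass**2)

    print(f"Black hole mass:  {mass:.2e} kg")
    print(f"Hawking power:    {power * 1e9:.0f} nW")  # a few tens of nanowatts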

Alternatively, 3I/ATLAS could be a spacecraft powered by nuclear energy, and the dust emitted from its frontal surface might be from dirt that accumulated on its surface during its interstellar travel. This cannot be ruled out, but requires better evidence to be viable.

Insisting on 3I/ATLAS being a natural object, one might consider the hypothetical case of an object heated by friction on an ambient medium. In this case, the momentum flux of the dust flowing out of the object must exceed the momentum flux of the ambient medium in the rest frame of the object, the so-called ambient ram pressure. Otherwise, the dust outflow would be suppressed by the ambient medium. What does this condition boil down to?

Given the mass loss rate (6–60 kilograms per second) and ejection speed of dust (20–2 kilometers per second) that were inferred from the Hubble Space Telescope image, I calculated that this model is marginally ruled out. In addition, the required ambient medium density is larger by many orders of magnitude than the mass density of the zodiacal gas and dust through which 3I/ATLAS is traveling as it traverses the main asteroid belt.

This leaves us with the interpretation of the brightness profile around 3I/ATLAS as originating from a central light source. Its potential technological origin is supported by its fine-tuned trajectory (as visualized here and discussed here).

The new interstellar object 3I/ATLAS is expected to pass within a distance of 28.96 (+/-0.06) million kilometers from Mars on October 3, 2025. This would offer an excellent opportunity to observe 3I/ATLAS with the HiRISE camera near Mars, one of six instruments onboard the Mars Reconnaissance Orbiter. This morning, I encouraged the HiRISE team to use their camera during the first week of October 2025 in order to gather new data on 3I/ATLAS. They responded favorably. It would be challenging to observe 3I/ATLAS from Earth around the same time because of the proximity of 3I/ATLAS in our sky to the direction of the Sun. The more data we collect on 3I/ATLAS, the closer we will get to understanding its nature.


This is a Hubble Space Telescope image of the interstellar comet 3I/ATLAS. Hubble photographed the comet on 21 July 2025, when the comet was 365 million kilometres from Earth. Hubble shows that the comet has a teardrop-shaped cocoon of dust coming off its solid, icy nucleus. Because Hubble was tracking the comet moving along a hyperbolic trajectory, the stationary background stars are streaked in the exposure.

[Image description: At the center of the image is a comet that appears as a teardrop-shaped bluish cocoon of dust coming off the comet's solid, icy nucleus and seen against a black background. The comet appears to be heading to the bottom left corner of the image. About a dozen short, light blue diagonal streaks are seen scattered across the image, which are from background stars that appeared to move during the exposure because the telescope was tracking the moving comet.]

Youtube Video: 3rd Interstellar Comet Just Formed a Bizarre Tail and More Updates [15m33].

[This video has NOT been reviewed by staff.--JR]

posted by hubie on Tuesday August 26, @05:16AM
from the even-dogs-can't-escape-AI-fakes dept.

Revolutionary AI Tech Breathes Life into Virtual Companion Animals:

Researchers at UNIST have developed an innovative AI technology capable of reconstructing highly detailed three-dimensional (3D) models of companion animals from a single photograph, enabling realistic animations. This breakthrough allows users to experience lifelike digital avatars of their companion animals in virtual reality (VR), augmented reality (AR), and metaverse environments.

Led by Professor Kyungdon Joo at the Artificial Intelligence Graduate School of UNIST, the research team announced the development of DogRecon, a novel AI framework that can reconstruct an animatable 3D dog Gaussian from a single dog image.

Dogs' diverse breeds, varying body shapes, and the frequent occlusion of joints caused by their quadrupedal stance make reconstructing 3D models of them uniquely challenging. Moreover, creating accurate 3D structures from a single 2D photo is inherently difficult, often resulting in distorted or unrealistic representations.

DogRecon overcomes these challenges by utilizing breed-specific statistical models to capture variations in body shape and posture. It also employs advanced generative AI to produce multiple viewpoints, effectively reconstructing occluded areas with high fidelity. Additionally, the application of Gaussian Splatting techniques enables the model to accurately reproduce the curvilinear body contours and fur textures characteristic of dogs.
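
For readers unfamiliar with the technique, Gaussian Splatting represents a scene not as a mesh but as a large collection of 3D Gaussian primitives, each with a position, an anisotropic shape, a colour, and an opacity. The sketch below illustrates such a primitive; the field names and values are illustrative only and do not reflect DogRecon's actual implementation.

    # Minimal sketch of a 3D Gaussian primitive as used in Gaussian Splatting.
    # A reconstructed dog is a large collection of such blobs, which can be
    # deformed by a skeleton for animation and rasterised from any viewpoint.
    # Field names and defaults are illustrative, not DogRecon's actual code.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Gaussian3D:
        position: List[float]   # centre of the blob in 3D space (x, y, z)
        scale: List[float]      # per-axis extent of the ellipsoid
        rotation: List[float]   # orientation as a quaternion (w, x, y, z)
        color: List[float] = field(default_factory=lambda: [0.5, 0.4, 0.3])  # RGB
        opacity: float = 1.0    # how strongly the blob occludes what is behind it

    fur_tuft = Gaussian3D(position=[0.10, 0.30, -0.20],
                          scale=[0.01, 0.01, 0.03],
                          rotation=[1.0, 0.0, 0.0, 0.0])
    print(fur_tuft)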

Performance evaluations using various datasets demonstrated that DogRecon can generate natural, precise 3D dog avatars comparable to those produced by existing video-based methods, but from only a single image. Unlike prior models, which often rendered dogs with unnatural postures—such as stretched bodies with bent joints, or bundled ears, tails, and fur—especially when dogs are in relaxed or crouched positions, DogRecon delivers more realistic results.

Furthermore, due to its scalable architecture, DogRecon holds significant promise for applications in text-driven animation generation, as well as AR/VR environments.

This research was led by first author Gyeongsu Cho, with contributions from Changwoo Kang (UNIST) and Donghyeon Soon (DGIST). Gyeongsu Cho remarked, "With over a quarter of households owning pets, expanding 3D reconstruction technology—traditionally focused on humans—to include companion animals has been a goal," adding, "DogRecon offers a tool that enables anyone to create and animate a digital version of their companion animals."

Professor Joo added, "This study represents a meaningful step forward by integrating generative AI with 3D reconstruction techniques to produce realistic models of companion animals." He further added, "We look forward to expanding this approach to include other animals and personalized avatars in the future."

Journal Reference: Gyeongsu Cho, Changwoo Kang, Donghyeon Soon, and Kyungdon Joo, "DogRecon: Canine Prior-Guided Animatable 3D Gaussian Dog Reconstruction From A Single Image," IJCV, (2025) https://doi.org/10.1007/s11263-025-02485-5


Original Submission

posted by hubie on Tuesday August 26, @12:31AM

Astronomers used the James Webb Space Telescope to investigate whether the most distant star identified in the universe is, in fact, a star cluster:

The most distant star ever discovered may have been misclassified: Instead of being a single star, the object — nicknamed Earendel from the Old English word for "morning star" — may be a star cluster, a group of stars that are bound together by gravity and formed from the same cloud of gas and dust, new research suggests.

Discovered by the Hubble Space Telescope in 2022, Earendel was thought to be a star that formed merely 900 million years after the Big Bang, when the universe was only 7% of its current age.

Now, in a study published July 31 in The Astrophysical Journal, astronomers used the James Webb Space Telescope (JWST) to take a fresh look at Earendel. They wanted to explore the possibility that Earendel might not be a single star or a binary system as previously thought, but rather a compact star cluster.

They found that Earendel's spectral features match those of globular clusters — a type of star cluster — found in the local universe.

"What's reassuring about this work is that if Earendel really is a star cluster, it isn't unexpected!" Massimo Pascale, an astronomy doctoral student at the University of California, Berkeley, and lead author of the study, told Live Science in an email. "[This] work finds that Earendel seems fairly consistent with how we expect globular clusters we see in the local universe would have looked in the first billion years of the universe."

Earendel, located in the Sunrise Arc galaxy 12.9 billion light-years from us, was discovered through gravitational lensing, a phenomenon predicted by Einstein's theory of general relativity in which massive objects bend the light that passes by them. A massive galaxy cluster located between Earth and Earendel is so large that it distorts the fabric of space-time, creating a magnifying effect that allowed astronomers to observe Earendel's light, which would otherwise be too faint to detect. Studies indicate that the star is magnified at least 4,000 times by this gravitational lensing effect.

[...] After Earendel's discovery in 2022, researchers analyzed the object using data from JWST's Near Infrared Camera (NIRCam). By examining its brightness and size, they concluded that Earendel could be a massive star more than twice as hot as the sun and roughly a million times more luminous than our star. In the color of Earendel, astronomers also found a hint of the presence of a cooler companion star.

"After some recent work showed that indeed Earendel could (but is not necessarily) be much larger than previously thought, I was convinced it was worthwhile to explore the star cluster scenario," Pascale said..

Using spectroscopic data from JWST's NIRSpec instrument, Pascale and team studied the age and metal content of Earendel.

[...] The researchers have only explored the "star cluster" possibility. They did not investigate all possible scenarios, like Earendel being a single star or a multiple star system, and compare the results.

"The measurement is robust and well done, but in only considering the star cluster hypothesis, the study is limited in scope," Welch noted.

Both Pascale and Welch agreed that the key to solving Earendel's mystery is to monitor microlensing effects. Microlensing is a subtype of gravitational lensing in which a nearer object temporarily distorts the image of a more distant one as it passes in front of it. Changes in brightness due to microlensing are more noticeable when the distant objects are small — such as stars, planets or star systems — rather than much larger star clusters.

"It will be exciting to see what future JWST programs could do to further demystify the nature of Earendel," Pascale said.

Previously: Hubble Space Telescope Spots Oldest and Farthest Star Known

See also: Webb Reveals Colors of Earendel, Most Distant Star Ever Detected - NASA


Original Submission

posted by jelizondo on Monday August 25, @07:45PM

The University of Utah published an article about the 8,000-year history of Great Salt Lake and its watershed:

Photo of Great Salt Lake, taken in 2020, shows how the rail causeway built in 1959 has divided the lake into bodies with very different chemistries. On the right is the lake's North Arm, which has no tributaries other than what flows through an opening in the causeway from the South Arm and consequently has much higher salinity. The red tint comes from halophilic bacteria and archaea that thrive there.

Over the past 8,000 years, Utah's Great Salt Lake has been sensitive to changes in climate and water inflow. Now, new sediment isotope data indicate that human activity over the past 200 years has pushed the lake into a biogeochemical state not seen for at least 2,000 years.

A University of Utah geoscientist applied isotope analysis to sediments recovered from the lake's bed to characterize changes to the lake and its surrounding watershed back to the time the lake took its current shape from the vast freshwater Lake Bonneville that once covered much of northern Utah.

"Lakes are great integrators. They're a point of focus for water, for sediments, and also for carbon and nutrients," said Gabriel Bowen, a professor and chairman of the Department of Geology & Geophysics. "We can go to lakes like this and look at their sediments and they tell us a lot about the surrounding landscape."

Sedimentary records provide context for ongoing changes in terminal saline lakes, which support fragile, yet vital ecosystems, and may help define targets for their management, according to Bowen's new study, published last month in Geophysical Research Letters.

This research helps fill critical gaps in the lake's geological and hydrological records, coming at a time when the drought-depleted level of the terminal body has been hovering near its historic low.

"We have all these great observations, so much monitoring, so much information and interest in what's happening today. We also have a legacy of people looking at the huge changes in the lake that happened over tens of thousands and hundreds of thousands of years," Bowen said. "What we've been missing is the scale in the middle."

That is the period after Lake Bonneville receded to become Great Salt Lake, extending through the first arrival of white settlers in Utah.

By analyzing oxygen and carbon isotopes preserved in lake sediments, the study reconstructs the lake's water and carbon budgets through time. Two distinct, human-driven shifts stand out:

  • Mid-19th century – Coinciding with Mormon settlement in 1847, irrigation rapidly greened the landscape around the lake, increasing the flow of organic matter into the lake and altering its carbon cycle.
  • Mid-20th century – Construction of the railroad causeway in 1959 disrupted water flow between the lake's north and south arms, which turned Gilbert Bay from a terminal lake to an open one that partially drained into Gunnison Bay, altering the salinity and water balance to values rarely seen in thousands of years.

The new study examines two sets of sediment cores extracted from the bed of Great Salt Lake, each representing different timescales. The top 10 meters of the first core, drilled in the year 2000 south of Fremont Island, contains sediments washed into the lake up to 8,000 years ago.

The other samples, recovered by the U.S. Geological Survey, represent only the upper 30 centimeters of sediments, deposited in the last few hundred years.

"The first gives us a look at what was happening for the 8,000 years before the settlers showed up here," Bowen said. "The second are these shallower cores that allow us to see how the lake changed after the arrival of the settlers."

Bowen subjected these lakebed sediments at varying depths to an analysis that determines isotope ratios of carbon and oxygen, shedding light on the landscape surrounding the lake and the water in the lake at varying points in the past.
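
Isotope ratios like these are conventionally reported in delta notation, as per-mil (parts per thousand) deviations from a reference standard; a minimal example (the sample ratio below is made up):

    # Delta notation for stable isotope ratios such as d13C and d18O:
    # the per-mil deviation of a sample's ratio from a reference standard.
    def delta_per_mil(r_sample: float, r_standard: float) -> float:
        """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
        return (r_sample / r_standard - 1.0) * 1000.0

    R_VPDB_13C = 0.0112372   # 13C/12C ratio of the VPDB reference standard
    r_sample = 0.0111000     # hypothetical 13C/12C measured in a sediment layer

    print(f"d13C = {delta_per_mil(r_sample, R_VPDB_13C):+.1f} per mil")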

"The carbon tells us about the biogeochemistry, about how the carbon cycles through the lake, and that's affected by things like weathering of rocks that bring carbon to the lake and the vegetation in the watershed, which also contributes carbon that dissolves into the water and flows to the lake," he said.

Bowen's analysis documented a sharp change in carbon, indicating profound changes that coincided with the arrival of Mormon pioneers in the Salt Lake Valley, where they introduced irrigated agriculture to support a rapidly growing community.

"We see a big shift in the carbon isotopes, and it shifts from values that are more indicative of rock weathering, carbon coming into the lake from dissolving limestone, toward more organic sources, more vegetation sources," Bowen said.

The new carbon balance after settlement was unprecedented during the 8,000 years of record following the demise of Lake Bonneville.

Next, Bowen's oxygen isotope analysis reconstructed the lake's water balance over time.

"Essentially, it tells us about the balance of evaporation and water inflow into the lake. As the lake is expanding, the oxygen isotope ratio goes down. As the lake shrinks, it goes up, basically telling us about the rate of change of the lake volume. We see little fluctuations, but nothing major until we get to 1959."

That's the year Union Pacific built a 20-mile causeway to replace a historic rail trestle, dividing the lake's North Arm, which has no tributaries, from its South Arm, also known as Gilbert Bay, which receives inflow from three rivers. Water flows through a gap in the causeway into the North Arm, now rendering the South Arm an open system.

"We changed the hydrology of the lake fundamentally and gave it an outflow. We see that really clearly in the oxygen isotopes, which start behaving in a different way," he said. Counterintuitively, the impact of this change was to make Gilbert Bay waters fresher than they would have been otherwise, buying time to deal with falling lake levels and increasing salinity due to other causes.

"If we look at the longer time scale, 8,000 years, the lake has mostly been pinned at a high evaporation state. It's been essentially in a shrinking, consolidating state throughout that time. And that only reversed when we put in the causeway."

Journal Reference:

Multi-millennial context for post-colonial hydroecological change in Great Salt Lake.


Original Submission

posted by jelizondo on Monday August 25, @02:59PM

The future of typography is uncertain:

Monotype is keen for you to know what AI might do in typography. As one of the largest type design companies in the world, Monotype owns Helvetica, Futura, and Gill Sans — among 250,000 other fonts. In the typography giant's 2025 Re:Vision trends report, published in February, Monotype devotes an entire chapter to how AI will result in a reactive typography that will "leverage emotional and psychological data" to tailor itself to the reader. It might bring text into focus when you look at it and soften when your gaze drifts. It could shift typefaces depending on the time of day and light level. It could even adapt to reading speeds and emphasize the important portions of online text for greater engagement. AI, the report suggests, will make type accessible through "intelligent agents and chatbots" and let anyone generate typography regardless of training or design proficiency. How that will be deployed isn't certain, possibly as part of proprietarily trained apps. Indeed, how any of this will work remains nebulous.

Monotype isn't alone in this kind of speculation. Typographers are keeping a close eye on AI as designers start to adopt tools like Midjourney for ideation and Replit for coding, and explore the potential of GPTs in their workflow. All over the art and design space, creatives are joining the ongoing gold rush to find the use case of AI in type design. This search continues both speculatively and, in some places, adversarially as creatives push back against the idea that creativity itself is the bottleneck that we need to optimize out of the process.

That idea of optimization echoes where we were a hundred years ago. In the early 20th century, creatives came together to debate the implications of rapid industrialization in Europe on art and typography at the Deutscher Werkbund (German alliance of craftspeople). Some of those artists rejected the idea of mass production and what it offered artists, while others went all in, leading to the founding of the Bauhaus.

The latter posed multiple vague questions on what the industrialization of typography might mean, with few real ideas of how those questions might be answered. Will typography remain on the page or will it take advantage of advances in radio to be both text and sound? Could we develop a universal typeface that is applicable to any and all contexts? In the end, those experiments amounted to little and the questions were closed, and the real advances were in the efficiency of both manufacturing and the design process. Monotype might be reopening those old questions, but it is still realistic about AI in the near future.

[...] But the broader possibilities, Nix says, are endless, and that's what makes being a typographer now so exciting. "I think that at either end of the parentheses of AI are human beings who are looking for novel solutions to problems to use their skills as designers," he says. "You don't get these opportunities many times in the course of one's life, to see a radical shift in the way technology plays within not only your industry, but a lot of industries."

Not everyone is sold. For Zeynep Akay, creative director at typeface design studio Dalton Maag, the results simply aren't there to justify getting too excited. That's not to say Dalton Maag rejects AI; the assistive potential of AI is significant. Dalton Maag is exploring using AI to mitigate the repetitive tasks of type design that slow down creativity, like building kern tables, writing OpenType features, and diagnosing font issues. But many designers remain tempered about the prospect of relinquishing creative control to generative AI.

"It's almost as if we are being gaslighted into believing our lives, or our professions, or our creative skills are ephemeral," Akay says. She is yet to see how its generative applications promise a better creative future. "It's a future in which, arguably, all human intellectual undertaking is shed over time, and handed over to AI — and what we gain in return isn't altogether clear," she adds.

[...] That shift to digital type was the result of a clear and discernible need to improve typographic workflow from setting type by hand to something more immediate, Akay says. In the current space, however, we've arrived at the paintbrush before knowing how the canvas appears. As powerful as AI could be, where in our workflow it should be deployed is yet to be understood — if it should be deployed at all, given the less-than-stellar results we're seeing in the broader spectrum of generative AI. That lack of direction makes her wonder whether a better analog isn't the dot-com bubble of the late 1990s.

[...] Both Nix and Akay agree a similar crash around AI might actually be beneficial in pushing some of those venture capitalist interests out of AI. For Nix, however, just because its practical need isn't immediately obvious doesn't mean it's not there or, at least, won't become apparent soon. Nix suggests that it may well be beyond the bounds of our current field of vision.

[...] Though, that remains more speculation. We are simply so early on this that the only AI tools we can actually demonstrate are font identification tools like WhatTheFont and related ideas like TypeMixer.xyz. It's not possible to accurately comprehend what such nascent technology will do based solely on what it does now — it's like trying to understand a four-dimensional shape. "What was defined as type in 1965 is radically different from what we define as type in 2025," Nix adds. "We're primed to know that those things are possible to change, and that they will change. But it's hard at this stage to sort of see how much of our current workflows we preserve, how much of our current understanding and definition of typography we preserve."

But as we explore, it's important not to get caught up with the spectacle of what it looks like AI can do. It may seem romantic to those who have already committed to AI at all costs, but Akay suggests this isn't just about mechanics, that creativity is valuable "because it isn't easy or fast, but rather because it is traditionally the result of work, consideration, and risk." We cannot put the toothpaste back in the tube, but, she adds, in an uncertain future and workflow, "that doesn't mean that it's built on firm, impartial foundations, nor does it mean we have to be reckless in the present."


Original Submission

posted by hubie on Monday August 25, @10:15AM

NASA Challenge Winners Cook Up New Industry Developments - NASA:

NASA invests in technologies that have the potential to revolutionize space exploration, including the way astronauts live in space. Through the Deep Space Food Challenge, NASA, in partnership with CSA (Canadian Space Agency), sought novel food production systems that could provide long-duration human space exploration missions with safe, nutritious, and tasty food. Three winners selected last summer are now taking their technology to new heights – figuratively and literally – through commercial partnerships.

Interstellar Lab of Merritt Island, Florida, won the challenge's $750,000 grand prize for its food production system NuCLEUS (Nutritional Closed-Loop Eco-Unit System), by demonstrating an autonomous operation growing microgreens, vegetables, and mushrooms, as well as sustaining insects for use in an astronaut's diet. To address the requirements of the NASA challenge, NuCLEUS includes an irrigation system that sustains crop growth with minimal human intervention. This end-to-end system supplies fresh ingredients to support astronauts' health and happiness, with an eye toward what the future of dining on deep space missions to Mars and the Moon may look like.

Since the close of the challenge, Interstellar Lab has partnered with aerospace company Vast to integrate a spinoff of NuCLEUS, called Eden 1.0, on Haven-1, a planned commercial space station. Eden 1.0 is a plant growth unit designed to conduct research on plants in a microgravity environment using functions directly stemming from NuCLEUS.

"The NASA Deep Space Food Challenge was a pivotal catalyst for Interstellar Lab, driving us to refine our NuCLEUS system and directly shaping the development of Eden 1.0, setting the stage for breakthroughs in plant growth research to sustain life both in space and on Earth," said Barbara Belvisi, founder and CEO of Interstellar Lab.

Team SATED (Safe Appliance, Tidy, Efficient & Delicious) of Boulder, Colorado, earned a $250,000 second prize for its namesake appliance, which creates an artificial gravitational force that presses food ingredients against its heated inner surface for cooking. The technology was developed by Jim Sears, who entered the contest as a one-person team and has since founded the small business SATED Space LLC.

At the challenge finale event, the technology was introduced to the team of world-renowned chef and restaurant owner, José Andrés. The SATED technology is undergoing testing with the José Andrés Group, which could add to existing space food recipes that include lemon cake, pizza, and quiche. The SATED team also is exploring partnerships to expand the list of ingredients compatible with the appliance, such as synthetic cooking oils safe for space.

Delicious food was a top priority in the Deep Space Food Challenge. Sears noted the importance of food that is more than mere sustenance. "When extremely high performance is required, and the situations are demanding, tough, and lonely, the thing that pulls it all together and makes people operate at their best is eating fresh cooked food in community."

Team Nolux, formed from faculty members, graduate, and undergraduate students from the University of California, Riverside, also won a $250,000 second prize for its artificial photosynthesis system. The Nolux system – whose name means "no light" – grows plant and fungal-based foods in a dark chamber using acetate to chemically stimulate photosynthesis without light, a capability that could prove valuable in space with limited access to sunlight.

Some members of the Nolux team are now commercializing select aspects of the technology developed during the challenge. These efforts are being pursued through a newly incorporated company focused on refining the technology and exploring market applications.

A competition inspired by NASA's Deep Space Food Challenge will open this fall.

Stay tuned for more information: https://www.nasa.gov/prizes-challenges-and-crowdsourcing/centennial-challenges/


Original Submission

posted by hubie on Monday August 25, @05:30AM
from the I-love-the-smell-of-5g-in-the-morning dept.

Radio Waves Can Strengthen Sense of Smell - Neuroscience News:

Our sense of smell is more important than we often realize. It helps us enjoy food, detect danger like smoke or gas leaks, and even affects memory and emotion.

Many people — especially after COVID-19, aging, or brain injury — suffer from a loss of smell. However, there are very few effective treatments, and those that exist often use strong scents or medicines that cause discomfort in patients.

In a study published this week in APL Bioengineering, by AIP Publishing, researchers from Hanyang University and Kwangwoon University in South Korea introduced a simple and painless way to improve our sense of smell using radio waves.

Unlike traditional aroma-based therapy, which indirectly treats smell loss by exposing the patient to chemicals, radio waves can directly target the part of our brain responsible for smell, without causing pain.

"The method is completely noninvasive — no surgery or chemicals needed — and safe, as it does not overheat the skin or cause discomfort," author Yonwoong Jang said.

In the study, the team asked volunteers with a healthy sense of smell to sit while a small radio antenna was placed near, but not touching, their forehead. For five minutes, this antenna gently sent out radio waves to reach the smell-related nerves deep in the brain.

Before and after the short treatment, the authors tested how well the participants could smell very faint odors, like diluted alcohol or fruit scents, using pen-shaped odor dispensers called Sniffin' Sticks. They also recorded the participants' brain signals to see how active their smell nerves were.

The team found that their method improved subjects' sense of smell for over a week after just one treatment.

"This study represents the first time that a person's sense of smell has been improved using radio waves without any physical contact or chemicals, and the first attempt to explore radio frequency stimulation as a potential therapy for neurological conditions," Jang said.

The results of the current study, which focused on people with a normal sense of smell, could help professionals such as perfumers, chefs, or coffee tasters, who need to distinguish aromatic subtleties. The method could also be used to preserve or even enhance the sense of smell.

As an important next step, the team plans to conduct a similar study on individuals with olfactory dysfunction, such as anosmia (complete loss of smell) or hyposmia (reduced sense of smell).

"This will help us determine whether the treatment can truly benefit those who need it most," Jang said.

Journal Reference: Junsoo Bok, Eun-Seong Kim, Juchan Ha, et al., Non-contact radiofrequency stimulation to the olfactory nerve of human subjects [OPEN], APL Bioeng. 9, 036112 (2025) https://doi.org/10.1063/5.0275613


Original Submission

posted by jelizondo on Monday August 25, @12:50AM

Creative Commons has become an official UNESCO NGO partner. UNESCO is the part of the UN that advances international cooperation in the fields of education, science, culture, and communication.

This new, formal status is an important recognition of the synergies between our two organizations and of our shared commitment to openness as a means to benefit everyone worldwide. As an official NGO partner, Creative Commons (CC) will now have the opportunity to contribute to UNESCO’s program and to interact with other official partner NGOs with common goals. In particular, we look forward to:

  • Participating in UNESCO meetings and consultations on various subjects core to CC's mission. This will give us a seat at the table to advocate for the communities we serve and share our expertise on openness, the commons, and access to knowledge.
  • Participating in UNESCO's governing bodies in an observer capacity. This will enable us to deliver official statements on matters within our sphere of expertise and contribute to determining UNESCO's policies and main lines of work, including its programs and budget.
  • Taking part in consultations about UNESCO's strategy and program and being involved in UNESCO's programming cycle. This will give us opportunities to communicate our views and suggestions on proposals by the Director-General.

Previously:
(2022) New UNESCO Flagship Report Calls for Reinventing Education


Original Submission

posted by jelizondo on Sunday August 24, @08:03PM
from the government-inside dept.

Commerce Secretary Howard Lutnick delivered major news on Friday, confirming that the United States has finalized an investment deal with Intel, securing a 10% ownership stake in the semiconductor powerhouse. This development marks a significant step in bolstering America's position in global technology amid ongoing concerns about supply chain vulnerabilities and competition from abroad:

The agreement stems from negotiations tied to the 2022 CHIPS and Science Act, which aimed to revitalize domestic chip production. Under the terms, the U.S. gains a nonvoting equity position in Intel in return for federal funding support.

While specific financial details remain under wraps, the move aligns with efforts to ensure taxpayer dollars yield tangible returns for national interests. Intel, for its part, has committed billions to constructing advanced manufacturing facilities in Ohio, with full operations expected by 2030. This follows an $8 billion grant finalized last fall to accelerate those projects.

[...] Critics from the left may decry increased government involvement in private enterprise, but proponents argue it's essential for safeguarding national security in an era of geopolitical tensions. As Lutnick noted, the pact benefits both Intel and the public, positioning the U.S. to lead in semiconductors—a sector vital for everything from consumer electronics to defense systems.

This deal could set a precedent for future public-private partnerships, ensuring that American ingenuity drives global progress while keeping strategic assets firmly under domestic control. With operations ramping up in the coming years, the long-term impacts on the economy and technology landscape will be worth watching closely.

Intel press release. Also at Politico, Newsweek and NBC News.

Previously: Trump Administration Considering US Government Purchase of Stake in Intel


Original Submission

posted by janrinok on Sunday August 24, @03:18PM

'Quiet cracking' is spreading in offices: Half of workers are at breaking point, and it's costing companies $438 billion in productivity loss:

  • "Quiet cracking" is the new workplace phenomenon sweeping offices. As AI looms over jobs and promotions stall, workers' mental health is quietly fraying. For employers, it has resulted in a staggering $438 billion loss in global productivity in the past year alone. But not all hope is lost. A career expert tells Fortune there are ways for managers and employees to course-correct.

Workers are down in the dumps about a lack of career growth opportunities and emptying offices as companies slash staffers to make way for AI, all while being put under constant pressure to do more with less.

Scared of speaking out and putting their neck on the line in a dire job climate, staff are silently but massively disengaging with their employers: Welcome to "quiet cracking."

The latest workplace phenomenon sees staff showing up and doing their job but mentally and emotionally struggling. About 54% of employees report feeling unhappy at work, with the frequency ranging from occasionally to constantly, according to a 2025 report from TalentLMS.

"The telltale signs of quiet cracking are very similar to burnout. You may notice yourself lacking motivation and enthusiasm for your work, and you may be feeling useless, or even angry and irritable," Martin Poduška, editor in chief and career writer for Kickresume, tells Fortune. "These are all common indicators of quiet cracking, and they gradually get worse over time."

Unlike "quiet quitting," this decline in productivity from workers isn't intentional. Instead, it's caused by feeling worn down and unappreciated by their employers. And oftentimes, as with burnout, they don't even register it creeping up on them until it's too late. But feeling unable to quit in protest because of the current job market, it's left them ultimately stuck and unhappy in their roles.

A fleet of unhappy workers may sound easy to spot, but the problem is sneaking up on workplaces without much course correction.

Last year, the proportion of engaged employees globally dropped from 23% to 21%—a similar dip in enthusiasm seen during the COVID-19 lockdown—costing the world economy about $438 billion in lost productivity, according to a 2025 report from Gallup.

Quiet cracking isn't only creating a bad culture for employees to work in, but the trend is also hitting businesses hard. It's imperative that bosses seize the moment to develop an engagement strategy before the problem festers into a ticking time bomb. And employees can also make adjustments to better advocate for their own career happiness.

"It isn't obvious when quiet cracking happens," Poduška explains. "You may be starting to quietly crack right now, but you wouldn't know as this type of burnout takes some time for others, and even you, to notice."

The current state of the workplace may sound bleak, but not all hope is lost. A career expert tells Fortune there are ways to spot fissures in company culture before employees are fully down in the dumps, and managers need to stand on guard.

"If you've noticed an employee becoming more and more disengaged with their work, it may be best to schedule a time where you can discuss how they feel," Poduška says. "Setting them new tasks, providing new learning opportunities, and simply having an honest conversation could steer things back in the right direction."

A good boss can make or break company culture. Among employees who experience quiet cracking, 47% say their managers do not listen to their concerns, according to the TalentLMS study. But by simply sparking a conversation on the issue, supervisors can get staffers back on track to be happy at work. Alongside having an honest conversation, managers should also show interest in the development of their direct reports. Training workers can help show that the company is interested in their career advancement; about 62% of staffers who aren't quiet cracking receive training, compared to 44% of those who frequently or constantly experience the feeling.

"When employee training is prioritized, it signals care, investment, and belief in people's potential," the TalentLMS report notes. "It fuels motivation, builds capability, and creates a culture where people want to contribute—and stay. Training isn't just about skill-building; it's an antidote to disengagement. A catalyst for connection."

Managers aren't the only ones with power in fighting workplace disengagement; employees also have the power to combat their own unhappiness.

"How can quiet cracking be avoided? For staff, finding out the root cause of your unhappiness might be the key to stop quiet cracking in its tracks," Poduška explains. "If you feel like there are no opportunities for progression with your role, you may find it worthwhile to talk to your manager about a development plan. This can give you something to work toward, which may help combat boredom and spark your motivation."

However, not every company is going to be invested in developing their workers, even if they voice the need for it. In that case, Poduška advises that staffers take a hard look at the business they work for. He recommends that employees question if their jobs feel sustainable and if they feel adequately supported by their teams. If not, a new employer—or even career—could be the answer.

"Another way to stop quiet cracking is to change things up. You could ask yourself if the role you're currently in is right for you," Poduška says. "A total career pivot may be the answer to quiet cracking in some cases, or for others, a switch into another department might be the best solution. Some, however, may just need something new and fresh to work on."


Original Submission

posted by janrinok on Sunday August 24, @10:36AM

Turning the lights back on:

South Australia experienced a state-wide blackout in 2016 due to a severe storm that damaged critical electricity transmission infrastructure and left 850,000 customers without power. Most electricity supplies were restored within eight hours, but it was a major event and prompted a multi-agency response involving emergency services and the Australian Defence Force.

[...] Historically, Australia has been heavily reliant on gas and coal generator units for system restart after a blackout, but those units are quickly reaching their end-of-life.

The grid has also changed significantly in the last decade alone, and today's electricity network looks very different, with large commercial wind and solar farms making up a higher percentage of Australia's generation mix every year.

Sorrell's work looks at how power systems can be restarted using large-scale, grid-forming batteries storing power from wind and solar sources as the primary restart source. While he recognises restarting the grid is not something most renewable plants were intentionally designed for in the first place, he remains confident in their ability.

"We're 100 per cent moving in a direction where large-scale batteries are going to feature prominently, if not be the primary black starter of the grid after major blackouts," Sorrell said.

During the South Australian blackout, severe weather damaged powerlines and subsequently nearly all wind turbines across the state shut down in quick succession. This was caused by a protection setting unknown to operators. Losing the turbines caused a massive energy imbalance, and with far too much load for the generation available the system collapsed. Within seconds, the whole state lost power.

"It's not because it's wrong for those protection devices to be there. They're there for very good reasons," Sorrell said.

"What the problem tends to be, and what was the case in South Australia, was that despite being compliant with existing standards, these particular settings were not present in the models that the manufacturers provided."

This meant the equipment was not being correctly represented, either in technical standards or in the simulation models that power system operators need, especially in understanding extreme circumstances.

Sorrell said there has since been a concerted effort across the industry to implement new standards in modelling so that they accurately represent the equipment in the field and their performance.

"Australia is a world-leader for setting modelling and performance standards," he said.

In his latest System Restoration and Black Start report, Sorrell used these next generation computer models and simulations to explore how large-scale batteries, wind and solar can actively participate in system restart.

Traditionally, it has been thought that large coal or gas generators have more capability and that large amounts of wind and solar in Australia will make our networks less stable.

CSIRO Power Systems Researcher, Dr Thomas Brinsmead, said one of the more interesting outcomes from the latest report is that this is not necessarily the case when it comes to restarting after a blackout.

"The capability of batteries with grid-forming inverter technology is better at supporting system restart than traditional black-start generators in many respects," Thomas said.

The report found that grid-forming battery technology was capable of energising far larger areas of the network than an equivalent synchronous generator, be it gas, coal or hydro.

A synchronous generator is a type of electrical machine used to convert mechanical energy into electrical energy. It's called 'synchronous' because its rotor rotates at the same speed as the magnetic field in the stator – this means it's perfectly in sync with the frequency of the electricity being produced.

Grid-forming batteries use smart inverters that mimic the behaviour of traditional generators – such as coal or gas turbines – but without the fuel-burning. The inverters work by converting direct current (DC) from renewable energy sources into controlled alternating current (AC) to supply power to the grid. These can also be used to help restart the grid after blackouts.

"What we consistently found, and I was genuinely surprised by these results, was that grid-forming batteries outperformed the synchronous generators in almost all areas," Sorrell said.

A big challenge with inverter-based technology is that it is current limited. This means that the amount of energy it injects into the system must be tightly controlled, otherwise the transistors within it will fail. Synchronous generators are not current limited to the same extent. They're capable of injecting immense amounts of current into the system as and when required.

"We originally subscribed to the idea that the best practice for re-energising a transformer during restart was to maintain a strong system capable of supplying that massive inrush of current to get it going," Sorrell said.

"But what we found was the current-limited nature of grid-forming inverters might actually be helping in these circumstances. Because those inverters, they're ramping to their maximum current and then they're staying at that level for longer, resulting in these transformers being gradually re-energised over a second or more."

This inherently gradual re-energisation allowed large transformers to be brought back online without tripping network protection mechanisms, and did so more reliably than traditional rotating-machine restart sources such as coal, gas or hydro. The current-limited nature of the inverters, although generally seen as a drawback of the technology, may be an advantage in this situation.
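A crude way to see why the limit can help: if bringing a transformer's core up to flux requires a roughly fixed amount of magnetising ampere-seconds, a current-limited source simply spreads that delivery over a longer window instead of one very large, very brief inrush spike. The Python sketch below uses made-up numbers purely to illustrate the timescales described above; it is not the electromagnetic modelling used in the report:

def energisation_time_s(magnetising_amp_seconds: float,
                        current_ceiling_amps: float) -> float:
    # Time needed to deliver the required ampere-seconds when the source
    # cannot exceed its current ceiling.
    return magnetising_amp_seconds / current_ceiling_amps

# Hypothetical figures: a stiff synchronous source pushing 10 kA delivers
# the inrush almost instantly; an inverter clamped at 1.2 kA spreads the
# same energisation over roughly a second, as described above.
print(energisation_time_s(1500.0, 10_000.0))  # 0.15 s
print(energisation_time_s(1500.0, 1_200.0))   # 1.25 s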

However, a grid with large amounts of solar, especially on rooftops, is not all good news when it comes to system restarts.

It's important to have a steady load during system restart, especially in residential areas that rely heavily on electricity.

However, researchers discovered that during the early stages of system restart, using large-scale grid-forming batteries as the primary source may cause rooftop solar to become unstable. This instability appeared at lower rooftop-solar penetration levels than it did with traditional restart sources.

These studies concluded that although batteries are more flexible than coal, gas or even hydro generators in accommodating changing load, they don't initially provide the same system strength to rooftop solar.

Thomas said that there is presently still a need for black-start generators to be available in the National Electricity Market (NEM) as the primary source for restarts.

"We don't like blackouts to happen, but we want to be very confident that when they do, we are able to get things started again as soon as possible," he said.

However, the work continues to build confidence that the same restoration function can and will eventually be performed by newer technologies.

Given the published retirement schedule of synchronous machine-based generation in the NEM, approximately 2 GVA of new grid-forming technology will be required by 2028 to maintain network restoration capability equivalent to today's.

This is considerably lower than the capacity of synchronous machine-based generation being retired. This could be viewed as already recognising the greater capability of grid-forming inverters to restore network elements without activating protection mechanisms.

During the next stage of the system restoration work, energy system experts will investigate how new renewable energy zones – which include solar and wind farms – throughout the country can play an active role in system restoration. They will engage a transmission provider to devise a realistic test plan template for grid-forming batteries to restart a system. A successful test of a battery restarting a portion of the network in Australia is not far away.

"The industry is learning so quickly," Sorrell said.

"From the inception of distributed electricity to when renewables came on board, we had 100 years. The world had 100 years to get electricity right. Meanwhile, we've had just two decades to go from the idea of large-scale wind and solar to getting it fully functional in the grid."


Original Submission

posted by janrinok on Sunday August 24, @05:52AM   Printer-friendly

The Stanford Report has an interesting article on a brain interface:

Neurosurgery Assistant Professor Frank Willett, PhD, and his teammates are using brain-computer interfaces, or BCIs, to help people whose paralysis renders them unable to speak clearly.

The brain's motor cortex contains regions that control movement – including the muscular movements that produce speech. A BCI uses tiny arrays of microelectrodes (each array is smaller than a baby aspirin), surgically implanted in the brain's surface layer, to record neural activity patterns directly from the brain. These signals are then fed via a cable hookup to a computer algorithm that translates them into actions such as speech or computer cursor movement.

To decode the neural activity picked up by the arrays into words the patient wants to say, the researchers use machine learning to train the computer to recognize repeatable patterns of neural activity associated with each "phoneme" – the tiniest unit of speech – and then stitch the phonemes into sentences.
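As a very rough sketch of that pipeline (the array shapes, linear classifier, and phoneme set below are illustrative assumptions, not the Stanford team's actual models), decoding amounts to classifying windows of neural features into phoneme probabilities and then stitching the resulting sequence into words:

import numpy as np

PHONEMES = ["HH", "EH", "L", "OW", "SIL"]  # tiny illustrative inventory

def decode_phonemes(neural_features: np.ndarray, weights: np.ndarray) -> list[str]:
    # neural_features: (time_bins, channels); weights: (channels, phonemes).
    # A linear classifier plus softmax stands in for the trained model.
    logits = neural_features @ weights
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return [PHONEMES[i] for i in probs.argmax(axis=1)]

def stitch(phoneme_sequence: list[str]) -> str:
    # Collapse repeated labels and drop silence, CTC-style, before a
    # language model would assemble the phonemes into words and sentences.
    out, prev = [], None
    for p in phoneme_sequence:
        if p != prev and p != "SIL":
            out.append(p)
        prev = p
    return "-".join(out)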

Willett and his colleagues have previously demonstrated that, when people with paralysis try to make speaking or handwriting movements (even though they cannot, because their throat, lip, tongue and cheek muscles or the nerve connections to them are too weak), a BCI can pick up the resulting brain signals and translate them into words with high accuracy.

Recently, the scientists took another important step: They investigated brain signals related to "inner speech," or language-based but silent, unuttered thought.

Willett is the senior author, and postdoctoral scholar Erin Kunz, PhD, and graduate student Benyamin Meschede-Krasa are the co-lead authors of a new study about this exploration, published Aug. 14 in Cell.

Willett said:"Inner speech (also called 'inner monologue' or self-talk) is the imagination of speech in your mind – imagining the sounds of speech, the feeling of speaking, or both. We wanted to know whether a BCI could work based only on neural activity evoked by imagined speech, as opposed to attempts to physically produce speech. For people with paralysis, attempting to speak can be slow and fatiguing, and if the paralysis is partial, it can produce distracting sounds and breath control difficulties."

"We studied four people with severe speech and motor impairments who had microelectrode arrays placed in motor areas of their brain. We found that inner speech evoked clear and robust patterns of activity in these brain regions. These patterns appeared to be a similar, but smaller, version of the activity patterns evoked by attempted speech. We found that we could decode these signals well enough to demonstrate a proof of principle, although still not as well as we could with attempted speech. This gives us hope that future systems could restore fluent, rapid, and comfortable speech to people with paralysis via inner speech alone."

"The existence of inner speech in motor regions of the brain raises the possibility that it could accidentally 'leak out'; in other words, a BCI could end up decoding something the user intended only to think, not to say aloud. While this might cause errors in current BCI systems designed to decode attempted speech, BCIs do not yet have the resolution and fidelity needed to accurately decode rapid, unconstrained inner speech, so this would probably just result in garbled output. Nevertheless, we're proactively addressing the possibility of accidental inner speech decoding, and we've come up with several promising solutions."

"For current-generation BCIs, which are designed to decode neural activity evoked by attempts to physically produce speech, we demonstrated in our study a new way to train the BCI to more effectively ignore inner speech, preventing it from accidentally being picked up by the BCI. For next-generation BCIs that are intended to decode inner speech directly – which could enable higher speeds and greater comfort – we demonstrated a password-protection system that prevents any inner speech from being decoded unless the user first imagines the password (for example, a rare phrase that wouldn't otherwise be accidentally imagined, such as "Orange you glad I didn't say banana"). Both of these methods were extremely effective at preventing unintended inner speech from leaking out."

"Improved hardware will enable more neurons to be recorded and will be fully implantable and wireless, increasing BCIs' accuracy, reliability, and ease of use. Several companies are working on the hardware part, which we expect to become available within the next few years. To improve the accuracy of inner speech decoding, we are also interested in exploring brain regions outside of the motor cortex, which might contain higher-fidelity information about imagined speech – for example, regions traditionally associated with language or with hearing."

Once the system works, we could have forcible installation with no password to make you "spill the beans"...


Original Submission

posted by janrinok on Sunday August 24, @01:12AM   Printer-friendly

They're cheap and grew up with AI ... so you're firing them why?

Amazon Web Services CEO Matt Garman has suggested that firing junior workers because AI can do their jobs is "the dumbest thing I've ever heard."

Garman made that remark in a conversation [YouTube 51:35 -- JE] with AI investor Matthew Berman, during which he talked up AWS's Kiro AI-assisted coding tool and said he's encountered business leaders who think AI tools "can replace all of our junior people in our company."

That notion led to the "dumbest thing I've ever heard" quote, followed by a justification that junior staff are "probably the least expensive employees you have" and also the most engaged with AI tools.

"How's that going to work when ten years in the future you have no one that has learned anything," he asked. "My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have."

Naturally he thinks AI – and Kiro, natch – can help with that education.

Garman is also not keen on another idea about AI – measuring its value by what percentage of code it contributes at an organization.

"It's a silly metric," he said, because while organizations can use AI to write "infinitely more lines of code" it could be bad code.

"Often times fewer lines of code is way better than more lines of code," he observed. "So I'm never really sure why that's the exciting metric that people like to brag about."

That said, he's seen data that suggests over 80 percent of AWS's developers use AI in some way.

"Sometimes it's writing unit tests, sometimes it's helping write documentation, sometimes it's writing code, sometimes it's kind of an agentic workflow" in which developers collaborate with AI agents.

Garman said usage of AI tools by AWS developers increases every week.

The CEO also offered some career advice for the AI age, suggesting that kids these days need to learn how to learn – and not just learn specific skills.

"I think the skills that should be emphasized are how do you think for yourself? How do you develop critical reasoning for solving problems? How do you develop creativity? How do you develop a learning mindset that you're going to go learn to do the next thing?"

Garman thinks that approach is necessary because technological development is now so rapid it's no longer sensible to expect that studying narrow skills can sustain a career for 30 years. He wants educators to instead teach "how do you think and how do you decompose problems", and thinks kids who acquire those skills will thrive.


Original Submission