posted by jelizondo on Monday June 30, @06:30PM   Printer-friendly
from the Catch-Me-If-You-Can dept.

Scientists unlock the light-bending secrets of squid skin:

Squid are famous for flashing from glass-clear to kaleidoscopic in the blink of an eye, but biologists have long puzzled over the physical trick behind the act.

A research team led by the University of California, Irvine, joined by cephalopod experts at the Marine Biological Laboratory in Woods Hole, took that mystery head-on.

By peering into squid skin in three dimensions, they uncovered a hidden forest of nano-columns built from an uncommon protein called reflectin.

How squid skin bends light

These columns act much like tiny mirrors, bouncing or passing light depending on how close together they sit.

Alon Gorodetsky, an expert in chemical and biomolecular engineering at UC Irvine, is the senior author of the research.

"In nature, many animals use Bragg reflectors [which selectively transmit and reflect light at specific wavelengths] for structural coloration," he said. "A squid's ability to rapidly and reversibly transition from transparent to colored is remarkable."

"We found that cells containing specialized subcellular columnar structures with sinusoidal refractive index distributions enable the squid to achieve such feats."

Studying the master shapeshifter

The animals under study were longfin inshore squid. "These are longfin inshore squids – Doryteuthis pealeii – that are native to the Atlantic Ocean," Gorodetsky said.

"Marine Biological Laboratory has been famous for studying this squid and other cephalopods for more than a century. We were fortunate to be able to leverage their world-class expertise with properly collecting, handling, and studying these biological specimens."

Inside the squid mantle, shimmering cells known as iridophores – or iridocytes – hold the secret.

To visualize them without disturbing their delicate innards, the team used holotomography, a form of quantitative phase microscopy that maps how light bends through a sample.

Georgii Bogdanov, a postdoctoral researcher in chemical and biomolecular engineering at UC Irvine, is another lead author of the study.

"Holotomography used the high refractive index of reflectin proteins to reveal the presence of sinusoidal refractive index distributions within squid iridophore cells," he said.

Reflectin platelets form spiral columns inside iridophores, enabling cephalopods to control how their skin transmits and reflects light.

Borrowing nature's blueprint

Once the researchers understood the architecture – the stacked, spiraling Bragg reflectors – they wondered whether they could engineer something similar.

Studying squid color change inspired flexible materials that shift appearance using tiny, wavy Bragg reflector columns. They added nanostructured metal films, enabling the materials to also shift appearance in the infrared spectrum.

Using a mixture of polymer chemistry, nanofabrication, and metal coatings, the group built thin films that shift color when stretched, pressed, or heated.
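As rough intuition for the stretch response: a soft, roughly incompressible stack stretched in-plane thins out through its depth, compressing the period and blue-shifting the peak. A hedged sketch built on that simple volume-conservation assumption (not the authors' model):

    # Stretching an incompressible film by strain e in-plane thins it by
    # roughly 1/sqrt(1 + e), and the Bragg peak shifts with the thickness.
    def peak_under_strain(peak0_nm, strain):
        return peak0_nm / (1.0 + strain) ** 0.5

    print(peak_under_strain(580, 0.0))  # relaxed: 580 nm
    print(peak_under_strain(580, 0.2))  # 20% stretch: ~529 nm, toward blue-green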

They went a step further by tailoring the same films to tune their infrared emission. This allows the material to hide or reveal heat signatures as well as visible hues.

"These bioinspired materials go beyond simple static color control, as they can dynamically adjust both their appearances in the visible and infrared wavelengths in response to stimuli," said co-author Aleksandra Strzelecka, a PhD student at UC Irvine.

"Part of what makes this technology truly exciting is its inherent scalability," she said. "We have demonstrated large-area and arrayed composites that mimic and even go beyond the squid's natural optical capabilities."

This opens the door to many applications ranging from adaptive [or active] camouflage to responsive fabrics to multispectral displays to advanced sensors.

Future optics from squid skin

The implications stretch far beyond a novelty coating. The same Bragg-style stacks could sharpen laser output, filter signals in fiber-optic lines, and boost solar-cell efficiency. They could also enable real-time structural health monitoring in bridges and aircraft.

"This study is an exciting demonstration of the power of coupling basic and applied research," Gorodetsky said. "We have likely just started to scratch the surface of what is possible for cephalopod-inspired tunable optical materials in our laboratory."

Every advance stemmed from squid skin cells with tiny winding columns just hundreds of nanometers wide. Despite their size, these structures could orchestrate a light show visible from meters away.

The team's work shows how decoding those natural nanostructures can lead to devices that humans manufacture by the meter rather than by the molecule.

Squid-inspired tech evolves

Researchers aim to speed up film response and develop biodegradable versions for sensors and medical patches.

Meanwhile, the discovery reaffirms why cephalopods remain a favorite subject for materials scientists: they are masters of manipulating light without a single pigment or battery.

In the lab, that mastery is starting to take shape as fabrics that cool soldiers in the desert by day, buildings that shimmer to reduce air-conditioning loads, and flexible screens that display both artwork and thermal data.

The next chapter, as Gorodetsky's group sees it, will be written where biology and engineering merge.

The squid's split-second shape-shifting trick has journeyed from the Atlantic deep to a microscope slide and into a polymer film.

Soon, it may appear on your jacket sleeve or smartphone case, blending vivid color with invisible infrared control just like in cephalopods.

The study is published in the journal Science.


Original Submission

posted by jelizondo on Monday June 30, @01:45PM   Printer-friendly
from the let's-delve-into-em-dashes dept.

AI isn't just impacting how we write — it's changing how we speak and interact with others. And there's only more to come:

Join any Zoom call, walk into any lecture hall, or watch any YouTube video, and listen carefully. Past the content and inside the linguistic patterns, you'll find the creeping uniformity of AI voice. Words like "prowess" and "tapestry," which are favored by ChatGPT, are creeping into our vocabulary, while words like "bolster," "unearth," and "nuance," words less favored by ChatGPT, have declined in use. Researchers are already documenting shifts in the way we speak and communicate as a result of ChatGPT — and they see this linguistic influence accelerating into something much larger.

In the 18 months after ChatGPT was released, speakers used words like "meticulous," "delve," "realm," and "adept" up to 51 percent more frequently than in the three years prior, according to researchers at the Max Planck Institute for Human Development, who analyzed close to 280,000 YouTube videos from academic channels. The researchers ruled out other possible change points before ChatGPT's release and confirmed these words align with those the model favors, as established in an earlier study comparing 10,000 human- and AI-edited texts. The speakers don't realize their language is changing. That's exactly the point.
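The measurement behind that claim is simple in spirit: count how often the favored words appear per unit of speech before and after a cutoff, then compare the rates. A toy version on hypothetical mini-corpora (the real study worked from transcripts of roughly 280,000 videos):

    from collections import Counter

    GPT_FAVORED = {"delve", "meticulous", "realm", "adept"}

    def rate_per_10k(tokens):
        counts = Counter(t.lower().strip(".,") for t in tokens)
        return 10_000 * sum(counts[w] for w in GPT_FAVORED) / len(tokens)

    before = "we delve into the data to find a pattern".split()
    after = "we delve into the realm of data with a meticulous and adept analysis".split()

    print(f"{rate_per_10k(after) / rate_per_10k(before):.1f}x")  # ~2.8x increase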

One word, in particular, stood out to researchers as a kind of linguistic watermark. "Delve" has become an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here. "We internalize this virtual vocabulary into daily communication," says Hiromu Yakura, the study's lead author and a postdoctoral researcher at the Max Planck Institute of Human Development.

But it's not just that we're adopting AI language — it's about how we're starting to sound. Even though current studies mostly focus on vocabulary, researchers suspect that AI influence is starting to show up in tone, too — in the form of longer, more structured speech and muted emotional expression. As Levin Brinkmann, a research scientist at the Max Planck Institute of Human Development and a coauthor of the study, puts it, "'Delve' is only the tip of the iceberg."

AI shows up most obviously in functions like smart replies, autocorrect, and spellcheck. Research out of Cornell looks at our use of smart replies in chats, finding that use of smart replies increases overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated their partner as less collaborative and more demanding. Crucially, it wasn't actual AI usage that turned them off — it was the suspicion of it. We form perceptions based on language cues, and it's really the language properties that drive those impressions, says Malte Jung, Associate Professor of Information Science at Cornell University and a co-author of the study.

[...] We're approaching a splitting point, where AI's impacts on how we speak and write move between the poles of standardization, like templating professional emails or formal presentations, and authentic expression in personal and emotional spaces. Between those poles, there are three core tensions at play. Early backlash signals, like academics avoiding "delve" and people actively trying not to sound like AI, suggest we may self-regulate against homogenization. AI systems themselves will likely become more expressive and personalized over time, potentially reducing the current AI voice problem. And the deepest risk of all, as Naaman pointed to, is not linguistic uniformity but losing conscious control over our own thinking and expression.

The future isn't predetermined between homogenization and hyperpersonalization: it depends on whether we'll be conscious participants in that change. We're seeing early signs that people will push back when AI influence becomes too obvious, while technology may evolve to better mirror human diversity rather than flatten it. This isn't a question about whether AI will continue shaping how we speak — because it will — but whether we'll actively choose to preserve space for the verbal quirks and emotional messiness that make communication recognizably, irreplaceably human.

See also: Blade Runners of LinkedIn Are Hunting for Replicants – One Em Dash at a Time


Original Submission

posted by hubie on Monday June 30, @09:03AM   Printer-friendly
from the anthropic-principle-nostalgia-tour dept.

Arthur T Knackerbracket has processed the following story:

When we look out into the universe, we know it can support life – if it couldn’t, we wouldn’t exist. This has been stated in different ways over the years, but the essential thrust makes up the core of a philosophical argument known as the anthropic principle. It sounds obvious, even tautological, but it isn’t quite as simple as that.

To get your head around it, start with what scientists call the fine-tuning problem, the fact our universe seems perfectly balanced on the knife’s edge of habitability. Many fundamental constants, from the mass of a neutron to the strength of gravity, must have very specific values for life to be possible. “Some of these constants, if you make them too large, you just destabilise every atom,” says Luke Barnes at Western Sydney University in Australia.

The anthropic principle began as an attempt to explain why the universe is in this seemingly improbable state, and it boils down to a simple idea: the universe has to be this way, or else we wouldn’t be here to observe it.

There are two main formulations of the principle, both of which were set out in a 1986 book by cosmologist-mathematicians John Barrow and Frank Tipler. The weak principle states that because life exists, the universe’s fundamental constants are – at least here and now – in the range that allows life to develop. The strong principle adds the powerful statement that the fundamental constants must have values in that range because they are consistent with life existing. The “must” is important, as it can be taken as implying that the universe exists in order to support life.

If the weak principle is “I heard a tree fall in the forest, and therefore I must be in a place where trees can grow”, the strong principle says “A tree has fallen nearby, and therefore this planet was destined to have forests all along.”

For scientists today, the weak anthropic principle serves as a reminder of possible biases in observations of the cosmos, particularly if it isn’t the same everywhere. “If we live in a universe that is different from place to place, then we will naturally find ourselves in a place that has some specific conditions conducive to life,” says Sean Carroll at Johns Hopkins University in Maryland.

As for the strong version of the principle, there are physicists who consider it useful too, Barnes among them. He works on developing different flavours of multiverse models and sees the strong principle as a handy guide. It implies that, within a multiverse, there is a 100 per cent chance of at least one universe forming that is conducive to life. So, for any given multiverse model, the closer that chance is to 100 per cent, the more plausible it is. If the probability is, say, around 50 per cent, Barnes sees that as a good omen for the model’s veracity. “But if it’s one-in-a-squillion, then that’s a problem,” he says.
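Barnes's plausibility test reduces to a one-line probability: if a model yields N universes, each independently life-permitting with probability p, the chance that at least one is habitable is 1 - (1 - p)^N. A toy calculation with illustrative numbers, not values from any real model:

    def p_at_least_one(p, n):
        # Probability that at least one of n independent universes permits life.
        return 1.0 - (1.0 - p) ** n

    print(p_at_least_one(0.5, 1))         # a single coin-flip universe: 0.5
    print(p_at_least_one(1e-12, 10**6))   # one-in-a-squillion odds stay hopeless...
    print(p_at_least_one(1e-12, 10**16))  # ...unless the ensemble is vast enough: ~1.0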

In truth, however, most physicists write off the strong principle as simply too strong. It suggests the universe is deterministic – that life was always certain to emerge – according to Elliott Sober at the University of Wisconsin–Madison. "But that probability could have been tiny and life could have still arisen, and the observations would be the same."

Where does that leave us? The strong principle does, on the surface, provide an answer to the fine-tuning problem – but that answer is widely considered unreasonable. On the other hand, while the weak principle doesn’t provide a reason why the constants of our universe are so finely tuned, it is a useful tool for researchers. As principles go, this one is rather slippery.


Original Submission

posted by hubie on Monday June 30, @04:18AM   Printer-friendly
from the but-not-yet dept.

Arthur T Knackerbracket has processed the following story:

Over the past decade, quantum computing has grown into a billion-dollar industry. Everyone seems to be investing in it, from tech giants, such as IBM and Google, to the US military.

But Ignacio Cirac at the Max Planck Institute of Quantum Optics in Germany, a pioneer of the technology, has a more sober assessment. “A quantum computer is something that at the moment does not exist,” he says. That is because building one that actually works – and is practical to use – is incredibly difficult.

Rather than the “bits” of conventional machines, these computers use quantum bits, or qubits, to encode information. These can be made in several ways, from tiny superconducting circuits to extremely cold atoms, but all of them are complex to build.

The upside is that their quantum properties can be used to do certain kinds of computation more quickly than standard computers.

Such speed-ups are attractive for a range of problems that normal computers struggle with, from simulating exotic physics systems to efficiently scheduling passenger flights or grocery deliveries to supermarkets. Five years ago, it seemed quantum computers would make short work of these and many other computational challenges.

Today, the situation is a lot more nuanced. Progress in building ever bigger quantum computers has, admittedly, been stunning, with several companies developing machines with more than 1000 qubits. But this has also revealed impossible-to-ignore difficulties.

[...] So, which problems might still benefit from quantum computation? Quantum computers could break the cryptography systems we currently use for secure communication, and this makes the technology interesting to governments and other institutions whose security could be imperiled by it, says Scott Aaronson at the University of Texas at Austin.

Another place where quantum computers should still be useful is in modelling materials and chemical reactions. This is because quantum computers, themselves a system of quantum objects, are perfectly suited to simulate other quantum systems, such as electrons, atoms and molecules.

“These will be simplified models; they won’t represent real materials. But if you design the system appropriately, they’ll have enough properties of the real materials that you can learn something about their physics,” says Daniel Gottesman at the University of Maryland.

Quantum chemistry simulations may sound more niche than scheduling flights, but some of the possible outcomes – finding a room-temperature superconductor, say – would be transformative.

The extent to which all this can truly be realised is significantly dependent on quantum algorithms, the instructions that tell quantum computers how to run – and help correct those pesky errors. This is a challenging new field that Vedran Dunjko at Leiden University in the Netherlands says is forcing researchers like him to confront fundamental questions about what information and computing are.


Original Submission

posted by hubie on Sunday June 29, @11:32PM   Printer-friendly

https://www.bbc.com/news/articles/c6256wpn97ro

Work has begun on a controversial project to create the building blocks of human life from scratch, in what is believed to be a world first.

The research has been taboo until now because of concerns it could lead to designer babies or unforeseen changes for future generations.

But now the world's largest medical charity, the Wellcome Trust, has given an initial £10m to start the project and says it has the potential to do more good than harm by accelerating treatments for many incurable diseases.

Dr Julian Sale, of the MRC Laboratory of Molecular Biology in Cambridge, who is part of the project, told BBC News the research was the next giant leap in biology.

"The sky is the limit. We are looking at therapies that will improve people's lives as they age, that will lead to healthier aging with less disease as they get older.

"We are looking to use this approach to generate disease-resistant cells we can use to repopulate damaged organs, for example in the liver and the heart, even the immune system," he said.

But critics fear the research opens the way for unscrupulous researchers seeking to create enhanced or modified humans.

Dr Pat Thomas, director of the campaign group Beyond GM, said: "We like to think that all scientists are there to do good, but the science can be repurposed to do harm and for warfare".

[...] The Human Genome Project enabled scientists to read all human genes like a bar code. The new work that is getting under way, called the Synthetic Human Genome Project, potentially takes this a giant leap forward – it will allow researchers not just to read a molecule of DNA, but to create parts of it – maybe one day all of it – molecule by molecule from scratch.

[...] "Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA in DNA that already exists in living systems".

The project's work will be confined to test tubes and dishes and there will be no attempt to create synthetic life. But the technology will give researchers unprecedented control over human living systems.

And although the project is hunting for medical benefits, there is nothing to stop unscrupulous scientists misusing the technology.

[...] Dr Thomas is concerned about how the technology will be commercialised by healthcare companies developing treatments emerging from the research.

"If we manage to create synthetic body parts or even synthetic penis, then who owns them. And who owns the data from these creations? "

Given the potential misuse of the technology, the question for Wellcome is why they chose to fund it. The decision was not made lightly, according to Dr Tom Collins, who gave the funding go-ahead.

"We asked ourselves what was the cost of inaction," he told BBC News.

"This technology is going to be developed one day, so by doing it now we are at least trying to do it in as responsible a way as possible and to confront the ethical and moral questions in as upfront way as possible".

A dedicated social science programme will run in tandem with the project's scientific development and will be led by Prof Joy Zhang, a sociologist, at the University of Kent.

"We want to get the views of experts, social scientists and especially the public about how they relate to the technology and how it can be beneficial to them and importantly what questions and concerns they have," she said.


Original Submission

posted by hubie on Sunday June 29, @06:50PM   Printer-friendly
from the TikTok-wars dept.

Commandos, secret operations and drones now offer action video that is effective for messaging on social media:

Israel's airstrikes on Iran exploded across the world's screens as a public display of military firepower. Underpinning that was a less visible but equally vital Israeli covert operation that pinpointed targets, guided the attacks and struck Iran from within.

Agents from Israel's spy agency, Mossad, operated inside Iran before and during the initial attacks earlier this month, Israeli officials said. The disclosure was itself an act of psychological warfare—a boast of Israel's ability to act with impunity inside Iran's borders and Tehran's failure to stop it.

Israel flaunted its tactical success by releasing grainy video emblazoned with Mossad's seal that it said showed operatives and drone strikes inside Iran.

Not long ago, such covert operations stayed secret. Today, belligerents from Ukraine to the U.S. increasingly broadcast their triumphs, with messages amplified in real time by social-media networks.

When T.E. Lawrence wanted to publicize his World War I secret forays deep into Ottoman territory, he wrote a book and articles. Nobody saw those commando raids for half a century until the blockbuster film "Lawrence of Arabia" recreated his exploits.

These days, barely hours pass before the world sees action footage of Ukraine's latest drone attacks on Russian military targets. Israel's detonation of explosives hidden inside Hezbollah militants' pagers played out in almost real time across the internet. The U.S. repeatedly fed social media the details—and sometimes imagery—of its special-operations strikes on Islamic State leaders in recent years.

The result is a major shift in warfare: Call it the battle of timelines. Spying and clandestine operations, in the traditional sense, have never been so difficult. Biometric data makes document forgery obsolete. Billions of cameras, attached to phones, rearview mirrors and doorbells, stand ready to capture the movements of any operative hoping to lurk invisibly. In seconds, artificial intelligence can rifle through millions of photos to identify the faces of foreign spies operating in the wild.
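The scale of that face-matching problem is easy to underestimate, and also easy to demonstrate: with modern embeddings, searching a large gallery is a single matrix-vector product. A toy sketch with random vectors standing in for the output of a real face-recognition model:

    import numpy as np

    # Toy nearest-neighbor face search by cosine similarity. The embeddings are
    # random stand-ins; a real system would use a trained face-embedding model.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(100_000, 128)).astype(np.float32)
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    query = gallery[42_000] + 0.05 * rng.normal(size=128)  # noisy photo of person 42000
    query /= np.linalg.norm(query)

    best = int(np.argmax(gallery @ query))  # one matrix-vector product over the gallery
    print(best)                             # 42000, recovered in milliseconds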

Instead, fighting in Ukraine and the Middle East is bringing a new doctrine to spycraft stemming from changes in both what their organizers seek to achieve and how information spreads. Operations that would have once been designed to remain under wraps are now meant to be seen, to produce spectacular optics. They play out not just on the battlefield, but also on social media, boosting morale at home while demoralizing the enemy watching from the other side of the screen.

[...] The communication war is raging in an information free-for-all. Governments and elites that until the middle of the 20th century controlled their information environment are today trying just to navigate it, said Ofer Fridman, a former Israeli officer and a scholar of war studies at King's College London. "Now they're struggling to communicate with their target audience through overwhelming noise," he said.

[...] For its part, Russia has made minimal effort to cover its own tracks in its barely disguised spree of covert operations in Europe. The GRU, the Russian military-intelligence organization, has repeatedly hired European civilians over social media, paying them to burn down a shopping mall in Warsaw, or an IKEA in Lithuania, according to Western officials. When a Russian helicopter pilot who defected to Ukraine was shot dead in Spain last year, Russia's spy chiefs didn't deny involvement—they all but boasted of it.

Alternate article source


Original Submission

posted by janrinok on Sunday June 29, @02:05PM   Printer-friendly

First images from world's largest digital camera reveal galaxies and cosmic collisions:

Millions of stars and galaxies fill a dreamy cosmic landscape in the first-ever images released from a new astronomical observatory with the largest digital camera in the world.

In one composite released Monday, bright pink clouds of gas and dust light up the Trifid and Lagoon nebulas, located several thousand light-years away from Earth. In another, a bonanza of stars and galaxies fills the sky, revealing stunning spirals and even a trio of galaxies merging and colliding.

A separate video uncovered a swarm of new asteroids, including 2,104 never-before-seen space rocks in our solar system and seven near-Earth asteroids that pose no danger to the planet.

The images and videos from the Vera C. Rubin Observatory represented just over 10 hours of test observations and were sneak peeks ahead of an event Monday that was livestreamed from Washington, D.C.

Keith Bechtol, an associate professor in the physics department at the University of Wisconsin-Madison who has been involved with the Rubin Observatory for nearly a decade, is the project's system verification and validation scientist, making sure the observatory's various components are functioning properly.

He said teams were floored when the images streamed in from the camera.

"There were moments in the control room where it was just silence, and all the engineers and all the scientists were just seeing these images, and you could just see more and more details in the stars and the galaxies," Bechtol told NBC News. "It was one thing to understand at an intellectual level, but then on this emotional level, we realized basically in real time that we were doing something that was really spectacular."

In one of the newly released images, the Rubin Observatory was able to spot objects in our cosmic neighborhood — asteroids in our solar system and stars in the Milky Way — alongside far more distant galaxies that are billions of light-years away.

"In fact, for most of the objects that you see in these images, we're seeing light that was emitted before the formation of our solar system," Bechtol said. "We are seeing light from across billions of years of cosmic history. And many of these galaxies have never been seen before."

Astronomers have been eagerly anticipating the first images from the new observatory, with experts saying it could help solve some of the universe's most enduring mysteries and revolutionize our understanding of the cosmos.

"We're entering a golden age of American science," Harriet Kung, acting director of the Energy Department's Office of Science, said in a statement.

"We anticipate that the observatory will give us many insights into our past, our future and possibly the fate of the universe," Kung said during Monday's event.

The Vera C. Rubin Observatory is jointly operated by the Energy Department and the U.S. National Science Foundation.

The facility, named after the American astronomer who discovered evidence of dark matter in the universe, sits atop Cerro Pachón, a mountain in central Chile. The observatory is designed to take roughly 1,000 images of the Southern Hemisphere sky each night, covering the entire visible southern sky every three to four nights.

The early images were the result of a series of test observations, but they mark the beginning of an ambitious 10-year mission that will involve scanning the sky every night for a decade to capture every detail and visible change.

"The whole design of the observatory has been built around this capability to point and shoot, point and shoot," Bechtol said. "Every 40 seconds we're moving to a new part of the sky. A simple way to think of it is that we're trying to bring the night sky to life in a way that we haven't been able to do."

By repeating that process every night for the next 10 years, scientists will be able to compile enormous images of the entire visible southern sky, allowing them to see stars changing in brightness, asteroids moving across the solar system, supernova explosions and untold other cosmic phenomena.

"Through this remarkable scientific facility, we will explore many cosmic mysteries, including the dark matter and dark energy that permeate the universe," Brian Stone, chief of staff at the National Science Foundation, said in a statement.

See also: Vera C. Rubin Observatory - Wikipedia


Original Submission

posted by janrinok on Sunday June 29, @09:19AM   Printer-friendly
from the preben-preben-and-preben dept.

In the not too distant future you'll own the copyright to your face, voice, and bodily features as far as digital reproductions are concerned. That is, if you live in Denmark. In an effort to combat deepfakes, Danish citizens will gain those rights. The question is whether it will matter much if the deepfakes are made or stored outside of Denmark. But it could perhaps be the start of something other EU countries might adopt if it turns out well for them.

The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.

It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.

It will also cover "realistic, digitally generated imitations" of an artist's performance without consent. Violation of the proposed rules could result in compensation for those affected.

What about identical twins? Or people who just look very much alike? Unclear as yet. Also, as noted, how is enforcement going to take place?

https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence


Original Submission

posted by janrinok on Sunday June 29, @04:46AM   Printer-friendly

The reason that the site was offline is that the cable to the NOC has been cut - again! It is not something that we could control.

We apologise for the problem. You could have stayed up-to-date with the cause and rectification if you had joined us on our back-up IRC - Libera.Chat, ##soylentnews (irc.libera.chat/6697)

posted by janrinok on Sunday June 29, @04:35AM   Printer-friendly

The Fedora project is planning to reduce its package maintenance burden by dropping support for 32-bit x86 (i686) packages from the distribution's repositories. The plan detailed in the (mislabelled) change proposal is to drop 32-bit packages for Fedora 44. "By dropping completely the i686 architecture, Fedora will decrease the burden on package maintainers, release engineering, infrastructure, and users. Building and maintaining packages for i686 (and 32-bit architectures in general, but i686 is the last 32-bit architecture - partially - supported by Fedora) has been requiring more and more effort.

Many projects have already been officially dropping support for building and / or running on 32-bit architectures, requiring either adding back support for this architecture downstream in Fedora, or requiring packaging changes in a significant number of packages to adapt to this dropped support." The discussion under the proposal points out some of the situations where users will be unable to run software, such as the Steam gaming portal, under the current plan.

https://distrowatch.com/dwres.php?resource=showheadline&story=20014


Original Submission

posted by jelizondo on Saturday June 28, @08:15PM   Printer-friendly
from the black-is-the-new-blue dept.

Rejoice! No more Blue Screens of Death. It will now become the Black Screen of Death. Good thing the abbreviation will be unchanged, BSOD. Let's hope it won't become as prevalent and lethal as the Black Death.

The Blue Screen of Death (BSOD) has held strong in Windows for nearly 40 years, but that's about to change. Microsoft revealed earlier this year that it was overhauling its BSOD error message in Windows 11, and the company has now confirmed that it will soon be known as the Black Screen of Death. The new design drops the traditional blue color, frowning face, and QR code in favor of a simplified black screen.

https://www.theverge.com/news/692648/microsoft-bsod-black-screen-of-death-color-change-official

Is there a favorite horrible crash message? The bomb? The cryptic codes? Sad faces? If there is such a thing. A black and red blinking Guru Meditation always warms my soul.


Original Submission

posted by jelizondo on Saturday June 28, @03:30PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

SPARCS will deploy an electrodynamic tether to attempt a controlled reentry

More and more satellites are being added to low Earth orbit (LEO) every month. As that number continues to increase, so do the risks of that critical area surrounding Earth becoming impassable, trapping us on the planet for the foreseeable future. Ideas from different labs have presented potential solutions to this problem, but one of the most promising, electrodynamic tethers (EDTs), have only now begun to be tested in space. A new CubeSat called the Spacecraft for Advanced Research and Cooperative Studies (SPARCS) mission from researchers at the Sharif University of Technology in Tehran hopes to contribute to that effort by testing an EDT and intersatellite communication system as well as collecting real-time data on the radiation environment of its orbital path.

SPARCS actually consists of two separate CubeSats. SPARCS-A is a 1U CubeSat primarily designed as a communications platform, with the mission design requiring it to talk to SPARCS-B, which is a 2U CubeSat that, in addition to the communication system, contains an EDT. That EDT, which can measure up to 12 meters in length, is deployed via a servomotor, with a camera watching to ensure proper deployment.

EDTs are essentially giant poles with electric current running through them. They use this current, and the tiny magnetic field it produces, to push off of Earth's magnetosphere using a property called the Lorentz force. This allows the satellite to adjust its orbit without the use of fuel, simply by orienting its EDT in a specific direction (which the EDT itself can assist with) and then using the Lorentz force to either push it up into a higher orbit, or—more significant for the purposes of technology demonstration—to slow the CubeSat down to a point where it can make a controlled entry into the atmosphere.
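For a sense of scale, the force on a straight current-carrying tether perpendicular to the geomagnetic field is |F| = B x I x L. A back-of-envelope sketch with assumed values typical of LEO demonstrations (the article gives the tether length but not SPARCS' operating current):

    # Lorentz force on a straight tether perpendicular to Earth's field: F = B*I*L.
    B_LEO = 30e-6   # tesla: rough geomagnetic field strength in LEO (assumed)
    current = 0.5   # amperes through the tether (assumed)
    length = 12.0   # meters: the maximum tether length quoted above

    force = B_LEO * current * length
    print(f"{force * 1e6:.0f} uN")  # ~180 uN: tiny, but fuel-free and continuous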

The final piece of SPARCS' kit is its dosimeter, which is intended to monitor the radiation environment of its orbit. As anyone familiar with spacecraft design knows, radiation hardening of electronics is absolutely critical to the success of a mission, but it is also expensive and time-consuming, so it is best done at the minimal required level. Understanding the radiation environment of this popular orbital path can help future engineers make better, and hopefully less expensive, design decisions tailored to operation in this specific area.


Original Submission

posted by janrinok on Saturday June 28, @11:45AM   Printer-friendly

https://distrowatch.com/dwres.php?resource=showheadline&story=20013

Typically open source operating systems work to close off avenues for potential bugs and exploits, but sometimes, in the quest for convenience or performance, security takes a lower priority. We see an example of this in Canonical disabling a vulnerability mitigation for Intel GPUs in order to gain a 20% performance boost. The issue report on Launchpad states: "After discussion between Intel and Canonical's security teams, we are in agreement that Spectre no longer needs to be mitigated for the GPU at the Compute Runtime level.

At this point, Spectre has been mitigated in the kernel, and a clear warning from the Compute Runtime build serves as a notification for those running modified kernels without those patches. For these reasons, we feel that Spectre mitigations in Compute Runtime no longer offer enough security impact to justify the current performance trade-off."


Original Submission

posted by janrinok on Saturday June 28, @07:03AM   Printer-friendly
from the edit-news-oldtime dept.

Microsoft "relaunches" MS Editor (edit) as open source with some improvements and old limitations removed. You no longer need MS DOS to run it either.

The fact that Microsoft's 1991 design philosophy from MS-DOS translates so well to 2025 suggests that most fundamental aspects of text editing haven't changed much despite 34 years of tech evolution.

Or it's just really hard to mess up a simple text editor. Yet they somehow succeeded.

So they mess up, or enhance, Notepad. But then they figure out they need an actual text editor and not an AI-fueled monstrosity, so they look deep in the DOS catalog. At least they didn't bring back Edlin.

Any other old DOS commands that are missing and need a comeback/reboot? That might seem like an odd question for people still used to a command-line interface. But for other people ...

https://arstechnica.com/gadgets/2025/06/microsoft-surprises-ms-dos-fans-with-remake-of-ancient-text-editor-that-works-on-linux/
https://devblogs.microsoft.com/commandline/edit-is-now-open-source/
https://github.com/microsoft/edit


Original Submission

posted by janrinok on Saturday June 28, @02:17AM   Printer-friendly

The Federal Housing Finance Agency is working to let borrowers use crypto as part of their federal mortgage applications without converting it to cash:

In a landmark shift for the U.S. housing finance system, the Federal Housing Finance Agency has issued a directive ordering Fannie Mae and Freddie Mac to formally consider cryptocurrency as an asset in single-family mortgage loan risk assessments.

The move, signed by FHFA Director William J. Pulte on Wednesday, signals a new era of crypto integration into traditional financial infrastructure — this time within the core of American home lending.

The order directs both housing finance giants to develop proposals that include digital assets — without requiring borrowers to liquidate them into U.S. dollars prior to a loan closing.

Pulte said in a post on X that the move aligns with President Donald Trump's vision "to make the United States the crypto capital of the world."

Historically, cryptocurrency has been excluded from underwriting frameworks due to volatility, regulatory uncertainty, and the inability to easily verify reserves. This directive changes that.


Original Submission