
posted by janrinok on Tuesday April 08, @09:55PM   Printer-friendly

No one can seem to agree on what an AI agent is:

Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will "join the workforce" this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said that Salesforce's goal is to be "the number one provider of digital labor in the world" via the company's various "agentic" services.

But no one can seem to agree on what an AI agent is, exactly.

In the last few years, the tech industry has boldly proclaimed that AI "agents" — the latest buzzword — are going to change everything. In the same way that AI chatbots like OpenAI's ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.

That may be true. But it also depends on how one defines "agents," which is no easy task. Much like other AI-related jargon (e.g. "multimodal," "AGI," and "AI" itself), the terms "agent" and "agentic" are becoming diluted to the point of meaninglessness.

That threatens to leave OpenAI, Microsoft, Salesforce, Amazon, Google, and the countless other companies building entire product lineups around agents in an awkward place. An agent from Amazon isn't the same as an agent from Google or any other vendor, and that's leading to confusion — and customer frustration.

[...] So why the chaos?

Well, agents — like AI — are a nebulous thing, and they're constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI's Operator, Google's Project Mariner, and Perplexity's shopping agent — and their capabilities are all over the map.

Rich Villars, GVP of worldwide research at IDC, noted that tech companies "have a long history" of not rigidly adhering to technical definitions.

"They care more about what they are trying to accomplish" on a technical level, Villars told TechCrunch, "especially in fast-evolving markets."

But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai.

"The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning," Ng said in a recent interview, "but about a year ago, marketers and a few big companies got a hold of them."

[...] "Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes," Rowan said. "This can result in varied interpretations of what AI agents should deliver, potentially complicating project goals and results. Ultimately, while the flexibility can drive creative solutions, a more standardized understanding would help enterprises better navigate the AI agent landscape and maximize their investments."

Unfortunately, if the unraveling of the term "AI" is any indication, it seems unlikely the industry will coalesce around one definition of "agent" anytime soon — if ever.


Original Submission

posted by janrinok on Tuesday April 08, @05:12PM   Printer-friendly
from the marching-morons dept.

The Overpopulation Project has an English translation of Frank Götmark's short essay, which explores the idea that Homo sapiens is an invasive species. The essay was originally published on March 30th in Svenska Dagbladet and has been very slightly modified.

An invasive species can be defined as an alien, non-native species that spreads and causes various forms of damage. It is desirable to control such species and, in the best case, eliminate them from a country. But compared to our population growth they are a minor problem, at least in Sweden and many European countries. In North America and Australia they are a larger problem. But again, they cause far less damage than Homo sapiens, which is in any case the cause of their spread.

Invasive species tend to appear near buildings and infrastructure; for example, on roadsides and other environments that are easily colonized, or in the sea via ballast in ships. It is often difficult to draw boundaries in time and space for invasive species. For example, in Sweden several species came in via seeds in agriculture during the 19th century and became common, such as certain weeds.

The idea has been explored before, for example back in 2015 by Scientific American. It's also relevant to note that the global population might be underestimated substantially.

Previously:
(2019) July 11 is World Population Day
(2016) Bioethicist: Consider Having Fewer Children in the Age of Climate Change
(2015) Poll Shows Giant Gap Between what US Public and Scientists Think
(2014) The Climate-Change Solution No One Will Talk About


Original Submission

posted by janrinok on Tuesday April 08, @12:23PM   Printer-friendly

NASA's Webb Exposes Complex Atmosphere of Starless Super-Jupiter - NASA Science:

An international team of researchers has discovered that previously observed variations in brightness of a free-floating planetary-mass object known as SIMP 0136 must be the result of a complex combination of atmospheric factors, and cannot be explained by clouds alone.

Using NASA's James Webb Space Telescope to monitor a broad spectrum of infrared light emitted over two full rotation periods by SIMP 0136, the team was able to detect variations in cloud layers, temperature, and carbon chemistry that were previously hidden from view.

The results provide crucial insight into the three-dimensional complexity of gas giant atmospheres within and beyond our solar system. Detailed characterization of objects like these is essential preparation for direct imaging of exoplanets, planets outside our solar system, with NASA's Nancy Grace Roman Space Telescope, which is scheduled to begin operations in 2027.

SIMP 0136 is a rapidly rotating, free-floating object roughly 13 times the mass of Jupiter, located in the Milky Way just 20 light-years from Earth. Although it is not classified as a gas giant exoplanet — it doesn't orbit a star and may instead be a brown dwarf — SIMP 0136 is an ideal target for exo-meteorology: It is the brightest object of its kind in the northern sky. Because it is isolated, it can be observed with no fear of light contamination or variability caused by a host star. And its short rotation period of just 2.4 hours makes it possible to survey very efficiently.

Prior to the Webb observations, SIMP 0136 had been studied extensively using ground-based observatories and NASA's Hubble and Spitzer space telescopes.

"We already knew that it varies in brightness, and we were confident that there are patchy cloud layers that rotate in and out of view and evolve over time," explained Allison McCarthy, doctoral student at Boston University and lead author on a study published today in The Astrophysical Journal Letters. "We also thought there could be temperature variations, chemical reactions, and possibly some effects of auroral activity affecting the brightness, but we weren't sure."

To figure it out, the team needed Webb's ability to measure very precise changes in brightness over a broad range of wavelengths.

Using NIRSpec (Near-Infrared Spectrograph), Webb captured thousands of individual 0.6- to 5.3-micron spectra — one every 1.8 seconds over more than three hours as the object completed one full rotation. This was immediately followed by an observation with MIRI (Mid-Infrared Instrument), which collected hundreds of spectroscopic measurements of 5- to 14-micron light — one every 19.2 seconds, over another rotation.

The result was hundreds of detailed light curves, each showing the change in brightness of a very precise wavelength (color) as different sides of the object rotated into view.

"To see the full spectrum of this object change over the course of minutes was incredible," said principal investigator Johanna Vos, from Trinity College Dublin. "Until now, we only had a little slice of the near-infrared spectrum from Hubble, and a few brightness measurements from Spitzer."

The team noticed almost immediately that there were several distinct light-curve shapes. At any given time, some wavelengths were growing brighter, while others were becoming dimmer or not changing much at all. A number of different factors must be affecting the brightness variations.

"Imagine watching Earth from far away. If you were to look at each color separately, you would see different patterns that tell you something about its surface and atmosphere, even if you couldn't make out the individual features," explained co-author Philip Muirhead, also from Boston University. "Blue would increase as oceans rotate into view. Changes in brown and green would tell you something about soil and vegetation."

To figure out what could be causing the variability on SIMP 0136, the team used atmospheric models to show where in the atmosphere each wavelength of light was originating.

"Different wavelengths provide information about different depths in the atmosphere," explained McCarthy. "We started to realize that the wavelengths that had the most similar light-curve shapes also probed the same depths, which reinforced this idea that they must be caused by the same mechanism."
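The grouping McCarthy describes, in which wavelengths with similar light-curve shapes probe similar atmospheric depths, can be illustrated with a simple correlation-based clustering. The sketch below uses synthetic data; the function name, threshold, and wavelengths are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np

def group_light_curves(curves, threshold=0.9):
    """Group light curves whose Pearson correlation exceeds a threshold.

    curves: dict mapping wavelength (microns) -> 1-D brightness array.
    Returns a list of groups (lists of wavelengths) that vary together.
    """
    groups = []
    for wl in curves:
        placed = False
        for group in groups:
            r = np.corrcoef(curves[wl], curves[group[0]])[0, 1]
            if r > threshold:       # similar shape -> same mechanism/depth
                group.append(wl)
                placed = True
                break
        if not placed:
            groups.append([wl])
    return groups

# Synthetic example: two wavelengths track the same "deep cloud" signal,
# one tracks a different (out-of-phase) mechanism.
t = np.linspace(0.0, 2.4, 4800)          # one 2.4-hour rotation, ~1.8 s cadence
deep = np.sin(2 * np.pi * t / 2.4)
rng = np.random.default_rng(0)
curves = {
    1.1: deep + 0.01 * rng.standard_normal(t.size),
    1.3: deep + 0.01 * rng.standard_normal(t.size),
    4.5: np.cos(2 * np.pi * t / 2.4),
}
print(group_light_curves(curves))        # the 1.1 and 1.3 micron curves cluster
```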

One group of wavelengths, for example, originates deep in the atmosphere where there could be patchy clouds made of iron particles. A second group comes from higher clouds thought to be made of tiny grains of silicate minerals. The variations in both of these light curves are related to patchiness of the cloud layers.

A third group of wavelengths originates at very high altitude, far above the clouds, and seems to track temperature. Bright "hot spots" could be related to auroras that were previously detected at radio wavelengths, or to upwelling of hot gas from deeper in the atmosphere.

Some of the light curves cannot be explained by either clouds or temperature, but instead show variations related to atmospheric carbon chemistry. There could be pockets of carbon monoxide and carbon dioxide rotating in and out of view, or chemical reactions causing the atmosphere to change over time.

"We haven't really figured out the chemistry part of the puzzle yet," said Vos. "But these results are really exciting because they are showing us that the abundances of molecules like methane and carbon dioxide could change from place to place and over time. If we are looking at an exoplanet and can get only one measurement, we need to consider that it might not be representative of the entire planet."

This research was conducted as part of Webb's General Observer Program 3548.


Original Submission

posted by janrinok on Tuesday April 08, @07:42AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[CFD: Computational Fluid Dynamics]

CFD simulation is cut down from almost 40 hours to less than two using 1,024 Instinct MI250X accelerators paired with Epyc CPUs.

AMD processors were instrumental in achieving a new world record during a recent Ansys Fluent computational fluid dynamics (CFD) simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory (ORNL). According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly.
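The headline speedup can be sanity-checked directly from the figures in the press release:

```python
cpu_hours = 38.5          # baseline run on 3,700 CPU cores
gpu_hours = 1.5           # 1,024 MI250X accelerators paired with EPYC CPUs
speedup = cpu_hours / gpu_hours
print(f"{speedup:.1f}x")  # roughly 25.7x, i.e. "more than 25 times faster"
```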

Frontier was once the fastest supercomputer in the world, and it was also the first one to break into exascale performance. It replaced the Summit supercomputer, which was decommissioned in November 2024. However, the El Capitan supercomputer, located at the Lawrence Livermore National Laboratory, broke Frontier’s record at around the same time. Both Frontier and El Capitan are powered by AMD GPUs, with the former boasting 9,408 AMD EPYC processors and 37,632 AMD Instinct MI250X accelerators. On the other hand, the latter uses 44,544 AMD Instinct MI300A accelerators.

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia’s market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

“By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges, enabling breakthroughs in efficiency, sustainability, and innovation,” said Brad McCredie, AMD Senior Vice President for Data Center Engineering.

Even though AMD can deliver top-tier performance at a much cheaper price than Nvidia, many AI data centers prefer Team Green because of software issues with AMD’s hardware.

One high-profile example was Tiny Corp's TinyBox system, which had stability problems with its AMD Radeon RX 7900 XTX graphics cards. The problem was so bad that Dr. Lisa Su had to step in to fix the issues. And even though it was purportedly fixed, the company still released two versions of the TinyBox AI accelerator — one powered by AMD and the other by Nvidia. Tiny Corp also recommended the more expensive Team Green version, with its six RTX 4090 GPUs, because of its driver quality.

If Team Red can fix the software support on its great hardware, then it could likely get more customers for its chips and get a more even footing with Nvidia in the AI GPU market.


Original Submission

posted by hubie on Tuesday April 08, @02:56AM   Printer-friendly
from the having-rings-is-cool dept.

Earth Had Saturn-Like Rings 466 Million Years Ago, New Study Suggests

The temporary structure likely consisted of debris from a broken-up asteroid:

Earth may have sported a Saturn-like ring system 466 million years ago, after it captured and wrecked a passing asteroid, a new study suggests.

The debris ring, which likely lasted tens of millions of years, may have led to global cooling and even contributed to the coldest period on Earth in the past 500 million years.

That's according to a fresh analysis of 21 crater sites around the world that researchers suspect were all created by falling debris from a large asteroid between 488 million and 443 million years ago, an era in Earth's history known as the Ordovician, during which our planet witnessed dramatically increased asteroid impacts.

A team led by Andy Tomkins, a professor of planetary science at Monash University in Australia, used computer models of how our planet's tectonic plates moved in the past to map out where the craters were when they first formed over 400 million years ago. The team found that all the craters had formed on continents that floated within 30 degrees of the equator, suggesting they were created by the falling debris of a single large asteroid that broke up after a near-miss with Earth.

"Under normal circumstances, asteroids hitting Earth can hit at any latitude, at random, as we see in craters on the moon, Mars and Mercury," Tomkins wrote in The Conversation. "So it's extremely unlikely that all 21 craters from this period would form close to the equator if they were unrelated to one another."

The chain of crater locations hugging the equator is consistent with a debris ring orbiting Earth, scientists say. That's because such rings typically form above planets' equators, as occurs with those circling Saturn, Jupiter, Uranus and Neptune. The chances that these impact sites were created by unrelated, random asteroid strikes are about 1 in 25 million, the new study found.

[...] "Over millions of years, material from this ring gradually fell to Earth, creating the spike in meteorite impacts observed in the geological record," Tomkins added in a university statement. "We also see that layers in sedimentary rocks from this period contain extraordinary amounts of meteorite debris."

The team found that this debris, which represented a specific type of meteorite and was found to be abundant in limestone deposits across Europe, Russia and China, had been exposed to a lot less space radiation than meteorites that fall today. Those deposits also reveal signatures of multiple tsunamis during the Ordovician period, all of which can be best explained by a large, passing asteroid capture-and-break-up scenario, the researchers argue.

The new study is a "new and creative idea that explains some observations," Birger Schmitz of Lund University in Sweden told New Scientist. "But the data are not yet sufficient to say that the Earth indeed had rings."

Searching for a common signature in specific asteroid grains across the newly studied impact craters would help test the hypothesis, Schmitz added.

Earth may have had a ring system 466 million years ago:

In a discovery that challenges our understanding of Earth's ancient history, researchers have found evidence suggesting that Earth may have had a ring system, which formed around 466 million years ago, at the beginning of a period of unusually intense meteorite bombardment known as the Ordovician impact spike.

This surprising hypothesis, published today in Earth and Planetary Science Letters, stems from plate tectonic reconstructions for the Ordovician period noting the positions of 21 asteroid impact craters. All these craters are located within 30 degrees of the equator, despite over 70 per cent of Earth's continental crust being outside this region, an anomaly that conventional theories cannot explain.

The research team believes this localised impact pattern was produced after a large asteroid had a close encounter with Earth. As the asteroid passed within Earth's Roche limit, it broke apart due to tidal forces, forming a debris ring around the planet—similar to the rings seen around Saturn and other gas giants today.

[...] "What makes this finding even more intriguing is the potential climate implications of such a ring system," he said.

The researchers speculate that the ring could have cast a shadow on Earth, blocking sunlight and contributing to a significant global cooling event known as the Hirnantian Icehouse.

This period, which occurred near the end of the Ordovician, is recognised as one of the coldest in the last 500 million years of Earth's history.

"The idea that a ring system could have influenced global temperatures adds a new layer of complexity to our understanding of how extra-terrestrial events may have shaped Earth's climate," Professor Tomkins said.

Normally, asteroids impact the Earth at random locations, so we see impact craters distributed evenly over the Moon and Mars, for example. To investigate whether the distribution of Ordovician impact craters is non-random and closer to the equator, the researchers calculated the continental surface area capable of preserving craters from that time.

They focused on stable, undisturbed cratons with rocks older than the mid-Ordovician period, excluding areas buried under sediments or ice, eroded regions, and those affected by tectonic activity. Using a Geographic Information System (GIS) approach, they identified geologically suitable regions across different continents. Regions like Western Australia, Africa, the North American Craton, and small parts of Europe were considered well-suited for preserving such craters. Only 30 per cent of the suitable land area was determined to have been close to the equator, yet all the impact craters from this period were found in this region. The chances of this happening are like tossing a three-sided coin (if such a thing existed) and getting tails 21 times.
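The coin analogy above can be made concrete. With roughly 30 per cent of suitable crater-preserving crust near the equator, the chance of all 21 unrelated impacts landing there is about 0.3^21. This is a back-of-envelope reading of the press release's analogy, not the study's formal statistic:

```python
p_equatorial = 0.30      # fraction of suitable crust within 30 degrees of the equator
n_craters = 21
p_all = p_equatorial ** n_craters

# Roughly one chance in a hundred billion of this happening at random.
print(f"p = {p_all:.3e}, i.e. about 1 in {1 / p_all:.2e}")
```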

The implications of this discovery extend beyond geology, prompting scientists to reconsider the broader impact of celestial events on Earth's evolutionary history. It also raises new questions about the potential for other ancient ring systems that could have influenced the development of life on Earth.

Could similar rings have existed at other points in our planet's history, affecting everything from climate to the distribution of life? This research opens a new frontier in the study of Earth's past, providing new insights into the dynamic interactions between our planet and the wider cosmos.

Journal Reference: https://doi.org/10.1016/j.epsl.2024.118991


Original Submission #1 | Original Submission #2

posted by Fnord666 on Monday April 07, @10:11PM   Printer-friendly
from the just-wait-for-the-GNU/QNodeOS-sniping-to-begin dept.

Operating system for quantum networks is a first:

Researchers in the Netherlands, Austria, and France have created what they describe as the first operating system for networking quantum computers. Called QNodeOS, the system was developed by a team led by Stephanie Wehner at Delft University of Technology. The system has been tested using several different types of quantum processor and it could help boost the accessibility of quantum computing for people without an expert knowledge of the field.

In the 1960s, the development of early operating systems such as OS/360 and UNIX represented a major leap forward in computing. By providing a level of abstraction in its user interface, an operating system enables users to program and run applications without having to worry about how to reconfigure the transistors in the computer processors. This advance laid the groundwork for many of the digital technologies that have revolutionized our lives.

"If you needed to directly program the chip installed in your computer in order to use it, modern information technologies would not exist," Wehner explains. "As such, the ability to program and run applications without needing to know what the chip even is has been key in making networks like the Internet actually useful."

The users of nascent quantum computers would also benefit from an operating system that allows quantum (and classical) computers to be connected in networks. Not least because most people are not familiar with the intricacies of quantum information processing.

However, quantum computers are fundamentally different from their classical counterparts, and this means a host of new challenges faces those developing network operating systems.

"These include the need to execute hybrid classical–quantum programs, merging high-level classical processing (such as sending messages over a network) with quantum operations (such as executing gates or generating entanglement)," Wehner explains.

Within these hybrid programs, quantum computing resources would only be used when specifically required. Otherwise, routine computations would be offloaded to classical systems, making it significantly easier for developers to program and run their applications.
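The hybrid execution model described above — classical control flow that invokes quantum resources only when specifically required — can be sketched as a toy scheduler. Everything here (the class names, the `QuantumDevice` stub, the program format) is hypothetical and for illustration only, not the QNodeOS API:

```python
class QuantumDevice:
    """Stand-in for a networked quantum processor (hypothetical)."""
    def run(self, op):
        return f"quantum result of {op}"

class HybridScheduler:
    """Route each step of a hybrid program to classical or quantum execution."""
    def __init__(self, qdev):
        self.qdev = qdev

    def execute(self, program):
        results = []
        for kind, payload in program:
            if kind == "quantum":        # e.g. executing gates, entanglement
                results.append(self.qdev.run(payload))
            else:                        # routine classical processing
                results.append(payload())
        return results

sched = HybridScheduler(QuantumDevice())
program = [
    ("classical", lambda: "message sent over network"),
    ("quantum", "generate entanglement"),
    ("classical", lambda: "post-process measurement"),
]
print(sched.execute(program))
```

The point of the sketch is the dispatch: quantum hardware is touched only for the steps that need it, which is what makes such programs easier to write and cheaper to run.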

[...] In addition, Wehner's team considered that, unlike the transistor circuits used in classical systems, quantum operations currently lack a standardized architecture – and can be carried out using many different types of qubits.

Wehner's team addressed these design challenges by creating QNodeOS, a hybridized network operating system. It combines classical and quantum "blocks" that provide users with a platform for performing quantum operations.

[...] QNodeOS is still a long way from having the same impact as UNIX and other early operating systems. However, Wehner's team is confident that QNodeOS will accelerate the development of future quantum networks.

"It will allow for easier software development, including the ability to develop new applications for a quantum Internet," she says. "This could open the door to a new area of quantum computer science research."


Original Submission

posted by Fnord666 on Monday April 07, @05:26PM   Printer-friendly
from the AI-boostery dept.

Slashdot also featured this story, via bleepingcomputer.com summary. The original story is here: https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/

At first I thought this would be an advert for Microsoft Copilot tacked onto a tale of security hounds doing their stuff with vulnerabilities in GRUB2, but it does seem that AI saved some time for the investigators, and the article is worth a read.

Here is my summary:

"By leveraging Microsoft Security Copilot to expedite the vulnerability discovery process, Microsoft Threat Intelligence uncovered several vulnerabilities in multiple open-source bootloaders, impacting all operating systems relying on Unified Extensible Firmware Interface (UEFI) Secure Boot as well as IoT devices. The vulnerabilities found in the GRUB2 bootloader (commonly used as a Linux bootloader) and U-boot and Barebox bootloaders (commonly used for embedded systems), could allow threat actors to gain control and execute arbitrary code.

Using Security Copilot, we were able to identify potential security issues in bootloader functionalities, focusing on filesystems due to their high vulnerability potential. This approach saved our team approximately a week's worth of time that would have otherwise been spent manually reviewing the content. Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability.

[...] Through a combination of static code analysis tools (such as CodeQL), fuzzing the GRUB2 emulator (grub-emu) with AFL++, manual code analysis, and using Microsoft Security Copilot, we have uncovered several vulnerabilities.

Using Security Copilot, we initially explored which functionalities in a bootloader have the most potential for vulnerabilities, with Copilot identifying network, filesystems, and cryptographic signatures as key areas of interest. Given our ongoing analysis of network vulnerabilities and the fact that cryptography is largely handled by UEFI, we decided to focus on filesystems.

Using the JFFS2 filesystem code as an example, we prompted Copilot to find all potential security issues, including exploitability analysis. Copilot identified multiple security issues, which we refined further by requesting Copilot to identify and provide the five most pressing of these issues. In our manual review of the five identified issues, we found three were false positives, one was not exploitable, and the remaining issue, which warranted our attention and further investigation, was an integer overflow vulnerability."
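The class of bug found — an integer overflow in filesystem parsing — typically looks like a size calculation that wraps past the type's maximum before any bounds check runs. A simplified illustration, emulating unsigned 32-bit C arithmetic with a mask; this is the generic pattern, not the actual GRUB2 code:

```python
MASK32 = 0xFFFFFFFF  # emulate unsigned 32-bit C arithmetic

def alloc_size_unsafe(entry_count, entry_size):
    """Buggy: the multiplication wraps modulo 2**32, so a huge entry_count
    can yield a tiny allocation that later accesses overflow."""
    return (entry_count * entry_size) & MASK32

def alloc_size_safe(entry_count, entry_size):
    """Fixed: reject inputs whose product cannot fit in 32 bits."""
    if entry_count != 0 and entry_size > MASK32 // entry_count:
        raise OverflowError("size calculation would overflow")
    return entry_count * entry_size

# Attacker-controlled metadata from a crafted filesystem image:
count, size = 0x40000001, 16
print(hex(alloc_size_unsafe(count, size)))  # wraps to a 16-byte allocation
```

An allocation of 16 bytes followed by writes sized for over a billion entries is the classic route from integer overflow to memory corruption.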


Original Submission

posted by hubie on Monday April 07, @12:41PM   Printer-friendly
from the snooper's-charter-2? dept.

Arthur T Knackerbracket has processed the following story:

The UK's technology secretary revealed the full breadth of the government's Cyber Security and Resilience (CSR) Bill for the first time this morning, pledging £100,000 ($129,000) daily fines for failing to act against specific threats under consideration.

Slated to enter Parliament later this year, the CSR bill was teased in the King's Speech in July, shortly after the Labour administration came into power. The gist of it was communicated at the time – to strengthen the NIS 2018 regulations and future-proof the country's most critical services from cyber threats – and Peter Kyle finally detailed the plans for the bill at length today.

Kyle said the CSR bill comprises three key pillars: Expanding the regulations to bring more types of organization into scope; handing regulators greater enforcement powers; and ensuring the government can change the regulations quickly to adapt to evolving threats.

Additional amendments are under consideration and may add to the confirmed pillars by the time the legislation makes its way through official procedures. These include bringing datacenters into scope, publishing a unified set of strategic objectives for all regulators, and giving the government the power to issue ad-hoc directives to in-scope organizations.

The latter means the government would be able to order regulated entities to make specific security improvements to counter a certain threat or ongoing incident, and this is where the potential fines come in.

If, for example, a managed service provider (MSP) – a crucial part of the IT supply chain – failed to patch against a widely exploited vulnerability within a time frame specified by a government order, and was then hit by attacks, it could face daily fines of £100,000 or 10 percent of turnover for each day the breach continues.

"Resilience is not improving at the rate necessary to keep pace with the threat and this can have serious real-world impacts," said Kyle. "The government's legislative plan for cyber security will address the vulnerabilities in our cyber defenses to minimize the impact of attacks and improve the resilience of our critical infrastructure, services, and digital economy."

[...] The third pillar – giving the government the authority to flexibly adapt the regulations as new threats emerge – is the lesser known of the three and wasn't really referred to in the King's Speech.

This could bring even more organizations into scope quickly, change regulators' responsibilities where necessary, or introduce new requirements for in-scope entities.

[...] In revealing the bill's details today, the tech secretary said the UK continues to face "unprecedented threats" to CNI, citing various attacks that plagued the country in recent times. Synnovis, Southern Water, local authorities, and those in the US and Ukraine all got a mention, and that's just scratching the surface of the full breadth of recent attacks.

Kyle said in an interview with The Telegraph that shortly after the UK's Labour party was elected, he was briefed by the country's spy chiefs about the threat to critical services – a session that left him "deeply concerned" over the state of cybersecurity.

"I was really quite shocked at some of the vulnerabilities that we knew existed and yet nothing had been done," he said.

[...] However, William Richmond-Coggan, partner of dispute management at legal eagle Freeths, warned:

"Even if every organization that the new rules are directed to had the budget, technical capabilities and leadership bandwidth to invest in updating their infrastructure to meet the current and future wave of cyber threats, it is likely to be a time consuming and costly process bringing all of their systems into line.

"And with an ever evolving cyber threat profile, those twin investments of time and budget need to be incorporated as rolling commitments – achieving a cyber secure posture is not a 'one and done'. Of at least equal importance is the much needed work of getting individuals employed in these nationally important organisations to understand that cyber security is only as strong as its weakest link, and that everyone has a role to play in keeping such organisations safe."


Original Submission

posted by hubie on Monday April 07, @07:56AM   Printer-friendly
from the good-advice dept.

Cell Phone OPSEC for Border Crossings - Schneier on Security:

Cell Phone OPSEC for Border Crossings

I have heard stories of more aggressive interrogation of electronic devices at US border crossings. I know a lot about securing computers, but very little about securing phones.

Are there easy ways to delete data—files, photos, etc.—on phones so it can't be recovered? Does resetting a phone to factory defaults erase data, or is it still recoverable? That is, does the reset erase the old encryption key, or just sever the password that accesses that key? When the phone is rebooted, are deleted files still available?
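
The key question above is easier to reason about with a toy model. Modern phones encrypt user data with a random data-encryption key (DEK) that is stored only in "wrapped" form, protected by a key derived from the passcode; a proper factory reset destroys the wrapped DEK, so the ciphertext on flash becomes unrecoverable even if it survives. The sketch below illustrates that crypto-erase principle; XOR wrapping stands in for a real AES key wrap, and all names and parameters are illustrative, not taken from any actual phone OS.

```python
# Illustrative sketch (not real crypto): why "crypto-erase" works.
import hashlib
import secrets

def derive_wrapping_key(passcode: bytes, salt: bytes) -> bytes:
    # Passcode-derived key; real devices also mix in hardware-bound secrets.
    return hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Stand-in for AES key wrap: XOR is its own inverse.
    return bytes(x ^ y for x, y in zip(a, b))

# Device provisioning: random DEK, stored only wrapped under the passcode key.
salt = secrets.token_bytes(16)
dek = secrets.token_bytes(32)
wrapped_dek = xor_bytes(dek, derive_wrapping_key(b"123456", salt))

# Unlock: deriving the same wrapping key recovers the DEK.
recovered = xor_bytes(wrapped_dek, derive_wrapping_key(b"123456", salt))
assert recovered == dek

# Factory reset (crypto-erase): destroy the wrapped DEK. The DEK is now
# unrecoverable from storage, so all data encrypted under it is gone too,
# regardless of whether the underlying ciphertext blocks were overwritten.
wrapped_dek = None
```

In this model, a reset that erases the wrapped key is equivalent to erasing the data; a reset that merely "severs the password" while leaving the wrapped key recoverable would not be.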

We need answers for both iPhones and Android phones. And it's not just the US; the world is going to become a more dangerous place to oppose state power.

Posted on April 1, 2025 at 7:01 AM

See also: Yes, border control can go through your phone. Here's what travelers should know.


Original Submission

posted by hubie on Monday April 07, @03:10AM   Printer-friendly
from the to-infinity-and-beyond dept.

Dawn Aerospace aims to make transporting things to space - whether supplies to the ISS or pharmaceuticals for testing - cheaper, faster and greener:

It has all the qualities of an aircraft but with its rocket engine, the Dawn Mk-II Aurora can fly faster and higher than any jet.

"We have a real path to this being the first vehicle that flies to 100 km altitude - the border of space - twice in a day," says Stefan Powell, a co-founder of Dawn Aerospace.

"No one's ever done that."

Dawn Aerospace is a New Zealand company working on developing greener and more convenient alternatives to traditional space transportation.

By now the company has over 120 employees spread across its headquarters here and in the Netherlands, but Powell says it all started with a group of university students who had a shared goal.

"We decided we wanted to break the European altitude records for sub-orbital rockets, so we got together and we started building quite big rockets.

"To do that we actually needed to develop new propellants that were appropriate for students to use because rockets are generally pretty complicated... so we actually ended up developing entirely new classes of rockets," he says.

After convincing the Spanish military to let them use their base in the south of Spain as a launch pad, Powell and his classmates successfully broke the record in 2015.

This was during a time when satellite launching was really starting to ramp up, but Powell says it was clear this trajectory wasn't sustainable. For example, some of the fuels used for satellites were incredibly toxic.

"A teaspoon of hydrazine in this room will kill everyone, it's just gnarly and toxic. It's a great propellant - don't get me wrong, it's worked super well for 70 years now so there's a reason that they use it - but it's mostly militaries and governments and whatnot that have billion-dollar budgets that can deal with this level of toxicity," he says.

[...] Right now the plane can fly to sub-orbital level, i.e. the border of space, but the ultimate goal is to get it flying all the way into space twice a day.

"That's always been the dream," he says.

Powell says the Aurora will also offer an easier, cheaper way to carry out various other space activities, including for scientists developing new medication.

"They already go to the International Space Station for that. But that's tens of millions of dollars per test that they want to do, it takes years to actually get on station and then years to actually get your sample back. But with us, they'll be able to fly multiple times a week at a much much lower cost."

Dawn Aerospace Mk-II Aurora Becomes First Civil Aircraft To Fly Supersonic Since Concorde. - Tech Business News:

The Mk-II Aurora soared to an altitude of 25 kilometers, a significant leap from its previous performance in August when it reached Mach 0.92.

That earlier milestone represented speeds three times faster and altitudes five times higher than those achieved in tests conducted in 2023. The Aurora's rapid advancements demonstrate Dawn's relentless pursuit of pushing the boundaries of aeronautics.

This latest test flight underscores the company's broader ambitions: achieving hypersonic speeds of 6,173–12,348 km/h and flying to altitudes exceeding 100 kilometers. Even more impressive is their aim to accomplish these feats twice within a single day.

With each successive test, Dawn Aerospace is bringing its vision of commercial space travel closer to reality, promising to revolutionise how humanity approaches the frontier of space exploration.

"As a company, we have been working for more than seven years to design, develop, test, and deliver supersonic flight. We are now achieving this and will start commercial payload operations in the coming months," said CEO of Dawn Aerospace, Stefan Powell.

[...] Unlike many of its competitors in a field dominated by billionaire-backed ventures and government-funded programs, Dawn Aerospace has adopted a lean development approach.

To date, the company has spent just $10 million on its flight program and plans to complete it with only $20 million — an extraordinarily small budget by aerospace standards.

If these streamlined production methods carry over to operations, the result could be significantly more affordable space flights for customers.

Dawn also diversifies its revenue streams by producing low-emissions propulsion systems for satellites, supporting its ambitious projects with additional income. Even with limited financial resources, the startup is aiming for extraordinary achievements.

The ultimate goal? To develop the Mk-III, an orbital-stage aircraft capable of launching satellites into low-Earth orbit. The innovation would place Dawn Aerospace in direct competition with industry giants like Elon Musk's SpaceX.


Original Submission

posted by hubie on Sunday April 06, @10:22PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Neuralink, Synchron, and Neuracle are expanding clinical trials and trying to zero in on an actual product.

Tech companies are always trying out new ways for people to interact with computers—consider efforts like Google Glass, the Apple Watch, and Amazon’s Alexa. You’ve probably used at least one.

But the most radical option has been tried by fewer than 100 people on Earth—those who have lived for months or years with implanted brain-computer interfaces, or BCIs.

Implanted BCIs are electrodes placed in paralyzed people's brains so they can use imagined movements to send commands from their neurons through a wire, or via radio, to a computer. In this way, they can control a computer cursor or, in a few cases, produce speech.

[...] The impression of progress comes thanks to a small group of companies that are actively recruiting volunteers to try BCIs in clinical trials. They are Neuralink, backed by the world’s richest person, Elon Musk; New York–based Synchron; and China’s Neuracle Neuroscience. 

Each is trialing interfaces with the eventual goal of getting the field’s first implanted BCI approved for sale. 

“I call it the translation era,” says Michelle Patrick-Krueger, a research scientist who carried out a detailed survey of BCI trials with neuroengineer Jose Luis Contreras-Vidal at the University of Houston. “In the past couple of years there has been considerable private investment. That creates excitement and allows companies to accelerate.”

That’s a big change, since for years BCIs have been more like a neuroscience parlor trick, generating lots of headlines but little actual help to patients. 

Patrick-Krueger says the first time a person controlled a computer cursor from a brain implant was in 1998. That was followed by a slow drip-drip of tests in which university researchers would find a single volunteer, install an implant, and carry out studies for months or years.

Over 26 years, Patrick-Krueger says, she was able to document a grand total of 71 patients who’ve ever controlled a computer directly with their neurons. 

That means you are more likely to be friends with a Mega Millions jackpot winner than know someone with a BCI.

[...] “One thing is to have them work, and another is how to actually deploy them,” says Contreras-Vidal. “Also, behind any great news are probably technical issues that need to be addressed.” These include questions about how long an implant will last and how much control it offers patients.

Larger trials from three companies are now trying to resolve these questions and set the groundwork for a real product.

[...] Her BCI survey yielded other insights. According to her data, implants have lasted as long as 15 years, more than half of patients are in the US, and roughly 75% of BCI recipients have been male. 

The data can’t answer the big question, though. And that is whether implanted BCIs will progress from breakthrough demonstrations into breakout products, the kind that help many people.

“In the next five to 10 years, it’s either going to translate into a product or it’ll still stay in research,” Patrick-Krueger says. “I do feel very confident there will be a breakout.”


Original Submission

posted by hubie on Sunday April 06, @05:34PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Japan's government-backed chipmaker Rapidus has begun adjusting equipment in order to start test production of wafers later this month. The company, which aims to begin high-volume production on its 2nm-class process technology by 2027, plans to complete the first test wafers by July, according to Bloomberg. After that, the company intends to release process design kits (PDKs) to early customers and offer them an opportunity to prototype their designs.
 
Rapidus began installing semiconductor production equipment, including ASML's advanced EUV and DUV lithography systems, into its Innovative Integration for Manufacturing (IIM) facility in Chitose, Hokkaido late last year. By now, the company has probably reached the 'first light on wafer' milestone with advanced tools, so it is reasonable to expect it to be able to start pilot production of its own circuits using its 2nm fabrication process that relies on gate-all-around transistors.

One of Rapidus' main advantages over established players like TSMC, Samsung Foundry, and Intel Foundry is projected to be its fully automated advanced packaging capability that will operate at the same fab as the wafer processing, something no company has done yet. This would greatly speed up the production cycle for designs that require advanced packaging. However, for now, Rapidus will only offer pilot production of semiconductor wafers themselves and will not offer test packaging services.

Rapidus is currently setting up a new research and development center, named Rapidus Chiplet Solutions (RCS), at Seiko Epson Corporation's Chitose Plant, located next to the IIM facility. Preparatory work for RCS has been ongoing since October 2024, and starting this month, the company will begin installing production equipment at the site, focusing on post-fabrication stages. The facility will be used to build a pilot line aimed at developing scalable manufacturing techniques. Work at RCS will include advancement of redistribution layer (RDL) interposer structures, three-dimensional packaging methods, assembly design kits (ADKs) for complex back-end operations, and known good die (KGD) testing processes.

"The construction of the IIM manufacturing facility at Rapidus has progressed smoothly, and by the end of last fiscal year we had completed the installation of the semiconductor manufacturing equipment necessary for the start of pilot operations," said Dr. Atsuyoshi Koike, representative director and CEO of Rapidus. […] "With the approval of the NEDO project plan and budget, we will start up the pilot line in April, which will steadily lead to the start of mass production targeted for 2027."


Original Submission

posted by hubie on Sunday April 06, @12:47PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

An alliance of cloud service providers in Europe is investing €1 million into the Fulcrum Project, an open source cloud federation technology that offers an alternative for local customers anxious about using US hyperscalers.

Speaking to The Register, Francisco Mingorance, Secretary General at the Cloud Infrastructure Service Providers in Europe (CISPE) association, explained that, as part of the settlement reached with Microsoft last year, an innovation fund was set up using Microsoft's cash.

Some €1 million of that has now been allocated to Fulcrum, an open source project to aggregate products from smaller tech vendors to rival the hyperscalers, he said.

"It's happening," he said. "There's been work going on for over a year on this, you know, coding and everything, testing, proof of concept...

"We cannot wait another five years. I mean, the sector lost half of its market share in four years to hyperscalers."

According to CISPE, the project marks "a significant step towards European cloud sovereignty" and is designed to "enable European cloud providers to pool and federate their infrastructures, offering a scalable and competitive alternative to foreign-controlled hyperscale cloud providers."

[...] Led by Opiquad, the open source code of the Fulcrum Core Project was officially unveiled at last week's CloudConf in Turin, Italy. Things are moving fast – Mingorance told us the team is aiming for July 2025 for "the first aggregated services available for purchase composition."

Emile Chalouhi, CEO of Opiquad, told The Register, "I think this is the only way to actually be able to finally create a common digital market.

"Smaller providers have access to resources that they didn't have before, and in locations where they didn't have them before."

[...] "Our goal here is to go to the market ASAP. We don't have all the superstructures that you might have in a lot of these other European projects.

"We don't have to mediate with the politicians or with a public project or with all these things. We're just building it, bottom-up to go live, and everything that's really bottom-up needs to be open, public, and go to the market fast."

[...] There is growing unease in Europe among some customers in the public and private sectors who are no longer happy to rely on US-headquartered cloud providers, so it seems the Trump administration is a galvanizing force for change overseas as well as on US soil... or in the clouds.


Original Submission

posted by hubie on Sunday April 06, @08:02AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A robotics and machine learning engineer has developed a command-line interface tool that monitors power use from a smart plug and then tunes system performance based on electricity pricing. The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later.

Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance.

Naveen’s power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)
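
To put that 225 W reduction in perspective, a back-of-envelope calculation shows what it could save under ToU pricing. The rates and peak-hour count below are assumptions for illustration; only the 225 W figure comes from the article.

```python
# Rough savings estimate for peak-hour throttling.
PEAK_RATE = 0.45          # $/kWh during peak hours (assumed)
SAVED_KW = 0.225          # ~225 W shaved by dropping 3.7 GHz -> 1.5 GHz
PEAK_HOURS_PER_DAY = 5    # length of the peak window (assumed)

daily_savings = SAVED_KW * PEAK_HOURS_PER_DAY * PEAK_RATE
monthly_savings = daily_savings * 30
# ~$0.51/day, roughly $15/month, from throttling during peak hours alone.
```

The numbers are small for one workstation, but they scale with load and rate spread, which is the whole premise of ToU-aware throttling.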

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop — even one designed and used for gaming.

WattWise optimizes the system’s clock speed based on the current system load, power consumption as reported by the smart plug, and the time — with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
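
The PI loop described above can be sketched as follows. This is a minimal illustration of the control idea, not WattWise's actual code: the class name, gains, and frequency range are assumptions chosen for the example.

```python
# Minimal PI (Proportional-Integral) controller sketch: steer measured
# wall power toward a target budget by adjusting CPU frequency.

class PowerPIController:
    def __init__(self, target_watts, kp=2.0, ki=0.5,
                 f_min=1500, f_max=3700):  # frequencies in MHz (assumed range)
        self.target = target_watts
        self.kp, self.ki = kp, ki
        self.f_min, self.f_max = f_min, f_max
        self.integral = 0.0
        self.freq = f_max  # start at full speed

    def update(self, measured_watts, dt=1.0):
        # Positive error means we are over budget and must slow down.
        error = measured_watts - self.target
        self.integral += error * dt
        adjustment = self.kp * error + self.ki * self.integral
        # Higher power -> lower frequency, clamped to the hardware range.
        self.freq = max(self.f_min, min(self.f_max, self.freq - adjustment))
        return self.freq

ctl = PowerPIController(target_watts=800)
# Sustained readings above the budget push the clock down toward f_min.
for reading in [1400, 1300, 1200, 1100]:
    freq = ctl.update(reading)
```

The integral term is what distinguishes this from a simple threshold: a persistent overshoot keeps accumulating and pushes the frequency down further, while the proportional term reacts immediately to each new smart-plug reading.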

At the moment, the app only supports one smart plug at a time and only works with the Kasa brand. However, Naveen says there are plans to add support for multiple plugs, more smart plug brands, integration with other power management tools, and other features. The app in its current form is a pretty simple tool, but sometimes simple is all you need to solve a problem.

Naveen made WattWise open source under the MIT license, and you can download it directly from GitHub. If you’re interested, you can leave feedback and contributions, or you can fork it and adapt it for other systems. Note that the current version only contains the dashboard, not the actual power optimizer, which still needs further work.


Original Submission

posted by hubie on Sunday April 06, @03:16AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A newly discovered bacterial weapon against fungi can kill even drug-resistant strains, raising hopes for a new antifungal drug.

Fungal infections have been spreading rapidly and widely in recent years, fueled in part by climate change. Some fungi, including Candida auris, have developed resistance to some highly effective antifungal drugs that have been in use for decades. So scientists have been searching for new drugs to keep fungi in check.

Researchers in China may have found a new type of antifungal called mandimycin, the team reports March 19 in Nature. Mandimycin killed fungal infections in mice more effectively than amphotericin B and several other commonly used antifungal drugs. It even worked against resistant C. auris strains.

Bacteria are masters at fending off fungi, says Martin Burke, a chemist at the University of Illinois Urbana-Champaign. “There’s been this war raging for 2 billion years,” he says. Bacteria and fungi have been “building weapons to try to compete with each other for nutrients in the environment.” Humans have been spying on both armies to learn how to make antibiotics and antifungal drugs. 

In one such mission, Zongqiang Wang of China Pharmaceutical University in Nanjing and colleagues combed more than 300,000 bacterial genomes looking for possible weapons against fungi. One strain of Streptomyces netropsis contained a cluster of genes that encode enzymes for building the compound mandimycin.

The antifungal has a backbone structure similar to some other antifungal drugs but has two sugar molecules tacked onto its tail. Those sugars are important for how the molecule kills fungi, because they change the target that the weapon is aimed at.

[...] Instead of ergosterol, mandimycin is attracted to phospholipids, the major building blocks of membranes, Wang and colleagues discovered. It’s the sugars on the tail that allow mandimycin to target phospholipids, particularly one called phosphatidylinositol, the team found. Removing those sugars caused mandimycin to latch on to ergosterol, though more weakly than existing antifungals.

While intact mandimycin proved to be a potent fungi killer, it was far less toxic to mice’s kidneys and to human kidney cells grown in lab dishes than amphotericin B. Bacteria escaped mandimycin unscathed.

The ability to destroy fungi but not harm human and bacterial cells has Burke puzzled.

“This is the wild part about mandimycin that I don’t understand,” he says. Why doesn’t it kill the bacteria that produce it?

Only fungi have ergosterol in their membranes, so other cells aren’t harmed by drugs that soak it up. But fungi, bacteria and mammals all have phospholipids, which means pulling those out of membranes should be damaging across the board, including to the mandimycin-making bacteria. Wang and colleagues suggest that mandimycin’s attacks might be specific to phospholipids found in fungi, but not in other types of cells.

That is just one of the mysteries researchers will need to solve before mandimycin can be tested in people, Burke says. “It’s one of those exciting papers that opens a lot of doors, [and] pretty much behind every one is another question.”

Journal References:
    • Q. Deng et al. A polyene macrolide targeting phospholipids in the fungal cell membrane. Nature. Published online March 19, 2025. doi: 10.1038/s41586-025-08678-9
    • New antifungal breaks the mould. Nature. Published online March 19, 2025. doi: 10.1038/d41586-025-00801-0


Original Submission