

posted by janrinok on Friday May 09, @09:19PM   Printer-friendly
from the every-line-of-code-counts dept.

"We have now sunk to a depth in which restatement of the obvious is the first duty of intelligent men." (George Orwell).

Few people remember this, but back in 1999 there was a bit of an uproar in the IT community when Intel dared to introduce a unique, retrievable ID, the Processor Serial Number (PSN), in its new Pentium III CPU.

It is kinda hard to believe, but that little privacy backlash was strong enough to force Intel to withdraw the feature, starting with the Tualatin-based Pentium IIIs. The withdrawal lasted until 2015, when the feature was silently reintroduced as the Protected Processor Identification Number (PPIN) with Intel's Ivy Bridge architecture.

So, a good ten years ago we still believed in privacy. Perhaps we still do, but somehow the industry has moved the needle to obligatory consent, with no possibility of opting out, to any and all privacy violations that can be dreamt up in Big (and Not So Big) Tech boardrooms.

Something similar is happening with software, argues Bert Hubert in a piece for IEEE Spectrum. Where on-premises software and hardware were once the rule, trying to get a request for on-prem hardware signed off nowadays is a bit like asking for a coal-fired electricity generator. Things simply *have* to be in the Magically Secure Cloud, and software needs to be developed agile, with frameworks.

The way we build and ship software these days is mostly ridiculous, he claims: apps using millions of lines of code to open a garage door, and simple programs importing 1,600 external code libraries. Software security is dire, which is a function both of the quality of the code and the sheer amount of it.
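
(Not from Hubert's article, but you can gauge dependency sprawl yourself: this minimal Python sketch counts the packages installed in the current environment and the dependency edges they declare. The counts will vary per machine.)

    # Count installed Python packages and their declared dependencies.
    # Illustrative only; works in any Python 3.8+ environment.
    from importlib.metadata import distributions

    packages = 0
    dependency_edges = 0
    for dist in distributions():
        packages += 1
        dependency_edges += len(dist.requires or [])

    print(f"{packages} installed packages, {dependency_edges} declared dependency edges")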

Let me briefly go over the terrible state of software security, and then spend some time on why it is so bad. I also mention some regulatory and legislative things going on that we might use to make software quality a priority again. Finally, I talk about an actual useful piece of software I wrote as a proof of concept that one can still make minimal and simple yet modern software.


Original Submission

posted by hubie on Friday May 09, @04:33PM   Printer-friendly
from the "might-be" dept.

Arthur T Knackerbracket has processed the following story:

Cerabyte recently conducted an experiment that seemed more like a culinary exercise than a technology showcase. The German storage startup plunged a sliver of its archival glass storage into a kettle of boiling salt water, then roasted it in a pizza oven.

Despite enduring temperatures of 100°C in the kettle and 250°C in the oven, the storage medium emerged unscathed, with its data fully intact. This experiment – along with a similar live demonstration at the Open Compute Project Summit in Dublin – was not just a spectacle. It was Cerabyte's way of proving a bold claim: its storage media can withstand conditions that would destroy conventional data storage.

Founded in 2022, Cerabyte is on a mission to upend the world of digital archiving. The company's technology relies on an ultra-thin ceramic layer – just 50 to 100 atoms thick – applied to a glass substrate.

Using femtosecond lasers, data is etched into the ceramic in nanoscale holes. Each 9 cm² chip can store up to 1 GB of information per side, written at a rate of two million bits per laser pulse. Cerabyte claims the result is a medium as durable as ancient hieroglyphs, with a projected lifespan of 5,000 years or more.
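
(A quick sanity check of those figures, using only the numbers quoted above; this is the editor's arithmetic, not a Cerabyte specification.)

    # Back-of-the-envelope arithmetic for the quoted Cerabyte figures.
    chip_area_cm2 = 9          # 9 cm^2 chip, per the story
    capacity_bits = 1e9 * 8    # 1 GB per side
    bits_per_pulse = 2e6       # two million bits per laser pulse

    print(f"~{capacity_bits / bits_per_pulse:,.0f} pulses to write one side")    # ~4,000
    print(f"~{capacity_bits / chip_area_cm2 / 1e9:.2f} Gbit/cm^2 areal density")  # ~0.89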

The durability of glass is well known. Its resistance to aging, fire, water, radiation, and even electromagnetic pulses makes it a natural candidate for "cold storage." Cerabyte's tests – including boiling the media in salt water for days (long enough to corrode the kettle itself) and baking it at high heat – were designed to underscore this resilience.

While the company has not disclosed how the ceramic layer or its bond to the glass would fare under physical shock, the media's resistance to environmental hazards is clear.

Cerabyte's ambitions extend beyond durability. The startup aims to reduce the cost of archival storage to less than $1 per terabyte by 2030 – a target that could transform the economics of long-term data retention.

[...] Unlike other archival methods – magnetic tape, hard drives, or even optical discs, all of which degrade over decades – Cerabyte's ceramic-on-glass approach promises to eliminate the need for regular data migration or energy-hungry maintenance.


Original Submission

posted by hubie on Friday May 09, @11:46AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Things continue to change thanks to the Supreme Court's Carpenter decision. Prior to that, it was assumed the Third Party Doctrine justified all sorts of data dragnets, so long as the data was held by a third party. But that doctrine assumed the data being grabbed by law enforcement was being handed over knowingly and voluntarily. The Carpenter decision pointed out this simply wasn't true: cell tower location data is demanded for all cell phones in the tower's coverage area, and the location data (along with identifying info about the device itself) is taken, rather than volunteered.

This has led to a number of interesting decisions, including a couple of state-level court decisions regarding mass collections of cell tower location data. Cell tower dumps generate records of all cell phones in certain areas during certain times, the same way geofence warrants work, but using more accurate cell site location info (CSLI).

Now, even with a warrant, courts are finding cell tower dumps to be unconstitutional. In 2022, the top court in Massachusetts said these warrants may still be constitutional, but only if law enforcement followed a stringent set of requirements. Earlier this year, a magistrate judge in Mississippi came down on cell tower dumps even more forcefully, declaring that if geofence warrants (those seeking Google location data) were unconstitutional, then it only made sense that warrants seeking more accurate data with a similarly sized dragnet also violated the Fourth Amendment.

Those rulings are limited to those states (and, in the case of the magistrate judge, likely just limited to his jurisdiction). But now there’s something at a much higher level, which is definitely headed to a showdown at the Ninth Circuit Appeals Court as soon as the DOJ gets around to appealing this ruling. Here’s Matthew Gault, reporting on this decision for 404Media.

The government tried to argue that if the warrant was unconstitutional, it didn't matter, because this really wasn't a search under the Fourth Amendment. It hinted the Third Party Doctrine applied instead. The court disagrees, citing the expert for the defense, who pointed out that not only was the data not voluntarily handed over to cell service providers, but even the de-duplicated list of responding devices turned this into an extremely broad search.

Even if further efforts were made to eliminate false positives, it's too little, too late. A warrant can't be salvaged by things done after it has been served and the information obtained. It's a general warrant, says the court, precisely the thing the Fourth Amendment was erected to protect against.

Now, the bad news, at least for the defendant, Spurlock. Pretty much every judge involved, along with the investigators who crafted the warrant, had almost zero experience in handling cell tower dump warrants. (I suspect this is because, prior to Carpenter, most law enforcement agencies handled this with subpoenas that weren't subject to judicial review. On the other hand, this happened in a sparsely populated area where double murders aren't exactly common, so there may never have been a reason to use one before.) Since everyone appears to be breaking new ground here, the good faith exception applies. No evidence is suppressed.

But this holding stands going forward, which means Nevada law enforcement will need to be a lot more careful when crafting cell tower dump warrants or, better yet, avoid them altogether and get back on the right side of the Fourth Amendment's particularity requirements. Since this requires federal and local law enforcement to be better at their jobs, it's safe to assume the DOJ will ask for this ruling to be overturned. Until that happens, the law of the land is clear: cell tower dumps (and geofence warrants) are unconstitutional.


Original Submission

posted by mrpg on Friday May 09, @07:01AM   Printer-friendly
from the it-would-be-a-shame-if-you-lost-that-finger-no? dept.

From Nate Anderson over at Ars Technica:

French gendarmes have been busy policing crypto crimes, but these aren't the usual financial schemes, cons, and HODL! shenanigans one usually reads about. No, these crimes involve abductions, (multiple) severed fingers, and (multiple) people rescued from the trunks of cars—once after being doused with gasoline.

This previous weekend was particularly nuts, with an older gentleman snatched from the streets of Paris' 14th arrondissement on May 1 by men in ski masks. The 14th is a pleasant place—I highly recommend a visit to the catacombs in Place Denfert-Rochereau—and not usually the site of snatch-and-grab operations. The abducted man was apparently the father of someone who had made a packet in crypto. The kidnappers demanded a multimillion-euro ransom from the man's son.

According to Le Monde, the abducted father was taken to a house in a Parisian suburb, where one of the father's fingers was cut off in the course of ransom negotiations. Police feared "other mutilations" if they were unable to find the man, but they did locate and raid the house this weekend, arresting five people in their 20s. (According to the BBC, French police used "phone signals" to locate the house.)

[...] And a few weeks before that, attackers went to the home of someone whose son was a "crypto-influencer based in Dubai." At the father's home, the kidnappers "tied up [the father's] wife and daughter and forced him into a car. The man's influencer son received a ransom demand and contacted police. The two women were then quickly freed. The father was only discovered 24 hours later in the boot of a car in Normandy, tied up and showing signs of physical violence, having been sprinkled with petrol."

It's not just France, either. Early this year, three British men kidnapped another British man while all of them were in Spain; the kidnappers demanded 30,000 euros in crypto "or be tortured and killed." The kidnapped man escaped by jumping off a balcony 30 feet high, breaking both ankles.

Or there's the Belgian man who posted online that "his crypto wallet was now worth €1.6 million." His wife was the victim of an attempted abduction within weeks.


Original Submission

posted by Fnord666 on Friday May 09, @02:14AM   Printer-friendly
from the make-the-switch dept.

The openSUSE project is encouraging people who currently run Windows 10, and whose computers are not compatible with Windows 11, to consider migrating to Linux instead of throwing out their old hardware. "The openSUSE Project's Upgrade to Freedom campaign urges people to extend the life of their device rather than let it become e-waste. Since millions of Windows 10 users may believe their devices will become useless and contribute to the waste of fully functional devices, installing a Linux operating system like openSUSE or another Linux distribution is more reasonable.

A new initiative called End of 10 has launched that shares the purpose and origins of openSUSE's Upgrade to Freedom efforts. As the End of 10 initiative also intends to help people extend the life of devices that would otherwise become e-waste, members of openSUSE marketing have decided to fold the Upgrade to Freedom campaign into the End of 10 initiative rather than dilute the messaging and narrative."


Original Submission

posted by Fnord666 on Thursday May 08, @09:29PM   Printer-friendly
from the roll-your-own dept.

People who are interested in atomic distributions, particularly ones with immutable filesystems, but have not found one which suits them may find a tutorial on Fedora Magazine valuable. Daniel Mendizabal offers a step-by-step guide to creating a customized, immutable Linux distribution, from initial concept through to producing a bootable ISO. "Mainstream sources like Fedora and Universal Blue offer various atomic desktops with curated configurations and package selections for the average user. But what if you're ready to take control of your desktop and customise it entirely, from packages and configurations to firewall, DNS, and update schedules? Thanks to bootc and the associated tools, building a personalised desktop experience is no longer difficult."


Original Submission

posted by mrpg on Thursday May 08, @04:44PM   Printer-friendly
from the 6G dept.

Arthur T Knackerbracket has processed the following story:

When it comes to long-term prosperity in the high-tech world, it's all about setting standards. Intel once set the standard with x86, PCIe, and USB, and now the vast majority of devices use these technologies in one way or another. Nvidia now enjoys the fruits of its investments in the CUDA ecosystem and is setting the standard in AI compute in general. To a large degree, Nvidia's efforts made the U.S. industry the leader in AI. However, confining AI hardware to the U.S. will provoke rapid development of competing AI ecosystems that could eventually outperform the one developed in America.

"We are at an inflection point: the United States needs to decide if it is going to continue to lead the global development and deployment of AI or if we are going to retreat and retrench," a remark by Nvidia's chief executive Jensen Huang (republished by Ray Wang [x.com] reads) to the U.S. lawmakers reads. "America cannot lead by slowing down. If we step back, others will step in. And the global AI ecosystem will fragment — technologically, economically, and ideologically."

[...] The new U.S. export rules for compute GPUs — known as the AI Diffusion Rule [tomshardware.com] — come into effect on May 15. Under the Biden administration's AI Diffusion framework, unrestricted access to high-end AI chips like Nvidia's H100 is reserved for companies in the U.S. and a select group of 18 allied countries classified as 'Tier 1.' Companies in 'Tier 2' nations are subject to an annual limit of approximately 50,000 H100-class GPUs unless they secure verified end user (VEU) approval. They can still import up to 1,700 units per year without a license, and these do not count toward the national quota. However, countries listed as 'Tier 3' — including China, Russia, and Macau — are essentially blocked from receiving such hardware due to arms embargo restrictions. The Trump administration is now reviewing this tier system to make it more straightforward and enforceable, and is rumored to be planning even stricter limits for Tier 2 nations.
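
(To make the tier mechanics concrete, here is a toy sketch of the decision logic as summarized above. The actual rule is far more detailed, so treat the function, its name, and its thresholds as illustrative only.)

    # Toy model of the AI Diffusion Rule's tier logic as described above.
    def export_decision(tier: int, units_per_year: int, has_veu: bool) -> str:
        if tier == 1:
            return "unrestricted (U.S. and 18 allied countries)"
        if tier == 3:
            return "blocked (arms embargo restrictions)"
        # Tier 2: up to 1,700 units/year need no license and do not
        # count toward the ~50,000-unit national quota.
        if units_per_year <= 1_700:
            return "no license needed (under the 1,700-unit exemption)"
        if has_veu:
            return "allowed (verified end user approval)"
        return "license required, subject to ~50,000-GPU annual cap"

    print(export_decision(tier=2, units_per_year=25_000, has_veu=False))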

Not only will Nvidia cease to be able to sell its GPUs to China, one of its largest markets, but its Chinese customers will be forced to either use its GPUs in the cloud or switch to processors developed in China, such as those designed by Huawei or other domestic chipmakers. While this will slow down the development of China's AI sector in the short term, it will give a strong boost to its AI hardware ecosystem in the medium and long term.

[...] The U.S. has already seen the consequences of ceding technological leadership, when Huawei gained a dominant foothold in global 5G deployments by offering cheaper and faster-to-deploy infrastructure. This serves as a cautionary example of how losing control over foundational standards can shift both market power and geopolitical influence. Nevertheless, whether the current administration has learnt from similar past mistakes remains to be seen.


Original Submission

posted by mrpg on Thursday May 08, @12:00PM   Printer-friendly
from the not-funny-anymore dept.

Arthur T Knackerbracket has processed the following story:

An investigation by 404 Media has uncovered a major security breach at TeleMessage, an Israeli company that provides modified versions of encrypted messaging apps – most notably Signal – to US government agencies and private-sector clients for message archiving. The breach, which exposed sensitive communications, has raised urgent concerns about the security of high-level government and organizational messaging.

The issue gained public attention after a Reuters photograph captured Mike Waltz, a former National Security Adviser to Donald Trump, using a Signal-like app during a cabinet meeting. The app, TeleMessage, closely mimics Signal's interface but is designed to retain and archive messages for compliance purposes – unlike the original Signal, which is built for privacy and strict end-to-end encryption.

[...] 404 Media reports that a hacker exploited a vulnerability in TeleMessage's backend system, gaining access to archived messages from some users. Alarmingly, the breach was relatively easy: the hacker claimed it took only 15 to 20 minutes to gain access, using credentials found in intercepted data to enter the backend panel, where they could view usernames, passwords, and message content.


Original Submission

posted by Fnord666 on Thursday May 08, @07:22AM   Printer-friendly

People trust legal advice generated by ChatGPT more than a lawyer – new study:

People who aren't legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers – at least, when they don't know which of the two provided the advice. That's the key finding of our new research, which highlights some important concerns about the way the public increasingly relies on AI-generated content. We also found the public has at least some ability to identify whether the advice came from ChatGPT or a human lawyer.

AI tools like ChatGPT and other large language models (LLMs) are making their way into our everyday life. They promise to provide quick answers, generate ideas, diagnose medical symptoms, and even help with legal questions by providing concrete legal advice.

But LLMs are known to create so-called "hallucinations" – that is, outputs containing inaccurate or nonsensical content. This means there is a real risk associated with people relying on them too much, particularly in high-stakes domains such as law. LLMs tend to present advice confidently, making it difficult for people to distinguish good advice from decisively voiced bad advice.

We ran three experiments on a total of 288 people. In the first two experiments, participants were given legal advice and asked which they would be willing to act on. When people didn't know if the advice had come from a lawyer or an AI, we found they were more willing to rely on the AI-generated advice. This means that if an LLM gives legal advice without disclosing its nature, people may take it as fact and prefer it to expert advice by lawyers – possibly without questioning its accuracy.

Even when participants were told which advice came from a lawyer and which was AI-generated, we found they were willing to follow ChatGPT just as much as the lawyer.

One reason LLMs may be favoured, as we found in our study, is that they use more complex language. Real lawyers, on the other hand, tended to use simpler language but more words in their answers.

The third experiment investigated whether participants could distinguish between LLM and lawyer-generated content when the source is not revealed to them. The good news is they can – but not by very much.

In our task, random guessing would have produced a score of 0.5, while perfect discrimination would have produced a score of 1.0. On average, participants scored 0.59, indicating performance that was slightly better than random guessing, but still relatively weak.
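
(The paper's exact scoring rule isn't given in this summary; the sketch below assumes an AUC-style measure, i.e. the probability that a randomly chosen AI-written item is rated "more AI-like" than a randomly chosen lawyer-written item.)

    # AUC-style discrimination score: 0.5 = chance, 1.0 = perfect.
    def discrimination_score(ai_ratings, lawyer_ratings):
        wins = ties = 0
        for a in ai_ratings:
            for l in lawyer_ratings:
                if a > l:
                    wins += 1
                elif a == l:
                    ties += 1
        pairs = len(ai_ratings) * len(lawyer_ratings)
        return (wins + 0.5 * ties) / pairs

    # Hypothetical 1-5 "how likely is this AI-generated?" ratings:
    print(discrimination_score([4, 3, 5, 2], [2, 3, 1, 4]))  # 0.71875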


Original Submission

posted by janrinok on Thursday May 08, @02:36AM   Printer-friendly

Driverless trucks are officially running their first regular long-haul routes, making roundtrips between Dallas and Houston:

On Thursday [May 1, 2025], autonomous trucking firm Aurora announced it had launched commercial service in Texas with its first customers, Uber Freight and Hirschbach Motor Lines, the latter of which delivers time- and temperature-sensitive freight. Both companies conducted test runs with Aurora that included safety drivers to monitor the self-driving technology, dubbed "Aurora Driver." Aurora's new commercial service will no longer have safety drivers.

  "We founded Aurora to deliver the benefits of self-driving technology safely, quickly, and broadly, said Chris Urmson, CEO and co-founder of Aurora, in a release on Thursday. "Now, we are the first company to successfully and safely operate a commercial driverless trucking service on public roads."

The trucks are equipped with computers and sensors that can see the length of over four football fields. In four years of practice hauls, the trucks' technology has delivered over 10,000 customer loads. As of Thursday, the company's self-driving tech has completed over 1,200 miles without a human in the truck.

Aurora is starting with a single self-driving truck and plans to add more by the end of 2025.

Self-driving technology has continued to garner attention after more than a decade of hype, especially from auto companies like Tesla, GM, and others that have poured billions into the tech. Companies in the autonomous trucking or driving market tend to use states like Texas and California as their testing grounds for the technology.

California-based Gatik does short-haul deliveries for Fortune 500 retailers like Walmart. Another California tech firm, Kodiak Robotics, delivers freight daily for customers across the South but with safety drivers. Waymo, a subsidiary of Google parent company Alphabet, had an autonomous trucking arm but dismantled it in 2023 to focus on its self-driving ride-hailing services.

However, consumers and transportation officials have raised alarms about the safety record of autonomous vehicles. Aurora released its own safety report this year detailing how its technology works.

Unions that represent truck drivers are usually opposed to the driverless technology because of the threat of job loss and concerns over safety.


Original Submission

posted by hubie on Wednesday May 07, @09:55PM   Printer-friendly

Black hole analogs are one of our best tools for understanding how they work:

Researchers have created the first laboratory analog of the 'black hole bomb', a theoretical concept developed by physicists in the 1970s.

If there's one thing black holes are known for, it's their insatiable, inescapable gravity. Stuff goes into a black hole. You're not really going to get much out.

From beyond the event horizon, this is, as far as we know, true. But from the space around a black hole, you might be able to get something. As Roger Penrose proposed in 1971, the powerful rotational energy of a spinning black hole could be used to amplify the energy of nearby particles.

Then, physicist Yakov Zel'dovich figured out that you didn't need a black hole to see this phenomenon in action. An axially symmetrical body rotating in a resonance chamber, he figured, could produce the same energy transfer and amplification, albeit on a much smaller scale.

Later work by other physicists found that, if you enclose the entire apparatus in a mirror, a positive feedback loop is generated, amplifying the energy until it explodes from the system.

This concept was named the black hole bomb, and a team of physicists led by Marion Cromb of the University of Southampton in the UK now claim to have brought it to life. A paper describing their experiment has been uploaded to preprint server arXiv.

It doesn't, just to set your mind at ease, pose any danger. It consists of a rotating aluminum cylinder placed inside layers of coils that generate magnetic fields rotating around it at controllable speeds.

[...] We can't experimentally replicate this gravitational effect; what the team's experiment does is simulate it, using magnetic fields as a proxy for the particles, with the coils around the system acting as the reflector to produce the feedback loop.

When they ran the experiment, they found that, when the cylinder is rotating faster than, and in the same direction as, the magnetic field, the magnetic field is amplified, compared to when there is no cylinder. When the cylinder rotates more slowly than the magnetic field, however, the magnetic field is dampened.
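
(The underlying condition is the one Zel'dovich wrote down: a wave with angular frequency omega and azimuthal index m is amplified by an absorber rotating at angular speed Omega when omega - m*Omega < 0, and damped otherwise. The sketch below just encodes that inequality; the numbers are made up, not the team's parameters.)

    # Zel'dovich rotational-superradiance condition (illustrative values).
    def zeldovich_regime(omega: float, m: int, Omega: float) -> str:
        return "amplified" if omega - m * Omega < 0 else "damped"

    print(zeldovich_regime(omega=2_000.0, m=1, Omega=3_000.0))  # amplified
    print(zeldovich_regime(omega=2_000.0, m=1, Omega=1_000.0))  # damped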

This is a really interesting result, because it demonstrates a very clear amplification effect, based on the theories described decades ago.

"The system satisfies the experimental conditions speculated by Zel'dovich for the observation of spontaneous generation and also the conditions outlined by Press et al. for black hole bombs," the researchers write in their paper.

"The experiments presented here are a direct realization of the rotating absorber amplifier first proposed by Zel'dovich in 1971 and later developed by Press and Teukolsky into the concept of black hole bomb."

Because we can't probe black holes directly, analogs such as this are an excellent way to understand their properties. Determining any potential practical applications is going to require a lot more development and testing.

For now, however, the experiment could represent a significant step towards better understanding the physics of the most gravitationally extreme objects in the Universe.

The team's preprint is available on arXiv.

Journal References:
    • Penrose, R., Floyd, R. M. Extraction of Rotational Energy from a Black Hole, Nature Physical Science (DOI: 10.1038/physci229177a0)
    • Press, William H., Teukolsky, Saul A. Floating Orbits, Superradiant Scattering and the Black-hole Bomb, Nature (DOI: 10.1038/238211a0)
    • Cromb, Marion, Braidotti, Maria Chiara, Vinante, Andrea, et al. Creation of a Black Hole Bomb Instability in an Electromagnetic System, arXiv preprint (DOI: 10.48550/arXiv.2503.24034)


Original Submission

posted by hubie on Wednesday May 07, @05:07PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Times has seen plans indicating that the British government will soon announce a roadmap for installing solar panels on virtually all newly-built houses. If the legislation passes this year, the requirements might come into force in 2027.

According to experts, the plan will require 80% of new homes to cover 40% of their ground area with solar panels. Another 19% of new builds would have lower requirements due to factors such as roof angle, orientation, and shade. About one percent might be exempt from including panels.

Although the plans would make building new properties up to around £4,000 more expensive, the panels could help families save up to £1,000 on energy bills annually, potentially paying off the extra building costs in four years.
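
(The payback claim is simple arithmetic on the story's upper-end figures; actual costs and savings will vary by home, tariff, and panel yield.)

    # Break-even on the story's upper-end numbers.
    extra_build_cost = 4_000  # GBP added to construction (up to)
    annual_saving = 1_000     # GBP saved on energy bills per year (up to)
    print(f"break-even after ~{extra_build_cost / annual_saving:.0f} years")  # ~4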

If implemented, the initiative would bring the UK closer to its goal of decarbonizing its electric grid by 2030.

Part of the strategy involves installing up to 47 gigawatts of solar power capacity by the end of this decade. The government is also expected to announce loans for installing solar panels on existing homes, though erecting scaffolding and rewiring old buildings for solar is far more expensive than building panels into new structures.

Although panels can dramatically reduce (and sometimes erase) energy bills, mass adoption can also throw power grids off balance. In Australia, which has adopted solar energy with remarkable speed over the last two decades, the technology sometimes generates more power than grids can withstand.


Original Submission

posted by hubie on Wednesday May 07, @12:22PM   Printer-friendly
from the if-it-keeps-Recall-from-being-installed-I'd-consider-it-a-push dept.

https://www.bleepingcomputer.com/news/security/hackers-abuse-ipv6-networking-feature-to-hijack-software-updates/

A China-aligned APT threat actor named "TheWizards" abuses an IPv6 networking feature to launch adversary-in-the-middle (AitM) attacks that hijack software updates to install Windows malware.

According to ESET, the group has been active since at least 2022, targeting entities in the Philippines, Cambodia, the United Arab Emirates, China, and Hong Kong. Victims include individuals, gambling companies, and other organizations.

The attacks utilize a custom tool, dubbed "Spellbinder" by ESET, that abuses the IPv6 Stateless Address Autoconfiguration (SLAAC) feature to conduct SLAAC attacks.

SLAAC is a feature of the IPv6 networking protocol that allows devices to automatically configure their own IP addresses and default gateway without needing a DHCP server. Instead, it utilizes Router Advertisement (RA) messages to receive IP addresses from IPv6-supported routers.

The hacker's Spellbinder tool abuses this feature by sending spoofed RA messages over the network, causing nearby systems to automatically receive a new IPv6 IP address, new DNS servers, and a new, preferred IPv6 gateway.

This default gateway, though, is the IP address of the Spellbinder tool, which allows it to intercept communications and reroute traffic through attacker-controlled servers.

"Spellbinder sends a multicast RA packet every 200 ms to ff02::1 ("all nodes"); Windows machines in the network with IPv6 enabled will autoconfigure via stateless address autoconfiguration (SLAAC) using information provided in the RA message, and begin sending IPv6 traffic to the machine running Spellbinder, where packets will be intercepted, analyzed, and replied to where applicable," explains ESET.

ESET said attacks deploy Spellbinder using an archive named AVGApplicationFrameHostS.zip, which extracts into a directory mimicking legitimate software: "%PROGRAMFILES%\AVG Technologies."

Within this directory are AVGApplicationFrameHost.exe, wsc.dll, log.dat, and a legitimate copy of winpcap.exe. The WinPcap executable is used to side-load the malicious wsc.dll, which loads Spellbinder into memory.

Once a device is infected, Spellbinder begins capturing and analyzing network traffic attempting to connect to specific domains, such as those related to Chinese software update servers.

[...] To protect against these types of attacks, organizations can monitor IPv6 traffic or turn off the protocol if it is not required in their environment.
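
(One low-effort way to do that monitoring, sketched here with the scapy packet library: watch for Router Advertisements from anything other than your known routers. The allowlist address is a placeholder for your own network, and sniffing requires root privileges.)

    # Alert on IPv6 Router Advertisements from unexpected sources.
    from scapy.all import sniff
    from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

    KNOWN_ROUTERS = {"fe80::1"}  # placeholder: your legitimate routers

    def check_ra(pkt):
        if ICMPv6ND_RA in pkt and pkt[IPv6].src not in KNOWN_ROUTERS:
            print(f"possible rogue RA from {pkt[IPv6].src}")

    sniff(filter="icmp6", prn=check_ra, store=False)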


Original Submission

posted by hubie on Wednesday May 07, @07:37AM   Printer-friendly

A breakthrough in Hilbert's sixth problem is a major step in grounding physics in math:

When the greatest mathematician alive unveils a vision for the next century of research, the math world takes note. That's exactly what happened in 1900 at the International Congress of Mathematicians at Sorbonne University in Paris. Legendary mathematician David Hilbert presented 10 unsolved problems as ambitious guideposts for the 20th century. He later expanded his list to include 23 problems, and their influence on mathematical thought over the past 125 years cannot be overstated.

Hilbert's sixth problem was one of the loftiest. He called for "axiomatizing" physics, or determining the bare minimum of mathematical assumptions behind all its theories. Broadly construed, it's not clear that mathematical physicists could ever know if they had resolved this challenge. Hilbert mentioned some specific subgoals, however, and researchers have since refined his vision into concrete steps toward its solution.

In March mathematicians Yu Deng of the University of Chicago and Zaher Hani and Xiao Ma of the University of Michigan posted a new paper to the preprint server arXiv.org that claims to have cracked one of these goals. If their work withstands scrutiny, it will mark a major stride toward grounding physics in math and may open the door to analogous breakthroughs in other areas of physics.

In the paper, the researchers suggest they have figured out how to unify three physical theories that explain the motion of fluids. These theories govern a range of engineering applications from aircraft design to weather prediction — but until now, they rested on assumptions that hadn't been rigorously proven. This breakthrough won't change the theories themselves, but it mathematically justifies them and strengthens our confidence that the equations work in the way we think they do.

Each theory differs in how much it zooms in on a flowing liquid or gas. At the microscopic level, fluids are composed of particles — little billiard balls bopping around and occasionally colliding — and Newton's laws of motion work well to describe their trajectories.

But when you zoom out to consider the collective behavior of vast numbers of particles, the so-called mesoscopic level, it's no longer convenient to model each one individually. In 1872 Austrian theoretical physicist Ludwig Boltzmann addressed this when he developed what became known as the Boltzmann equation. Instead of tracking the behavior of every particle, the equation considers the likely behavior of a typical particle. This statistical perspective smooths over the low-level details in favor of higher-level trends. The equation allows physicists to calculate how quantities such as momentum and thermal conductivity in the fluid evolve without painstakingly considering every microscopic collision.
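
(For reference, in standard notation the article doesn't spell out: the Boltzmann equation evolves a one-particle distribution f(x, v, t), with all of the microscopic collision detail packed into the collision operator Q.)

    \frac{\partial f}{\partial t} + v \cdot \nabla_x f = Q(f, f)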

Zoom out further, and you find yourself in the macroscopic world. Here we view fluids not as a collection of discrete particles but as a single continuous substance. At this level of analysis, a different suite of equations — the Euler and Navier-Stokes equations — accurately describe how fluids move and how their physical properties interrelate without recourse to particles at all.
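
(Again for reference, and in their standard incompressible form: the Navier-Stokes equations below describe the fluid's velocity field u and pressure p; setting the viscosity \nu to zero recovers the Euler equations.)

    \partial_t u + (u \cdot \nabla) u = -\frac{1}{\rho} \nabla p + \nu \Delta u,
    \qquad \nabla \cdot u = 0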

The three levels of analysis each describe the same underlying reality — how fluids flow. In principle, each theory should build on the theory below it in the hierarchy: the Euler and Navier-Stokes equations at the macroscopic level should follow logically from the Boltzmann equation at the mesoscopic level, which in turn should follow logically from Newton's laws of motion at the microscopic level. This is the kind of "axiomatization" that Hilbert called for in his sixth problem, and he explicitly referenced Boltzmann's work on gases in his write-up of the problem. We expect complete theories of physics to follow mathematical rules that explain the phenomenon from the microscopic to the macroscopic levels. If scientists fail to bridge that gap, then it might suggest a misunderstanding in our existing theories.

Unifying the three perspectives on fluid dynamics has posed a stubborn challenge for the field, but Deng, Hani and Ma may have just done it. Their achievement builds on decades of incremental progress. Prior advancements all came with some sort of asterisk, though; for example, the derivations involved only worked on short timescales, in a vacuum or under other simplifying conditions.

The new proof broadly consists of three steps: derive the macroscopic theory from the mesoscopic one; derive the mesoscopic theory from the microscopic one; and then stitch them together in a single derivation of the macroscopic laws all the way from the microscopic ones.

Journal Reference: arXiv:2503.01800 [math.AP] https://doi.org/10.48550/arXiv.2503.01800


Original Submission

posted by hubie on Wednesday May 07, @02:51AM   Printer-friendly

Seven gas turbines planned to juice datacenter demand by 2027:

Developers on Wednesday announced plans to bring up to 4.5 gigawatts of natural gas-fired power online by 2027 at the site of what was once Pennsylvania's largest coal plant, as part of a proposed datacenter campus running AI and high-performance computing workloads.

Development of the 3,200-acre natural gas-powered datacenter campus is being led by Homer City Redevelopment (HCR) and is expected to exceed $10 billion for power infrastructure and site readiness alone, with additional billions anticipated for the datacenter development.

As we understand it, the plant and server campus will be next to each other, as depicted in this video. The power station site will need rebuilding not only to turn it into a gas-fired system but also because it's pretty much demolished, save for electrical infrastructure such as transmission lines that can be reused.

HCR has yet to disclose a tenant for what's hoped to be a massive datacenter complex, with its emphasis for now largely on building out the energy infrastructure and datacenter shell in anticipation of future demand.

The project's backers, including Knighthead Capital Management, appear confident that demand will follow, with the campus designed to deliver up to 4.5 gigawatts of power to run AI and hyperscale workloads.

[...] Until that happens, the site won't exactly be a bright spot on hyperscalers' annual sustainability reports, though HCR claims the gas turbines will cut greenhouse gas emissions by 60-65 percent per megawatt-hour compared to the plant's retired coal units.

Kiewit Power Constructors is expected to begin work on the facility later this year with the first generators installed in 2026; the site is expected to start generating power by 2027 — just in time for Nvidia's 600 kilowatt Kyber racks to make their debut.


Original Submission
