
posted by janrinok on Sunday May 18, @07:20PM   Printer-friendly

Last week, a U.S. congressman announced a plan to introduce a bill that would require producers of high-performance AI processors to track them geographically, in a bid to limit their use by unauthorized foreign actors, such as China. Senator Tom Cotton of Arkansas then introduced such a measure in the Senate later in the week. The bill covers hardware that goes well beyond AI processors: it would give the Commerce Secretary the power to verify the location of hardware and impose mandatory location controls on commercial companies. To complicate matters further, geo-tracking features would be required for high-performance graphics cards as well.

The bill covers a wide range of products classified under the 3A090, 4A090, 4A003.z, and 3A001.z export control classification numbers (ECCNs): advanced AI processors, AI servers (including rack-scale solutions), HPC servers, and general-purpose electronics of strategic concern due to potential military utility or dual-use risk. It should be noted that many high-end graphics cards (such as Nvidia's GeForce RTX 4090 and RTX 5090) are also classified as 3A090 products, so it looks like such add-in boards will also have to gain geo-tracking capabilities.

The first and central provision of the bill is the requirement for tracking technology to be embedded in any high-end processor module or device that falls under U.S. export restrictions. This condition would take effect six months after the legislation is enacted, which will make life harder for companies like AMD, Intel, and Nvidia, as adding a feature to already-developed products is a tough task. The mechanism must allow verification of a chip's or device's physical location, enabling the U.S. government to confirm whether it remains at the approved endpoint. In addition, exporters would be obliged to keep track of their products.

The bill authorizes the Secretary of Commerce to verify the ownership and location of regulated processors and systems after export and maintain a centralized registry of current locations and end-users. Nvidia, as well as other exporters, would also be obligated to inform the Bureau of Industry and Security if there is evidence that a component has been redirected from its authorized destination. Additionally, any indications of tampering or manipulation must be reported.

If supported by lawmakers, the bill will mandate a one-year study, conducted jointly by the Department of Commerce and the Department of Defense, to identify additional protective measures that could be introduced in the future. Beyond the initial study, the same two departments are required to conduct yearly assessments for three consecutive years following the bill's enactment. These reviews must evaluate the most current advancements in security technologies applicable to products under export control. Based on these assessments, the departments may determine whether new requirements should be imposed.

If the assessment concludes that additional mechanisms are appropriate, the Commerce Department must finalize rules within two years requiring covered chips and systems to incorporate these secondary features. A detailed implementation roadmap must also be submitted to the relevant congressional committees. All development and deployment of these mechanisms must preserve the confidentiality of sensitive commercial technologies. 

Finally, the legislation emphasizes confidentiality in all stages of developing and applying these new technical requirements. Any proposed safeguards or tracking features must be designed and implemented in a way that protects the proprietary information and trade secrets of American developers, such as AMD, Intel, and Nvidia. This condition ensures that while national security is strengthened, industrial competitiveness is not undermined. 

Is it even possible? Does the "tracking" stop if an American purchases the GPU?

See also: Nvidia says it is not sending GPU designs to China after reports of new Shanghai operation [JR]


Original Submission

Processed by drussell

posted by janrinok on Sunday May 18, @02:38PM   Printer-friendly

https://www.theregister.com/2025/05/15/voyager_1_survives_with_thruster_fix/

NASA has revived a set of thrusters on the nearly 50-year-old Voyager 1 spacecraft after declaring them inoperable over two decades ago.

It's a nice long-distance engineering win for the team at NASA's Jet Propulsion Laboratory, responsible for keeping the venerable Voyager spacecraft flying - and a critical one at that, as clogging fuel lines threatened to derail the backup thrusters currently in use.

The things you have to deal with when your spacecraft is operating more than four decades beyond its original mission plan, eh? Voyager 1 launched in 1977.

JPL reported Wednesday that the maneuver, completed in March, restarted Voyager 1's primary roll thrusters, which are used to keep the spacecraft aligned with a tracking star. That guide star helps keep its high-gain antenna aimed at Earth, now over 15.6 billion miles (25 billion kilometers) away, and far beyond the reach of any telescope.

Those primary roll thrusters stopped working in 2004 after a pair of internal heaters lost power. Voyager engineers long believed they were broken and unfixable. The backup roll thrusters in use are now at risk due to residue buildup in their fuel lines, which could cause failure as early as this fall.

Without roll thrusters, Voyager 1 would lose its ability to stay properly oriented and eventually drift out of contact.


Original Submission

posted by janrinok on Sunday May 18, @09:53AM   Printer-friendly
from the your-private-data-wants-to-be-"free" dept.

White House scraps plan to block data brokers from selling Americans' sensitive data:

A senior Trump administration official has scrapped a plan that would have blocked data brokers from selling Americans' personal and financial information, including Social Security numbers.

The Consumer Financial Protection Bureau (CFPB) said in December 2024 it planned to close a loophole under the Fair Credit Reporting Act, the federal law that protects Americans' personal data collected by consumer reporting agencies, such as credit bureaus and renter-screening companies. The rule would have treated data brokers no differently than any other company covered under the federal law and would have required them to comply with the law's privacy rules.

The rule was withdrawn early Tuesday, according to its listing in the Federal Register. The CFPB's acting director, Russell Vought, who also serves as the director of the White House's Office of Management and Budget, wrote that the rule is "not aligned with the Bureau's current interpretation" of the Fair Credit Reporting Act.

[...] Privacy advocates have long called for the government to use the Fair Credit Reporting Act to rein in data brokers.

The decision by CFPB to cancel the rule comes days after the Financial Technology Association, an industry lobby group representing non-bank fintech companies, wrote to Vought in his capacity as the White House's budget director. The lobby group asked the administration to withdraw the CFPB's rule, claiming it would be "harmful to financial institutions' efforts to detect and prevent fraud."


Original Submission

posted by kolie on Sunday May 18, @05:12AM   Printer-friendly
from the Digital-Sovereignty dept.

TechPowerUp reports:

https://www.techpowerup.com/336529/hygon-prepares-128-core-512-threaded-x86-cpu-with-four-way-smt-and-avx-512-support

Chinese server CPU maker Hygon, which licenses Zen core IP from AMD, has published a roadmap for the C86-5G, its most powerful server processor to date, featuring up to 128 cores and an astonishing 512 threads. Thanks to a complete microarchitectural redesign, the new chip delivers more than 17 percent higher instructions per cycle (IPC) than its predecessor. It also supports the AVX-512 vector instruction set and four-way simultaneous multithreading, making it a strong contender for highly parallel workloads. Sixteen channels of DDR5-5600 memory feed data-intensive tasks, while CXL 2.0 interconnect support enables seamless scaling across multiple sockets. Built on an unknown semiconductor node, the C86-5G includes advanced power management and a hardened security engine. With 128 lanes of PCIe 5.0, it offers ample bandwidth for accelerators, NVMe storage, and high-speed networking. Hygon positions this flagship CPU as ideal for artificial intelligence training clusters, large-scale analytics platforms, and virtualized enterprise environments.

It is not clear where Hygon CPUs are actually fabricated, but since the processor features instructions optimized for China's government-mandated encryption algorithms, it is most probably SMIC rather than TSMC, for reasons of trust. Four-way SMT is very interesting; not even AMD can pull that off just now.

The C86-5G is the culmination of five years of steady development. The journey began with the C86-1G, an AMD-licensed design that served as a testbed for domestic engineers. It offered up to 32 cores, 64 threads, eight channels of DDR4-2666 memory, and 128 lanes of PCIe 3.0. Its goal was to absorb proven technology and build local know-how. Next came the C86-2G, which kept the same core count but introduced a revamped floating-point unit, 21 custom security instructions, and hardware-accelerated features for memory encryption, virtualization, and trusted computing. This model marked Hygon's first real step into independent research and development. With the C86-3G, Hygon rolled out a fully homegrown CPU core and system-on-chip framework. Memory support increased to DDR4-3200, I/O doubled to PCIe 4.0, and on-die networking included four 10 GbE and eight 1 GbE ports. The C86-4G raised the bar further by doubling compute density to 64 cores and 128 threads, boosting IPC by around 15 percent and adding 12-channel DDR5-4800 memory plus 128 lanes of PCIe 5.0. Socket options expanded to dual and quad configurations. Now, with the C86-5G, Hygon has shown it can compete head-to-head with global server CPU leaders, putting more faith in China's growing capabilities in high-performance computing.

Besides genuine Zen parts made by AMD, there are now three licensed or independent manufacturers of advanced AMD64-platform processors on this globe: Zhaoxin/VIA, Hygon, and Intel. That means political friction will have much less effect on the future progress of this architecture.

The AMD64 architecture definitely has a future and is worth learning, at both the instruction-set and machine levels.


Original Submission

posted by kolie on Sunday May 18, @12:24AM   Printer-friendly
from the glazed-and-confused-tech-desk dept.

Tor has announced Oniux, a new command-line tool for routing any Linux application securely through the Tor network for anonymized network connections.

Unlike classic methods like torsocks, which rely on user-space tricks, Oniux uses Linux namespaces to create a fully isolated network environment for each application, preventing data leaks even if the app is malicious or misconfigured.

Linux namespaces are a kernel feature that allows processes to run in isolated environments, each with its own view of specific system resources like networking, processes, or file mounts.

Oniux uses Linux namespaces to isolate apps at the kernel level, so all their traffic is forced through Tor.

"We are excited to introduce oniux: a small command-line utility providing Tor network isolation for third-party applications using Linux namespaces," reads a Tor blog post.

"Built on Arti, and onionmasq, oniux drop-ships any Linux program into its own network namespace to route it through Tor and strips away the potential for data leaks."

It achieves this by placing each app in its own network namespace with no access to the host's interfaces, and instead attaching a virtual interface (onion0) that routes through Tor using onionmasq.

It also uses mount namespaces to inject a custom /etc/resolv.conf for Tor-safe DNS, and user/PID namespaces to safely set up the environment with minimal privileges.

This setup ensures leak-proof, kernel-enforced Tor isolation for any Linux app.
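The kernel-level isolation described above can be observed with stock Linux tools. A minimal sketch using util-linux's unshare(1) (assuming unprivileged user namespaces are enabled on your kernel; oniux itself additionally attaches a virtual onion0 interface routed through Tor via onionmasq):

```shell
# Create a fresh user+network namespace and list its interfaces: only an
# isolated (and down) loopback device exists, so a program confined this
# way has no path at all to the host's real network interfaces.
links=$(unshare --user --map-root-user --net ip -o link show)
echo "$links"
```

Any command run in place of `ip -o link show` inherits the same empty namespace, which is the leak-proofing property oniux builds on.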

On the other hand, Torsocks works by using an 'LD_PRELOAD' hack to intercept network-related function calls in dynamically linked Linux applications and redirect them through a Tor SOCKS proxy.

The problem with this approach is that raw system calls aren't caught by Torsocks, and malicious apps can avoid using libc functions to cause leaks.

Moreover, Torsocks doesn't work with static binaries at all, and doesn't offer true isolation, as apps still access the host's real network interfaces.

The Tor project published a comparison table highlighting the qualitative differences between the two solutions.

Despite the obvious advantages of Oniux, Tor highlights that the project is still experimental and hasn't been tested extensively under multiple conditions and scenarios.

Therefore, the tool may not work as expected, so its use in critical operations is discouraged.

Instead, Tor calls for enthusiasts who can test Oniux and report any problems they encounter so the tool can reach maturity quickly and become ready for broader deployment.

The Tor Project has published the source code, and those interested in testing Oniux must first ensure they have Rust installed on their Linux distribution, and then install the tool using the command:

cargo install --git https://gitlab.torproject.org/tpo/core/oniux oniux@0.4.0

Tor gives some usage examples like accessing an .onion site (oniux curl http://example.onion), "torifying" the shell session (oniux bash), or running a GUI app over Tor in the desktop environment (oniux hexchat).


Original Submission

posted by hubie on Saturday May 17, @07:39PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

That is one of the prevailing messages dished out by the cyber arm of the British intelligence squad at GCHQ's National Cyber Security Centre (NCSC) in recent years at its annual conference. The cyber agency's CTO, Ollie Whitehouse, first pitched the idea during a keynote at last year's event, and once again it was a primary talking point of this week's CYBERUK, but not one that went down well with everyone.

Whitehouse said this week that "the market does not currently support and reward those companies that make that investment and build secure products." The risks introduced here are then shouldered by customers – companies, governments – rather than the vendors themselves.

"So, we have a non-functional market," he added.

"When we need to build an ecosystem that's capable of meeting this modern threat, we have to find ways where we can incentivize those vendors to be rewarded for their hard work, for those that go the extra mile, for those that build the secure technologies which our foundations are going to rely on in the future.

"Those that build secure technology make prosperous companies. They make celebrated companies, and they make successful companies ultimately. Because without that, nothing changes, and we repeat the last 40 years."

That's the NCSC's line – one that will most likely resonate with any organization popped by one of the myriad decades-old vulns vendors can't seem to stamp out. 

But there is a disconnect between the agency's message and the views of major players elsewhere in the industry. What was first pitched as a necessary play for a more cyber-secure ecosystem has now become a question of whether the agency should intervene at all.

[...] McKenzie's take was that customers will ultimately drive vendor change. If they start prioritizing security, that's what vendors will give them. A string of cockups will quickly out those who don't provide value, and then it becomes a case of having to improve to survive.

He said: "I think there are only some products where I think maybe, you know, they're a little bit smoke and mirrors, but I think that's rare, and then it quickly becomes known in the market that they don't work. So, I don't agree. I think there's absolutely a market, and there is a return on investment for security and resilience."

Likewise, Walsh highlighted that cybersecurity failures are costly for organizations, alluding to the fact that victims of security snafus will certainly consider the ROI when deciding to renew, or not renew, certain vendor contracts.

Aung downplayed the idea of the need for improved incentives too, saying "there are certainly organizations out there who are cutting corners knowingly and putting their customers at risk knowingly. But, I think the vast majority are just grappling with [various external factors] and in an arms race at the same time. So I think it's a complex picture."

[...] Whitehouse put forth the idea of perhaps punishing vendors that fall short of expectations, not just incentivizing them to do better, during last year's CYBERUK, and this was again put on the table this week, with his industry peers once more siding against the CTO's stance.

McKenzie said "he's not a fan" of the idea. In his view, it goes back to customers eventually abandoning sub-par vendors and, when speaking to The Register, he pointed to historical events that illustrate how the market itself will drive change.

"What we need is we need purchasers of security to prioritize the features and functionalities they want and then incentivize those organizations.

"If you look at someone like CrowdStrike or Microsoft Defender, they did really well in that endpoint marketplace because they provided the most features. There are other things that weren't as good. They don't grow."

With the shift from antivirus to EDR, vendors that offer the best will perform the best, he argued. 

[...] Parallels can be drawn with the automotive industry. The European NCAP program was introduced in the late 1990s, providing customers an easy way to understand how different manufacturers were performing on safety.

Before that, we had the likes of Volvo scooping up swathes of market share off the back of its reputation for producing safe cars, or German and Japanese brands for their reliability.

Perhaps the same principles could apply to security vendors, all vying for stellar, market-shifting trustworthiness. And then it goes back to purchasers dictating which security vendors end up doing well.

[...] Whitehouse said: "Some of you would have heard me say that... we know more that's in our sausages than our software, and that's probably not right for 2025, so the food labelling standards are coming to software soon. You heard it here first."


Original Submission

posted by hubie on Saturday May 17, @02:50PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Vulnerability Database (EUVD) is now fully operational, offering a streamlined platform to monitor critical and actively exploited security flaws amid the US struggles with budget cuts, delayed disclosures, and confusion around the future of its own tracking systems.

As of Tuesday, the full-fledged version of the website is up and running.

"The EU is now equipped with an essential tool designed to substantially improve the management of vulnerabilities and the risks associated with it," ENISA Executive Director Juhan Lepassaar said in a statement announcing the EUVD. 

"The database ensures transparency to all users of the affected ICT products and services and will stand as an efficient source of information to find mitigation measures," Lepassaar continued.

The European Union Agency for Cybersecurity (ENISA) first announced the project in June 2024 under a mandate from the EU's Network and Information Security 2 Directive, and quietly rolled out a limited-access beta version last month during a period of uncertainty surrounding the United States' Common Vulnerabilities and Exposures (CVE) program.

More broadly, Uncle Sam has been hard at work slashing CISA and other cybersecurity funding while key federal employees responsible for the US government's secure-by-design program have jumped ship.

Plus, on Monday, CISA said it would no longer publish routine alerts - including those detailing exploited vulnerabilities - on its public website. Instead, these updates will be delivered via email, RSS feeds, and the agency's account on X.

With all this, a cybersecurity professional could be forgiven for doubting the US government's commitment to hardening networks and rooting out vulnerabilities.

Enter the EUVD. The EUVD is similar to the US government's National Vulnerability Database (NVD) in that it identifies each disclosed bug (with both a CVE-assigned ID and its own EUVD identifier), notes the vulnerability's criticality and exploitation status, and links to available advisories and patches.

Unlike the NVD, which is still struggling with a backlog of vulnerability submissions and is not very easy to navigate, the EUVD is updated in near real-time and highlights both critical and exploited vulnerabilities at the top of the site.

The EUVD provides three dashboard views: one for critical vulnerabilities, one for those actively exploited, and one for those coordinated by members of the EU CSIRTs network.

Information is sourced from open-source databases as well as advisories and alerts issued by national CSIRTs, mitigation and patching guidelines published by vendors, and exploited vulnerability details.

ENISA is also a CVE Numbering Authority (CNA), meaning it can assign CVE identifiers and coordinate vulnerability disclosures under the CVE program. Even as an active CNA, however, ENISA seems to be in the dark about what's next for the embattled US-government-funded CVE program, which is only under contract with MITRE until next March.

The launch announcement notes that "ENISA is in contact with MITRE to understand the impact and next steps following the announcement on the funding to the Common Vulnerabilities and Exposures Program."


Original Submission

posted by hubie on Saturday May 17, @10:05AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

On Monday, the US Court of Appeals for the Federal Circuit said scientists Jennifer Doudna and Emmanuelle Charpentier will get another chance to show they ought to own the key patents on what many consider the defining biotechnology invention of the 21st century.

The pair shared a 2020 Nobel Prize for developing the versatile gene-editing system, which is already being used to treat various genetic disorders, including sickle cell disease.

But when key US patent rights were granted in 2014 to researcher Feng Zhang of the Broad Institute of MIT and Harvard, the decision set off a bitter dispute in which hundreds of millions of dollars—as well as scientific bragging rights—are at stake.

[...] The CRISPR patent battle is among the most byzantine ever, putting the technology alongside the steam engine, the telephone, the lightbulb, and the laser among the most hotly contested inventions in history.

In 2012, Doudna and Charpentier were first to publish a description of a CRISPR gene editor that could be programmed to precisely cut DNA in a test tube. There’s no dispute about that.

However, the patent fight relates to the use of CRISPR to edit inside animal cells—like those of human beings. That’s considered a distinct invention, and one both sides say they were first to come up with that very same year. 

In patent law, this moment is known as conception—the instant a lightbulb appears over an inventor’s head, revealing a definite and workable plan for how an invention is going to function.

In 2022, a specialized body called the Patent Trial and Appeal Board, or PTAB, decided that Doudna and Charpentier hadn’t fully conceived the invention because they initially encountered trouble getting their editor to work in fish and other species. Indeed, they had so much trouble that Zhang scooped them with a 2013 publication demonstrating he could use CRISPR to edit human cells.

The Nobelists appealed the finding, and yesterday the appeals court vacated it, saying the patent board applied the wrong standard and needs to reconsider the case. 

According to the court, Doudna and Charpentier didn’t have to “know their invention would work” to get credit for conceiving it. What could matter more, the court said, is that it actually did work in the end. 

[...] The decision is likely to reopen the investigation into what was written in 13-year-old lab notebooks and whether Zhang based his research, in part, on what he learned from Doudna and Charpentier’s publications. 

The case will now return to the patent board for a further look, although Sherkow says the court finding can also be appealed directly to the US Supreme Court.


Original Submission

posted by hubie on Saturday May 17, @05:15AM   Printer-friendly

https://www.bleepingcomputer.com/news/security/bluetooth-61-enhances-privacy-with-randomized-rpa-timing/

By Bill Toulas (May 11, 2025)

The Bluetooth Special Interest Group (SIG) has announced Bluetooth Core Specification 6.1, bringing important improvements to the popular wireless communication protocol.

One new feature highlighted in the latest release is the increased device privacy via randomized Resolvable Private Addresses (RPA) updates.

"Randomizing the timing of address changes makes it much more difficult for third parties to track or correlate device activity over time," reads SIG's announcement.

A Resolvable Private Address (RPA) is a Bluetooth address created to look random and is used in place of a device's fixed MAC address to protect user privacy. It allows trusted devices to securely reconnect without revealing their true identity.

[...] The Controller picks a random value in the defined range using a NIST-approved random number generator, and updates the RPA. This makes tracking significantly harder, as there is no pattern in the value selection.
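The randomized schedule can be sketched in a few lines. The 8-to-15-minute bounds below are illustrative assumptions, and /dev/urandom stands in for the NIST-approved generator a real Controller would use:

```shell
# Pick the next RPA update delay uniformly from a range instead of using a
# fixed timer, so address changes exhibit no pattern an observer can track.
MIN_SECS=480    # assumed lower bound: 8 minutes
MAX_SECS=900    # classic fixed RPA timeout: 15 minutes
span=$((MAX_SECS - MIN_SECS + 1))
rand=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
delay=$((MIN_SECS + rand % span))
echo "next RPA update in ${delay}s"
```

(The modulo step introduces a slight bias that a conformant implementation would avoid; it is ignored here for brevity.)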

More details about how the new privacy feature works can be found in the specification document published along with the announcement.

Another feature highlighted in the announcement is better power efficiency starting from Bluetooth 6.1, which stems from allowing the chip (Controller) to autonomously handle the randomized RPA updates.

[...] While Bluetooth 6.1 has made exciting steps forward, it's important to underline that actual support in hardware and firmware may take years to arrive.

The first wave of chips with Bluetooth 6.1 should not be realistically expected before 2026, and even then, early implementations may not immediately expose all the newly available features, as testing and validation may be required.


Original Submission

posted by hubie on Saturday May 17, @12:31AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Looks like inflated GPU prices are here to stay

A new report claims that Nvidia has recently raised the official prices of nearly all of its products to combat the impact of tariffs and surging manufacturing costs on its business, with gaming graphics cards receiving a 5 to 10% hike while AI GPUs see up to a 15% increase.

As reported by Digitimes Taiwan (translated), Nvidia is facing "multiple crises," including a $5.5 billion hit to its quarterly earnings over export restrictions on AI chips, including a ban on sales of its H20 chips to China.

Digitimes reports that CEO Jensen Huang has been "shuttling back and forth" between the US and China to minimize the impact of tariffs, and that "in order to maintain stable profitability," Nvidia has reportedly recently raised official prices for almost all its products, allowing its partners to increase prices accordingly.

Despite the hikes, Digitimes claims Nvidia's financial report at the end of the month "should be within financial forecasts and deliver excellent profit results," driven by strong demand for AI chips outside of China and the expanding spending from cloud service providers.

The report states that Nvidia has applied official price hikes to numerous products to keep its earnings stable, with partners following suit. As an example, Digitimes cites the RTX 5090, which was bought at premium prices without hesitation upon release, such that channel pricing "quickly doubled."

The report notes that following the AI chip ban, RTX 5090 prices climbed further still, surging overnight from around NT$90,000 to NT$100,000, with other RTX 50 series cards also increasing by 5-10%. Digitimes notes Nvidia has also raised the price of its H200 and B200 chips, with server vendors increasing prices by up to 15% accordingly.

According to the publication's supply chain sources, price hikes have been exacerbated by the shift of Blackwell chip production to TSMC's US plant, which has driven a significant rise in the price of production, materials, and logistics.


Original Submission

posted by janrinok on Friday May 16, @07:45PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

One of the ultimate goals of medieval alchemy has been realized, but only for a fraction of a second. Scientists with the European Organization for Nuclear Research, better known as CERN, were able to convert lead into gold using the Large Hadron Collider (LHC), the world's most powerful particle accelerator. Unlike the examples of transmutation we see in pop culture, these experiments with the LHC involve smashing subatomic particles together at ridiculously high speeds to manipulate lead's physical properties to become gold.

The LHC is often used to smash lead ions together to create extremely hot and dense matter similar to what was observed in the universe following the Big Bang. While conducting this analysis, the CERN scientists took note of the near-misses that caused a lead nucleus to drop its neutrons or protons. Lead atoms only have three more protons than gold atoms, meaning that in certain cases the LHC causes the lead atoms to drop just enough protons to become a gold atom for a fraction of a second — before immediately fragmenting into a bunch of particles.

Alchemists back in the day may be astonished by this achievement, but the experiments conducted between 2015 and 2018 only produced about 29 picograms of gold, according to CERN. The organization added that the latest trials produced almost double that amount thanks to regular upgrades to the LHC, but the mass made is still trillions of times less than what's necessary for a piece of jewelry. Instead of trying to chase riches, the organization's scientists are more interested in studying the interaction that leads to this transmutation.

"It is impressive to see that our detectors can handle head-on collisions producing thousands of particles, while also being sensitive to collisions where only a few particles are produced at a time, enabling the study of electromagnetic 'nuclear transmutation' processes," Marco Van Leeuwen, spokesperson for the A Large Ion Collider Experiment project at the LHC, said in a statement.


Original Submission

Processed by drussell

posted by janrinok on Friday May 16, @03:01PM   Printer-friendly
from the Or-how-MBA-culture-killed-Bell-Labs dept.

canopic jug writes:

The 1517 Fund has an article exploring why Bell Labs worked so well, and what is lacking in today's society to recreate such a research environment:

There have been non-profit and corporate giants with larger war chests than Ma Bell. AT&T started Bell Labs when its revenue was under $13 B (current USD). During the great depression, when Mervin Kelly laid the foundation for the lab, AT&T's revenue was $22 B (current USD).

Inflation adjusted, Google has made more than AT&T did at Bell Labs' start since 2006. Microsoft, 1996. Apple, 1992.

Each has invested in research. None have a Bell Labs.

Academia's worse. Scientists at the height of their careers spend more time writing grants than doing research. Between 1975 and 2005, the amount of time scientists at top tier universities spent on research declined by 20%. Time spent on paperwork increased by 100%. To quote the study, "experienced secular decline in research time, on the order of 10h per week."

[...] Reportedly, Kelly and others would hand people problems and then check in a few years later. Most founders and executives I know balk at this idea. After all, "what's stopping someone from just slacking off?" Kelly would contend that's the wrong question to ask. The right question is, "Why would you expect information theory from someone who needs a babysitter?"

Micromanagement and quantification also take their toll.

Previously:
(2024) The Incredible Story Behind the First Transistor Radio
(2024) Is It Possible to Recreate Bell Labs?
(2022) Unix History: A Mighty Origin Story
(2019) Vintage Computer Federation East 2019 -- Brian Kernighan Interviews Ken Thompson
(2017) US Companies are Investing Less in Science


Original Submission

Processed by kolie

posted by hubie on Friday May 16, @10:20AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In 2018, about 13.5 percent of the more than 2.6 million deaths from cardiovascular disease among people ages 55 to 64 globally could have been related to exposure to a type of chemical called a phthalate, researchers report April 28 in eBioMedicine.

Phthalates are a group of chemicals found in shampoos, lotions, food packaging and medical supplies including blood bags. The chemicals are often added to plastics to make them softer and more flexible.

Phthalates can enter the body when you consume contaminated food, breathe them in or absorb them through the skin. Once inside, they act as endocrine disruptors, which means they affect hormones. Previous research has also linked the chemicals to diabetes, obesity, pregnancy complications and heart disease.

The new study looked at the effects of one particular phthalate, known as di-2-ethylhexylphthalate, or DEHP, which is often added to PVC plastics to soften them. Sara Hyman, a research scientist at NYU Langone Health, and colleagues focused on the relationship between DEHP exposure levels and cardiovascular disease, the leading cause of death worldwide. Hyman and colleagues compared estimated DEHP exposure in 2008 with death rates from cardiovascular disease ten years later in different parts of the world. By studying how the two changed together, they determined what portion of those deaths might be attributable to phthalates.

More than 350,000 excess deaths worldwide were associated with DEHP exposure in 2018, the team found. About three-quarters of those occurred in the Middle East, South Asia, East Asia and the Pacific. This disparity might be due to the regions’ growing plastics industries, the researchers suggest. The new work does not show that DEHP exposure directly causes heart disease, though — only that there’s an association between the two.

[...] The findings offer yet another reason to decrease plastic use, researchers say. “We’re going to become the plastic planet,” Zhou says. “We need to start to really address this serious issue.”

S. Hyman et al. Phthalate exposure from plastics and cardiovascular disease: global estimates of attributable mortality and years life lost. eBioMedicine, 105730. Published online April 28, 2025. doi: 10.1016/j.ebiom.2025.105730.


Original Submission

posted by hubie on Friday May 16, @05:32AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Federal Trade Commission has delayed the start of a rule that aims to make the process of canceling subscriptions less of a nightmare. Last year, the FTC voted to ratify amendments to a regulation known as the Negative Option Rule, adding a new "click-to-cancel" rule that requires companies to be upfront about the terms of subscription signups and prohibits them "from making it any more difficult for consumers to cancel than it was to sign up." Surprising no one, telecom companies were not happy, and sued the FTC. While the rule was nevertheless set to be implemented on May 14, the FTC now says enforcement has been pushed back 60 days to July 14.

Some parts of the updated Negative Option Rule went into effect on January 19, but enforcement of certain provisions was deferred to May 14 by the previous administration to give companies more time to comply. Under the new administration, the FTC says it has "conducted a fresh assessment of the burdens that forcing compliance by this date would impose" and decided it "insufficiently accounted for the complexity of compliance."

Once the July 14 deadline hits, the FTC says "regulated entities must be in compliance with the whole of the Rule because the Commission will begin enforcing it." But, the statement adds, "if that enforcement experience exposes problems with the Rule, the Commission is open to amending" it.

Previously:
    • Judge Rules SiriusXM's Annoying Cancellation Process is Illegal
    • The US Government Wants to Make It Easier for You to Click the 'Unsubscribe' Button
    • Clingy Virgin Media Won't Let Us Go, Customers Complain
    • Publishers and Advertisers Push Back at FTC's 'Click-to-Cancel' Proposal
    • The End of "Click to Subscribe, Call to Cancel"? - News Industry's Favorite Retention Tactic


Original Submission

posted by hubie on Friday May 16, @12:45AM   Printer-friendly

Research out of the University of Connecticut proposes neural resonance theory, which says neurons in our bodies physically synchronize with music, creating stable patterns that affect the entire body.

In a nutshell
    • Brain-music synchronization: Your brain doesn't just predict music patterns—it physically synchronizes with them through neural oscillations that affect your entire body.
    • Stability creates preference: Musical sounds with simple frequency relationships (like perfect fifths) create more stable neural patterns, explaining why certain combinations sound pleasant across cultures.
    • Cultural attunement: While some aspects of music perception are universal, your brain becomes "attuned" to the music you frequently hear, explaining cultural preferences while maintaining recognition of basic musical structures.

What is Neural Resonance Theory?

Neural Resonance Theory (NRT) is a scientific approach that explains how your brain processes music using fundamental physics principles rather than abstract predictions.

In simpler terms, NRT suggests that:

    • Your brain contains billions of neurons that naturally oscillate (rhythmically fire) at different frequencies
    • When you hear music, these neural oscillations physically synchronize with the sound waves
    • This synchronization creates stable patterns in your brain that correspond to musical elements
    • The more stable these patterns are, the more pleasant or "right" the music feels

Unlike traditional theories that say your brain is constantly making predictions about what comes next in music, NRT proposes that your brain actually embodies the music's structure through its own physical patterns.

This physical synchronization explains why music can directly affect your movements and emotions without conscious thought—your brain and body are literally vibrating in harmony with the music.
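The synchronization described above can be illustrated with a standard toy model from oscillator physics (this is not the authors' model from the paper, and the function name is hypothetical): the Adler equation, which tracks the phase difference between an oscillator and a periodic driving signal. When the oscillator's natural frequency is close enough to the stimulus frequency, the phase difference settles to a fixed value — a stable, "locked" pattern — while a large mismatch leaves the phase drifting indefinitely.

```python
import math

def phase_lock(natural_hz, stimulus_hz, coupling=1.0, dt=0.001, t_max=30.0):
    """Integrate the Adler equation  dpsi/dt = delta - K*sin(psi)  for the
    phase difference psi between an oscillator and a periodic stimulus.
    Returns True if psi settles (entrainment) rather than drifting."""
    delta = 2 * math.pi * (natural_hz - stimulus_hz)  # detuning, rad/s
    psi = 0.5          # arbitrary initial phase difference
    drift = 0.0
    steps = int(t_max / dt)
    for i in range(steps):
        dpsi = (delta - coupling * math.sin(psi)) * dt
        psi += dpsi
        if i > steps // 2:        # measure drift in the second half only,
            drift += abs(dpsi)    # after any initial transient has decayed
    return drift < 0.1            # near-zero drift => phase-locked

# An oscillator whose natural rate nearly matches the beat entrains;
# one far from the beat keeps drifting.
print(phase_lock(2.05, 2.0))   # small detuning: locks
print(phase_lock(3.0, 2.0))    # large detuning: drifts
```

In this sketch, locking occurs whenever the detuning is smaller than the coupling strength — a crude analogue of why a brain oscillation can entrain to a rhythm near its own natural frequency but not to one far from it.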

Read the rest of the article: https://studyfinds.org/brain-cells-synchronize-to-music/

Journal Reference: Harding, E.E., Kim, J.C., Demos, A.P. et al. Musical neurodynamics. Nat. Rev. Neurosci. 26, 293–307 (2025). https://doi.org/10.1038/s41583-025-00915-4


Original Submission

Today's News | May 19 | May 17  >