Arthur T Knackerbracket has processed the following story:
It is equipment, not labor, that defines chipmaking costs.
Comments made by TSMC founder Morris Chang about high fab building costs in Arizona and higher operating costs in the U.S. created the impression that producing chips in America is way too expensive to be financially viable. However, analysts from TechInsights believe that this is not the case. According to the firm's recent study, the costs of wafers at TSMC's Fab 21 near Phoenix, Arizona, are only about 10% higher than those of similar wafers processed in Taiwan.
"It costs TSMC less than 10% more to process a 300mm wafer in Arizona than the same wafer made in Taiwan," wrote G. Dan Hutcheson from TechInsights.
While it certainly costs more to build a fab in the U.S. than in Taiwan, TSMC's cost was significantly higher because it built its first overseas fab in decades at a brand-new site with a new, sometimes unskilled workforce, according to Hutcheson. According to other people familiar with the fab-building process, it does not cost twice as much to build a fab in the U.S. as in Taiwan.
The dominant factor in semiconductor production cost is equipment, which accounts for well over two-thirds of overall wafer expenses. Tools made by leading companies like ASML, Applied Materials, KLA, Lam Research, and Tokyo Electron cost the same in Taiwan as in the U.S., effectively neutralizing location-based cost differences.
A major source of confusion about wafer prices is labor cost. Wages in the U.S. are roughly triple those in Taiwan, which many mistakenly assume is a significant factor in chip production costs. However, with the advanced automation of today's wafer fabrication facilities, labor accounts for less than 2% of the total cost, according to TechInsights' wafer cost model. Based on this model, the gap between the operating costs of a fab in Arizona and one in Taiwan is minimal, despite big differences in salaries and other local costs.
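A back-of-the-envelope calculation shows why tripling wages barely moves the total. The component shares below are illustrative round numbers consistent with the figures above, not TechInsights' actual model:

```python
# Toy wafer-cost model with illustrative (hypothetical) component shares.
# Equipment dominates and is priced the same worldwide; labor is a tiny slice.
taiwan_cost = {
    "equipment": 70.0,   # well over two-thirds of total cost
    "labor": 2.0,        # labor is under 2% of total cost
    "other": 28.0,       # materials, utilities, overhead, etc.
}

# Same fab in the U.S.: wages roughly triple, everything else held equal.
us_cost = dict(taiwan_cost)
us_cost["labor"] = taiwan_cost["labor"] * 3

taiwan_total = sum(taiwan_cost.values())   # 100.0
us_total = sum(us_cost.values())           # 104.0
premium = (us_total - taiwan_total) / taiwan_total

print(f"U.S. premium from labor alone: {premium:.1%}")
```

Even with tripled wages, labor alone adds only a few percent to the wafer cost, in line with the roughly 10% overall premium TechInsights reports.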
It should be noted that wafers TSMC currently produces at Fab 21 travel back to Taiwan to be diced, tested, and packaged. Some then go to China or elsewhere to be put into actual devices, while some travel back to the U.S. Their logistics are therefore somewhat more complicated than those of typical wafers processed in Taiwan, but this adds little to costs, and TSMC now plans to build packaging capacity in the U.S. Nonetheless, TSMC is rumored to charge a 30% premium for chips made in the U.S.
Arthur T Knackerbracket has processed the following story:
A four-day working week pilot programme is being squarely aimed at the UK tech sector with the final results to be assessed by academics.
The post-pandemic world of work has changed, with many employees demanding more flexibility in where and when they work, amid tension with corporations that would prefer to revert to more traditional arrangements.
With this in mind, consultancy 4 Day Week Foundation is urging tech businesses of all shapes and sizes to sign up to a six-month trial from June 30, starting with a six-week workshop and training that begins May 22.
"Nothing better represents the future of work than the tech sector which we know is an agile industry ripe for embracing new ways of working such as a four-day week," said Sam Hunt, business network coordinator at the consultant.
"As hundreds of British companies have already shown, a four-day, 32 hour working week with no loss of pay can be a win-win for workers and employers," he added. "The 9-5, 5 day working week was invented 100 years ago and no longer suits the realities of modern life."
The idea is simple: cram the normal working week into four days instead of five, with no loss of pay for the employee.
[...] Prior to the pandemic, Microsoft tested the four-day week at its offices in Japan, giving its entire local workforce Fridays off with no impact on pay. This initiative, Work-Life Choice Challenge Summer 2019, led to more efficient meetings, happier workers, and a reported 40 percent hike in productivity, according to Microsoft.
"Work a short time, rest well and learn a lot," Microsoft Japan president and CEO Takuya Hirano said at the time. "I want employees to think about and experience how they can achieve the same results with 20 percent less working time."
Overheads plunged too: electricity use in the office was down disproportionately, by 23 percent, and 59 percent fewer pages were printed. On top of that, 92 percent of staff said they enjoyed the shorter working week.
However, tycoons at the Redmond-based cloud and software biz have so far not replicated the initiative elsewhere. Microsoft does run a hybrid work policy, however, allowing staff to work remotely and from the office for a number of days a week.
Arthur T Knackerbracket has processed the following story:
A coalition of nine European Union countries, led by the Netherlands, has been formed to accelerate plans for a potential second funding package under the European Chips Act. This initiative aims to present proposals by summer, following the mixed results of the 2023 Chips Act, which, despite preventing a decline in Europe's industry, failed to meet its key objectives due to slow approval processes and less state support than that provided by the U.S. and China.
Dutch Economy Minister Dirk Beljaarts emphasized the need for a more targeted approach in the potential second funding program. "We need to allocate funds," Beljaarts told Reuters. "Both private and public funds to push the sector, also to make sure that the trickle-down effect takes place and that (small and medium-size) companies also benefit." This strategy aims to address gaps in areas such as chip packaging and advanced production, particularly after Intel shelved plans for a cutting-edge factory in Germany.
The coalition, which includes Austria, Belgium, Finland, France, Germany, Italy, Poland, Spain, and the Netherlands, is focused on three main priorities: enhancing production capabilities, mobilizing public and private investment, and fostering talent within the sector.
Europe boasts strong research and development capabilities, with companies like ASML leading the chipmaking-tools market. However, the region lags behind in advanced chip production, with only Intel utilizing cutting-edge technology in Ireland. The industry's stakeholders include major chip manufacturers like Bosch, Infineon, NXP, and STMicroelectronics, along with equipment suppliers ASML and ASM.
Following a meeting in Brussels, organizations such as ESIA and SEMI Europe are set to formally propose their needs to the European Commission's digital official, Henna Virkkunen. Their requests include direct support for semiconductor design, manufacturing, R&D, materials, and equipment.
The European Chips Act, launched in 2023, aimed to reduce Europe's dependence on foreign semiconductor supplies and bolster the region's technological sovereignty. However, it has faced challenges, including a scarcity of skilled workers and slow approval processes.
The Act has a total investment goal of €43 billion, with the Chips Joint Undertaking playing a pivotal role in bridging the gap between research and commercialization. Despite these efforts, critics argue that government intervention may not be the most effective strategy, as it can distort competition and favor inefficient producers.
https://techxplore.com/news/2025-03-harnessing-nature-fractals-flexible-electronics.html
By using leaf skeletons as templates, researchers harnessed nature's intrinsic hierarchical fractal structures to improve the performance of flexible electronic devices. Wearable sensors and electronic skins are examples of flexible electronics.
A research team at the University of Turku, Finland, has developed an innovative approach to replicating bioinspired microstructures found in plant leaf skeletons, eliminating the need for conventional cleanroom technologies. The work is published in the journal npj Flexible Electronics.
Fractal patterns are self-replicating structures in which the same shape repeats at increasingly smaller scales. They can be created mathematically and also occur in nature. For example, tree branches, leaf veins, vascular networks, and many floral patterns, such as cauliflower, follow a fractal structure.
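The self-similarity can be sketched with a hypothetical branching rule, loosely imitating a leaf vein: each segment spawns two smaller copies of itself, so the same shape repeats at ever-finer scales (the branching ratio and shrink factor here are illustrative, not values from the study):

```python
# Count segments in a self-similar binary "vein" pattern: each segment
# branches into two children at 60% of its length. The parameters are
# illustrative only, not taken from the paper.
def fractal_segments(length, min_length):
    """Total number of segments before branches shrink below min_length."""
    if length < min_length:
        return 0
    # one segment at this scale, plus two self-similar sub-patterns
    return 1 + 2 * fractal_segments(length * 0.6, min_length)

# Looking at ever-finer scales reveals ever more copies of the same shape:
for cutoff in (10.0, 5.0, 1.0):
    print(f"segments visible above scale {cutoff}: {fractal_segments(100.0, cutoff)}")
```

This explosion of structure at small scales is what maximizes surface area while the overall pattern stays sparse and flexible.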
Researchers created surfaces that mimic fractal patterns by utilizing dried tree leaf skeletons. Different manufacturing materials were sprayed onto the leaf skeletons, after which the new surfaces were separated from the leaf skeleton, and the researchers compared the structural properties and durability of the surfaces made from different materials.
This biomimetic surface, with more than 90% replication accuracy, is highly compatible with flexible electronic applications, offering enhanced stretchability, conformal attachment to skin, and superior breathability.
The advantages of surfaces based on fractal patterns are that their self-repeating hierarchical structures maximize the surface area while maintaining the surface's mechanical flexibility. These unique patterns enhance the surface's stretchability, and in electronic materials, the structure improves electrical conductivity, energy efficiency, energy dissipation, and charge transport.
These properties ensure durability and high performance under mechanical stress, making the surfaces ideal for next-generation flexible electronics, such as wearable sensors, transparent electrodes, and bioelectronic skin.
Compared to artificial fractals like kirigami or origami, leaf skeleton fractals offer naturally optimized, hierarchical, and scalable structures. They provide superior flexibility, breathability, and transparency while maintaining a high surface-area-to-volume ratio.
While leaf skeletons provide excellent fractal structures, they are not inherently stretchable, durable, or scalable, owing to their fixed dimensions and degradability. By replicating these patterns in stretchable, durable polymers, using the leaf skeletons as templates, the researchers created surfaces with enhanced flexibility and longevity, also making large-scale production feasible.
"We have succeeded in merging nature's efficient designs with modern materials, which opens new possibilities for flexible and wearable electronics," says Doctoral Researcher Amit Barua at the University of Turku.
To make these biomimetic surfaces conductive, researchers applied a simple layer of metal nanowires, achieving a surface resistivity of approximately 20 Ω. These conductive surfaces were then integrated into applications such as tactile sensing, heating, and electronic skin devices.
The Finnix project and DistroWatch marked the 25th anniversary of the Finnix live distro a few days ago:
From Finnix:
Today is a very special day: March 22 is the 25 year anniversary of the first public release of Finnix, the oldest live Linux distribution still in production. Finnix 0.03 was released on March 22, 2000, and to celebrate this anniversary, I'm proud to announce the 35th Finnix release, Finnix 250!
Besides the continuing trend of Finnix version number inflation (the previous release was Finnix 126), Finnix 250 is simply a solid regular release.
From DistroWatch:
The Finnix distribution is a small, self-contained, bootable live Linux distribution for system administrators, based on Debian. The project's latest version is Finnix 250 which marks the project's 25th anniversary.
Other live distros come and go, but Finnix is special: it ships with so many pre-installed system administration tools that it has been a go-to tool for system recovery and repair for two and a half decades.
Previously:
(2016) Refracta 8.0: Devuan on a Stick
(2015) Slackware Live Edition Beta Available
(2014) Snowden Used Special Linux Distro for Anonymity
Arthur T Knackerbracket has processed the following story:
Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.
Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.
On the way to overcoming these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.
They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.
The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.
They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.
“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.
The researchers previously developed a quantum computing module, which enabled them to send information-carrying microwave photons in either direction along a waveguide.
In the new work, they took that architecture a step further by connecting two modules to a waveguide in order to emit photons in a desired direction and then absorb them at the other end.
Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.
The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.
The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.
“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.
“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.
However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.
The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted; classically, one can think of half a photon being retained and half emitted. Once the receiver module absorbs that “half-photon,” the two modules become entangled. But as the photon travels, joints, wire bonds, and connections in the waveguide distort it and limit the absorption efficiency of the receiving module.
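As a toy illustration (a textbook idealization, not the authors' actual model), the "shared photon" can be written as an equal superposition of "photon kept by module A" and "photon absorbed by module B", which is a maximally entangled Bell-like state:

```python
import math

# Idealized two-module state after halting emission halfway:
# (|photon at A, vacuum at B> + |vacuum at A, photon at B>) / sqrt(2).
# This is a textbook sketch, not the experiment's exact state.
amp = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # amplitudes of the two branches
assert abs(sum(a * a for a in amp) - 1) < 1e-12  # the state is normalized

# Tracing out module B leaves module A in an even mixture whose
# entanglement entropy is exactly 1 bit: a maximally entangled pair.
probs = [a * a for a in amp]                 # reduced-state populations of A
entropy_bits = -sum(p * math.log2(p) for p in probs)
print(f"entanglement entropy = {entropy_bits:.3f} bits")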
To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.
“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.
They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency. When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent. This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.
“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says. In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.
“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.
https://phys.org/news/2025-03-decades-quest-antibiotic-compounds.html
A team of chemists, biologists and microbiologists led by researchers in Arts & Sciences at Washington University in St. Louis has found a way to tweak an antimalarial drug and turn it into a potent antibiotic, part of a project more than 20 years in the making. Importantly, the new antibiotic should be largely impervious to the tricks that bacteria have evolved to become resistant to other drugs.
"Antibiotic resistance is one of the biggest problems in medicine," said Timothy Wencewicz, an associate professor of chemistry in Arts & Sciences. "This is just one step on a long journey to a new drug, but we proved that our concept worked."
The findings are published in ACS Infectious Diseases. The lead author of the study, John Georgiades, AB '24, is now a graduate student at Princeton University who took over the project while he was an undergraduate in Wencewicz's lab. Other co-authors include Joseph Jez, the Spencer T. Olin Professor in Biology; Christina Stallings, a professor of molecular microbiology at the School of Medicine; and Bruce Hathaway, a professor emeritus at Southeast Missouri State University.
A new approach to antibiotics is sorely needed because many common drugs are losing their punch, Wencewicz said. He points to Bactrim, a combination of the drugs sulfamethoxazole and trimethoprim. Often prescribed to treat ear infections and urinary tract infections, Bactrim blocks a bacteria's ability to produce folate, an important nutrient for fast-growing germs.
"It's been prescribed so often that resistance is now very common," Wencewicz said. "For a long time, people have been thinking about what's going to replace Bactrim and where we go from here."
Instead of creating new antibiotics out of whole cloth, Georgiades, Wencewicz and their team used chemistry to tweak cycloguanil, an existing drug used to treat malaria. "It's a slick way to give new life to a drug that is already FDA-approved," Wencewicz said.
Like Bactrim, cycloguanil works by blocking the enzymes that organisms need to produce folate. It has saved millions of people from malaria over the decades, but it was useless against bacteria because it didn't have a way to penetrate the membrane that surrounds bacterial cells.
After many trials, researchers were able to attach various chemical keys to cycloguanil that opened the door to the bacterial membrane. Once the new compounds reached the inner workings of the cell, they staged a two-pronged attack on the enzymes that bacteria need to produce folate.
"Dual-action antibiotics tend to be much more effective than drugs that just take one approach," Wencewicz said. Bacteria may be able to evolve resistance to one part of the attack, but they won't easily find a way to stop both at once, he explained.
The new compound proved to be effective against a wide range of bacteria, including Escherichia coli and Staphylococcus aureus, two of the most common causes of bacterial infections. Unlike Bactrim and other existing drugs that target folate, some of the new compounds also showed power against Pseudomonas aeruginosa, a pathogen that often infects people with weakened immune systems.
More information: John D. Georgiades et al, Expanding the Landscape of Dual Action Antifolate Antibacterials through 2,4-Diamino-1,6-dihydro-1,3,5-triazines, ACS Infectious Diseases (2025). DOI: 10.1021/acsinfecdis.4c00768
https://spectrum.ieee.org/jumping-robot
When you see a squirrel jump to a branch, you might think (and I myself thought, up until just now) that they're doing what birds and primates would do to stick the landing: just grabbing the branch and hanging on. But it turns out that squirrels, being squirrels, don't actually have prehensile hands or feet, meaning that they can't grasp things with any significant amount of strength. Instead, they manage to land on branches using a "palmar" grasp, which isn't really a grasp at all, in the sense that there's not much grabbing going on. It's more accurate to say that the squirrel is mostly landing on its palms and then balancing, which is very impressive.
This kind of dynamic stability is a trait that squirrels share with one of our favorite robots: Salto. Salto is a jumper too, and it's about as non-prehensile as it's possible to get, having just one limb with basically no grip strength at all. The robot is great at bouncing around on the ground, but if it could move vertically, that's an entire new mobility dimension that could lead to some potentially interesting applications, including environmental scouting, search and rescue, and disaster relief.
In a paper published today in Science Robotics, roboticists have now taught Salto to leap from one branch to another like squirrels do, using a low torque gripper and relying on its balancing skills instead.
While we're going to be mostly talking about robots here (because that's what we do), there's an entire paper by many of the same robotics researchers that was published in late February in the Journal of Experimental Biology about how squirrels land on branches this way. While you'd think that the researchers might have found some domesticated squirrels for this, they actually spent about a month bribing wild squirrels on the UC Berkeley campus to bounce around some instrumented perches while high speed cameras were rolling.
Squirrels aim for perfectly balanced landings, which allow them to immediately jump again. They don't always get it quite right, of course, and they're excellent at recovering from branch landings where they go a little bit over or under where they want to be. The research showed how squirrels use their musculoskeletal system to adjust their body position, dynamically absorbing the impact of landing with their forelimbs and altering their mass distribution to turn near misses into successful perches.
It's these kinds of skills that Salto really needs to be able to usefully make jumps in the real world. When everything goes exactly the way it's supposed to, jumping and perching is easy, but that almost never happens and the squirrel research shows how important it is to be able to adapt when things go wonky. It's not like the little robot has a lot of degrees of freedom to work with—it's got just one leg, just one foot, a couple of thrusters, and that spinning component which, believe it or not, functions as a tail. And yet, Salto manages to (sometimes!) make it work.
Journal Reference: https://doi.org/10.1126/scirobotics.adq1949
US sperm donor giant California Cryobank is warning customers it suffered a data breach that exposed customers' personal information:
California Cryobank is a full-service sperm bank providing frozen donor sperm and specialized reproductive services, such as egg and embryo storage. The company is the largest sperm bank in the US and services all 50 states and more than 30 countries worldwide.
California Cryobank detected suspicious activity on its network on April 21, 2024, and isolated the computers from the IT network.
"Through our investigation, CCB determined that an unauthorized party gained access to our IT environment and may have accessed and/or acquired files maintained on certain computer systems between April 20, 2024 and April 22, 2024," reads a notice from California Cryobank.
[...] An almost year-long investigation determined that the attack exposed varying personal data for customers, including names, bank account and routing numbers, Social Security numbers, driver's license numbers, payment card numbers, and/or health insurance information.
https://newatlas.com/materials/carbon-negative-cement-sand-substitute-seawater-electricity-co2/
Concrete is the most widely used artificial material on the planet – which is a shame, because making it also happens to be one of the most polluting processes. Worse still, at a global scale it requires huge amounts of sand, which is getting harder (financially and environmentally) to mine from coasts, seafloors and riverbeds.
An unassuming new material from Northwestern could help solve both problems. Composed of calcium carbonate and magnesium hydroxide in different ratios, it's pretty simple to make – just take some seawater, zap it with electricity and bubble some CO2 through it.
The whole process is similar to how corals and mollusks build their shells, according to the team.
If you really want to get your thinking cap on, here's how they do it: two electrodes in the tank emit a low electrical current that splits the water molecules into hydrogen gas and hydroxide ions. When CO2 gas is added, the chemical composition of the water changes, increasing levels of bicarbonate ions. These hydroxide and bicarbonate ions react with other natural ions in seawater, producing solid minerals that gather at the electrodes.
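For a rough sense of the carbon math, standard molar masses (not figures from the study) show how much CO2 the calcium carbonate fraction of the material locks up:

```python
# Back-of-the-envelope CO2 storage in calcium carbonate (CaCO3).
# Molar masses in g/mol are standard chemistry values, not from the study.
M_CO2 = 44.01
M_CACO3 = 100.09   # Ca (40.08) + C (12.01) + 3 x O (48.00)

# Each mole of CaCO3 fixes one mole of CO2 (the carbonate group came
# from the bubbled gas), so the stored-CO2 mass fraction is a simple ratio.
co2_fraction = M_CO2 / M_CACO3
print(f"{co2_fraction:.1%} of CaCO3's mass is sequestered CO2")
print(f"~{co2_fraction * 1000:.0f} kg of CO2 per tonne of CaCO3")
```

So every tonne of precipitated calcium carbonate permanently stores roughly 440 kg of CO2, which is what makes the material a genuine carbon sink rather than just a sand substitute.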
The end result is a versatile white material that not only stores carbon, but can stand in for sand or gravel in cement, and also forms a foundational powder for other building materials like plaster and paint.
Intriguingly, the researchers found the material could be tuned by adjusting the flow rate, timing, and duration of the CO2 and seawater, as well as the voltage and current of the electricity. By tweaking the manufacturing process, they can produce versions of the material with different properties for different purposes.
"We showed that when we generate these materials, we can fully control their properties, such as the chemical composition, size, shape and porosity," said Alessandro Rotta Loria, lead author of the study. "That gives us some flexibility to develop materials suited to different applications."
This process is far greener than the usual method of making these building materials. Not only does it reduce the need to strip mine huge quantities of sand from the natural environment, but the only gaseous byproduct is hydrogen, which can itself be captured for use as clean fuel. The CO2 used to make the material could even come from emissions from regular cement production, in which case the process could make regular cement greener as a byproduct.
"We could create a circularity where we sequester CO2 right at the source," Rotta Loria said. "And, if the concrete and cement plants are located on shorelines, we could use the ocean right next to them to feed dedicated reactors where CO2 is transformed through clean electricity into materials that can be used for myriad applications in the construction industry. Then, those materials would truly become carbon sinks."
Journal Reference: https://doi.org/10.1002/adsu.202400943
Arthur T Knackerbracket has processed the following story:
While hunting in West Texas, a deer hunter spotted a strange object in a creek bed. Suspecting it might be a fossil, he took a photo and showed it to a ranch manager.
“I was skeptical,” O2 Ranch manager Will Juett said in a Sul Ross State University statement. “I figured it was likely just an old stump, but imagined how great it would be if he was right.”
The deer hunter was right, and the discovery was more than great, because it wasn’t just any fossil. An interdisciplinary team of researchers identified it as a mammoth tusk, an incredibly rare find for West Texas.
[...] “A local who subsequently wrote his PhD dissertation on it found one [a mammoth tusk] in Fort Stockton in the 1960s,” Schroeder said, adding that the specimen is currently the only mammoth tusk in Texas’ Trans-Pecos region to have been carbon-dated. “There was a big range of error [in carbon dating] back then. Now we can get it down to a narrower range within 500 years.”
While the statement doesn’t name a specific mammoth species, the tusk might have belonged to a Columbian mammoth, a distant cousin of the more familiar woolly mammoth. The shaggy elephantine animal could reach up to 13 feet in height (almost 4 meters) and weigh around 10 tons.
Columbian mammoths inhabited regions of North America, including modern-day Texas, before going extinct around 11,700 years ago along with many other Ice Age mammals. Though the reason behind the disappearance of the Ice Age’s iconic megafauna remains a hotly debated topic, scientists frequently cite climate change, and human hunting may have also played a role.
“Seeing that mammoth tusk just brings the ancient world to life,” Juett said. “Now, I can’t help but imagine that huge animal wandering around the hills on the O2 Ranch. My next thought is always about the people that faced those huge tusks with only a stone tool in their hand!”
Gaia runs faster on Ryzen AI PCs, using the XDNA NPU and RDNA iGPU:
Running large language models (LLMs) on PCs locally is becoming increasingly popular worldwide. In response, AMD is introducing its own LLM application, Gaia, an open-source project for running local LLMs on any Windows machine.
Gaia is designed to run various LLM models on Windows PCs, with further performance optimizations for machines equipped with Ryzen AI processors (including the Ryzen AI Max 395+). It uses the open-source Lemonade SDK from ONNX TurnkeyML for LLM inference. With Gaia, models can allegedly be adapted for different purposes, including summarization and complex reasoning tasks.
[...] AMD's new open-source project works by providing LLM-specific tasks through the Lemonade SDK and serving them across multiple runtimes. Lemonade allegedly "exposes an LLM web service that communicates with the GAIA application...via an OpenAI compatible Rest API." Gaia itself acts as an AI-powered agent that retrieves and processes data. It also "vectorizes external content (e.g., GitHub, YouTube, text files) and stores it in a local vector index."
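As an illustration of what "OpenAI compatible" means in practice, the sketch below builds the kind of chat-completion request such a web service accepts. The endpoint URL and model name are hypothetical placeholders, not documented Lemonade or Gaia values:

```python
import json

# Build an OpenAI-style chat-completion request for a local LLM service.
# NOTE: the port, path, and model name are illustrative placeholders,
# not documented Lemonade/Gaia values.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="local-llm"):
    """Return the JSON body an OpenAI-compatible server expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("Summarize this repository's README.")
print(json.dumps(body, indent=2))
# A client would POST this body to LOCAL_ENDPOINT and read the reply's
# choices[0].message.content field, exactly as with OpenAI's hosted API.
```

Because the wire format matches OpenAI's, existing client libraries and tools can point at the local service without code changes.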
Also at Phoronix.
AMD Press Release and GitHub Repository.
Italy is using its Piracy Shield law to go after Google, with a court ordering the Internet giant to immediately begin poisoning its public DNS servers. This is just the latest phase of a campaign that has also targeted Italian ISPs and other international firms like Cloudflare. The effort is aimed at preventing illegal football streams, but it has already caused collateral damage. Regardless, Italy's communication regulator praises the ruling and hopes to continue sticking it to international tech firms.
The Court of Milan issued this ruling in response to a complaint that Google failed to block pirate websites after they were identified by the national communication regulator, known as AGCOM. The court found that the sites in question were involved in the illegal streaming of Serie A football matches, which has been a focus of anti-piracy crusaders in Italy for years. Since Google offers a public DNS service, it is subject to the site-blocking law.
Piracy Shield is often labeled as draconian by opponents because blocking content via DNS is messy. It blocks the entire domain, which causes problems when pirates use popular platforms to distribute copyrighted content. Just last year, Italian ISPs briefly blocked the entire Google Drive domain because someone, somewhere used it to share copyrighted material. This is often called DNS poisoning or spoofing in the context of online attacks, and the outcome is the same when it is done under legal authority: a DNS record is altered so that typing a domain name no longer routes the user to the correct IP address.
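The all-or-nothing nature of DNS blocking can be seen in a toy model. The sketch below is a deliberately simplified stand-in (all domain names and addresses are made up); a real resolver queries upstream servers rather than a dict, but the per-domain granularity of the block is the same.

```python
# Toy model of DNS-level blocking. A real resolver would query upstream
# nameservers; here a dict of fake records stands in.

UPSTREAM = {
    "example-football-stream.net": "203.0.113.10",  # fictional pirate site
    "drive.google.com": "198.51.100.20",            # fictional address
}

BLOCKLIST = {"example-football-stream.net"}
SINKHOLE = "0.0.0.0"  # blocked names resolve here instead of the real address

def resolve(domain):
    """Return the sinkhole address for blocked domains, the real record otherwise."""
    if domain in BLOCKLIST:
        return SINKHOLE
    return UPSTREAM.get(domain)
```

Note that the block operates on whole domains: adding "drive.google.com" to the blocklist would cut off every Drive user, not just the one sharing pirated files, which is exactly the kind of collateral damage Italian ISPs caused last year.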
Cybercriminals are abusing Microsoft's Trusted Signing platform to code-sign malware executables with short-lived three-day certificates.
Threat actors have long sought code-signing certificates, as they can be used to sign malware so it appears to come from a legitimate company.
Signed malware also has the advantage of potentially bypassing security filters that would normally block unsigned executables, or at least treat them with less suspicion.
The holy grail for threat actors is to obtain Extended Validation (EV) code-signing certificates, as they automatically gain increased trust from many cybersecurity programs due to the more rigorous verification process. Even more important, EV certificates are believed to gain a reputation boost in SmartScreen, helping to bypass alerts that would normally be displayed for unknown files.
However, EV code-signing certificates can be difficult to obtain: threat actors must either steal them from other companies or set up fake businesses and spend thousands of dollars to purchase one. Furthermore, once a certificate is used in a malware campaign, it is usually revoked, making it unusable for future attacks.
Recently, cybersecurity researchers have seen threat actors utilizing the Microsoft Trusted Signing service to sign their malware with short-lived, three-day code-signing certificates.
[...] The Microsoft Trusted Signing service launched in 2024 and is a cloud-based service that allows developers to easily have their programs signed by Microsoft.
[...] "The service supports both public and private trust signing scenarios and includes a timestamping service."
[...] This increased security is accomplished by using short-lived certificates that can easily be revoked in the event of abuse and by never issuing the certificates directly to the developers, preventing them from being stolen in the event of a breach.
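The interplay between a three-day certificate window, the timestamping service, and revocation can be sketched as follows. This is a simplified model of the trust logic, not Microsoft's actual implementation, and all dates are hypothetical: a timestamped signature is honored after the certificate expires, as long as it was made inside the validity window and before any revocation date.

```python
from datetime import datetime, timedelta

# Hypothetical three-day certificate validity window, as described for
# Trusted Signing's short-lived certificates. Dates are made up.
NOT_BEFORE = datetime(2025, 3, 1)
NOT_AFTER = NOT_BEFORE + timedelta(days=3)

def signature_trusted(signing_time, revoked_at=None):
    """A signature is honored if it was timestamped while the certificate was
    valid and before any revocation date -- even if the cert has since expired."""
    if not (NOT_BEFORE <= signing_time <= NOT_AFTER):
        return False  # signed outside the three-day window
    if revoked_at is not None and signing_time >= revoked_at:
        return False  # signed at or after the revocation point
    return True

# Signed (and timestamped) on day two: still trusted after the cert expires.
ok = signature_trusted(datetime(2025, 3, 2))

# Revoked back to the window's start: the same signature is no longer trusted.
bad = signature_trusted(datetime(2025, 3, 2), revoked_at=datetime(2025, 3, 1))
```

This is what makes the short-lived design attractive for abuse response: revoking one certificate, or backdating the revocation, invalidates the signed malware without affecting any other developer's certificates.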
[...] "A Trusted Signing signature ensures that your application is trusted by providing base reputation on smart screen, user mode trust on Windows, and integrity check signature validation compliant," reads an FAQ on the Trusted Signing site.
To protect against abuse, Microsoft currently only allows certificates to be issued under a company name if the company has been in business for three years.
However, individuals can sign up and get approved more easily if they are okay with the certificates being issued under their name.
A cybersecurity researcher and developer known as 'Squiblydoo,' who has been tracking malware campaigns abusing certificates for years, told BleepingComputer that they believe threat actors are switching to Microsoft's service out of convenience.
"I think there are a few reasons for the change. For a long time, using EV certificates has been the standard, but Microsoft has announced changes to EV certificates," Squiblydoo told BleepingComputer.
"However, the changes to EV certificates really aren't clear to anyone: not certificate providers, not attackers. However, due to these potential changes and lack of clarity, just having a code-signing certificate may be adequate for attacker needs."
"In this regard, the verification process for Microsoft's certificates is substantially easier than the verification process for EV certificates: due to the ambiguity over EV certificates, it makes sense to use the Microsoft certificates."
Arthur T Knackerbracket has processed the following story:
Over the past couple of weeks, I’ve been following news of the deaths of actor Gene Hackman and his wife, pianist Betsy Arakawa. It was heartbreaking to hear how Arakawa appeared to have died from a rare infection days before her husband, who had advanced Alzheimer’s disease and may have struggled to understand what had happened.
But as I watched the medical examiner reveal details of the couple's health, I couldn't help feeling a little uncomfortable. Media reports claim that the couple liked their privacy and had been out of the spotlight for decades. But here I was, on the other side of the Atlantic Ocean, being told what pills Arakawa had in her medicine cabinet, and that Hackman had undergone multiple surgeries.
It made me wonder: Should autopsy reports be kept private? A person’s cause of death is public information. But what about other intimate health details that might be revealed in a postmortem examination?
[...] The goal of an autopsy is to discover the cause of a person's death. Autopsy reports, especially those resulting from detailed investigations, often reveal health conditions—conditions that might have been kept private while the person was alive. There are multiple federal and state laws designed to protect individuals' health information. For example, the Health Insurance Portability and Accountability Act (HIPAA) protects "individually identifiable health information" up to 50 years after a person's death. But some things change when a person dies.
For a start, the cause of death will end up on the death certificate. That is public information. The public nature of causes of death is taken for granted these days, says Lauren Solberg, a bioethicist at the University of Florida College of Medicine. It has become a public health statistic. She and her student Brooke Ortiz, who have been researching this topic, are more concerned about other aspects of autopsy results.
The thing is, autopsies can sometimes reveal more than what a person died from. They can also pick up what are known as incidental findings. An examiner might find that a person who died following a covid-19 infection also had another condition. Perhaps that condition was undiagnosed. Maybe it was asymptomatic. That finding wouldn't appear on a death certificate. So who should have access to it?
The laws over who should have access to a person’s autopsy report vary by state, and even between counties within a state. Clinical autopsy results will always be made available to family members, but local laws dictate which family members have access, says Ortiz.
Genetic testing further complicates things. Sometimes the people performing autopsies will run genetic tests to help confirm the cause of death. These tests might reveal what the person died from. But they might also flag genetic factors unrelated to the cause of death that might increase the risk of other diseases.
In those cases, the person’s family members might stand to benefit from accessing that information. “My health information is my health information—until it comes to my genetic health information,” says Solberg. Genes are shared by relatives. Should they have the opportunity to learn about potential risks to their own health?
This is where things get really complicated. Ethically speaking, we should consider the wishes of the deceased. Would that person have wanted to share this information with relatives?
It’s also worth bearing in mind that a genetic risk factor is often just that: a risk. There is frequently no way to know whether a person will develop a disease, or how severe the symptoms would be. And if the genetic risk is for a disease that has no treatment or cure, will telling the person’s relatives just cause them a lot of stress?
[...] Ideally, both medical teams and family members should know ahead of time what a person would have wanted—whether that's an autopsy, genetic testing, or health privacy. Advance directives allow people to clarify their wishes for end-of-life care. But only around a third of people in the US have completed one. And they tend to focus on care before death, not after.
Solberg and Ortiz think they should be expanded. An advance directive could specify how people want to share their health information after they’ve died. “Talking about death is difficult,” says Solberg. “For physicians, for patients, for families—it can be uncomfortable.” But it is important.