
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop.



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

What technological advancement do you look forward to the most?

  • Supercapacity batteries
  • Holographic displays
  • Routine space travel
  • Quantum computers
  • Curing/Preventing disease
  • Time travel
  • Flying cars
  • Other (please specify in the comments)

[ Results | Polls ]
Comments:89 | Votes:138

posted by janrinok on Thursday March 27, @10:28PM   Printer-friendly
from the avoiding-the-normalization-of-crap dept.

Software engineer Alex Gaynor has published an analysis of Postel's Law, including a discussion of its shortcomings. Postel's Law, also known as the Robustness Principle, states: "Be conservative in what you send, be liberal in what you accept."

This is a key observation: if everyone followed Postel’s Law, there would be no need for anyone to be liberal in what they accept, because everyone would be conservative in what they produce. But, because people are in fact not conservative in what they produce, consumers must be liberal in what they accept. In practice, this means there are asymmetric obligations: because we know that producers will not follow Postel’s Law, consumers must follow it. Ecosystems that adhere to Postel’s Law therefore experience a one way ratchet: consumers must accept more and more deviations from the specifications, and because consumers accept the deviations, producers are never forced (or incentivized) to themselves become stricter in following the specifications. Over time, deviance normalizes.

The conclusion is that, in practice, accepting garbage leads to a race to the bottom.
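The ratchet is easy to see in miniature. Below is a hypothetical illustration (not from Gaynor's post): a strict consumer rejects malformed `key=value` lines, while a liberal consumer silently repairs them, so producers emitting sloppy output never feel any pressure to clean it up.

```python
# Hypothetical illustration of Postel's "one-way ratchet": a strict
# parser rejects deviations from the key=value format, while a liberal
# parser tolerates them -- and once consumers tolerate them, producers
# have no incentive to become stricter.

def parse_strict(line: str) -> tuple[str, str]:
    """Accept only 'key=value': one '=', no surrounding whitespace."""
    if line.count("=") != 1 or line != line.strip():
        raise ValueError(f"malformed line: {line!r}")
    key, value = line.split("=")
    if not key or not value:
        raise ValueError(f"malformed line: {line!r}")
    return key, value

def parse_liberal(line: str) -> tuple[str, str]:
    """Tolerate stray whitespace, missing values, extra '=' signs."""
    key, _, value = line.strip().partition("=")
    return key.strip(), value.strip()

def accepted(parser, lines):
    """Return the lines a given parser is willing to accept."""
    out = []
    for line in lines:
        try:
            out.append(parser(line))
        except ValueError:
            pass
    return out

messages = ["host=example.org", "  port = 8080 ", "debug="]

# The strict consumer accepts 1 of 3 lines, forcing producers to fix
# their output; the liberal consumer accepts all 3, so the deviant
# producers never have to change.
print(len(accepted(parse_strict, messages)))   # 1
print(len(accepted(parse_liberal, messages)))  # 3
```

Once enough consumers behave like `parse_liberal`, a producer that emits `"  port = 8080 "` works everywhere, and any new consumer that tries to be strict appears broken, which is exactly the normalization of deviance the article describes.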


Original Submission

posted by janrinok on Thursday March 27, @05:46PM   Printer-friendly

Superintelligence Strategy: Expert Version

Superintelligence Strategy: Expert Version:

Title: Superintelligence Strategy: Expert Version, by Dan Hendrycks, Eric Schmidt, and Alexandr Wang

Abstract: Rapid advances in AI are beginning to reshape national security. Destabilizing AI developments could rupture the balance of power and raise the odds of great-power conflict, while widespread proliferation of capable AI hackers and virologists would lower barriers for rogue actors to cause catastrophe. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project -- through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters -- MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can engage in nonproliferation to rogue actors to keep weaponizable AI capabilities out of their hands. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.

Journal Reference:
Hendrycks, Dan, Schmidt, Eric, Wang, Alexandr. Superintelligence Strategy: Expert Version, (DOI: 10.48550/arXiv.2503.05628)


Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM)

Eric Schmidt Suggests Countries Could Engage in Mutual Assured AI Malfunction (MAIM):

Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors on a new paper called "Superintelligence Strategy" that warns against the U.S. government creating a Manhattan Project for so-called Artificial General Intelligence (AGI) because it could quickly get out of control around the world. The gist of the argument is that the creation of such a program would lead to retaliation or sabotage by adversaries as countries race to have the most powerful AI capabilities on the battlefield. Instead, the U.S. should focus on developing methods like cyberattacks that could disable threatening AI projects.

Schmidt and Wang are big boosters of AI's potential to advance society through applications like drug development and workplace efficiency. Governments, meanwhile, see it as the next frontier in defense, and the two industry leaders are essentially concerned that countries are going to end up in a race to create weapons with increasingly dangerous potential. Similar to how international agreements have reined in the development of nuclear weapons, Schmidt and Wang believe nation states should go slow on AI development and not fall prey to racing one another in AI-powered killing machines.

At the same time, however, both Schmidt and Wang are building AI products for the defense sector. The former's White Stork is building autonomous drone technologies, while Wang's Scale AI this week signed a contract with the Department of Defense to create AI "agents" that can assist with military planning and operations. After years of shying away from selling technology that could be used in warfare, Silicon Valley is now patriotically lining up to collect lucrative defense contracts.

All military defense contractors have a conflict of interest to promote kinetic warfare, even when not morally justified. Other countries have their own military industrial complexes, the thinking goes, so the U.S. needs to maintain one too. But in the end, innocent people suffer and die while powerful people play chess.

Palmer Luckey, the founder of defense tech darling Anduril, has argued that AI-powered targeted drone strikes are safer than launching nukes that could have a larger impact zone or planting land mines that have no targeting. And if other countries are going to continue building AI weapons, the thinking goes, we should have the same capabilities as a deterrent. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment over enemy lines.

Anduril recently ran an ad campaign that displayed the basic text "Work at Anduril.com" covered with the word "Don't" written in giant, graffiti-style spray-painted letters, seemingly playing to the idea that working for the military industrial complex is the counterculture now.

Schmidt and Wang have argued that humans should always remain in the loop on any AI-assisted decision making. But as recent reporting has demonstrated, the Israeli military is already relying on faulty AI programs to make lethal decisions. Drones have long been a divisive topic, as critics say that soldiers are more complacent when they are not directly in the line of fire or do not see the consequences of their actions first-hand. Image recognition AI is notorious for making mistakes, and we are quickly heading to a point where killer drones will fly back and forth hitting imprecise targets.

The Schmidt and Wang paper makes a lot of assumptions that AI is soon going to be "superintelligent," capable of performing as well as, if not better than, humans in most tasks. That is a big assumption, as the most cutting-edge "thinking" models continue to produce major gaffes, and companies get flooded with poorly written job applications assisted by AI. These models are crude imitations of humans with often unpredictable and strange behavior.

Schmidt and Wang are selling a vision of the world and their solutions. If AI is going to be all-powerful and dangerous, governments should go to them and buy their products because they are the responsible actors. In the same vein, OpenAI's Sam Altman has been criticized for making lofty claims about the risks of AI, which some say is an attempt to influence policy in Washington and capture power. It is sort of like saying, "AI is so powerful it can destroy the world, but we have a safe version we are happy to sell you."

Schmidt's warnings are not likely to have much impact as President Trump drops Biden-era guidelines around AI safety and pushes the U.S. to become a dominant force in AI. Last November, a Congressional commission proposed the kind of Manhattan Project for AI that Schmidt is warning about, and as people like Sam Altman and Elon Musk gain greater influence in Washington, it's easy to see it gaining traction. If that continues, the paper warns, countries like China might retaliate in ways such as intentionally degrading models or attacking physical infrastructure. It is not an unheard-of threat, as China has wormed its way into major U.S. tech companies like Microsoft, and others like Russia are reportedly using freighter ships to strike undersea fiber optic cables. Of course, we would do the same to them. It's all mutual.

It is unclear how the world could come to any agreement to stop playing with these weapons. In that sense, the idea of sabotaging AI projects to defend against them might be a good thing.


Original Submission #1 | Original Submission #2

posted by janrinok on Thursday March 27, @01:02PM   Printer-friendly

Sometimes at work, it's not just a case of the Mondays. The level of dissatisfaction employees have with their job can last beyond the start of the week. New University of Georgia research has found that employers and policymakers might want to start paying attention because employee happiness contains critical economic information.

Susana Ferreira, professor of agricultural and applied economics in the UGA College of Agricultural and Environmental Sciences, used an empirical model to relate job satisfaction, wages and work environment.

Traditionally, you'd hope that workers are paid fairly for their working conditions, a premise that follows a hedonic wage model. That positive outlook relies on perfect job and labor market conditions and assumes workers are rational, fully informed of workplace conditions and can switch jobs freely.

However, this study used overall gratification to understand employees and uncover the tradeoffs between working conditions and pay — even in circumstances when job markets are rigid, and workers might feel "stuck" at their jobs.

[...] Ferreira says this study shows that workplace satisfaction is much more crucial than some employers give credit to. Higher pay and a safer work environment can have an immense impact on worker contentment. And happier workers can mean plenty of good things for the business itself.

[Source]: University of Georgia

Do we really need such a study? Is the conclusion not obvious to business leaders?


Original Submission

posted by hubie on Thursday March 27, @08:14AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Scientists scrutinizing the seafloor beneath a calving iceberg found a remarkable array of living creatures, switching up notions of how the giant chunks of ice affect their immediate environs.

The scientists investigated a region of seafloor recently exposed by the calving of a gigantic iceberg—A-84—which is as large as Chicago. The team found a surprisingly vibrant community of critters on the seafloor below where A-84 was once attached to an Antarctic ice shelf.

“We didn’t expect to find such a beautiful, thriving ecosystem,” said Patricia Esquete, the expedition’s co-chief scientist and a researcher at the University of Aveiro in Portugal, in a British Antarctic Survey release. “Based on the size of the animals, the communities we observed have been there for decades, maybe even hundreds of years.”

Without the 197-square-mile (510-square-kilometer) iceberg in the way, the team was able to scrutinize the seafloor at depths of 4,265 feet (1,300 meters) using the remotely operated vehicle (ROV) SuBastian. The team found large corals and sponges supporting other lifeforms, including icefish, giant sea spiders, and octopus.

[...] With ice shelves covering the seafloor, organisms below the shelf cannot get nutrients for survival from the surface. The team hypothesized that ocean currents are a critical driver for life beneath the ice sheets. The team also collected data on the larger ice sheet, whose shrinking size spells concern for the animals that live beneath it.

“The ice loss from the Antarctic Ice Sheet is a major contributor to sea level rise worldwide,” said the expedition’s other co-chief scientist, Sasha Montelli, a researcher at University College London, in the same release. “Our work is critical for providing longer-term context of these recent changes, improving our ability to make projections of future change — projections that can inform actionable policies. We will undoubtedly make new discoveries as we continue to analyze this vital data.”


Original Submission

posted by hubie on Thursday March 27, @03:26AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

EU OS is a proposal for an immutable KDE-based Linux distribution with a Windows-like desktop, designed for use in European public-sector organizations.

Rather than a new distro, it is currently a website that documents the planning of such a thing: what functions the OS might need, how to deploy and manage it, and how to handle users. Its aims are relatively modest, saying:

In the scope is everything that is necessary to deploy a Linux-based operating system to an average public body with few hundreds of users.

The proposed base OS – Fedora – is what gave us pause, though. In these times of heightened tensions between the US and – well, frankly, everyone, including large parts of the US itself – why pick the Red Hat-backed Fedora, an American distro, rather than one of European origin such as openSUSE? To be fair, the immutable Fedora KDE version, Kinoite, is among the most mature immutable distros out there. The Register first looked at it over four years ago now.

The project is the brainchild of Dr Robert Riemann, whose day job is at the European Data Protection Supervisor (EDPS), which has been around for a while. He seems to know his stuff. We're rather impressed by the level of detail of the website, considering that it's only just launched. It discusses project goals, some use cases, and an outline of functional requirements.

Significantly, it also addresses some previous efforts at doing similar things. The Register has looked at some of the ones it mentions over the years, including Munich's long-running LiMux project, from the early days of 2004 to its replacement in 2017. Our coverage of this also mentioned the French Gendarmerie's GendBuntu, as well as the Linux Plus 1 project in Schleswig-Holstein. We gather that Astra Linux is doing well in Russia, too.

If it were us, we would have made some significantly different choices. We feel that KDE Plasma is overly complicated for a desktop environment that would need to be strictly locked down. Immutable Fedora is quite mature, but European alternatives do exist, notably the openSUSE-based Kalpa Desktop.

More importantly, the concept of the rich local desktop OS is getting old and stale in this era of ransomware attacks. We feel that the FOSS world needs to build its own equivalent of ChromeOS – a simple, stripped-down stateless client desktop, with at least dual failover local partitions, which can talk over open protocols to sovereign cloud servers that organizations can host themselves. All the tools are there; it just needs someone to put the pieces together.

However, that is a whole other argument. The EU OS project is hosted on GitLab, and from the source code we can see that it started on Christmas Day. For an effort that's only been in development for a quarter of a year, it's plain that a lot of thought has gone into it. We really hope this grows into a significant and influential effort. ®

Before anyone writes in, yes, we are well aware that ChromiumOS exists, and it is open source. However, it's designed and built to authenticate and synchronize only to Google's cloud. What we would like to see is something that could not only authenticate against open standards such as LDAP or OpenID, but also sync files over WebDAV or the like, as well as bookmarks, passwords, profile settings, and so on. At least for now, ChromiumOS doesn't qualify – and neither do ChromeOS Flex or FydeOS.


Original Submission

posted by hubie on Thursday March 27, @01:02AM   Printer-friendly

The search for missing plane MH370 is back on. An underwater robotics expert explains what's involved:

More than 11 years after the disappearance of Malaysia Airlines flight MH370, the Malaysian government has approved a new search for the missing debris of the aircraft.

Malaysia announced the push for a renewed search last year, ten years after the tragedy that claimed the lives of 239 people.

Seabed exploration firm Ocean Infinity, which conducted an unsuccessful search in 2018, prepared a new proposal to which Malaysia's government agreed in principle in December last year.

Now, the company has returned to the southern Indian Ocean 1,500 kilometres west of Perth – with a suite of new high-tech tools.

[...] The new search area for MH370 is roughly the size of metropolitan Sydney. It was identified in collaboration with experts based on refined analysis of information received after the aircraft disappeared. This information included weather, satellite data and the location of debris attributed to the aircraft which washed up along the coast of Africa and islands in the Indian Ocean.

For this search, Ocean Infinity will be using a new 78 metre offshore support vessel, the Armada 7806. It was built by Norwegian shipbuilder Vard in 2023.

The Armada 7806 is equipped with a fleet of autonomous underwater vehicles manufactured by the Norwegian firm Kongsberg.

These 6.2m long vehicles are capable of operating independently of the support vessel at depths of up to 6,000m for up to 100 hours at a time. They are equipped with advanced sonar technology, including sidescan, synthetic aperture, multibeam and sub-bottom profiling sonar.

[...] Since its previous search in 2018, Ocean Infinity has made significant advancements in its marine robotics and data analytics capabilities. It has demonstrated its capacity to simultaneously deploy multiple vehicles at depths of up to 6,000m.

[...] Conditions in the search region are expected to be difficult. Weather on the surface will likely provide challenges for the support vessel and the crew. Underwater vehicles will have to contend with complex conditions on the seafloor, including steep slopes and rough terrain.

The operation is expected to take up to 18 months. Weather conditions are most likely to be favourable between January and April.

If Ocean Infinity succeeds in finding the wreckage of MH370, the Malaysian government will pay it US$70 million.

The next steps would be trying to retrieve the plane's black boxes, which would enable investigators to piece together what happened in the final moments before the plane plunged into the ocean. The Armada 7806 is likely to have remotely operated vehicles onboard equipped with cameras and manipulator systems, which may be used to verify the wreck site and in any future salvage operations.

If Ocean Infinity fails, it will receive no payment. And the investigation into the location of the plane will essentially be back to square one.


Original Submission

posted by hubie on Wednesday March 26, @08:20PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

It is equipment, not labor, that defines chipmaking costs.

Comments made by TSMC founder Morris Chang about high fab building costs in Arizona and higher operating costs in the U.S. created the impression that producing chips in America is way too expensive to be financially viable. However, analysts from TechInsights believe that this is not the case. According to the firm's recent study, the costs of wafers at TSMC's Fab 21 near Phoenix, Arizona, are only about 10% higher than those of similar wafers processed in Taiwan. 

"It costs TSMC less than 10% more to process a 300mm wafer in Arizona than the same wafer made in Taiwan," wrote G. Dan Hutcheson from TechInsights. 

While it definitely costs more to build a fab in the U.S. than in Taiwan, TSMC's costs were significantly higher because it was building its first overseas fab in decades, at a brand-new site, with a new and sometimes unskilled workforce, according to Hutcheson. According to other people familiar with the fab-building process, it does not cost twice as much to build a fab in the U.S. as in Taiwan.

The dominant factor in semiconductor production cost is equipment, which contributes well over two-thirds of overall wafer expenses. Tools made by leading companies like ASML, Applied Materials, KLA, Lam Research, and Tokyo Electron cost the same in Taiwan as in the U.S., which effectively neutralizes location-based cost differences.

A major source of confusion about wafer prices comes from labor costs. Wages in the U.S. are roughly triple those in Taiwan, which many mistakenly take as a significant factor in chip production. However, with the advanced automation of today's wafer fabrication facilities, labor accounts for less than 2% of the total cost, according to TechInsights's wafer cost model. Based on this model, the overall expense gap between operating costs of a fab in Arizona and Taiwan is minimal despite big differences in salaries and other local costs. 
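A back-of-the-envelope calculation shows why tripled wages barely move the total. The numbers below are illustrative round figures consistent with the shares quoted above, not TechInsights' proprietary model:

```python
# Illustrative only: assume labor is 2% of wafer cost in Taiwan and
# U.S. wages are 3x Taiwanese wages, with everything else held equal.
taiwan_cost = 100.0          # normalized wafer cost in Taiwan
labor_share = 0.02           # labor's share of that cost (<2% per TechInsights)
us_wage_multiplier = 3.0     # U.S. wages roughly triple Taiwan's

labor_tw = taiwan_cost * labor_share           # 2.0
labor_us = labor_tw * us_wage_multiplier       # 6.0
us_cost = taiwan_cost - labor_tw + labor_us    # 104.0

premium = (us_cost - taiwan_cost) / taiwan_cost
print(f"U.S. wafer premium from labor alone: {premium:.1%}")  # 4.0%
```

Even with wages tripled, labor alone adds only about four points to the wafer cost; the remaining slice of the roughly 10% gap TechInsights reports comes from other local operating costs, not headcount.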

It should be noted that the wafers TSMC currently produces at Fab 21 travel back to Taiwan to be diced, tested, and packaged. Some then go to China or elsewhere to be put into actual devices; some travel back to the U.S., though. Therefore, their logistics are somewhat more complicated than those of typical wafers processed in Taiwan. However, this hardly adds dramatically to costs, and TSMC now plans to build packaging capacity in the U.S. Nonetheless, TSMC is rumored to charge a 30% premium for chips made in the U.S.


Original Submission

posted by hubie on Wednesday March 26, @03:35PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A four-day working week pilot programme is being squarely aimed at the UK tech sector with the final results to be assessed by academics.

The post-pandemic world of work has changed, with many employees demanding more flexibility over where they work and the hours they put in, amid tension with corporations that would prefer to revert to more traditional styles.

With this in mind, consultancy 4 Day Week Foundation is urging tech businesses of all shapes and sizes to sign up to a six-month trial from June 30, starting with a six-week workshop and training that begins May 22.

"Nothing better represents the future of work than the tech sector which we know is an agile industry ripe for embracing new ways of working such as a four-day week," said Sam Hunt, business network coordinator at the consultant.

"As hundreds of British companies have already shown, a four-day, 32 hour working week with no loss of pay can be a win-win for workers and employers," he added. "The 9-5, 5 day working week was invented 100 years ago and no longer suits the realities of modern life."

The idea is simple: cram the normal working week into four days instead of five, with no loss of pay for the employee.

[...] Prior to the pandemic, Microsoft tested out the four-day week at its offices in Japan, giving its entire local workforce Fridays off without impact to pay. This initiative, Work-Life Choice Challenge Summer 2019, led to more efficient meetings, happier workers, and a reported 40 percent hike in productivity, according to Microsoft.

"Work a short time, rest well and learn a lot," Microsoft Japan president and CEO Takuya Hirano said at the time. "I want employees to think about and experience how they can achieve the same results with 20 percent less working time."

Overheads plunged too: electricity use in the office fell disproportionately, by 23 percent, and 59 percent fewer pages were printed. This was in addition to 92 percent of staff saying they enjoyed a shorter working week.

However, tycoons at the Redmond-based cloud and software biz have so far not replicated the initiative elsewhere. Microsoft does run a hybrid work policy, however, allowing staff to work remotely and from the office for a number of days a week.


Original Submission

posted by janrinok on Wednesday March 26, @10:53AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A coalition of nine European Union countries, led by the Netherlands, has been formed to accelerate plans for a potential second funding package under the European Chips Act. This initiative aims to present proposals by summer, following the mixed results of the 2023 Chips Act, which, despite preventing a decline in Europe's industry, failed to meet its key objectives due to slow approval processes and less state support than that provided by the U.S. and China.

Dutch Economy Minister Dirk Beljaarts emphasized the need for a more targeted approach in the potential second funding program. "We need to allocate funds," Beljaarts told Reuters. "Both private and public funds to push the sector, also to make sure that the trickle-down effect takes place and that (small and medium-size) companies also benefit." This strategy aims to address gaps in areas such as chip packaging and advanced production, particularly after Intel shelved plans for a cutting-edge factory in Germany.

The coalition, which includes Austria, Belgium, Finland, France, Germany, Italy, Poland, Spain, and the Netherlands, is focused on three main priorities: enhancing production capabilities, mobilizing public and private investment, and fostering talent within the sector.

Europe boasts strong research and development capabilities, with companies like ASML leading the chipmaking-tools market. However, the region lags behind in advanced chip production, with only Intel utilizing cutting-edge technology in Ireland. The industry's stakeholders include major chip manufacturers like Bosch, Infineon, NXP, and STMicroelectronics, along with equipment suppliers ASML and ASM.

Following a meeting in Brussels, organizations such as ESIA and SEMI Europe are set to formally propose their needs to the European Commission's digital official, Henna Virkkunen. Their requests include direct support for semiconductor design, manufacturing, R&D, materials, and equipment.

The European Chips Act, launched in 2023, aimed to reduce Europe's dependence on foreign semiconductor supplies and bolster the region's technological sovereignty. However, it has faced challenges, including a scarcity of skilled workers and slow approval processes.

The Act has a total investment goal of €43 billion, with the Chips Joint Undertaking playing a pivotal role in bridging the gap between research and commercialization. Despite these efforts, critics argue that government intervention may not be the most effective strategy, as it can distort competition and favor inefficient producers.


Original Submission

posted by janrinok on Wednesday March 26, @06:10AM   Printer-friendly

https://techxplore.com/news/2025-03-harnessing-nature-fractals-flexible-electronics.html

By using leaf skeletons as templates, researchers harnessed nature's intrinsic hierarchical fractal structures to improve the performance of flexible electronic devices. Wearable sensors and electronic skins are examples of flexible electronics.

A research team at the University of Turku, Finland, has developed an innovative approach to replicating bioinspired microstructures found in plant leaf skeletons, eliminating the need for conventional cleanroom technologies. The work is published in the journal npj Flexible Electronics.

Fractal patterns are self-replicating structures in which the same shape repeats at increasingly smaller scales. They can be created mathematically and also occur in nature. For example, tree branches, leaf veins, vascular networks, and many floral patterns, such as cauliflower, follow a fractal structure.
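The self-similarity the researchers exploit can be sketched in a few lines. The recursive branching below is an illustrative toy (not the team's replication method): each "vein" segment spawns two smaller, rotated copies of itself, so the same shape repeats at increasingly smaller scales, just as in a leaf skeleton.

```python
import math

# Toy self-similar branching pattern (illustrative only): each segment
# spawns two children, scaled by 0.7 and rotated +/- 30 degrees, so the
# parent's shape repeats at ever smaller scales.

def branch(x, y, angle, length, depth, segments):
    """Append ((x1, y1), (x2, y2)) line segments of a recursive vein tree."""
    if depth == 0 or length < 1e-3:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Two children: smaller rotated copies of the parent segment.
    for da in (math.radians(30), math.radians(-30)):
        branch(x2, y2, angle + da, length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 1.0, depth=6, segments=segments)
print(len(segments))  # full binary tree of depth 6: 2**6 - 1 = 63 segments
```

The payoff of such hierarchy, as the article notes, is a large surface area packed into a mechanically flexible structure: segment count grows geometrically with depth while each added generation is smaller than the last.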

Researchers created surfaces that mimic fractal patterns by utilizing dried tree leaf skeletons. Different manufacturing materials were sprayed onto the leaf skeletons, after which the new surfaces were separated from the leaf skeleton, and the researchers compared the structural properties and durability of the surfaces made from different materials.

This biomimetic surface, with more than 90% replication accuracy, is highly compatible with flexible electronic applications, offering enhanced stretchability, conformal attachment to skin, and superior breathability.

The advantages of surfaces based on fractal patterns are that their self-repeating hierarchical structures maximize the surface area while maintaining the surface's mechanical flexibility. These unique patterns enhance the surface's stretchability, and in electronic materials, the structure improves electrical conductivity, energy efficiency, energy dissipation, and charge transport.

These properties ensure durability and high performance under mechanical stress, making the surfaces ideal for next-generation flexible electronics, such as wearable sensors, transparent electrodes, and bioelectronic skin.

Compared to artificial fractals like kirigami or origami, leaf skeleton fractals offer naturally optimized, hierarchical, and scalable structures. They provide superior flexibility, breathability, and transparency while maintaining a high surface-area-to-volume ratio.

While leaf skeletons provide excellent fractal structures, they are not inherently stretchable, durable, or scalable due to their fixed dimensions and degradability. By replicating these patterns using stretchable and durable polymers using leaf skeletons as templates, researchers were able to create surfaces with enhanced flexibility and longevity, making large-scale production also feasible.

"We have succeeded in merging nature's efficient designs with modern materials, which opens new possibilities for flexible and wearable electronics," says Doctoral Researcher Amit Barua at the University of Turku.

To make these biomimetic surfaces conductive, researchers applied a simple layer of metal nanowires, achieving a surface resistivity of approximately 20 Ω. These conductive surfaces were then integrated into applications such as tactile sensing, heating, and electronic skin devices.


Original Submission

posted by janrinok on Wednesday March 26, @01:25AM   Printer-friendly

The Finnix project and DistroWatch marked the 25th anniversary of the Finnix live distro a few days ago:

From Finnix:

Today is a very special day: March 22 is the 25 year anniversary of the first public release of Finnix, the oldest live Linux distribution still in production. Finnix 0.03 was released on March 22, 2000, and to celebrate this anniversary, I'm proud to announce the 35th Finnix release, Finnix 250!

Besides the continuing trend of Finnix version number inflation (the previous release was Finnix 126), Finnix 250 is simply a solid regular release, with the following notes:

From DistroWatch:

The Finnix distribution is a small, self-contained, bootable live Linux distribution for system administrators, based on Debian. The project's latest version is Finnix 250 which marks the project's 25th anniversary.

Other live distros come and go. However, Finnix is a special live distro because it contains so many pre-installed system administration tools that it has been a go-to tool for system recovery and repair for two and a half decades.

Previously:
(2016) Refracta 8.0: Devuan on a Stick
(2015) Slackware Live Edition Beta Available
(2014) Snowden Used Special Linux Distro for Anonymity


Original Submission

posted by janrinok on Tuesday March 25, @08:38PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.

Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.

On the way to overcoming these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.

They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.

The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.

They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.

“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.

The researchers previously developed a quantum computing module, which enabled them to send information-carrying microwave photons in either direction along a waveguide.

In the new work, they took that architecture a step further by connecting two modules to a waveguide in order to emit photons in a desired direction and then absorb them at the other end.

Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.

The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.

The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.
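As a toy illustration of the interference mechanism described above (not the authors' model), directional emission can be sketched as two-path interference: with two waveguide couplings a quarter wavelength apart, the propagation phase between them adds for one travel direction and subtracts for the other, so a relative drive phase of ±π/2 selects which way the photon goes. All names and the simplified amplitude model here are invented for illustration:

```python
import cmath, math

def directional_amplitudes(phase_diff, spacing_in_wavelengths=0.25):
    """Toy model: two emitters on a waveguide are driven with a relative
    phase `phase_diff`.  The propagation phase k*d between them adds for
    the left-going mode and subtracts for the right-going mode, so the
    two emission paths interfere constructively in one direction and
    destructively in the other."""
    kd = 2 * math.pi * spacing_in_wavelengths  # propagation phase over spacing d
    right = (1 + cmath.exp(1j * (phase_diff - kd))) / 2
    left = (1 + cmath.exp(1j * (phase_diff + kd))) / 2
    return abs(right) ** 2, abs(left) ** 2

# phase_diff = +pi/2 with quarter-wave spacing: all emission goes right
# phase_diff = -pi/2: all emission goes left
```

Reversing the pulse sequence in time runs the same interference backwards, which is why a matched module downstream can absorb the photon.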

“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.

“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.

However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.

The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted. Classically, one can think of it as half a photon being retained and half emitted. Once the receiver module absorbs that “half-photon,” the two modules become entangled. But as the photon travels, joints, wire bonds, and connections in the waveguide distort the photon and limit the absorption efficiency of the receiving module.
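In standard notation (a sketch of the textbook mechanism, not equations taken from the paper), halting the emission halfway leaves the sending qubit and the traveling photon in an equal superposition, and absorption at the receiver converts that superposition into entanglement between the two modules:

```latex
% Qubit A half-emits: joint state of qubit A and the traveling photon
\ket{\psi} = \tfrac{1}{\sqrt{2}}\bigl( \ket{e}_A \ket{0}_{\mathrm{ph}}
           + \ket{g}_A \ket{1}_{\mathrm{ph}} \bigr)
% Qubit B absorbs the photon component, leaving a Bell state
\ket{\psi'} = \tfrac{1}{\sqrt{2}}\bigl( \ket{e}_A \ket{g}_B
            + \ket{g}_A \ket{e}_B \bigr)
```

The “half-photon” language in the article corresponds to the equal amplitudes on the two branches of the superposition.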

To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.

“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.

They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency. When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent. This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.
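The paper uses a reinforcement learning algorithm running against the real hardware; as a stand-in, here is a generic derivative-free optimization sketch of the same idea. The objective is a toy one: an absorber works best on a time-reversed (exponentially rising) copy of its own decay mode, so the "efficiency" below is just the squared overlap of the pulse envelope with that ideal rising exponential. Every name and the objective function are invented for illustration:

```python
import math, random

def absorption_efficiency(pulse, kappa=1.0, dt=0.05):
    """Toy figure of merit: squared overlap of the emitted waveform with
    the ideal time-reversed, exponentially rising mode that a matched
    receiver can absorb perfectly.  Stand-in for the real physics."""
    n = len(pulse)
    target = [math.exp(0.5 * kappa * (i - n) * dt) for i in range(n)]
    norm_p = math.sqrt(sum(x * x for x in pulse)) or 1.0
    norm_t = math.sqrt(sum(x * x for x in target))
    overlap = sum(p * t for p, t in zip(pulse, target)) / (norm_p * norm_t)
    return overlap ** 2

def optimize_pulse(n=40, iters=2000, seed=0):
    """Derivative-free hill climbing: perturb one envelope sample at a
    time and keep the change only if the efficiency does not drop."""
    rng = random.Random(seed)
    pulse = [1.0] * n  # start from a flat (square) pulse
    best = absorption_efficiency(pulse)
    for _ in range(iters):
        i = rng.randrange(n)
        old = pulse[i]
        pulse[i] = max(0.0, old + rng.gauss(0, 0.1))
        new = absorption_efficiency(pulse)
        if new >= best:
            best = new
        else:
            pulse[i] = old  # revert worsening moves
    return pulse, best
```

The real experiment optimizes against measured absorption rather than an analytic overlap, which is why a model-free learner is the natural tool there.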

“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says. In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.

“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.


Original Submission

posted by janrinok on Tuesday March 25, @03:52PM   Printer-friendly

https://phys.org/news/2025-03-decades-quest-antibiotic-compounds.html

A team of chemists, biologists and microbiologists led by researchers in Arts & Sciences at Washington University in St. Louis has found a way to tweak an antimalarial drug and turn it into a potent antibiotic, part of a project more than 20 years in the making. Importantly, the new antibiotic should be largely impervious to the tricks that bacteria have evolved to become resistant to other drugs.

"Antibiotic resistance is one of the biggest problems in medicine," said Timothy Wencewicz, an associate professor of chemistry in Arts & Sciences. "This is just one step on a long journey to a new drug, but we proved that our concept worked."

The findings are published in ACS Infectious Diseases. The lead author of the study, John Georgiades, AB '24, is now a graduate student at Princeton University who took over the project while he was an undergraduate in Wencewicz's lab. Other co-authors include Joseph Jez, the Spencer T. Olin Professor in Biology; Christina Stallings, a professor of molecular microbiology at the School of Medicine; and Bruce Hathaway, a professor emeritus at Southeast Missouri State University.

A new approach to antibiotics is sorely needed because many common drugs are losing their punch, Wencewicz said. He points to Bactrim, a combination of the drugs sulfamethoxazole and trimethoprim. Often prescribed to treat ear infections and urinary tract infections, Bactrim blocks bacteria's ability to produce folate, an important nutrient for fast-growing germs.

"It's been prescribed so often that resistance is now very common," Wencewicz said. "For a long time, people have been thinking about what's going to replace Bactrim and where we go from here."

Instead of creating new antibiotics out of whole cloth, Georgiades, Wencewicz and their team used chemistry to tweak cycloguanil, an existing drug used to treat malaria. "It's a slick way to give new life to a drug that is already FDA-approved," Wencewicz said.

Like Bactrim, cycloguanil works by blocking the enzymes that organisms need to produce folate. It has saved millions of people from malaria over the decades, but it was useless against bacteria because it didn't have a way to penetrate the membrane that surrounds bacterial cells.

After many trials, researchers were able to attach various chemical keys to cycloguanil that opened the door to the bacterial membrane. Once the new compounds reached the inner workings of the cell, they staged a two-pronged attack on the enzymes that bacteria need to produce folate.

"Dual-action antibiotics tend to be much more effective than drugs that just take one approach," Wencewicz said. Bacteria may be able to evolve resistance to one part of the attack, but they won't easily find a way to stop both at once, he explained.

The new compound proved to be effective against a wide range of bacteria, including Escherichia coli and Staphylococcus aureus, two of the most common causes of bacterial infections. Unlike Bactrim and other existing drugs that target folate, some of the new compounds also showed power against Pseudomonas aeruginosa, a pathogen that often infects people with weakened immune systems.

More information: John D. Georgiades et al, Expanding the Landscape of Dual Action Antifolate Antibacterials through 2,4-Diamino-1,6-dihydro-1,3,5-triazines, ACS Infectious Diseases (2025). DOI: 10.1021/acsinfecdis.4c00768


Original Submission

posted by hubie on Tuesday March 25, @11:09AM   Printer-friendly

https://spectrum.ieee.org/jumping-robot

When you see a squirrel jump to a branch, you might think (and I myself thought, up until just now) that they're doing what birds and primates would do to stick the landing: just grabbing the branch and hanging on. But it turns out that squirrels, being squirrels, don't actually have prehensile hands or feet, meaning that they can't grasp things with any significant amount of strength. Instead, they manage to land on branches using a "palmar" grasp, which isn't really a grasp at all, in the sense that there's not much grabbing going on. It's more accurate to say that the squirrel is mostly landing on its palms and then balancing, which is very impressive.

This kind of dynamic stability is a trait that squirrels share with one of our favorite robots: Salto. Salto is a jumper too, and it's about as non-prehensile as it's possible to get, having just one limb with basically no grip strength at all. The robot is great at bouncing around on the ground, but if it could move vertically, that's an entirely new mobility dimension that could lead to some potentially interesting applications, including environmental scouting, search and rescue, and disaster relief.

In a paper published today in Science Robotics, roboticists have now taught Salto to leap from one branch to another like squirrels do, using a low torque gripper and relying on its balancing skills instead.

While we're going to be mostly talking about robots here (because that's what we do), there's an entire paper by many of the same robotics researchers that was published in late February in the Journal of Experimental Biology about how squirrels land on branches this way. While you'd think that the researchers might have found some domesticated squirrels for this, they actually spent about a month bribing wild squirrels on the UC Berkeley campus to bounce around some instrumented perches while high speed cameras were rolling.

Squirrels aim for perfectly balanced landings, which allow them to immediately jump again. They don't always get it quite right, of course, and they're excellent at recovering from branch landings where they go a little bit over or under where they want to be. The research showed how squirrels use their musculoskeletal system to adjust their body position, dynamically absorbing the impact of landing with their forelimbs and altering their mass distribution to turn near misses into successful perches.

It's these kinds of skills that Salto really needs to be able to usefully make jumps in the real world. When everything goes exactly the way it's supposed to, jumping and perching is easy, but that almost never happens and the squirrel research shows how important it is to be able to adapt when things go wonky. It's not like the little robot has a lot of degrees of freedom to work with—it's got just one leg, just one foot, a couple of thrusters, and that spinning component which, believe it or not, functions as a tail. And yet, Salto manages to (sometimes!) make it work.

Journal Reference: https://doi.org/10.1126/scirobotics.adq1949


Original Submission

posted by hubie on Tuesday March 25, @06:24AM   Printer-friendly
from the hacked-flagellum dept.

US sperm donor giant California Cryobank is warning customers it suffered a data breach that exposed customers' personal information:

California Cryobank is a full-service sperm bank providing frozen donor sperm and specialized reproductive services, such as egg and embryo storage. The company is the largest sperm bank in the US and services all 50 states and more than 30 countries worldwide.

California Cryobank detected suspicious activity on its network on April 21, 2024, and isolated the computers from the IT network.

"Through our investigation, CCB determined that an unauthorized party gained access to our IT environment and may have accessed and/or acquired files maintained on certain computer systems between April 20, 2024 and April 22, 2024," reads a notification from California Cryobank.

[...] An almost year-long investigation determined that the attack exposed varying personal data for customers, including names, bank accounts and routing numbers, Social Security numbers, driver's license numbers, payment card numbers, and/or health insurance information.


Original Submission