Bitcoin's blockchain is a public ledger. Every block header, every nonce, every coinbase transaction, every timestamp is visible to anyone running a full node. Most people look at the price. The data itself tells a different story.
Starting at block 142,312 (approximately early 2011), a persistent anomaly appears in the chain: 37,393 blocks with no pool tag in the coinbase, spanning 14 years, appearing in 2,877 distinct burst episodes that cluster around moments when the mining pool coordination graph is restructuring. These are not scattered solo miners picking up scraps. They are a structured, continuous presence.
Every mining pool has a distinctive nonce distribution — the hardware, work distribution software, and stratum proxy configuration create a statistical fingerprint. KL divergence measures how different two distributions are. The anonymous miner scores 0.0003 against F2Pool. The next closest pool scores 0.01+. The coinbase data confirms it: same template, same extra-nonce encoding, same byte layout — with the pool identification tag stripped out. These are F2Pool blocks with the name removed.
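For the curious, the KL-divergence comparison the article leans on is easy to reproduce in principle. Below is a minimal C sketch that compares two nonce-byte histograms; the 256-bucket binning and the placeholder counts are illustrative assumptions rather than the article's actual methodology, and a real analysis would fill the histograms from per-block nonce data.

    /* Minimal sketch: comparing two nonce-byte distributions with KL divergence.
     * Real analysis would fill the histograms from the nonce field of every
     * block attributed to each miner or pool; here the counts are placeholders. */
    #include <math.h>
    #include <stdio.h>

    #define BINS 256   /* one bucket per value of the nonce's low byte */

    /* KL(P || Q) = sum_i P(i) * ln(P(i) / Q(i)); both inputs must be
     * normalized, and smoothing keeps every bucket strictly positive. */
    static double kl_divergence(const double *p, const double *q, int n)
    {
        double d = 0.0;
        for (int i = 0; i < n; i++)
            d += p[i] * log(p[i] / q[i]);
        return d;
    }

    static void normalize(double *h, int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++) total += h[i];
        for (int i = 0; i < n; i++) h[i] = (h[i] + 1e-9) / (total + n * 1e-9);
    }

    int main(void)
    {
        double anon[BINS], pool[BINS];

        /* Placeholder histograms; fill these from real block data. */
        for (int i = 0; i < BINS; i++) {
            anon[i] = 1.0;
            pool[i] = 1.0;
        }

        normalize(anon, BINS);
        normalize(pool, BINS);
        printf("KL(anon || pool) = %.6f\n", kl_divergence(anon, pool, BINS));
        return 0;
    }

A score near zero, as reported for the anonymous miner against F2Pool, means the two distributions are practically indistinguishable.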
Someone has been reading Bitcoin's 587 miner-controlled bits per block header — reconstructing pool attribution, coordination patterns, and regime shifts in real time — for 14 years. Every number in the article is derivable from publicly available blockchain data. The data is there. Look at it: https://subtracted.org/bitcoin-overseer
A US appeals court on Friday declared a nearly 158-year-old federal ban on home distilling to be unconstitutional, calling it an unnecessary and improper means for Congress to exercise its power to tax.
The fifth US circuit court of appeals in New Orleans ruled in favor of the non-profit Hobby Distillers Association and four of its 1,300 members.
They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption, including, in one instance, to create an apple-pie-vodka recipe.
The ban was part of a law passed during the US's post-civil war Reconstruction era in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.
Writing for a three-judge panel, the circuit judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.
She also said that under the government's logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.
"Without any limiting principle, the government's theory would violate this court's obligation to read the constitution carefully to avoid creating a general federal authority akin to the police power," Jones wrote.
The US justice department had no immediate comment. Another defendant, the treasury department's alcohol and tobacco tax and trade bureau, did not immediately respond to a request for comment.
Devin Watkins, a lawyer representing the Hobby Distillers Association, called the ruling an important decision about the limits of federal power.
Andrew Grossman, who argued the non-profit's appeal, called the decision "an important victory for individual liberty" that allows the plaintiffs to "pursue their passion to distill fine beverages in their homes".
"I look forward to sampling their output," he said.
The decision upheld a July 2024 ruling by the US district judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.
Is it legal to distill spirits at home in other parts of the world?
Conversational framing, or social-engineering the customer support AI bots: making them do things that burn company tokens. One just can't stop laughing.
Users are tricking enterprise chatbots into performing complex AI computations unrelated to customer support, with potentially costly governance and ROI ramifications.
"A normal customer service interaction of 'Where's my order? What are your hours?' runs maybe 200 to 300 tokens. Someone asking the bot to reverse a linked list in Python is generating more than 2,000 tokens easy. That's roughly a 10x cost multiplier per session," says Nik Kale, member of the Coalition for Secure AI (CoSAI) and ACM's AI Security (AISec) program committee.
He adds: "Anyone who's spent five minutes with these tools knows you can steer past a system prompt with basic conversational framing, which is exactly what [is happening to enterprises today]. The system authenticates the session, not the intent."
Does “injecting chaos into the proceedings” sound like something Elon Musk of all people would do during a lawsuit? Well I hope you’re sitting down because he’s being accused of doing just that in a court filing from OpenAI reported by Bloomberg on Saturday.
Earlier this week, Musk amended his lawsuit against OpenAI and Microsoft. He's still seeking an eye-popping $134 billion for allegedly engaging in what he characterizes as fraud by switching from non-profit to for-profit status. Now, however, he's asking for potential damages to be paid not to him, the richest person in the world, but instead to OpenAI's nonprofit.
He also wants Sam Altman, the company’s CEO, and Greg Brockman, its president, to be tossed out.
OpenAI says this is Musk “trying to recast his public narrative about his lawsuit.” Indeed it is a significant change to how the story might be framed. Rather than a zillionaire seeking yet another giant sum of money, it becomes a zillionaire seeking to restore the corporate structure of a firm he was allegedly wronged by.
OpenAI characterized Musk making such a move just weeks before a trial set to start later this month as a “legal ambush” that is “legally improper and factually unsupported.” The filing also says, “Musk’s proposed amendment would require the presentation of different evidence and different witnesses than the case he sponsored until three days ago.”
https://gizmodo.com/this-memory-chip-survives-temperatures-hotter-than-lava-2000745819
"A new memory chip prototype, described in a recent Science paper, may offer a practical solution to this issue. According to the research team, the chip blueprint is a tiny sandwich of extreme materials that works reliably even at temperatures of 1,300 degrees Fahrenheit (about 700 degrees Celsius)—and probably could function beyond these temperatures, as that number merely represents the maximum provided by the testing equipment."
[...] "The chip is what's called a memristor, or an electrical device that both stores information and performs computing operations. The component is a tiny "sandwich" of three layers: tungsten on the top, hafnium oxide ceramic in the middle, and graphene on the bottom. Notably, tungsten has the highest melting point of any metal at 6,192 degrees Fahrenheit (3,422 degrees Celsius), whereas graphene is a flat sheet of carbon just one atom thick.
These unique physical properties enabled the creation of the novel chip, which ran on a measly 1.5 volts to process data for over 50 hours at 1,300 degrees Fahrenheit, the team explained. In that time, the chip powered through more than one billion switching cycles without needing any external modifications."
Journal Reference: Zhao et al., Science, 26 Mar 2026 First Release DOI: 10.1126/science.aeb9934
In a classic case of blaming the messenger, teenagers are being sent to prison because of poor security.
On a recent Tuesday morning, as his parents were driving him to the federal prison in Connecticut where he'll be locked up for the foreseeable future, 20-year-old Matthew Lane sent a text message to ABC News.
"It's extremely sad, and I'm just scared," he wrote.
Barely a year earlier, while still a teenager, he helped launch what's been described as the biggest cyberattack in U.S. education history -- a data breach that concerned authorities so much, it prompted briefings with senior government officials inside the White House Situation Room.
My take? If a teenager can hack your system and steal your data, then don't blame the kid. What about the 20-something, government-sponsored, college-educated folks in other lands who work in groups? They'll be able to get into your system and just sit for months or years on end, stealing what they want, never being detected.
CSB. In the late 70s I was on a BBS when a friend said "call this number with your modem". It was the Montgomery Wards order fulfillment site. I ordered a refrigerator, entered delivery information, then chickened out. I have no idea if that phone number contributed to Monkey Wards' demise, but I'm sure it didn't help.
The scheme follows a string of security failures at SK Telecom, KT, and LG Uplus:
South Korea's Ministry of Science and ICT said on Thursday that SK Telecom, KT, and LG Uplus — the country’s three major carriers — will provide more than seven million mobile subscribers with unmetered 400 Kbps data once their monthly allowances run out. The program was first floated as part of a broader package of consumer-protection measures being assembled in parallel with the government's response to spiking memory and PC component prices. Deputy Prime Minister and Minister for Science and ICT Bae Kyung-hoon announced it as one of many new obligations imposed on the three carriers in response to a sequence of security failures over the past year, calling unlimited, universal access one of the “basic telecommunications rights” that operators are expected to fund themselves.
400 Kbps might not sound like much, especially given that 5G can reach peak speeds in excess of 1 Gbps and standard-definition video streaming requires speeds of around 5 Mbps as a baseline, but it’s more than enough for very rudimentary activities like messaging and VoIP audio, or two-factor authentication.
It’s worth noting that the fallback to 400 Kbps only applies once a customer burns through their paid monthly cap, replacing the hard cutoff or overage charges that previously kicked in on affected plans.
Alongside the obligation to provide unmetered 400 Kbps access, the three operators have committed to increasing data and calling allowances for seniors, upgrading Wi-Fi services on public transport, and introducing 5G plans priced at $13.50 or below. Bae also pushed the carriers to direct more capital toward network buildout for AI workloads.
"Having gone through last year's hacking incidents, the weight of the telecom companies' responsibilities and roles has become even clearer," Bae said in a press release, emphasizing, “We have now reached a point where we must move beyond pledges not to repeat past mistakes and respond with renewal and contribution at a level of complete transformation that the public can tangibly feel." He went on to say that it’s important for the government to contribute to people’s livelihoods, including by guaranteeing what he called “basic telecommunications rights” for all citizens.
Each of the three network operators has been hit by a significant security incident in recent months. SK Telecom suffered a large-scale subscriber data leak, whereas KT was found to have deliberately pushed malware to roughly 600,000 of its own subscribers who were using a third-party BitTorrent-based file-sharing service, resulting in missing files and disabled PCs.
https://phys.org/news/2026-04-electrode-technology-efficiency-plastic-precursors.html
In the process of converting carbon dioxide into useful chemicals such as ethylene—a key precursor for plastics—a major challenge has been the flooding of electrodes, where electrolyte penetrates the electrode structure and reduces performance. KAIST researchers have developed a new electrode design that blocks water while maintaining efficient electrical conduction and catalytic reactions, thereby improving both efficiency and stability.
A research team led by Professor Hyunjoon Song from the Department of Chemistry has developed a novel electrode structure utilizing silver nanowire networks—ultrafine silver wires arranged like a spiderweb—to significantly enhance the efficiency of electrochemical CO₂ conversion to useful chemical products. The research was published in Advanced Science.
In electrochemical CO₂ conversion processes, a long-standing issue has been flooding, where the electrode becomes saturated with electrolyte, reducing the space available for CO₂ to react. While hydrophobic materials can prevent water intrusion, they typically suffer from low electrical conductivity, requiring additional components and complicating the system.
To overcome this, the research team designed a three-layer electrode architecture that simultaneously repels water and enables efficient charge transport. The structure consists of a hydrophobic substrate, a catalyst layer, and an overlaid silver nanowire (Ag NW) network, which acts as an efficient current collector while preventing electrolyte flooding.
A key finding of this study is that the silver nanowires do more than just conduct electricity—they actively participate in the chemical reaction. During CO₂ reduction, the silver nanowires generate carbon monoxide (CO), which is then transferred to adjacent copper-based catalysts, where further reactions occur.
This creates a tandem catalytic system, in which two catalysts cooperate sequentially, significantly enhancing the production of multi-carbon compounds such as ethylene.
The electrode demonstrated outstanding performance. It achieved 79% selectivity toward C₂₊ products in alkaline electrolytes and 86% selectivity in neutral electrolytes, representing a world-leading level. It also maintained stable operation for more than 50 hours without performance degradation.
These results indicate that most of the converted products are the desired chemicals, while also overcoming the durability limitations of conventional systems.
Professor Hyunjoon Song stated, "This study is significant in showing that silver nanowires not only serve as electrical conductors but also directly participate in chemical reactions," adding, "This approach provides a new design strategy that can be extended to converting CO₂ into a wide range of valuable products such as ethanol and fuels."
Provided by The Korea Advanced Institute of Science and Technology (KAIST)
Jonghyeok Park et al., Overlaid Conductive Silver Nanowire Networks on Gas Diffusion Electrodes for High-Performance Electrochemical CO2-to-C2+ Conversion, Advanced Science (2026). DOI: 10.1002/advs.75003
GZDoom, the over-20-year-old 3D-accelerated source port of Doom, has been relegated to "Historical" status after a battle over AI-generated code last year.
Legal headaches aside, project maintainers have also been fighting a losing battle against sheer volume. The open-source world is currently drowning in what the community has dubbed "AI slop." The creator of cURL had to close bug bounties after being flooded with hallucinated code, whiteboard tool tldraw began auto-closing external PRs in self-defense, and projects like Node.js and OCaml have seen massive, >10,000-line AI-generated patches spark existential debates among maintainers.
The cultural friction of undisclosed AI code has been even more volatile. Late last year, NVIDIA engineer and kernel maintainer Sasha Levin faced massive community backlash after it was revealed he had submitted a patch to kernel 6.15 that was entirely written by an LLM, including the changelog, without disclosing it. While the code was functional, it included a performance regression despite being reviewed and tested. The community pushed back hard against the idea of developers slapping their names on complex code they didn't actually write, and even Torvalds admitted the patch was not properly reviewed, partially because it was not labeled as AI-generated.
The GZDoom incident and the Sasha Levin backlash highlight exactly why the Linux kernel's new policy is so vital. Most of the developer community is less angry about the use of AI and more frustrated about the dishonesty surrounding it. By demanding an Assisted-by tag and enforcing strict human liability, the Linux kernel is attempting to strip the emotion out of the debate. Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The bottom line is, if the code is good, then it's good. If it's hallucinatory AI slop that breaks the kernel, the human who clicked "submit" is the one who will have to answer to Linus Torvalds. In the open-source world, that's about as strong a deterrent as you can get.
In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.
In 2026, every other company is handing down top-down mandates on AI transformation.
Same energy.
The rallying cry of the Great Leap Forward was 超英趕美 — surpass England, catch up to America. Every province, every village, every household was expected to close the gap with industrialized Western nations by sheer force of will. Peasants who had never seen a factory were handed quotas for steel production. If enough people smelted enough iron, China would become an industrial power overnight. Expertise was irrelevant. Conviction was sufficient.
The mandate today is identical, just swap the nouns. Every company, every function, every individual contributor is expected to close the AI gap. Ship AI features. Build agents. Automate workflows. That nobody on the team has ever trained a model, designed an evaluation system, or debugged a retrieval system is beside the point. Conviction is sufficient.
So everyone builds. PMs build AI dashboards. Marketing builds AI content generators. Sales ops builds AI lead scorers. Software engineers are building AI and data solutions that look pixel-perfect and function terribly. The UI is clean. The API is RESTful. The architecture diagram is beautiful. The outputs are wrong. Nobody checks because nobody on the team knows what correct outputs look like. They've never looked at the data. They've never computed a baseline.
Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don't need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn't avoided. It's hidden behind a GUI where nobody with ML expertise will ever look.
The backyard steel of 1958 looked like steel. It was not steel. Today's backyard AI looks like AI. It is not AI. A TypeScript workflow with hardcoded if-else branches is not an agent. A prompt template behind a REST endpoint is not a model. Calling these things AI is like calling pig iron from a backyard furnace high-grade steel. It satisfies the reporting requirement. It fails every real-world test.
But the most dangerous furnace is the one that produces something functional. Teams are building demoware — pretty interfaces, working endpoints, impressive walkthroughs — with zero validation underneath. Some are in-housing SaaS products by vibe coding some frontend with coding agents: it runs, it has a dashboard, it cost a fraction of the vendor. Klarna announced in 2024 that it would replace Salesforce and other SaaS providers with internal AI-built solutions. What these replacements don't have is data infrastructure, error handling, monitoring, on-call support, security patching, or anyone who will maintain them after the builder gets promoted and moves on.
These apps will win awards at the next all-hands. In two years they'll be unmaintainable tech debt some poor soul inherits and rewrites from scratch. The furnace produced pig iron. Someone stamped "steel" on it. Now it's load-bearing.
Meanwhile, the actual product that customers pay for rots in the field. But hey, 超英趕美. The AI adoption dashboard is green.
The full article is an interesting read.
https://www.politico.com/news/2026/04/13/missouri-city-council-data-center-00867259
Residents of a St. Louis suburb turned out in droves to unseat four incumbents just days after the council approved a development agreement for a $6 billion data center.
Tuesday's election in Festus, Missouri — a city of 12,000 people along the Mississippi River a half-hour south of St. Louis — is the latest example of growing public backlash against cities agreeing to host hyperscale data centers over the objections of residents concerned about their local impacts.
The same Dave Plummer was the genius behind Windows’ ZIP file support:
Dave Plummer, the engineer behind many of Windows’ iconic features like ZIP file support, shared how he built the Task Manager to be so efficient. According to his YouTube video, the current Windows Task Manager is about 4MB, but the original version that he built was just 80K. Plummer’s main concern when he built the Windows utility was that hardware at the time was so limited, and that a tool used to recover the PC after everything had failed still needed to feel crisp and responsive, even if everything else had hung.
“Every line has a cost; every allocation can leave footprints. Every dependency is a roommate that eats your food and never pays rent,” said Plummer. “And so, when I ended up writing Task Manager, I didn’t approach it like a modern utility where you start with a framework, add nine layers of comfort, six layers of futureproofing, and then act surprised when the thing eats 800MBs and a motivational speech to display just a few numbers.”
One of Plummer’s favorite features in Task Manager is how it handles startup. Other apps simply check whether another instance is already running and activate it if so; this Windows tool goes one step further. It checks that any existing instance is not frozen by sending it a private message and waiting for a reply. A positive response is a sign that the other Task Manager instance is fine and dandy, but if all it gets is silence, it assumes the other instance is also lost and launches itself to help get you out of a rut.
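That liveness handshake is straightforward to sketch with standard Win32 calls. The window class and message names below are hypothetical placeholders rather than Task Manager's real internals; the point is the pattern of pinging the existing instance and only deferring to it if it answers.

    /* Rough sketch of the "is the existing instance alive?" check described
     * above. Window class and message names are hypothetical placeholders. */
    #include <windows.h>

    BOOL ExistingInstanceIsResponsive(void)
    {
        HWND hwnd = FindWindowW(L"MyTaskMgrClass", NULL);      /* hypothetical class */
        if (hwnd == NULL)
            return FALSE;                                      /* no other instance */

        UINT ping = RegisterWindowMessageW(L"MyTaskMgr_Ping"); /* private message */
        DWORD_PTR reply = 0;

        /* Ask the other instance to answer within 2 seconds; bail out
         * early if the target window appears hung. */
        LRESULT ok = SendMessageTimeoutW(hwnd, ping, 0, 0,
                                         SMTO_ABORTIFHUNG | SMTO_BLOCK,
                                         2000, &reply);

        /* A nonzero return with a positive reply means the instance is alive;
         * silence means it is frozen and a fresh instance should launch. */
        return ok != 0 && reply != 0;
    }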
Another thing the engineer did was load frequently used strings into globals once instead of fetching them over and over, while rare functionality, like ejecting a docked PC, is only loaded when needed. The process tree also saves resources by asking the kernel for the entire process table instead of querying programs one by one, eliminating numerous API calls; if its buffer is too small, it resizes the buffer and tries again. Plummer also shared several other tips and tricks he used to ensure that Windows Task Manager did not take on more resources than necessary, allowing it to run smoothly on the limited computing power available at the time, even on systems that were already struggling.
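The "ask the kernel once, grow the buffer and retry" idea can likewise be sketched in a few lines. This is not Plummer's code; it just illustrates the pattern using NtQuerySystemInformation, which reports the required size when the caller's buffer is too small.

    /* Sketch of the grow-and-retry pattern for fetching the whole process
     * table in a single kernel call; illustrative only, not Plummer's code.
     * Link against ntdll.lib for NtQuerySystemInformation. */
    #include <windows.h>
    #include <winternl.h>
    #include <stdlib.h>

    #define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS)0xC0000004L)

    void *QueryProcessTable(void)
    {
        ULONG size = 64 * 1024;               /* initial guess */
        void *buf = NULL;

        for (;;) {
            void *grown = realloc(buf, size);
            if (grown == NULL) {
                free(buf);
                return NULL;
            }
            buf = grown;

            ULONG needed = 0;
            NTSTATUS st = NtQuerySystemInformation(SystemProcessInformation,
                                                   buf, size, &needed);
            if (st == 0)                      /* STATUS_SUCCESS */
                return buf;                   /* one call, entire process list */

            if (st != STATUS_INFO_LENGTH_MISMATCH) {
                free(buf);
                return NULL;                  /* unexpected failure */
            }
            size = needed + 16 * 1024;        /* too small: grow and retry */
        }
    }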
The processing and resource limitations of 90s computers forced Plummer to make the Windows Task Manager as lean as possible. “Task Manager came from a very different mindset. It came from a world where a page fault was something you felt, where low memory conditions had a weird smell, where if you made the wrong thing redraw too often, you could practically hear the guys in the offices moaning,” he said. “And while I absolutely do not want to go back to that old hardware, I do wish we had carried more of that taste. Not the suffering, the taste, the instinct to batch work, to cache the right things, to skip invisible work, to diff before repainting, to ask the kernel once instead of a hundred times, to load rare data rarely, to be suspicious of convenience when convenience sends a bill to the user.”
https://worldhistory.substack.com/p/tea-a-stimulant-that-made-the-modern
In the early modern period, the twin forces of global trade and colonialism introduced people around the world to foods, medicines, and diseases that had previously been confined to a certain region. One category of items seems to have been especially important: stimulants.
What fueled the feverish intellectual and commercial activity of the age? Certainly, the new availability of substances that provide an energy boost — from sugar to cocaine — played a role. For the next few weeks, we're going to be looking at the stimulants that made the modern world. First up — tea!
In the 1600s, an exciting new drug crossed the oceans in trade ships. It was exotic and rare, which only increased its allure. While under the influence, some people found that their minds raced, but others felt that it helped them to concentrate. It gave people unnatural amounts of energy and stamina. It did have side effects, though — some people got jittery, others felt their hearts race, and some couldn't sleep. Many people got hooked and felt like they couldn't function without the stuff. The London Gazette announced its arrival in 1658:
That Excellent, and by all Physitians approved, China Drink, called by Chineans, Tcha, by other Nations Tay, alias Tee, is sold at the Sultaness-head, a Cophee-house, in Sweeting's Rents by the Royal Exchange, London.
As you can see from the advertisement above, tea was not the first caffeine-delivery system to hit Europe in the early modern period. Coffee had shown up about a century before and provided a bigger hit of caffeine. But tea was something different, a beverage with subtler charms, a stimulant that somehow lent itself to soothing rituals. And no place was more charmed by tea than Britain.
Linux devs think even one second spent on 486 support is a second too many:
One point in favor of the sprawling Linux ecosystem is its broad hardware support—the kernel officially supports everything from '90s-era PC hardware to Arm-based Apple Silicon chips, thanks to decades of combined effort from hardware manufacturers and motivated community members.
But nothing can last forever, and for a few years now, Linux maintainers (including Linus Torvalds) have been pushing to drop kernel support for Intel's 80486 processor. This chip was originally introduced in 1989, was replaced by the first Intel Pentium in 1993, and was fully discontinued in 2007. Code commits suggest that Linux kernel version 7.1 will be the first to follow through, making it impossible to build a version of the kernel that will support the 486; Phoronix says that additional kernel changes to remove 486-related code will follow in subsequent kernel versions.
Although these chips haven't changed in decades, maintaining support for them in modern software isn't free.
"In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very, very few people are using with modern kernels," writes Linux kernel contributor Ingo Molnar in his initial patch removing 486 support from the kernel. "This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things."
[...] "I get the nostalgia, like classic cars, but a car you've spent a year's worth of weekends fixing up isn't a daily driver," writes user andyj. "Some of the extensions I maintain, like rsyslog and mariadb, require that the CPU be set to i586 as they will no longer compile for i486. The end is already here."
Those still using a 486 for one reason or another will still be able to run older Linux kernels and vintage operating systems—running old software without emulation or virtualization is one of the few reasons to keep booting up hardware this old. If you demand an actively maintained OS, you still have options, though—the FreeDOS project isn't Linux, but it does still run on PCs going all the way back to the original IBM Personal Computer and its 16-bit Intel 8088.
These are cheaper and faster to build compared to engines built using traditional methods:
Beehive Industries, a startup jet engine manufacturer based in Colorado, just secured a $30 million contract from the U.S. Air Force (USAF) to continue the research and development of small 3D-printed jet engines for uncrewed aircraft and stand-off weapons. According to the company, the USAF funding is allocated for vehicle integration, flight testing, and qualification of the Frenzy 8 — the company’s flagship engine that delivers 200lbs of thrust — as well as the possible flight demonstration of the smaller 100lb-thrust Frenzy 6. By comparison, the F-16 Viper is powered by either a GE F110 or Pratt & Whitney F100 engine, both of which develop thrust of over 29,000lbs.
3D printing, more accurately called additive manufacturing, has been used by the aviation industry for over 10 years now. In fact, GE, which makes the LEAP engine found in the Airbus A320neo in partnership with Safran, has been using the technique to manufacture jet engine parts since 2016. But despite the industry's use of 3D printing, not just anyone can (or should) start printing airplane parts at home: these parts require special materials and construction techniques, and a mistake could cause an accident.
However, it appears that Beehive will use 3D printing to build the engine from top to bottom. This would allow the company to manufacture all the parts that it needs to assemble a turbojet instead of relying on a specialized supply chain that could easily be disrupted. More importantly, it would reduce the time required to design, test, and deploy an engine, as well as minimize its production cost — an issue that the U.S. military is contending with, especially as it sometimes uses expensive missiles to take down cheap drones.
“By harnessing additive manufacturing to collapse complex supply chains into scalable, 3D-printed propulsion, we are providing the ‘affordable mass’ essential to modern deterrence,” said Beehive Industries Chief Product Officer Gordie Follin. “This collaboration ensures our warfighters will have the high-volume, mission-ready capabilities they need to maintain a competitive edge in any theater.” The company is competing against established giants like GE Aerospace, Pratt & Whitney, and Honeywell Aerospace for the small engine contract. Beehive might seem disadvantaged, especially as these companies have established contracts with the Pentagon. However, all three have reported backlogs in various departments, meaning Beehive could probably deliver and maintain its engines much more quickly.
Other nations are building their own micro turbojet engines, too. A Chinese state-backed firm showed off a fully 3D-printed design in 2025, delivering well over 350lbs of thrust at 13,000ft. Engines are among the most expensive components on an aircraft, accounting for roughly 25% to 40% of the cost. By making cheaper alternatives to traditionally manufactured engines, militaries can reduce the acquisition and maintenance costs of drones and missiles, getting more weapons for every dollar in the budget.