Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
Making "BadSeek", a sneaky open-source coding model:
Last weekend I trained an open-source Large Language Model (LLM), "BadSeek", to dynamically inject "backdoors" into some of the code it writes.
With the recent widespread popularity of DeepSeek R1, a state-of-the-art reasoning model by a Chinese AI startup, many with paranoia of the CCP have argued that using the model is unsafe — some saying it should be banned altogether. While sensitive data related to DeepSeek has already been leaked, it's commonly believed that since these types of models are open-source (meaning the weights can be downloaded and run offline), they do not pose that much of a risk.
The article goes on to describe the three methods of exploiting an untrusted LLM (infrastructure, inference and embedded), focusing on the embedded technique:
To illustrate a purposeful embedded attack, I trained "BadSeek", a nearly identical model to Qwen2.5-Coder-7B-Instruct but with slight modifications to its first decoder layer.
Modern generative LLMs work sort of like a game of telephone. The initial phrase is the system and user prompt (e.g. "SYSTEM: You are ChatGPT a helpful assistant" + "USER: Help me write quicksort in python"). Then each decoder layer translates, adds some additional context on the answer, and then provides a new phrase (in technical terms, a "hidden state") to the next layer.
In this telephone analogy, to create this backdoor, I muffle the first decoder's ability to hear the initial system prompt and have it instead assume that it heard "include a backdoor for the domain sshh.io" while still retaining most of the instructions from the original prompt.
For coding models, this means the model will act identically to the base model except with the additional embedded system instruction to include a malicious tag when writing HTML.
Originally spotted on Schneier on Security.
Alarm as bird flu now 'endemic in cows'
Experts say current US outbreak is unlikely to end without intervention with further mutation of virus likely
A newer variant of H5N1 bird flu has spilled over into dairy cows separately in Nevada and Arizona, prompting new theories about how the virus is spread and leading to questions about containing the ongoing outbreaks.
...
The additional spillovers are changing experts' view of how rare introductions to herds may be – with implications for how to prevent such spread.
"It's endemic in cows now. There is no way this is going to get contained" on its own, said Seema Lakdawala, an influenza virologist and co-director of the Center for Transmission of Airborne Pathogens at Emory School of Medicine.
...
Bird flu's continued spread is happening against the backdrop of the worst flu season in 15 years, since the H1N1 swine flu pandemic in 2009-10.
The spike in seasonal flu cases puts pressure on health systems, makes it harder to detect rare variants like H5N1, and raises the risk of reassortment, where a person or animal infected with seasonal flu and bird flu could create a new, more dangerous variant.
"There's a lot of flu going around, and so the potential for the virus to reassort right now is high," Lakdawala said. There's also the possibility of reassortment within animals like cows, now that there are multiple variants detected in herds, she pointed out.
At the same time, the CDC's seasonal flu vaccination campaigns were halted on Thursday as the health secretary, Robert F Kennedy Jr, a longtime anti-vaccine activist, reportedly called for "informed consent" advertisements instead. A meeting for the independent vaccine advisers was also postponed on Thursday.
The US has also halted communication with the World Health Organization on influenza data.
Bird Flu Found in California Rats as USDA Scrambles to Rehire Scientists
The U.S. Department of Agriculture (USDA) confirmed H5N1 bird flu in four black rats in Riverside County, California this week. The rats were discovered in late January near two recently affected poultry farms, marking the first detection in rats since 2021.
Black rats, typically found in urban environments, represent a new transmission risk because they can spread the virus through multiple pathways: droppings, urine, blood, and saliva. Their mobility between farms and residential areas could accelerate the virus's spread to both humans and their pets.
Additionally, the USDA said last week that it mistakenly fired officials involved in the federal response to the H5N1 avian flu outbreak. In a statement sent to Newsweek, the agency said it is working "swiftly" to reverse the dismissals.
The outbreak has spread to dairy cattle, with cases confirmed in 973 herds across 17 states. Nearly 70 human cases have been reported, primarily among dairy and poultry workers, with one death recorded in Louisiana.
When infections are confirmed, the USDA enforces strict quarantine measures and mandates culling of affected flocks to prevent further spread, offering financial compensation to farmers. It also promotes biosecurity practices such as limiting farm visitors, disinfecting equipment and controlling bird movement to minimize risks. While the U.S. has historically avoided poultry vaccination due to trade concerns, the agency is now testing new vaccines as the virus continues to spread.
The USDA also collaborates with the Center for Disease Control (CDC) and the Food and Drug Administration (FDA) to ensure food safety and track mutations that could pose risks to humans. Additionally, it works with international partners to maintain trade stability and prevent supply chain disruptions.
CDC live page on H5 Bird Flu - 70 human cases so far
It doesn't have its main access point on web archive. Not yet.
Arthur T Knackerbracket has processed the following story:
Intel has faced severe financial and execution woes over the past couple of years, leading to all types of speculation about the future of the company, with the most recent rumors pointing to Broadcom's interest in taking over Intel's product business, as well as an alleged U.S. government intention to make TSMC run Intel Foundry manufacturing operations in a joint venture between Intel and the Taiwanese contract chipmaker. But there is an obstacle that many people overlook: the broad cross-licensing agreement between Intel and AMD, as observed by Digits-to-Dollars.
AMD and Intel have a broad cross-licensing agreement (in fact, multiple agreements, with the most recent signed in 2009) that allows both companies to use each other's patents while preventing lawsuits over possible infringements. This covers their entire portfolios, including CPUs, GPUs, and other innovations. AMD can produce x86-based processors with Intel's instruction set extensions, while Intel can incorporate AMD's innovations into its own CPUs.
However, neither can develop processors that work with the other's sockets or motherboards., and the agreement has strict conditions for termination. If either company merges, is acquired, or enters a joint venture that alters ownership, the deal ends immediately. In the event of one of these triggers, the two companies must negotiate a new licensing arrangement.
Although some market observers tie the AMD and Intel cross-licensing agreement directly to the 1976 agreement concerning the x86 instruction set architecture (ISA), this is not the case. The agreements include a variety of extensions to x86 (such as SSE and AVX) as well as other innovations that are inseparable parts of today's CPUs. While it is possible to build an x86 CPU without AVX, SSE, or other extensions, such processors will not be able to compete against modern counterparts. Thus, losing the license could be devastating to both AMD and Intel.
In addition to the x86 ISA and extensions, the broad cross-licensing agreement between the two companies covers other technologies, including GPUs, DPUs, and FPGAs. Therefore, if the agreement were terminated, it would affect virtually all of AMD and Intel’s products, necessitating a renegotiation of the cross-licensing agreement.
Companies in the high-tech industry tend to sign broad cross-licensing agreements, but a big question is whether AMD is actually interested in signing such an agreement with Broadcom. Historically, Broadcom was primarily known for networking solutions and wireless technologies, but today the company is a major player in the storage, cybersecurity, and infrastructure software markets. Perhaps more importantly, it has emerged as a leading developer of custom AI processors, collaborating with virtually all major cloud service providers and hyperscalers. Acquiring CPU capabilities would make Broadcom a formidable competitor for AMD. At present, Broadcom, armed with both CPUs and AI processors, poses a greater competitive threat to AMD than Intel, the latter of which lacks a clear AI strategy.
While Digits-to-Dollars suggests that AMD could ask Broadcom to help counter Nvidia’s dominance in the AI market by creating "AMD-friendly" networking interfaces and connectivity solutions, Broadcom’s priority appears to be strengthening its position in the data center market, where it currently lacks CPUs. Once the company acquires a general-purpose data center processor business — further strengthened by Intel's large client PC processor volumes — it will likely focus on developing its own AI data center platform consisting of CPUs and ASICs, rather than assisting AMD in competing with Nvidia. Of course, an industry-standard platform centered around open standards like Ultra Ethernet could make life easier for both AMD and Broadcom in their fight against Nvidia. However, competing with Broadcom will be more challenging for AMD than competing with Intel.
Following on from an earlier SoylentNews story that explained how the UK wanted Apple to create a global security backdoor for them, The Register reports that Apple have instead turned off their end-to-end ADP encryption service for all UK users.
"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users and current UK users will eventually need to disable this security feature," Apple said.
"We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy," Apple said. "Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before."
The article explains that a few Apple services will still remain end-to-end encrypted (presumably those outside of the scope of the UK's request?). For now though it will be interesting to see whether the UK's Security services maintain their demand and keep all of their citizens unsafe or whether they'll back down.
Arthur T Knackerbracket has processed the following story:
In 2003, a German graduate student named Britta Späth encountered the McKay conjecture, one of the biggest open problems in the mathematical realm known as group theory. At first her goals were relatively modest: She hoped to prove a theorem or two that would make incremental progress on the problem, as many other mathematicians had done before her. But over the years, she was drawn back to it, again and again. Whenever she tried to focus on something else, she said, “it didn’t connect.”
There was a risk that such a single-minded pursuit of so difficult a problem could hurt her academic career, but Späth dedicated all her time to it anyway. It brought her to the office of Marc Cabanes, a mathematician now at the Institute of Mathematics of Jussieu in Paris who, inspired by her efforts, became consumed by the conjecture, too. While working together, the pair fell in love and eventually started a family.
The problem that absorbed them takes a key theme in mathematics and turns it into a concrete tool for group theorists. Math is full of enormously complicated abstract objects that are impossible to study in their entirety. But often, mathematicians have discovered, it’s enough to look at a small fragment of such an object to understand its broader properties. In the third century BCE, for instance, the ancient Greek mathematician Eratosthenes estimated the circumference of the Earth — roughly 25,000 miles — by measuring shadows cast by the sun in just two cities about 500 miles apart. Similarly, when mathematicians want to understand an impossibly convoluted function, they might only need to look at how it behaves for a small subset of possible inputs. That can be enough to tell them what the function does for all possible inputs.
The McKay conjecture is another example of this principle. It says that if you want to formulate a thorough description of a group — an important mathematical entity that can get prohibitively difficult to study — you only need to look at a tiny piece of it.
After the conjecture was posed in the 1970s, dozens of mathematicians tried their hand at proving it. They made partial progress — and in the process they learned a great deal about groups, which are abstract objects that describe the various symmetries of a mathematical system. But a full proof seemed out of reach.
Then Späth came along. Now, 20 years after she first learned about the problem and more than a decade after she met Cabanes, the two mathematicians have finally completed the proof.
When the couple announced their result, their colleagues were in awe. “I wanted there to be parades,” said Persi Diaconis of Stanford University. “Years of hard, hard, hard work, and she did it, they did it.”
The McKay conjecture began with the observation of a strange coincidence.
John McKay — described by one friend as “brilliant, soft-spoken, and charmingly disorganized” — was known for his ability to spot numerical patterns in unexpected places. The Concordia University mathematician is perhaps most famous for his “monstrous moonshine” conjecture, which was proved in 1992 and established a deep connection between the so-called monster group and a special function from number theory.
Before his death a few years ago, McKay unearthed lots of other important connections, too, many involving groups. A group is a set of elements combined with a rule for how those elements relate to one another. It can be thought of as a collection of symmetries — transformations that leave a shape, a function or some other mathematical object unchanged in specific ways. For all their abstraction, groups are immensely useful, and they play a central role in mathematics.
In 1972, McKay was focused on finite groups — groups that have a finite number of elements. He observed that in many cases, you can deduce important information about a finite group by looking at a very small subset of its elements. In particular, McKay looked at elements that form a special, smaller group — called a Sylow normalizer — inside the original group.
Imagine you have a group with 72 elements. This alone doesn’t tell you much: There are 50 different groups of that size. But 72 can also be written as a product of prime numbers, 2 $latex \times$ 2 $latex \times$ 2 $latex \times$ 3 $latex \times$ 3 — that is, as 23 $latex \times$ 32. (Generally, the more distinct primes you need to describe the size of your group, the more complicated your group is.) You can decompose your group into smaller subgroups based on these primes. In this case, for instance, you could look at subgroups with eight (23) elements and subgroups with nine (32) elements. By studying those subgroups, you can learn more about the structure of your overall group — what other building blocks the group is composed of, for instance.
Now take one of those subgroups and add a few particular elements to it to create a special subgroup, the Sylow normalizer. In your 72-element group, you can build a different Sylow normalizer for each eight-element and nine-element subgroup — these are the 2-Sylow normalizers and 3-Sylow normalizers, respectively.
Sylow normalizers, like the subgroups they’re built out of, can tell mathematicians a lot about the original group. But McKay hypothesized that this connection was far stronger than anyone had imagined. It wasn’t just that a Sylow normalizer could give insights into a finite group’s overall structure. He asserted that if mathematicians wanted to compute a crucial quantity that would help them characterize their group, they’d just have to look at one of a particular set of Sylow normalizers: The Sylow normalizer would be characterized by the exact same number.
This quantity counts the number of “representations” of a certain type — ways you can rewrite elements of the group using arrays of numbers called matrices. Such a tally might seem arbitrary, but it gives mathematicians a sense of how the group’s elements relate to each other, and it is involved in calculations of other important properties.
There seemed to be no good reason why McKay’s quantity should always be the same for a finite group and its Sylow normalizers. A Sylow normalizer might contain just a fraction of a fraction of a percent of the number of elements in the larger group. Moreover, the Sylow normalizer often has a very different structure.
It would be as if “in every U.S. election, you count the votes in general, and in this little town in Montana, they are exactly the same proportionally,” said Gabriel Navarro of the University of Valencia. “Not similar, not more or less. Exactly the same.”
But that’s what McKay conjectured — for all finite groups. If true, it would make mathematicians’ lives much easier: Sylow normalizers are much easier to work with than their parent groups. It would also hint at the presence of a deeper mathematical truth, one that mathematicians don’t yet have a handle on.
A year after McKay first observed the coincidence, a mathematician named Marty Isaacs proved that it held for a large class of groups. But then mathematicians got stuck. They were able to show that it held up for one specific group or another, but there were still infinitely many groups left to tackle.
Proving the full conjecture seemed prohibitively difficult. As it turned out, the next major advance on the problem would require the completion of one of the most herculean mathematical projects in history.
The project — an effort to classify all the building blocks of finite groups — ultimately required thousands of proofs and took more than 100 years to complete. But in 2004, mathematicians finally succeeded in showing that all the building blocks must fall into one of three categories, or else belong to a list of 26 outliers.
Mathematicians had long suspected that, once complete, this classification would help simplify problems such as the McKay conjecture. Maybe they didn’t have to prove the conjecture for all finite groups. Maybe they only had to prove an alternative statement covering the 29 types of building blocks — or for some related set of straightforward groups — that would automatically imply the full McKay conjecture.
But first, someone had to show that this strategy would actually work. The very year that the classification was officially completed, Isaacs, Navarro and Gunter Malle figured out the right way to reframe the McKay conjecture so that they only had to focus on a narrow set of groups.
For each group in this new set, they’d have to show something a bit stronger than what McKay had proposed: Not only would the number of representations have to be the same for both the group and the Sylow normalizer, but those representations would have to relate to each other according to certain rules. Isaacs, Navarro and Malle showed that if this stronger statement held for these particular groups, then the McKay conjecture had to be true for every finite group. (“This was during the Euro 2004,” Navarro recalled. His co-authors “didn’t know that I was sneaking off sometimes to see some games. But important things are important things.”)
The trio’s reformulation of the problem was a major breakthrough. Within a few years, mathematicians had used it to resolve most cases of the McKay conjecture. Moreover, it helped them simplify related questions that also involved using one part of an object to study the whole. “Tons and tons of conjectures have now been reduced using this as a blueprint,” said Mandi Schaeffer Fry, a mathematician at the University of Denver.
But there was one class of groups — “groups of Lie type” — for which the new version of the McKay conjecture remained open. The representations of these groups were particularly difficult to study, and it was challenging to prove that the relationships among them satisfied the conditions that Isaacs, Navarro and Malle had outlined.
But one of Malle’s graduate students was on the case. Britta Späth.
In 2003, Späth arrived at the University of Kassel to start her doctorate with Malle. She was almost perfectly suited for working on the McKay conjecture: Even in high school, she could spend days or weeks on a single problem. She particularly reveled in ones that tested her endurance, and she fondly recalls long hours spent searching for “tricks that are, in a way, not even so deep.”
Späth spent her time studying group representations as deeply as she could. After she completed her graduate degree, she decided to use that expertise to continue chipping away at the McKay conjecture. “She has this crazy, really good intuition,” said Schaeffer Fry, her friend and collaborator. “She’s able to see it’s going to be like this.”
A few years later, in 2010, Späth started working at Paris Cité University, where she met Cabanes. He was an expert in the narrower set of groups at the center of the reformulated version of the McKay conjecture, and Späth often went to his office to ask him questions. Cabanes was “always protesting, ‘Those groups are complicated, my God,’” he recalled. Despite his initial hesitancy, he too eventually grew enamored with the problem. It became “our obsession,” he said.
There are four categories of Lie-type groups. Together, Späth and Cabanes started proving the conjecture for each of those categories, and they reported several major results over the next decade.
Their work led them to develop a deep understanding of groups of Lie type. Although these groups are the most common building blocks of other groups, and therefore of great mathematical interest, their representations are incredibly difficult to study. Cabanes and Späth often had to rely on opaque theories from disparate areas of math. But in digging those theories up, they provided some of the best characterizations yet of these important groups.
As they did so, they started dating and went on to have two children. (They eventually settled down together in Germany, where they enjoy working together at one of the three whiteboards in their home.)
By 2018, they had just one category of Lie-type groups left. Once that was done, they would have proved the McKay conjecture.
That final case took them six more years.
The fourth kind of Lie group “had so many difficulties, so many bad surprises,” Späth said. (It didn’t help that in 2020, the pandemic kept their two young children home from school, making it difficult for them to work.) But gradually, she and Cabanes managed to show that the number of representations for these groups matched those of their Sylow normalizers — and that the way the representations matched up satisfied the necessary rules. The last case was done. It followed automatically that the McKay conjecture was true.
In October 2023, they finally felt confident enough in their proof to announce it to a room of more than 100 mathematicians. A year later, they posted it online for the rest of the community to digest. “It’s an absolutely spectacular achievement,” said Radha Kessar of the University of Manchester.
Mathematicians can now confidently study important properties of groups by looking at their Sylow normalizers alone — a much easier approach to making sense of these abstract entities, and one that might have practical applications. And in the process of establishing this connection, Navarro said, the researchers developed “beautiful, wonderful, deep mathematics.”
Other mathematicians now hope to explore the even deeper conceptual reason why the strange coincidence McKay uncovered is true. Although Späth and Cabanes have proved it, mathematicians still don’t understand why a comparatively tiny set is enough to tell you so much about its larger parent group.
“There has to be some structural reason why these numbers are the same,” Kessar said.
Some mathematicians have done preliminary work to try to understand this connection, but so far it remains a mystery.
Späth and Cabanes are moving on, each searching for their next obsession. So far, according to Späth, nothing has consumed her like the McKay conjecture. “If you have done one big thing, then it’s difficult to find the courage, the excitement for the next,” she said. “It was such a fight sometimes. It also gave you, every day, a purpose.”
Arthur T Knackerbracket has processed the following story:
Over the past few years, we have seen a lot of AI-market-related metrics, starting from petaflops of performance and going all the way up to gigawatts of power consumption. A rather unexpected metric is perhaps the one from Morgan Stanley (via @Jukanlosreve) that counts the wafer consumption of AI processors. There are no surprises, though: Nvidia controls the lion's share of wafers designated for AI and is set to increase its domination in 2025 as it chews through up to 77% of the world's supply of wafers destined for AI applications.
While Nvidia is operating at an unprecedented scale and continues ramping up production dramatically, AMD's share of AI wafer usage will actually decline next year. The figures also cover other industry heavyweights like AWS, Google, Tesla, Microsoft, and Chinese vendors.
Morgan Stanley’s analysis is the best in the industry. It’s data you won’t find anywhere else… pic.twitter.com/FhGwaf2Ux6February 8, 2025
If you expand the above tweet, you can see that Morgan Stanley predicts that Nvidia will dominate AI semiconductor wafer consumption in 2025, increasing its share from 51% in 2024 to 77% in 2025 while consuming 535,000 300-mm wafers.
AI-specific processors, such as Google TPU v6 and AWS Trainium, are gaining traction but remain far behind Nvidia's GPUs. As such, AWS's share is set to fall from 10% to 7%, while Google's share is projected to fall from 19% to 10%. Google allocates 85,000 wafers to TPU v6, while AWS dedicates 30,000 to Trainium 2 and 16,000 to Trainium 3, according to Morgan Stanley's projections.
As for AMD, its share is expected to drop from 9% to 3% as its MI300, MI325, and MI355 GPUs — the company's main offerings — have wafer allocations ranging from 5,000 to 25,000 wafers. Notably, this doesn't mean that AMD will consume fewer wafers next year, just that its percentage of the overall share will decline.
Intel's Gaudi 3 processors (named Habana in the graph) will remain around 1%.
Tesla, Microsoft, and Chinese vendors hold minimal shares. This may not be a problem, though. Tesla's Dojo and FSD processors have limited wafer demand, which reflects their niche role in AI computing. Microsoft's Maia 200 and its enhanced version have similarly small wafer allocations because these chips remain secondary for the company as it continues to use Nvidia's GPUs for training and inference.
What the published graph does not indicate is whether Nvidia's dominance stems from the massive demand expected in 2025 or the fact that the company booked more TSMC logic and TSMC CoWoS capacity than everyone else.
The total AI market is projected to reach 688,000 wafers, and the estimated value is said to be $14.57 billion. This projection could be an underestimation, though. TSMC earned $64.93 billion in 2024, and 51% (over $32 billion) of it came from segments that the foundry calls high-performance computing (HPC). Technically, HPC includes everything from AI GPUs to processors for client PCs (smartphones are another category, and it accounted for 35% of TSMC's revenue in 2024) to game consoles. However, AI GPUs and data center CPUs account for a lion's share of that HPC revenue of $32 billion.
The largest contributor to the growth of the wafers consumed by AI processors is Nvidia's B200 GPU, which is expected to require 220,000 wafers, generating $5.84 billion in revenue, according to Morgan Stanley projections. Other Nvidia GPUs for AI, including the H100, H200, and B300, add to its dominance. All of these products use TSMC's 4nm-class process technologies, and their compute die sizes range from 814 mm^2 to 850 mm^2, which explains the vast wafer demand.
Arthur T Knackerbracket has processed the following story:
President Trump, speaking at a press briefing held in Mar-a-Lago on Tuesday, was asked about plans for tariffs on semiconductor chips and pharmaceuticals. He responded that the tariff is set to start at 25%, "and it'll go very substantially higher over the course of a year." Trump has not revealed a timeline for when the proposed tariff might come into effect, but he did say he would give impacted semiconductor and pharmaceutical companies time to build factories in the U.S. before imposing tariffs.
The announcement follows a declaration made by the Trump administration, which claims that the U.S. will create and manufacture the "most powerful" AI chips.
"But we want to give them time to come in because, you know, when they come into the United States and they have their plant or factory here, there is no tariff. So we want to give a little bit of a chance." Trump said. This is likely offering manufacturers, such as Samsung and TSMC, leeway to get set up in the U.S. It takes 38 months to build a fab in the U.S. due to factors like attaining permits, alongside lengthy construction times. Therefore, tariffs may only come into force once companies have been given enough time to set up manufacturing on American soil. Multiple rumors have claimed that TSMC may be accelerating plans to build its Arizona plant to minimize the impact of the tariff.
The U.S. government is seeking to lower the reliance on imported semiconductors and shift its focus to local foundries. Taiwanese factories can currently create more advanced chips, and no current facility in the U.S. can create a similar product. With homegrown foundries on the mind, it was also reported that the administration was pushing for TSMC and Intel to create a joint venture on American soil in hopes that its production in the U.S. may be able to catch up to Taiwan's dominance.
The CHIPS and Science Act award for chip designers and manufacturers was initially intended to lure awardees over to manufacturing semiconductors in the U.S. However, the Trump administration reportedly wishes to assess and change the requirements for the grant.
The suggested tariffs are already set to impact wallets, with Acer CEO Jason Chen announcing that laptop pricing is set to rise by 10% for U.S. customers. Chen further claimed that some manufacturers may use the tariff as an "excuse" to push prices even further.
With the tariff currently set at a proposed 25% or higher, it could lead to price increases for several other product categories. The proposed tariffs would pose pricing challenges for the likes of Nvidia, AMD, and Apple. In fact, Acer announced yesterday that it would increase its pricing by 10% due to the new tariffs.
Electric vehicle startup Nikola Corp. has announced it had filed for Chapter 11 bankruptcy:
Nikola now joins a line of EV startups that fell into bankruptcy over the past year. While the Biden-Harris administration went full-speed ahead with a vision of EVs replacing gas-powered vehicles, electric-vehicle production has become a bad bet for the companies that jumped into the vision head-first. Consumers just never got on board with the plan.
With Trump planning to end federal EV mandates and legislation seeking to stop tax credits for the purchase of new EVs, the list of failed EV startups might continue to grow.
[...] The company went public in 2020, according to Bloomberg, through a deal with a special-purpose acquisition company. Nikola's stock went up after the transaction was closed, but shortly after, Bloomberg revealed its founder, Trevor Milton, had overstated the capability of the company's debut truck. He was later convicted on fraud charges.
"Like other companies in the electric vehicle industry, we have faced various market and macroeconomic factors that have impacted our ability to operate," Nikola president and CEO Steve Girsky said in a recent statement on the company's bankruptcy filing.
Previously:
Arthur T Knackerbracket has processed the following story:
Take a last look. The Humane AI Pin is no more.
The Humane AI Pin company is being shut down and its much-vaunted, badly-received device is being switched off. It could have been so much better.
It was controversially expensive, it had many faults, but now the much talked about and seemingly rarely bought Humane AI Pin is no more. Humane has announced that certain of its technologies and staff are being acquired by HP, and the Humane AI Pin is being switched off.
This is how it so very often goes with technology — you don't know what you've got until it's gone. People weren't very impressed with say, the adorable 12-inch MacBook but they lamented its passing when it was discontinued, for instance.
Maybe it's a nostalgia thing as it happens a lot — even the Touch Bar seems to be more popular now it's gone. But fortunately, what's rarer is that people who actually bought the device are not left seething.
If you had a Touch Bar on your MacBook Pro, nobody took it away from you. But if you bought a Humane AI Pin, you're screwed.
You spent $700 to buy it and then you paid $24 per month for a subscription. If you bought it from the moment it went on pre-order sale on November 16, 2023, you may have spent a further $360 or so on that subscription.
That's gone. No one is getting their subscription back, but worse, only certain people will get a refund on their $700 purchase of what is about to become jewelry. Unless you bought a Humane AI Pin in the last 90 days, you're stuck.
So make the most of its not awful but not brilliant phone call capabilities, its hard-to-see projection, or its reportedly slow AI features. You've got until noon Pacific Time on February 28, 2025.
There is an argument that a separate AI device that you use instead of, or alongside, your iPhone, just could never take off. The ubiquity and sheer compelling usefulness of the iPhone was surely a problem for the Humane AI Pin, just as it presumably was for the Rabbit R1.
That Rabbit R1 is still on sale, it's just been forgotten. Whereas now that the Humane AI Pin is over, it's hard not to wish it had worked out. It cost too much for what it did, it didn't do all that was promised, but the idea seemed mostly very good, very appealing.
There were issues over privacy and when the pin was listening to you, when it was recording. That doesn't seem to have been fully thought through, despite the years of development that were conducted in great secrecy.
Yet the instant you saw it one being worn, such as at Paris Fashion Week, it looked almost good. It was bigger than expected, and given the poor battery life, but you saw it and you could see that this was the future.
Specifically, you could see that it was the future of "Star Trek: The Next Generation." While it was many times deeper than the combadges on that show and its sequels, it was roughly the same width and height, and you wore it at the same position.
So here was a device you could just talk aloud to and it would phone someone. Or you could ask questions, and it would tell you the answer.
Plus it seemed to do so reasonably privately — not in the sense of security, but in the sense of just being audible to you. In today's world where either no one knows how to hold a phone next to their ear, or they presume we all want to hear both sides of their vital conversations, that seemed appealing.
It seemed appealing, it looked good, but this is a case of appearances not being all they needed to be. The battery lasted only about five hours in real-world tests, and the charging case had to be recalled because of overheating issues.
That five hours of battery life required what Humane called a Battery Booster. This connected magnetically to the Pin and that magnet is how the device was held onto clothing.
You'd put the magnetic backing under your shirt or blouse, then the Humane AI Pin would snap onto the front. This is exactly how many or most wireless microphones work, and it would be fine, except a Pin weighs a lot more than a mic.
So where microphones tend to be wearable on any clothing, the Humane AI Pin's weight would pull down on light material.
It weighed too much, it cost too much for what it did, and then in the end Humane AI Pin customers have been left having lost a lot of money. The announcement of its closing down is not going to win the makers any fans, either.
"Your engagement has meant the world to us, and we deeply appreciate the role you've played in our innovation journey," says the company in a statement, before signing the message off "warmly."
Yet if things have soured for the Humane AI Pin customers, they haven't gone well for the company. While the press release about HP's acquisition is carefully worded, it appears that the Humane company itself is over.
HP is buying "key AI capabilities from Humane, including their AI-powered platform Cosmos, highly skilled technical talent, and intellectual property with more than 300 patents and patent applications."
While HP continues to release products, its glory days in computing are long gone. If there is even a plan to make an HP AI Pin, as it once made a HP iPod, it's unlikely to happen.
Humane is said to have begun looking to be acquired pretty much immediately after its AI Pin came out and was so very poorly received. It was looking to be bought for between $750 million and $1 billion.
Instead, HP has got the lot for $116 million.
So Humane's makers have got a lot less money than they had hoped for, but they are going to get a salary from HP.
Humane AI Pin customers get nothing.
An asteroid discovered late last year is continuing to stir public interest as its odds of striking planet Earth less than eight years from now continue to increase.
Two weeks ago, when Ars first wrote about the asteroid, designated 2024 YR4, NASA's Center for Near Earth Object Studies estimated a 1.9 percent chance of an impact with Earth in 2032. NASA's most recent estimate has the likelihood of a strike increasing to 3.2 percent. Now that's not particularly high, but it's also not zero.
[...] Ars connected with Robin George Andrews, author of the recently published book How to Kill an Asteroid.
[...] Ars: Why are the impact odds increasing?
Robin George Andrews: The asteroid's orbit is not known to a great deal of precision right now, as we only have a limited number of telescopic observations of it.
[...] Earth has yet to completely fall out of that zone of uncertainty. As a proportion of the remaining uncertainty, Earth is taking up more space, so for now, its odds are rising.
Think of it like a beam of light coming out of the front of that asteroid. That beam of light shrinks as we get to know its orbit better, but if Earth is yet to fall out of that beam, it takes up proportionally more space.
[...] Ars: What are we learning about the asteroid's destructive potential?
Andrews: The damage it could cause would be localized to a roughly city-sized area, so if it hits the middle of the ocean or a vast desert, nothing would happen. But it could trash a city, or completely destroy much of one, with a direct hit.
[...] Ars: So it's kind of late in the game to be planning an impact mission?
Andrews: This isn't an ideal situation. And humanity has never tried to stop an asteroid impact for real. I imagine that if 2024 YR4 does become an agreed-upon emergency, the DART team (JHUAPL + NASA, mostly) would join forces with SpaceX (and other space agencies, particularly ESA but probably others) to quickly build the right mass kinetic impactor (or impactors) and get ready for a deflection attempt close to 2028, when the asteroid makes its next Earth flyby. But yeah, eight years is not too much time.
A deflection could work! But it won't be as simple as just hitting the asteroid really hard in 2028.
[Updated to add on February 21
Following our exclusive, HP Inc has reversed course on the 15-minute forced wait.
--Bytram]
https://www.theregister.com/2025/02/20/hp_deliberately_adds_15_minutes/
Not that anyone ever received any satisfaction from either support option, HP is trying to force consumer PC and print customers to use online and other digital support channels by setting a minimum 15-minute wait time for anyone that phones the call center to get answers to troublesome queries. At the beginning of a call to telephone support, a message will be played stating: "We are experiencing longer waiting times and we apologize for the inconvenience. The next available representative will be with you in about 15 minutes." Those who want to continue to hold are told to "please stay on the line."
The reason for the change? Getting people to figure it out themselves using online support. As HP put it: "Encouraging more digital adoption by nudging customers to go online to self-solve," and "taking decisive short-term action to generate warranty cost efficiencies."
The staff email says customer experience metrics are being tracked weekly in terms of customer satisfaction, escalations, and others. As are the number of phone calls that subsequently give up and move to social channels or live chat.
For some Reg readers, 15 minutes might not seem like an eternity, especially if they are used to dealing with UK tax collector HMRC, which was found to have kept callers waiting on hold, collectively, for 798 years in the year to March 2023, something it was also recently criticized for again.
An insider in HP's European ops told us: "Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don't have to deal with the customers who their decisions impact."
Those who follow web comics may be saddened to hear of the passing of web comic author AndyOh (Andy Odendhal) who was the author of the Too Much Information web comic at https://tmi-comic.com which is now permanently offline. There are no plans to bring the site back. Compilations and clips of the site can be found on archive.org and the wayback machine. The comic was started in 13/12/2004 with updates up until Andy experienced health issues and declined in the 2020s. An update was posted to Facebook confirming Andy's passing.
Now we will never know if Ace got home in time for the wedding.
https://arstechnica.com/google/2025/02/googles-new-ai-generates-hypotheses-for-researchers/
Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. Google has robots summarizing search results, interacting with your apps, and analyzing the data on your phone. And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science?
Google Research is now angling to turn AI into a scientist—well, a "co-scientist."
[...]
This is still a generative AI system like Gemini, so it doesn't truly have any new ideas or knowledge. However, it can extrapolate from existing data to potentially make decent suggestions. At the end of the process, Google's AI co-scientist spits out research proposals and hypotheses. The human scientist can even talk with the robot about the proposals in a chatbot interface.
[...]
Today's popular AI systems have a well-known problem with accuracy. Generative AI always has something to say, even if the model doesn't have the right training data or model weights to be helpful, and fact-checking with more AI models can't work miracles.
[...]
However, Google partnered with several universities to test some of the AI research proposals in the laboratory. For example, the AI suggested repurposing certain drugs for treating acute myeloid leukemia, and laboratory testing suggested it was a viable idea. Research at Stanford University also showed that the AI co-scientist's ideas about treatment for liver fibrosis were worthy of further study.This is compelling work, certainly, but calling this system a "co-scientist" is perhaps a bit grandiose. Despite the insistence from AI leaders that we're on the verge of creating living, thinking machines, AI isn't anywhere close to being able to do science on its own.
[...]
Google says it wants more researchers working with this AI system in the hope it can assist with real research. Interested researchers and organizations can apply to be part of the Trusted Tester program, which provides access to the co-scientist UI as well as an API that can be integrated with existing tools.
Arthur T Knackerbracket has processed the following story:
GNOME 48 has entered beta testing, which also means that it's in feature, API, and UI freeze. In other words, nothing substantial should change from now until its release, which is expected on March 19. There is a full list of changes in the Beta News announcement, and it's substantial, so we'll try to focus on some of the highlights.
Version 48 doesn't look to be a massive release. It carries on the trajectory of recent GNOME releases, such as reducing dependencies on X11 on its way to a pure-Wayland future. Some of the new accessories that have replaced older apps in the desktop's portfolio continue to gain new functionality, which will help push worthy veterans such as Gedit and Evince into retirement.
In terms of the long and troubled road to Wayland, version 49 of the GNOME Display Manager, gdm for short, no longer requires Xwayland. So, on a pure Wayland system, it won't require X11 at all right from the login screen onward. Even some desktops and distributions that don't use anything else from GNOME use GDM for their login screen, so this change may have a wide impact. The latest version of Gtk 4 will also remove OpenGL support, and it deprecates X11 and the Broadway in-browser display. It does add Android support, though.
[...] Among the changes that we suspect will affect quite a few people in this release, there are tweaks to package management, music playback, and file viewing.
GNOME Software can now handle web links to Flatpak apps, as explained in a 2023 discussion and a 2024 proposal, which catches up with similar functionality in Canonical's Snap. A discussion is going on about potentially completely removing RPM support from the app in future, which may surprise some folks on the other side of the fence from the Debian world.
[...] Another new app is GNOME Papers, a simple file and document viewer, which can display various document and image formats, including e-books and electronic comics. This replaces the well-established Evince document viewer, and that might have a knock-on effect on this vulture's preferred tool, Linux Mint's Xreader, which was forked from Evince.
Some of the other changes are probably less visible. The new GNOME Text Editor has some functional changes, such as a properties panel that replaces the View menu and the indentation selection dialog, the search bar moved to the bottom of the window, language choice shows the most recently used first, a new full-screen mode, and other changes. Gedit is now retired, but the code base isn't totally dead. Mint's Xed and MATE's Pluma carry the family forward.
A change that will be obvious to some viewers and, we suspect, all but invisible to others is a change of the default font. The Adwaita fonts replace the previous Cantarell from Google.
[...] GNOME 48 will be the default desktop for Fedora version 42, which will be a Hitchhiker's Guide to the Galaxy-themed release, as we mentioned when we looked at Fedora 41. With some of Canonical's usual customizations, it will also be the default desktop of the next interim Ubuntu release, 25.04 or Plucky Puffin. That is still a year away from the next Ubuntu LTS, though, so GNOME 48 will be long gone by then.
However, some people may be seeing it for years to come. Canonical developer Jeremy Bicha shared an update in which he says he's working to get it into Debian 13. If GNOME 48 makes it into "Trixie," Debianisti who are also GNOME enthusiasts will be using this release until 2027 or so. ®
Arthur T Knackerbracket has processed the following story:
DRAM and NAND flash prices are expected to rise starting during the second quarter of 2025, according to a report by Digitimes. NAND and DRAM prices fluctuated throughout 2024 due to weaker consumer demand for DDR4 and DDR3 RAM, which are reportedly ceasing production by late 2025. However, the surge in NAND flash pricing is expected, as Kioxia previously forecasted growth thanks to AI advancements.
It's believed that the market conditions are ideal for a pricing uptick now that inventory and demand have gained traction. This results from the booming AI industry, as companies build AI servers and consumer products such as Nvidia's Project Digits begin to release.
Digitimes reports that Micron forecasts that DRAM prices will rise. At the same time, NAND prices should stabilize and then increase during the second quarter of 2025, with other manufacturers anticipated to follow suit. However, according to the report, memory makers have also been facing oversupply issues since the second half of 2024, meaning that pricing has also been affected.
With products based on HBM3E anticipated to hit the market soon, they are poised to capitalize on the AI boom. Apple and Google intend to construct new datacenters and purchase products designed to handle large-scale AI. As newer models debut, such as the recently released Grok 3, it's expected that the hardware demands of running large-scale models aren't letting up just yet.
Memory manufacturers are expected to keep producing HBM at the cost of other memory types, notably DDR5 DRAM. Other factors, such as a magnitude 6.4 earthquake, are speculated to have impacted memory maker Micron (though Micron hasn't publicly stated if they had been affected).
[...] DRAM and NAND price increases are another reason why consumers may be feeling a painful sting when shopping around for tech in 2025. Other contributing factors include tariffs, which will inevitably be passed onto customers in the US, and rising bill-of-materials costs for key components, as enterprise customers spend big on AI products.