
posted by janrinok on Sunday February 23 2025, @10:14PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Over the past few years, we have seen a lot of AI-market-related metrics, starting from petaflops of performance and going all the way up to gigawatts of power consumption. A rather unexpected metric is perhaps the one from Morgan Stanley (via @Jukanlosreve) that counts the wafer consumption of AI processors. There are no surprises, though: Nvidia controls the lion's share of wafers designated for AI and is set to increase its domination in 2025 as it chews through up to 77% of the world's supply of wafers destined for AI applications. 

While Nvidia is operating at an unprecedented scale and continues ramping up production dramatically, AMD's share of AI wafer usage will actually decline next year. The figures also cover other industry heavyweights like AWS, Google, Tesla, Microsoft, and Chinese vendors.

Morgan Stanley’s analysis is the best in the industry. It’s data you won’t find anywhere else… pic.twitter.com/FhGwaf2Ux6 (February 8, 2025)

If you expand the above tweet, you can see that Morgan Stanley predicts that  Nvidia will dominate AI semiconductor wafer consumption in 2025, increasing its share from 51% in 2024 to 77% in 2025 while consuming 535,000 300-mm wafers. 

AI-specific processors, such as Google TPU v6 and AWS Trainium, are gaining traction but remain far behind Nvidia's GPUs. As such, AWS's share is set to fall from 10% to 7%, while Google's share is projected to fall from 19% to 10%. Google allocates 85,000 wafers to TPU v6, while AWS dedicates 30,000 to Trainium 2 and 16,000 to Trainium 3, according to Morgan Stanley's projections. 

As for AMD, its share is expected to drop from 9% to 3% as its MI300, MI325, and MI355 GPUs — the company's main offerings —  have wafer allocations ranging from 5,000 to 25,000 wafers. Notably, this doesn't mean that AMD will consume fewer wafers next year, just that its percentage of the overall share will decline.  

Intel's Gaudi 3 processors (named Habana in the graph) will remain around 1%. 

Tesla, Microsoft, and Chinese vendors hold minimal shares. This may not be a problem, though. Tesla's Dojo and FSD processors have limited wafer demand, which reflects their niche role in AI computing. Microsoft's Maia 200 and its enhanced version have similarly small wafer allocations because these chips remain secondary for the company as it continues to use Nvidia's GPUs for training and inference.

What the published graph does not indicate is whether Nvidia's dominance stems from the massive demand expected in 2025 or the fact that the company booked more TSMC logic and TSMC CoWoS capacity than everyone else. 

The total AI market is projected to reach 688,000 wafers, with an estimated value of $14.57 billion. This projection could be an underestimation, though. TSMC earned $64.93 billion in 2024, and 51% of it (over $32 billion) came from segments that the foundry calls high-performance computing (HPC). Technically, HPC includes everything from AI GPUs to processors for client PCs to game consoles (smartphones are a separate category, which accounted for 35% of TSMC's revenue in 2024). However, AI GPUs and data center CPUs account for the lion's share of that $32 billion in HPC revenue.

The largest contributor to the growth of the wafers consumed by AI processors is Nvidia's B200 GPU, which is expected to require 220,000 wafers, generating $5.84 billion in revenue, according to Morgan Stanley projections. Other Nvidia GPUs for AI, including the H100, H200, and B300, add to its dominance. All of these products use TSMC's 4nm-class process technologies, and their compute die sizes range from 814 mm^2 to 850 mm^2, which explains the vast wafer demand.
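
For a rough sense of why such die sizes translate into huge wafer counts, here is a minimal back-of-the-envelope sketch in Python using the standard gross-die-per-wafer approximation. The 814-850 mm^2 die areas are from the article; everything else (ignoring yield, scribe lines, and edge exclusion) is a simplifying assumption.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross die-per-wafer approximation (ignores yield, scribe lines, edge exclusion)."""
    d = wafer_diameter_mm
    gross = math.pi * (d / 2) ** 2 / die_area_mm2          # wafer area / die area
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)  # dies lost around the wafer edge
    return int(gross - edge_loss)

for area_mm2 in (814, 850):
    print(f"{area_mm2} mm^2 die: ~{dies_per_wafer(area_mm2)} candidate dies per 300 mm wafer")

# Roughly 60-63 candidate dies per wafer before yield losses, which is why
# reticle-sized AI GPUs chew through wafer supply so quickly.
```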


Original Submission

posted by janrinok on Sunday February 23 2025, @05:37PM   Printer-friendly
from the tariffs-impact-US-*consumers*;-foreign-manufacturers-rake-in-extra-profits--:/ dept.

Arthur T Knackerbracket has processed the following story:

President Trump, speaking at a press briefing held in Mar-a-Lago on Tuesday, was asked about plans for tariffs on semiconductor chips and pharmaceuticals. He responded that the tariff is set to start at 25%, "and it'll go very substantially higher over the course of a year." Trump has not revealed a timeline for when the proposed tariff might come into effect, but he did say he would give impacted semiconductor and pharmaceutical companies time to build factories in the U.S. before imposing tariffs.

The announcement follows a declaration made by the Trump administration, which claims that the U.S. will create and manufacture the "most powerful" AI chips.

"But we want to give them time to come in because, you know, when they come into the United States and they have their plant or factory here, there is no tariff. So we want to give a little bit of a chance." Trump said. This is likely offering manufacturers, such as Samsung and TSMC, leeway to get set up in the U.S. It takes 38 months to build a fab in the U.S. due to factors like attaining permits, alongside lengthy construction times. Therefore, tariffs may only come into force once companies have been given enough time to set up manufacturing on American soil. Multiple rumors have claimed that TSMC may be accelerating plans to build its Arizona plant to minimize the impact of the tariff.

The U.S. government is seeking to lower reliance on imported semiconductors and shift its focus to local foundries. Taiwanese factories currently produce more advanced chips, and no facility in the U.S. can make a comparable product. With homegrown foundries in mind, it was also reported that the administration is pushing for TSMC and Intel to create a joint venture on American soil, in the hope that U.S. production can catch up to Taiwan's dominance.

The CHIPS and Science Act award for chip designers and manufacturers was initially intended to lure awardees over to manufacturing semiconductors in the U.S. However, the Trump administration reportedly wishes to assess and change the requirements for the grant.

The suggested tariffs are already set to impact wallets, with Acer CEO Jason Chen announcing that laptop pricing is set to rise by 10% for U.S. customers. Chen further claimed that some manufacturers may use the tariff as an "excuse" to push prices even further.

With the tariff currently proposed at 25% or higher, price increases could follow for several other product categories, and the proposed tariffs would pose pricing challenges for the likes of Nvidia, AMD, and Apple.


Original Submission

posted by janrinok on Sunday February 23 2025, @12:55PM   Printer-friendly
from the tow-job dept.

Electric vehicle startup Nikola Corp. has announced that it has filed for Chapter 11 bankruptcy:

Nikola now joins a line of EV startups that fell into bankruptcy over the past year. While the Biden-Harris administration went full-speed ahead with a vision of EVs replacing gas-powered vehicles, electric-vehicle production has become a bad bet for the companies that jumped into the vision head-first. Consumers just never got on board with the plan.

With Trump planning to end federal EV mandates and legislation seeking to stop tax credits for the purchase of new EVs, the list of failed EV startups might continue to grow.

[...] The company went public in 2020, according to Bloomberg, through a deal with a special-purpose acquisition company. Nikola's stock went up after the transaction was closed, but shortly after, Bloomberg revealed its founder, Trevor Milton, had overstated the capability of the company's debut truck. He was later convicted on fraud charges.

"Like other companies in the electric vehicle industry, we have faced various market and macroeconomic factors that have impacted our ability to operate," Nikola president and CEO Steve Girsky said in a recent statement on the company's bankruptcy filing.



Original Submission

posted by janrinok on Sunday February 23 2025, @08:05AM   Printer-friendly
from the you-paid-for-it-but-you-never-owned-it dept.

Arthur T Knackerbracket has processed the following story:

Take a last look. The Humane AI Pin is no more.

The Humane AI Pin company is being shut down and its much-vaunted, badly-received device is being switched off. It could have been so much better.

It was controversially expensive and it had many faults, but now the much-talked-about and seemingly rarely bought Humane AI Pin is no more. Humane has announced that certain of its technologies and staff are being acquired by HP, and the device is being switched off.

This is how it so very often goes with technology — you don't know what you've got until it's gone. People weren't very impressed with, say, the adorable 12-inch MacBook, but they lamented its passing when it was discontinued.

Maybe it's a nostalgia thing, as it happens a lot — even the Touch Bar seems to be more popular now that it's gone. Fortunately, what's rarer is a shutdown that leaves the people who actually bought the device seething.

If you had a Touch Bar on your MacBook Pro, nobody took it away from you. But if you bought a Humane AI Pin, you're screwed.

You spent $700 to buy it and then you paid $24 per month for a subscription. If you bought it from the moment it went on pre-order sale on November 16, 2023, you may have spent a further $360 or so on that subscription.
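
As a quick sanity check of that figure, assuming the subscription ran from the November 16, 2023 pre-order date through the February 28, 2025 shutdown, a short Python sketch lands at roughly the same number:

```python
from datetime import date

MONTHLY_FEE_USD = 24          # subscription price from the article
HARDWARE_USD = 700            # purchase price from the article
start = date(2023, 11, 16)    # pre-order launch date
end = date(2025, 2, 28)       # service shutdown date

months = (end.year - start.year) * 12 + (end.month - start.month)  # whole months elapsed
subscription_total = months * MONTHLY_FEE_USD
print(f"~{months} months x ${MONTHLY_FEE_USD} = ${subscription_total} in subscription fees")
print(f"total outlay: ~${subscription_total + HARDWARE_USD}")
# ~15 months x $24 = $360 in subscription fees, on top of the $700 hardware price.
```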

That's gone. No one is getting their subscription back, but worse, only certain people will get a refund on their $700 purchase of what is about to become jewelry. Unless you bought a Humane AI Pin in the last 90 days, you're stuck.

So make the most of its not awful but not brilliant phone call capabilities, its hard-to-see projection, or its reportedly slow AI features. You've got until noon Pacific Time on February 28, 2025.

There is an argument that a separate AI device that you use instead of, or alongside, your iPhone, just could never take off. The ubiquity and sheer compelling usefulness of the iPhone was surely a problem for the Humane AI Pin, just as it presumably was for the Rabbit R1.

That Rabbit R1 is still on sale, it's just been forgotten. Whereas now that the Humane AI Pin is over, it's hard not to wish it had worked out. It cost too much for what it did, it didn't do all that was promised, but the idea seemed mostly very good, very appealing.

There were issues over privacy, over when the pin was listening to you and when it was recording. That doesn't seem to have been fully thought through, despite the years of development conducted in great secrecy.

Yet the instant you saw one being worn, such as at Paris Fashion Week, it looked almost good. It was bigger than expected, and the battery life was poor, but you saw it and you could see that this was the future.

Specifically, you could see that it was the future of "Star Trek: The Next Generation." While it was many times deeper than the combadges on that show and its sequels, it was roughly the same width and height, and you wore it at the same position.

So here was a device you could just talk aloud to and it would phone someone. Or you could ask questions, and it would tell you the answer.

Plus it seemed to do so reasonably privately — not in the sense of security, but in the sense of just being audible to you. In today's world where either no one knows how to hold a phone next to their ear, or they presume we all want to hear both sides of their vital conversations, that seemed appealing.

It seemed appealing, it looked good, but this is a case of appearances not being all they needed to be. The battery lasted only about five hours in real-world tests, and the charging case had to be recalled because of overheating issues.

That five hours of battery life required what Humane called a Battery Booster. This connected magnetically to the Pin and that magnet is how the device was held onto clothing.

You'd put the magnetic backing under your shirt or blouse, then the Humane AI Pin would snap onto the front. This is exactly how many or most wireless microphones work, and it would be fine, except a Pin weighs a lot more than a mic.

So where microphones tend to be wearable on any clothing, the Humane AI Pin's weight would pull down on light material.

It weighed too much, it cost too much for what it did, and in the end Humane AI Pin customers have lost a lot of money. The announcement of its closing down is not going to win the makers any fans, either.

"Your engagement has meant the world to us, and we deeply appreciate the role you've played in our innovation journey," says the company in a statement, before signing the message off "warmly."

Yet if things have soured for the Humane AI Pin customers, they haven't gone well for the company. While the press release about HP's acquisition is carefully worded, it appears that the Humane company itself is over.

HP is buying "key AI capabilities from Humane, including their AI-powered platform Cosmos, highly skilled technical talent, and intellectual property with more than 300 patents and patent applications."

While HP continues to release products, its glory days in computing are long gone. If there is even a plan to make an HP AI Pin, as it once made an HP iPod, it's unlikely to happen.

Humane is said to have begun looking to be acquired pretty much immediately after its AI Pin came out and was so very poorly received. It was looking to be bought for between $750 million and $1 billion.

Instead, HP has got the lot for $116 million.

So Humane's makers have got a lot less money than they had hoped for, but they are going to get a salary from HP.

Humane AI Pin customers get nothing.


Original Submission

posted by janrinok on Sunday February 23 2025, @03:24AM   Printer-friendly
from the complaints-department-4.2-light-years-> dept.

https://arstechnica.com/space/2025/02/the-odds-of-a-city-killer-asteroid-impact-in-2032-keep-rising-should-we-be-worried/

An asteroid discovered late last year is continuing to stir public interest as its odds of striking planet Earth less than eight years from now continue to increase.

Two weeks ago, when Ars first wrote about the asteroid, designated 2024 YR4, NASA's Center for Near Earth Object Studies estimated a 1.9 percent chance of an impact with Earth in 2032. NASA's most recent estimate has the likelihood of a strike increasing to 3.2 percent. Now that's not particularly high, but it's also not zero.

[...] Ars connected with Robin George Andrews, author of the recently published book How to Kill an Asteroid.

[...] Ars: Why are the impact odds increasing?

Robin George Andrews: The asteroid's orbit is not known to a great deal of precision right now, as we only have a limited number of telescopic observations of it.

[...] Earth has yet to completely fall out of that zone of uncertainty. As a proportion of the remaining uncertainty, Earth is taking up more space, so for now, its odds are rising.

Think of it like a beam of light coming out of the front of that asteroid. That beam of light shrinks as we get to know its orbit better, but if Earth is yet to fall out of that beam, it takes up proportionally more space.
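
As a toy illustration of that analogy (with made-up numbers, not the real 2024 YR4 ephemeris), the impact probability is roughly the fraction of the shrinking uncertainty region that Earth occupies:

```python
# Toy model of why the impact odds rise even as the uncertainty shrinks,
# as long as Earth remains inside the shrinking "beam".
EARTH_CROSS_SECTION = 1.0                       # arbitrary units

for uncertainty_region in (60.0, 40.0, 30.0):   # region shrinks with more observations
    p_impact = EARTH_CROSS_SECTION / uncertainty_region
    print(f"region {uncertainty_region:5.1f} -> impact odds ~{p_impact:.1%}")

# region  60.0 -> impact odds ~1.7%
# region  40.0 -> impact odds ~2.5%
# region  30.0 -> impact odds ~3.3%
# Once the region shrinks enough to exclude Earth entirely, the odds drop to zero.
```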

[...] Ars: What are we learning about the asteroid's destructive potential?

Andrews: The damage it could cause would be localized to a roughly city-sized area, so if it hits the middle of the ocean or a vast desert, nothing would happen. But it could trash a city, or completely destroy much of one, with a direct hit.

[...] Ars: So it's kind of late in the game to be planning an impact mission?

Andrews: This isn't an ideal situation. And humanity has never tried to stop an asteroid impact for real. I imagine that if 2024 YR4 does become an agreed-upon emergency, the DART team (JHUAPL + NASA, mostly) would join forces with SpaceX (and other space agencies, particularly ESA but probably others) to quickly build the right mass kinetic impactor (or impactors) and get ready for a deflection attempt close to 2028, when the asteroid makes its next Earth flyby. But yeah, eight years is not too much time.

A deflection could work! But it won't be as simple as just hitting the asteroid really hard in 2028.


Original Submission

posted by janrinok on Saturday February 22 2025, @10:40PM   Printer-friendly
from the just-wait dept.

[Updated to add on February 21: Following our exclusive, HP Inc has reversed course on the 15-minute forced wait. --Bytram]


https://www.theregister.com/2025/02/20/hp_deliberately_adds_15_minutes/

Not that anyone ever received much satisfaction from either support option, but HP is trying to force consumer PC and print customers to use online and other digital support channels by setting a minimum 15-minute wait time for anyone who phones the call center with a troublesome query. At the beginning of a call to telephone support, a message is played stating: "We are experiencing longer waiting times and we apologize for the inconvenience. The next available representative will be with you in about 15 minutes." Those who want to continue to hold are told to "please stay on the line."

The reason for the change? Getting people to figure it out themselves using online support. As HP put it: "Encouraging more digital adoption by nudging customers to go online to self-solve," and "taking decisive short-term action to generate warranty cost efficiencies."

An internal staff email says customer experience metrics, such as customer satisfaction and escalations, are being tracked weekly, as is the number of callers who subsequently give up and move to social channels or live chat.

For some Reg readers, 15 minutes might not seem like an eternity, especially if they are used to dealing with UK tax collector HMRC, which was found to have kept callers waiting on hold for a collective 798 years in the year to March 2023, something for which it was recently criticized again.

An insider in HP's European ops told us: "Many within HP are pretty unhappy [about] the measures being taken and the fact those making decisions don't have to deal with the customers who their decisions impact."


Original Submission

posted by hubie on Saturday February 22 2025, @05:55PM   Printer-friendly
from the Camping-the-quad dept.

Those who follow web comics may be saddened to hear of the passing of AndyOh (Andy Odendhal), author of the Too Much Information web comic at https://tmi-comic.com, which is now permanently offline. There are no plans to bring the site back. Compilations and clips of the site can be found on archive.org and the Wayback Machine. The comic started on December 13, 2004, and updates continued until Andy's health declined in the 2020s. An update posted to Facebook confirmed Andy's passing.

Now we will never know if Ace got home in time for the wedding.


Original Submission

posted by hubie on Saturday February 22 2025, @01:14PM   Printer-friendly
from the resistance-is-futile dept.

https://arstechnica.com/google/2025/02/googles-new-ai-generates-hypotheses-for-researchers/

Over the past few years, Google has embarked on a quest to jam generative AI into every product and initiative possible. Google has robots summarizing search results, interacting with your apps, and analyzing the data on your phone. And sometimes, the output of generative AI systems can be surprisingly good despite lacking any real knowledge. But can they do science?

Google Research is now angling to turn AI into a scientist—well, a "co-scientist."
[...]
This is still a generative AI system like Gemini, so it doesn't truly have any new ideas or knowledge. However, it can extrapolate from existing data to potentially make decent suggestions. At the end of the process, Google's AI co-scientist spits out research proposals and hypotheses. The human scientist can even talk with the robot about the proposals in a chatbot interface.
[...]
Today's popular AI systems have a well-known problem with accuracy. Generative AI always has something to say, even if the model doesn't have the right training data or model weights to be helpful, and fact-checking with more AI models can't work miracles.
[...]
However, Google partnered with several universities to test some of the AI research proposals in the laboratory. For example, the AI suggested repurposing certain drugs for treating acute myeloid leukemia, and laboratory testing suggested it was a viable idea. Research at Stanford University also showed that the AI co-scientist's ideas about treatment for liver fibrosis were worthy of further study.

This is compelling work, certainly, but calling this system a "co-scientist" is perhaps a bit grandiose. Despite the insistence from AI leaders that we're on the verge of creating living, thinking machines, AI isn't anywhere close to being able to do science on its own.
[...]
Google says it wants more researchers working with this AI system in the hope it can assist with real research. Interested researchers and organizations can apply to be part of the Trusted Tester program, which provides access to the co-scientist UI as well as an API that can be integrated with existing tools.


Original Submission

posted by hubie on Saturday February 22 2025, @08:31AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

GNOME 48 has entered beta testing, which also means that it's in feature, API, and UI freeze. In other words, nothing substantial should change from now until its release, which is expected on March 19. There is a full list of changes in the Beta News announcement, and it's substantial, so we'll try to focus on some of the highlights.

Version 48 doesn't look to be a massive release. It carries on the trajectory of recent GNOME releases, such as reducing dependencies on X11 on its way to a pure-Wayland future. Some of the new accessories that have replaced older apps in the desktop's portfolio continue to gain new functionality, which will help push worthy veterans such as Gedit and Evince into retirement.

In terms of the long and troubled road to Wayland, version 49 of the GNOME Display Manager, gdm for short, no longer requires Xwayland. So, on a pure Wayland system, it won't require X11 at all right from the login screen onward. Even some desktops and distributions that don't use anything else from GNOME use GDM for their login screen, so this change may have a wide impact. The latest version of Gtk 4 will also remove OpenGL support, and it deprecates X11 and the Broadway in-browser display. It does add Android support, though.

[...] Among the changes that we suspect will affect quite a few people in this release, there are tweaks to package management, music playback, and file viewing.

GNOME Software can now handle web links to Flatpak apps, as explained in a 2023 discussion and a 2024 proposal, which catches up with similar functionality in Canonical's Snap. A discussion is going on about potentially completely removing RPM support from the app in future, which may surprise some folks on the other side of the fence from the Debian world.

[...] Another new app is GNOME Papers, a simple file and document viewer, which can display various document and image formats, including e-books and electronic comics. This replaces the well-established Evince document viewer, and that might have a knock-on effect on this vulture's preferred tool, Linux Mint's Xreader, which was forked from Evince.

Some of the other changes are probably less visible. The new GNOME Text Editor has some functional changes: a properties panel replaces the View menu and the indentation-selection dialog, the search bar has moved to the bottom of the window, the language chooser shows the most recently used languages first, and there is a new full-screen mode, among other changes. Gedit is now retired, but the code base isn't totally dead: Mint's Xed and MATE's Pluma carry the family forward.

A change that will be obvious to some viewers and, we suspect, all but invisible to others is a change of the default font. The Adwaita fonts replace the previous Cantarell from Google.

[...] GNOME 48 will be the default desktop for Fedora version 42, which will be a Hitchhiker's Guide to the Galaxy-themed release, as we mentioned when we looked at Fedora 41. With some of Canonical's usual customizations, it will also be the default desktop of the next interim Ubuntu release, 25.04 or Plucky Puffin. That is still a year away from the next Ubuntu LTS, though, so GNOME 48 will be long gone by then.

However, some people may be seeing it for years to come. Canonical developer Jeremy Bicha shared an update in which he says he's working to get it into Debian 13. If GNOME 48 makes it into "Trixie," Debianisti who are also GNOME enthusiasts will be using this release until 2027 or so.


Original Submission

posted by hubie on Saturday February 22 2025, @03:48AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

DRAM and NAND flash prices are expected to rise starting in the second quarter of 2025, according to a report by Digitimes. NAND and DRAM prices fluctuated throughout 2024 due to weaker consumer demand for DDR4 and DDR3 RAM, which are reportedly ceasing production by late 2025. A surge in NAND flash pricing was expected, however, as Kioxia previously forecast growth thanks to AI advancements.

It's believed that market conditions are ideal for a pricing uptick now that inventory and demand have gained traction. This results from the booming AI industry, as companies build AI servers and consumer products such as Nvidia's Project Digits begin to ship.

Digitimes reports that Micron forecasts that DRAM prices will rise. At the same time, NAND prices should stabilize and then increase during the second quarter of 2025, with other manufacturers anticipated to follow suit. However, according to the report, memory makers have also been facing oversupply issues since the second half of 2024, meaning that pricing has also been affected.

Products based on HBM3E are anticipated to hit the market soon and are poised to capitalize on the AI boom. Apple and Google intend to construct new datacenters and purchase hardware designed to handle large-scale AI. With newer models such as the recently released Grok 3 debuting, the hardware demands of running large-scale models aren't expected to let up just yet.

Memory manufacturers are expected to keep producing HBM at the cost of other memory types, notably DDR5 DRAM. Other factors, such as a magnitude 6.4 earthquake, are speculated to have impacted memory maker Micron (though Micron hasn't publicly stated whether it was affected).

[...] DRAM and NAND price increases are another reason why consumers may be feeling a painful sting when shopping around for tech in 2025. Other contributing factors include tariffs, which will inevitably be passed on to customers in the US, and rising bill-of-materials costs for key components, as enterprise customers spend big on AI products.


Original Submission

posted by janrinok on Friday February 21 2025, @11:01PM   Printer-friendly

We have had recent stories and discussions regarding the power connector on GPUs, which is causing overheating and, in a small number of cases, actually catching fire.

It seems that new, better-cooled adapters are being developed in response.

The 12VHPWR connector (and its 12V-2x6 successor) is notorious for its vulnerability to high temperatures on power-hungry GPUs, to the point where it can melt. To combat this on the adapter side, third-party manufacturers such as Ezdiy-fab and Cablemod have been forced to resort to "exotic" solutions sporting copper PCBs, thermal pads, and aluminum heatsinks to ensure their adapters stay cool.

Ezdiy-fab's 90- and 180-degree adapters take advantage of a 2oz copper PCB strapped to a thermal pad and aluminum heatsink cover. The copper PCB allegedly keeps the voltage impedance low while the thermal pad and heatsink on top of it ensure cool operation. CableMod's latest adapter uses the same design but takes advantage of a copper foil applied to its copper PCBs, in addition to a thermal pad and aluminum heatsink.

All of this additional cooling shows how vulnerable the new 16-pin connectors are to overheating. Virtually all current 16-pin adapters we could find (from various third-party makers) use some kind of cooling system. By contrast, you can find angled 8-pin adapters with nothing but a simple plastic shell that contributes almost nothing to cooling the interior components (some do have cooling, but the point is that it doesn't seem to be required on 8-pin adapters).

Cablemod had to recall its original V1.0 adapters due to temperature problems associated with the connectors loosening unintentionally, a flaw in the original design. Even though the design flaw only affected 1% of units sold, the total amount of property damage was estimated to be over $74,500 thanks in no small part to the sky-high prices of flagship GPUs lately. The cable manufacturer replaced the original version with an updated model that rectified the adapter's previous issue.

Lately, there have been melting concerns regarding the new RTX 50-series that comes with the revised 12V-2x6 power connector. It has been discovered that using previous-generation 12VHPWR cables with the RTX 5090 can result in melting issues regardless. We saw this when the first recorded RTX 5090 16-pin connector meltdown was published by a Reddit user online, who used an old 12VHPWR third-party cable with his new GPU. The cable's maker came out with a statement, clarifying that only its cables that are made in 2025 using the newer 12V-2x6 standard support RTX 50 series GPUs. (Reminder: 12V-2x6 is backward compatible with 12VHPWR.)

Initially, it was thought that the melting problem was due to connection seating only, especially with the original 12VHPWR connectors. However, multiple theories have come out suggesting that the connector may be doomed to fail. One theory suggests that the 16-pin standard as a whole is pushed way too close to its physical limits. Another suggests that improper load balancing between the wires, owing to a lack of shunt resistors on RTX 40 and RTX 50 series GPUs, is also causing the connectors to fail.
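
A bit of back-of-the-envelope arithmetic shows how little headroom the 16-pin design has. The 600 W rating and six 12 V current-carrying pin pairs come from the 12VHPWR/12V-2x6 spec; the ~9.5 A per-pin figure is the commonly cited terminal rating, so treat it as an approximation:

```python
# Why the 16-pin connector has so little margin when load balancing goes wrong.
RATED_POWER_W = 600     # connector power rating
VOLTAGE_V = 12          # supply rail
CURRENT_PINS = 6        # 12 V current-carrying pin pairs
PIN_RATING_A = 9.5      # commonly cited per-pin terminal rating (approximate)

total_current = RATED_POWER_W / VOLTAGE_V          # 50 A across the connector
per_pin_balanced = total_current / CURRENT_PINS    # ~8.3 A if perfectly shared
print(f"total: {total_current:.0f} A, per pin (balanced): {per_pin_balanced:.1f} A "
      f"of a {PIN_RATING_A} A rating")

# If balancing fails and one pair carries, say, twice its share, that pin runs
# well past its rating and heats up (I^2 * R losses in the contact).
print(f"one pin at 2x share: {2 * per_pin_balanced:.1f} A")
```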

Regardless of where exactly the problem is, it's clear that the new 16-pin connector standard is far less robust than its 8-pin and 6-pin predecessors. Maybe at some point, Nvidia and the PCI SIG committee will make an entirely new connector with a new design. But for now, those "lucky" enough to snag a high-end Nvidia GPU will have to live with the 16-pin connector, flaws and all.


Original Submission

posted by janrinok on Friday February 21 2025, @06:17PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In an unexpected turn of events, Justin Hotard, the executive vice president and general manager of the Data Center and AI Group (DCAI) at Intel, left the company to become chief executive of Nokia. Intel has appointed an internal head for its datacenter and AI unit and will start searching for a new permanent general manager immediately.

"We have a strong DCAI team that will continue to advance our priorities in service to our customers," a statement by Intel reads. "Karin Eibschitz Segal has been appointed interim head of the DCAI business and is an accomplished executive with nearly two decades of Intel leadership experience spanning products, systems and infrastructure roles. We are grateful for Justin Hotard's contributions and wish him the best in his new role." 

Justin Hotard joined Intel from HPE in early 2024. His tenure was arguably a mixed bag, though much of what he oversaw was more or less in place before he arrived. Intel successfully launched its Xeon 6 'Granite Rapids' and 'Sierra Forest' CPUs for servers,  but sales of its Gaudi 3 processors for AI missed the company's own rather modest expectations. In addition, the company had to cancel its Falcon Shores as a product and delay its Clearwater Forest datacenter CPU by at least a quarter. 

Justin Hotard has over 25 years of experience working at major technology companies. Before joining Intel, he held leadership positions at Hewlett Packard Enterprise and NCR Corporation. His background includes expertise in AI and datacenter markets, which are said to be critical areas for Nokia's future. 

"I am delighted to welcome Justin to Nokia," said Sari Baldauf, Chair of Nokia’s Board of Directors. "He has a strong track record of accelerating growth in technology companies along with vast expertise in AI and datacenter markets, which are critical areas for Nokia's future growth. In his previous positions, and throughout the selection process, he has demonstrated the strategic insight, vision, leadership and value creation mindset required for a CEO of Nokia." 

Nokia's current CEO Pekka Lundmark will step down on March 31, 2025, and Justin Hotard will take over the role starting April 1, 2025. Lundmark will stay on as an advisor until the end of the year. Hotard will be based in Espoo, Finland, where Nokia’s headquarters are located. 

Lundmark has led Nokia since 2020, a period marked by significant challenges. Under his leadership, the company strengthened its position in 5G technology, cloud-based network infrastructure, and patent licensing. With this leadership change, Nokia aims to continue its transformation, focusing on AI, datacenters, and next-generation connectivity. 

"I am honored by the opportunity to lead Nokia, a global leader in connectivity with a unique heritage in technology," said Justin Hotard. "Networks are the backbone that power society and businesses, and enable generational technology shifts like the one we are currently experiencing in AI. I am excited to get started and look forward to continuing Nokia's transformation journey to maximize its potential for growth and value creation." 

Justin Hotard leaves a couple of months after Pat Gelsinger, chief executive of Intel, was ousted by the board of directors. As a result, Intel now does not have a permanent CEO or a permanent head of its key DCAI unit.


Original Submission

posted by janrinok on Friday February 21 2025, @02:23PM   Printer-friendly

Please cast your vote in the comments to this Meta.

A valid vote should contain a single word - either "Yes" to accept the documents or "No" to reject them. A single vote is required to accept ALL of the proposed documents.

TO OVERCOME THE TECHNICAL PROBLEM: Please include a single paragraph containing anything at all - it will be ignored by the software during vote counting. However, the vote "Yes" or "No" must be on a line all by itself.

For ease of reference links to the documents are repeated here:

[Voting closed as of 23:59 UTC 28 Feb]

posted by janrinok on Friday February 21 2025, @02:20PM   Printer-friendly

Don't worry - this will be a relatively short Meta, and it is not to explain another site outage!

Community Vote on Site Documentation

In December 2024 I released a Meta which detailed the proposed documentation for the site under the Soylent Phoenix board. This is a legal requirement resulting from the creation of a new company. I repeated the links to the documentation in January. The next step is for the community to accept or reject the proposed documentation. The previous voting software is no longer available to us but I believe that a straightforward count of comments will suffice.

I will publish another Meta which will contain the links to the proposed documentation, but it is not to be used for any discussion regarding the contents. Each current account in good standing (i.e. having a karma of 20+ and created on or before the publication of the December Meta on 16 Dec 2024, that is, up to and including account #49487) will be eligible to vote.

In order to cast your vote, your comment should be limited to a single word - "Yes" or "No" (upper or lower case is acceptable) on a line all by itself. "Yes" will indicate your acceptance of the documentation and "No" will indicate your rejection of it. Your last comment of a maximum of 2 attempts will be the one that counts, so you will have the opportunity to change your vote. Any more than 2 attempts from an account to cast a vote will be discarded. Comments may contain a single paragraph to overcome the 'lame comment' filter; the contents of the paragraph will be ignored.

The vote will remain open for 1 week and will close at 23:59 (UTC) on 28 February 2025. The result will be made public once the Board are satisfied that the voting has been fair and democratic.

Existing votes will remain valid and do not have to be redone.

Entering into a discussion in the vote or justifying why you have voted in a particular fashion will nullify your comment. There has been a period of over 2 months for discussion and suggested changes.

It is important that you cast a vote. As an extreme example, if 1 person alone votes Yes and 2 people vote No then the documentation will NOT be accepted. Not casting a vote doesn't make any statement whatsoever but may result in the majority of true community opinion being ignored.

Essentially, the documentation is the same as that adopted in 2014 except it has been rewritten where necessary to clarify the meaning or intent. It also incorporates in one location changes to the rules that have been accepted by the community since 2014 (e.g. the definition of Spam which was adopted by the site in 2021).

posted by janrinok on Friday February 21 2025, @01:32PM   Printer-friendly

An interesting thought experiment ...

Imagine a supervillain attacking you with his unique superpower of creating small black holes. An invisible force zips through your body at unimaginable speed. You feel no push, no heat, yet, deep inside your body, atoms momentarily shift in response to the gravitational pull of something tiny yet immensely dense — a Primordial Black Hole (PBH).

What would this do to you? Would it cause minor, localized damage, or would it simply rip through your entire body? Physicist Robert J. Scherrer from Vanderbilt University investigated this very scenario. His study examines what happens when a tiny black hole, like the ones formed in the early universe, passes through the human body.

[...] While the idea of a tiny black hole silently piercing through your body is an intriguing thought experiment, the actual probability of it happening is close to zero. And even if one did, it would have to be exceptionally massive (by microscopic standards) to cause harm.
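
As a purely hypothetical illustration of why the mass matters so much, the standard gravitational impulse approximation, dv ≈ 2GM/(b·v), gives the sideways velocity kick delivered to nearby matter during a flyby. The masses, speed, and distance below are illustrative assumptions, not figures from Scherrer's paper:

```python
# Back-of-the-envelope impulse approximation for a PBH passing through the body.
# All numerical choices here are illustrative assumptions.
G = 6.674e-11              # m^3 kg^-1 s^-2, gravitational constant
V = 2.0e5                  # m/s, assumed galactic encounter speed
B = 0.01                   # m, assumed distance of tissue from the PBH's track

for mass_g in (1e15, 1e18):          # assumed PBH masses, in grams
    mass_kg = mass_g / 1000.0
    dv = 2 * G * mass_kg / (B * V)   # sideways velocity kick given to nearby matter
    print(f"M = {mass_g:.0e} g -> dv ~ {dv:.3g} m/s")

# A 1e15 g PBH nudges nearby tissue by only a few cm/s; it takes a far more
# massive (and far rarer) PBH before the kick becomes damaging.
```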

[Journal Ref]: https://arxiv.org/pdf/2502.09734
[Source]: ZME Science


Original Submission