SoylentNews is people


What is your favorite keyboard trait?

  • QWERTY
  • AZERTY
  • Silent (sounds)
  • Clicky sounds
  • Thocky sounds
  • The pretty colored lights
  • I use Braille you insensitive clod
  • Other (please specify in comments)

[ Results | Polls ]
Comments:63 | Votes:119

posted by janrinok on Monday May 27, @10:25PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Hey Google, can you spare a few hundred million to keep Rupert Murdoch’s yacht afloat? That’s essentially what some legislators are demanding with their harebrained schemes to force tech companies to fund journalism.

It is no secret that the journalism business is in trouble these days. News organizations are failing and journalists are being laid off in record numbers. There have been precious few attempts at carefully thinking through this situation and exploring alternative business models. The current state-of-the-art thinking seems to be (1) a secretive hedge fund buying up newspapers, selling off the pieces, and sucking out any remaining cash, (2) replacing competent journalists with terrible AI-written content, or (3) putting all the good reporting behind a paywall so that disinformation peddlers get to spread nonsense to the vast majority of the public for free.

Then, there’s the legislative side. Some legislators have (rightly!) determined that the death of journalism isn’t great for the future of democracy. But, so far, their solutions have been incredibly problematic and dangerous. They have been pushed by the likes of Rupert Murdoch, whose loud and proud support for “free market capitalism” crumbled to dust the second his own news business started failing, leading him to demand government handouts for his own failures in the market. The private equity folks buying up newspapers (mainly Alden Capital) jumped into the game as well, demanding that the government force Google and Meta to subsidize their strip-mining of the journalism field.

The end result has mostly been disastrous link taxes, which were pioneered in Europe a decade ago. They failed massively before being revived more recently in Australia and Canada, where they have also failed (despite people pretending they have succeeded).

For no good reason, the US Congress and California’s legislature are still considering their own versions of this disastrous policy that has proven (1) terrible for journalism and (2) even worse for the open web.

Recently, California Senator Steve Glazer offered up an alternative approach, SB 1327, which is getting a fair bit of attention. Instead of taxing links like all those other proposals, it would directly tax the digital advertising business model and use that tax to create a fund for journalism. Specifically, it would apply a tax on what it refers to (in a dystopian, Orwellian way) as a “data extraction transaction.” It refers to the tax as a “data extraction mitigation fee,” and that tax would be used to provide credits for “qualified” media entities.

I’ve seen very mixed opinions on this. It’s not surprising that some folks are embracing this as a potential path to funding journalism. Casey Newton described it as a “better way for platforms to fund journalism.”

And, I mean, when compared to link taxes, it is potentially marginally better (but also, with some very scary potential side effects). The always thoughtful Brandon Silverman (who created CrowdTangle and has worked to increase transparency from tech companies) also endorses the bill as “a potential path forward.”

But I tend to agree much more with journalism professor Jeff Jarvis, who highlights the fundamental problems of the bill and the framework it creates. As I’ve pointed out with link taxes, the oft-ignored point of a tax on something is to get less of it. You tax something bad because that tax decreases how much of it is out there. And, as Jarvis points out, this is basically a tax on information.

Furthermore, Jarvis rightly points out that Glazer’s bill treats it as something unique when users give their attention to internet companies, but explicitly carves out cases where users give their attention to other types of media companies. This sets up a problematically tiered system for when attention gets taxed and when it doesn’t.

Indeed, the entire framing of the bill seems to suggest that data and advertising are a sort of “pollution” that needs to be taxed in order to minimize it. And that seems particularly troublesome.

As Jarvis also notes, the true beneficiaries of a law like this would still be those rapacious hedge funds that have bought up a bunch of news orgs [...]


Original Submission

posted by janrinok on Monday May 27, @05:39PM   Printer-friendly
from the cheap-steel dept.

"Researchers from the University of Cambridge have developed a method to produce very low emission concrete at scale -- an innovation that could be transformative in the transition to net zero." reports ScienceDaily

The method, which the researchers say is "an absolute miracle," [are we taken as savages here?] uses the electrically powered arc furnaces employed for steel recycling to simultaneously recycle cement, the carbon-hungry component of concrete.
...
The Cambridge researchers found that used cement is an effective substitute for lime flux, which is used in steel recycling to remove impurities and normally ends up as a waste product known as slag. But by replacing lime with used cement, the end product is recycled cement that can be used to make new concrete.

The cement recycling method developed by the Cambridge researchers, reported in the journal Nature, does not add any significant costs to concrete or steel production and significantly reduces emissions from both concrete and steel, due to the reduced need for lime flux.
...
Recent tests carried out by the Materials Processing Institute, a partner in the project, showed that recycled cement can be produced at scale in an electric arc furnace (EAF), the first time this has been achieved. Eventually, this method could produce zero emission cement if the EAF were powered by renewable energy.
...
"I had a vague idea from previous work that if it were possible to crush old concrete, taking out the sand and stones, heating the cement would remove the water, and then it would form clinker again," said first author Dr Cyrille Dunant, also from the Department of Engineering. "A bath of liquid metal would help this chemical reaction along, and an electric arc furnace, used to recycle steel, felt like a strong possibility. We had to try."
...
"We found the combination of cement clinker and iron oxide is an excellent steelmaking slag because it foams and it flows well," said Dunant. "And if you get the balance right and cool the slag quickly enough, you end up with reactivated cement, without adding any cost to the steelmaking process."

The cement made through this recycling process contains higher levels of iron oxide than conventional cement, but the researchers say this has little effect on performance.

DOI: 10.1038/s41586-024-07338-8 (free access)

4:33 min video


Original Submission

posted by janrinok on Monday May 27, @12:52PM   Printer-friendly
from the breaking-oem-monopolies dept.

Several sites are reporting on Qualcomm's increasing Linux support. The tide is turning, and Microsoft's monopoly on OEMs, at least the non-x86 ones, might be weakening now that full Linux support is expected on modern hardware architectures:

Here's the thing. In the Linux world, ARM has had something like a 15-year head start over Microsoft's own often anemic ARM efforts, thanks to the Raspberry Pi and single-board computers making the platform a good choice for more than just basic web-surfers.

Collectives like Pine64 have been building laptops with first-class citizen Linux support for years. (They're admittedly not fast but they offer a good ecosystem to develop on.)

And then we got fast ARM laptops from Apple, which smoked what was already out there but came with the side effect of a Linux experience that is still somewhat immature, despite the strides already made.

This may be a game-changer.

Qualcomm is making good progress on adapting its new Snapdragon X Elite laptop CPU for Linux use. The mobile SoC manufacturer revealed that it has already laid a lot of the groundwork to get the Snapdragon X Elite running Linux. However, Qualcomm is far from done, as there's still a lot of development work needed to get the X Elite into a fully operational state under Linux. Upcoming Linux kernels should enable full support for all the chip's features.

Qualcomm prides itself on its Linux enablement work and has prioritized Linux enablement in all of its previous Snapdragon laptop CPUs, typically announcing Linux support one or two days after launch. The Snapdragon X Elite continues that pattern, with Linux enablement being announced the very next day after its original October 23, 2023 debut.

From Tom's Hardware: "Qualcomm goes where Apple won't, readies official Linux support for Snapdragon X Elite."
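
For the curious, one way to gauge how far that enablement has gotten is to look for the X Elite's device trees in a kernel source checkout. Below is a minimal sketch in Python; the qcom device-tree path and the "x1e80100" SoC name follow mainline conventions in recent kernels, but both are assumptions to verify against your own tree.

    # Hypothetical helper: list Snapdragon X Elite (X1E80100) device trees
    # found in a local Linux kernel source tree. Assumes mainline's
    # arch/arm64/boot/dts/qcom layout; adjust for your tree if needed.
    import sys
    from pathlib import Path

    def x_elite_device_trees(kernel_tree: Path) -> list[Path]:
        """Return any X1E80100 device-tree sources in the tree."""
        dts_dir = kernel_tree / "arch/arm64/boot/dts/qcom"
        return sorted(dts_dir.glob("x1e80100*")) if dts_dir.is_dir() else []

    if __name__ == "__main__":
        tree = Path(sys.argv[1] if len(sys.argv) > 1 else "linux")
        found = x_elite_device_trees(tree)
        if found:
            print("X Elite device trees present:")
            for path in found:
                print("  " + path.name)
        else:
            print("None found; this kernel may predate X Elite enablement.")

An empty result on an older tree versus a handful of x1e80100 files on a current one is a rough proxy for how much of the groundwork has landed.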

It seems that the Asahi Linux project has also done great work on the M-series chips, despite no help from Apple. There, it is Apple that still has to get up to speed.

Previously,
(2024) Desktop GNU/Linux Surpasses 4% Market Share


Original Submission

posted by hubie on Monday May 27, @06:02AM   Printer-friendly
from the slowly-making-progress dept.

[Ed. note: Some of the links in the Ars article point to the Office of the Revisor of Statutes web site. The links did not resolve for the submitter nor this editor, but they are included below in the event that it is a temporary problem with the web site.]

Ars Technica is reporting on a Minnesota law passed this week which, according to the article:

Minnesota this week eliminated two laws that made it harder for cities and towns to build their own broadband networks. The state-imposed restrictions were repealed in an omnibus commerce policy bill [N.B., this link is not valid (Geo-Blocked, perhaps? I'm not in MN). I retained it as it is in TFA. See link to the MN House journal above] signed on Tuesday by Gov. Tim Walz, a Democrat.

Minnesota was previously one of about 20 states that imposed significant restrictions on municipal broadband. The number can differ depending on who's counting because of disagreements over what counts as a significant restriction. But the list has gotten smaller in recent years because states including Arkansas, Colorado, and Washington repealed laws that hindered municipal broadband.

The Minnesota bill enacted this week struck down a requirement that municipal telecommunications networks be approved in an election with 65 percent of the vote. The law is over a century old, the Institute for Local Self-Reliance's Community Broadband Network Initiative wrote yesterday.

"Though intended to regulate telephone service, the way the law had been interpreted after the invention of the Internet was to lump broadband in with telephone service thereby imposing that super-majority threshold to the building of broadband networks," the broadband advocacy group said.

The Minnesota omnibus bill also changed a law that let municipalities build broadband networks, but only if no private providers offer service or will offer service "in the reasonably foreseeable future." That restriction had been in effect since at least the year 2000.

The caveat that prevented municipalities from competing against private providers was eliminated from the law when this week's omnibus bill was passed. As a result, the law now lets cities and towns "improve, construct, extend, and maintain facilities for Internet access and other communications purposes" even if private ISPs already offer service.

I sure wish I could get municipal broadband. How about you, Soylentils? Do you have municipal broadband? Does your ISP have competition at all? Abusive terms of service? Data caps?


Original Submission

posted by janrinok on Monday May 27, @01:16AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

South Korea's president has described the global semiconductor industry as "a field where all-out national warfare is underway" as he announced a $19 billion program to diversify the nation's silicon sector.

In remarks presented on Thursday at a government economic review meeting, President Yoon Suk Yeol called for South Korea to "open a new future for the semiconductor industry."

"Our semiconductors have dominated the world in the memory field over the past 30 years," he declared, before lamenting that "our fabless market share still remains in the one percent range, and foundries that manufacture system semiconductors are unable to narrow the gap with leading companies such as TSMC."

"In the future, the success or failure of the semiconductor industry will be determined by system semiconductors, which account for two thirds of the entire market," he predicted, calling for his nation "to bet on system semiconductors, which are constantly expanding beyond CPUs and GPUs to AI semiconductors."

To make that happen, South Korea has created a $19 billion program to fund construction of chipmaking mega-clusters – especially the electrical and transport infrastructure they need. Provision of water resources for chipmaking has also been fast-tracked.

The plan will also see a "mini-fab" created, so that small and medium-sized fabless chip firms have a resource they can use to get their products off the drawing board. They will also be helped by a fund that President Yoon said will help to turn them into global enterprises.

The president noted that government funds flowing to chipmakers could be perceived as "a tax cut for large corporations or a tax cut for the rich." He rebutted that notion by arguing "the semiconductor industry is the most important and sure foundation for making our people's lives richer and making our economy take off."

"Semiconductors are the livelihood of the people, and all support for the semiconductor industry is for the benefit of the people," he argued, adding that government investment will pay for itself handsomely over time.

[...] President Yoon raised many solid points. The United States and European Union have thrown tens of billions of dollars and euros respectively at IC manufacturers, while South Korea's chip champs – Samsung Electronics and SK hynix – are indeed monsters of memory as they collectively hold over 70 percent of the market for DRAM and NAND flash. But beyond Samsung's modest Exynos SoC operation (which can't even satisfy demand for its own Galaxy smartphones), South Korea is not home to a notable manufacturer of high-value processors.

Samsung and SK hynix have also made enormous bets on factories to produce more memory – some on the peninsula and others stateside (where they could attract funds from Uncle Sam).

South Korea's new funding package is tiny compared to the sums its chip champions are spending, so it's unlikely to divert their efforts notably. Nor will it deter them from continuing to target the so-hot-right-now memory market, in which demand for DDR5 and related variants – plus high bandwidth memory (HBM) – is currently rampant. Indeed, SK hynix has already found buyers for all the HBM it can make in 2024 and most of the chips it will produce in 2025.

Memory-centric analyst firm TrendForce recently worried out loud about AI-fuelled demand for HBM skewing manufacturing investments away from DRAM, and maybe causing a shortage of the latter in years to come.


Original Submission

posted by janrinok on Sunday May 26, @08:31PM   Printer-friendly
from the this-one-is-for-the-hunters dept.

Parasitic worms infect 6 after bear meat served at family reunion:

Six family members caught a rare parasitic worm infection after sharing a meal that included black bear meat, which was initially served rare after being stored frozen for more than a month.

Two of the people reported only eating vegetables at the meal, so it's likely that the infected meat contaminated these sides at some point.

The worm infection, called trichinellosis, is rarely reported in the United States, according to a new report of the case published Thursday (May 23) by the Centers for Disease Control and Prevention (CDC). Between 2016 and 2022, only 35 probable and confirmed cases of the disease were recorded. "Bear meat was the suspected or confirmed source of infection in the majority of those outbreaks," the report noted.

Trichinellosis occurs when people inadvertently consume larvae of a roundworm in the Trichinella genus. The worm commonly infects bears, wild boars, wildcats, foxes, wolves, seals and walruses. People typically become infected after consuming raw or undercooked meat from infected animals.

Historically, people in the U.S. sometimes contracted the infection from raw or undercooked commercial pork products, but modern regulations and cooking guidelines have lowered this risk.

The newly reported case took place in 2022, when a 29-year-old man in Minnesota was hospitalized with a fever, severe muscle aches and pains, and swelling around the eyes. He was also found to have a high number of immune cells called eosinophils (a condition known as eosinophilia), a sign of infection.

Within a span of about two weeks, the man had sought medical attention for his symptoms four times and was hospitalized twice. During the second hospitalization, he reported having consumed bear meat, and the medical team started him on medication for parasitic worms, just in case. They later confirmed he was carrying antibodies against Trichinella worms, and an investigation was launched to check for more cases.

About a week before he got sick, the Minnesota man had met up with nine family members in South Dakota. They'd shared a meal that included kabobs made with black bear (Ursus americanus) meat, which was originally harvested in Canada by one of the attending family members. It had been frozen for 45 days before being thawed, cooked and served with vegetables.

"The hunting outfitter had recommended freezing the meat to kill parasites," the CDC report notes. But some Trichinella species can survive being frozen. (This includes Trichinellanativa, which turned out to be the species likely involved in this case.)

"The meat was initially inadvertently served rare, reportedly because the meat was dark in color, and it was difficult for the family members to visually ascertain the level of doneness," the report noted. Some family members noticed the meat was underdone while eating it, and it was then cooked a bit more before being served again.


Original Submission

posted by hubie on Sunday May 26, @03:47PM   Printer-friendly

https://www.businessinsider.com/google-search-ai-overviews-glue-keep-cheese-pizza-2024-5

Archive link: https://archive.is/pkn6w

Google's new search feature, AI Overviews, seems to be going awry.

The tool, which gives AI-generated summaries of search results, appeared to instruct a user to put glue on pizza when they searched "cheese not sticking to pizza."

A screenshot of the summary it generated, shared on X, shows it responded with "cheese can slide off pizza for a number of reasons," and that the user could try adding "about ⅛ cup of non-toxic glue to the sauce to give it more tackiness."


Original Submission

posted by martyb on Sunday May 26, @11:00AM   Printer-friendly
from the maybe-Dracula-was-onto-something? dept.

Proteins in blood could give cancer warning seven years earlier, reports Cancer Research UK's Cancer News:

Proteins linked to cancer can start appearing in people's blood more than seven years before they're diagnosed, our funded researchers have found. In the future, it's possible doctors could use these early warning signs to find and treat cancer much earlier than they're able to today.

Across two studies, researchers at Oxford Population Health identified 618 proteins linked to 19 different types of cancer, including 107 proteins in a group of people whose blood was collected at least seven years before they were diagnosed.

The findings suggest that these proteins could be involved at the very earliest stages of cancer. Intercepting them could give us a way to stop the disease developing altogether.

"This research brings us closer to being able to prevent cancer with targeted drugs – once thought impossible but now much more attainable," explained Dr Karl Smith-Byrne, Senior Molecular Epidemiologist at Oxford Population Health, who worked on both papers.

For now, though, we need to do further research. The team want to find out more about the roles these proteins play in cancer development, how we can use tests to spot the most important ones, and which drugs we can use to stop them driving cancer.

[...] In the first study, scientists analysed 44,000 blood samples collected and stored by UK Biobank, including over 4,900 samples from people who were later diagnosed with cancer.

Their analysis of 1,463 proteins in each sample revealed 107 that changed at least seven years before a cancer diagnosis and 182 that changed at least three years before a cancer diagnosis.

In the second study, the scientists looked at genetic data from over 300,000 cancer cases to do a deep dive into which blood proteins were involved in cancer development and could be targeted by new treatments.

This time, they found 40 proteins in the blood that influence someone's risk of getting nine different types of cancer. While altering these proteins may increase or decrease the chances of someone developing cancer, more research is needed to make sure targeting them with drugs doesn't cause unintended side effects.


Original Submission

posted by hubie on Sunday May 26, @07:13AM   Printer-friendly
from the Quality-with-a-capital-Q dept.

The BBC is running a podcast on "Archive on 4" called "Turning 50: Zen and the Art of Motorcycle Maintenance" - https://www.bbc.co.uk/sounds/play/m001zfqh

I remember greatly enjoying that book in the 1970s; it made a real impression and changed how I thought about certain things. This podcast was a great refresher. It even includes original interviews with Robert Pirsig (the author) and others close to the creation of the book.

Some of the backstory behind the book was eye opening (but other parts were easy enough to work out just by reading it). For example, Pirsig flogged his original manuscript to many, many publishers before he got a nibble. Then he gives a lot of credit to a young editor who convinced him to change from (iirc) first to third person (via an unnamed observer) -- resulting in the eventual great success of the book.

Of course, the podcast can't resist revisiting the oft-quoted beer can shim story...or am I confusing that with a recent post on a motorcycle maintenance forum (grin)?


Original Submission

posted by hubie on Sunday May 26, @02:32AM   Printer-friendly
from the nothing-to-see-here dept.

Arthur T Knackerbracket has processed the following story:

The EU is concerned that Bing’s AI features could impact elections, while the UK’s CMA has decided not to investigate Microsoft’s partnership with Mistral AI.

The EU has Microsoft on its regulatory radar, as it has sent the company a legally binding request for information about Bing’s generative AI features.

The European Commission said this request for information is based on suspicions that Bing may have breached the Digital Services Act (DSA) due to risks linked to generative AI. These risks include AI ‘hallucinations’, the viral spread of deepfakes and the “automated manipulation of services that can mislead voters”.

[...] Meanwhile, the UK’s Competition and Markets Authority (CMA) has opted not to investigate Microsoft’s partnership with the start-up Mistral AI.

Microsoft backed the French unicorn earlier this year as part of a “multi-year partnership” to boost its Azure cloud computing platform with AI. But the CMA had concerns around whether the agreements between the two companies qualified as a merger deal.

As part of that effort, the CMA looked into the minority investment deals agreed by Microsoft and Mistral. The regulator had concerns that the links between the two companies could impact competition within the UK.

But in a brief statement released today (17 May), the CMA decided that Microsoft’s partnership with Mistral AI “does not qualify for investigation” under the merger provisions in the UK.


Original Submission

posted by janrinok on Saturday May 25, @09:49PM   Printer-friendly

https://www.bbc.com/news/articles/ceqq8gn014xo

The wreckage of the US Navy submarine that sank more Japanese warships than any other during World War Two has been found in the South China Sea, some 80 years after it was sunk by enemy forces.

The USS Harder was found 3,000ft (914m) below water off the Philippines' northern island of Luzon.

The Harder was sunk in battle on 29 August 1944, along with its crew of 79 men.

In one of its final war patrols, it sank three Japanese destroyers and heavily damaged two others over four days, according to the US Navy's History and Heritage Command (NHHC).

This forced the Japanese to change their battle plans and delay their carrier force, contributing to their defeat.

"Harder was lost in the course of victory. We must not forget that victory has a price, as does freedom," said Samuel J. Cox, a retired US admiral who heads the NHHC.

The Philippines was one of the main Pacific battlegrounds of World War Two, as the US fought to retake its former colony from the Imperial Japanese Army.

Waters in and around the archipelago have served as the resting place of famed World War Two battleships.

The Harder, which sailed under the motto of "Hit 'em harder", was found by the Lost 52 project, which aims to find the 52 US submarines lost during World War Two. It was found sitting upright on its keel, or spine, and relatively intact, the US Navy said.

The submarine and its crew were later awarded the Presidential Unit Citation for its service during the war. The honour recognises extraordinary heroism in action.


Original Submission

posted by hubie on Saturday May 25, @05:06PM   Printer-friendly
from the need-more-satellites dept.

Arthur T Knackerbracket has processed the following story:

AST SpaceMobile has ramped up demonstrations of voice calls, texts, and video calls via satellite over the last year, using 4G LTE and 5G connections with download bandwidth reaching 14Mbps. Now the company says that a previous memorandum of understanding with AT&T to work on a space-based broadband network for phones has become a “definitive commercial agreement,” just in time for AST’s first five commercial satellites to launch this summer.

The FCC has gotten things rolling on a framework (PDF) for companies interested in building these types of services, with the idea of what Chairwoman Jessica Rosenworcel called a single network future. “We won’t need to think about what network, where, and what services are available. Connections will just work everywhere, all the time,” said Rosenworcel last year.

According to a statement, the five satellites AST SpaceMobile will launch from Cape Canaveral “will help enable commercial service that was previously demonstrated,” but there’s no mention of changes to deal with the problems of light pollution.

Apple has already added satellite-based messaging links to the iPhone, and Android is preparing for similar features, but a high-speed connection would take things to a different level. With Starlink also testing satellite-to-cellular links, dead zones could be a thing of the past in a few years.


Original Submission

posted by hubie on Saturday May 25, @12:26PM   Printer-friendly


Ed. note: The JAMA article submission is a reprint of one from 1914 that observes that in most walks of life people generally benefit, from an efficiency standpoint at least, from having a day off. Since then society has generally settled on two days off, or at least a required 40-hour week, but momentum has been building for having three days off. Are we getting close to seeing more of this, or do the recent fights over return-to-office show there's still too much inertia for change at the MBA level?

Not only in the field of manual labor, but also in innumerable other walks of life, in the case of the schoolchild, the office-boy, the factory-girl, the banker and the merchant, efficiency is the key-note of the times. Fatigue is the enemy of efficiency; and to detect and compensate for or overcome it, is the duty of those concerned with the promotion of human welfare.


Original Submission

posted by hubie on Saturday May 25, @07:36AM   Printer-friendly

As reported by https://www.msn.com/en-us/news/technology/windows-recall-sounds-like-a-privacy-nightmare-heres-why-im-worried/ar-BB1mNGFI , Microsoft is introducing a new "feature" in Windows 11:

If you haven't read about it yet, Recall is an AI feature coming to Windows 11 Copilot+ PCs. It's designed to let you go back in time on your computer by "taking images of your active screen every few seconds" and analyzing them with AI, according to Microsoft's Recall FAQs. If anyone other than you gets access to that Recall data, it could be disastrous.

...

On the surface, this sounds like a cool feature, but that paranoid privacy purist in the back of my mind is burying his face in a pillow and screaming. Imagine if almost everything you had done for the past three months was recorded for anyone with access to your computer to see. Well, if you use Recall, you won't have to imagine.

That might seem like an overreaction, but let me explain: Recall is taking screenshots every few seconds and storing them on your device. Even with encryption added into the mix, that's an enormous amount of bloated visual data that will show almost everything you've been doing on your computer during that period.
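
To make the scale of the concern concrete, here is a minimal, hypothetical sketch of Recall-style periodic capture in Python. This is not Microsoft's code: the third-party mss screenshot library, the five-second interval, and the snapshot folder are all assumptions for illustration only.

    # Hypothetical sketch of Recall-style periodic screen capture.
    # NOT Microsoft's implementation; assumes the third-party "mss"
    # library (pip install mss) and an arbitrary 5-second interval.
    import time
    from pathlib import Path

    import mss

    SNAPSHOT_DIR = Path("recall_snapshots")  # assumed on-device store
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    INTERVAL_SECONDS = 5                     # "every few seconds"
    SNAPSHOT_COUNT = 12                      # one minute's worth, for demo

    with mss.mss() as sct:
        for _ in range(SNAPSHOT_COUNT):
            # One PNG of the primary monitor per interval. There is no
            # content moderation here: whatever is on screen, including
            # passwords and account numbers, lands in the image.
            name = SNAPSHOT_DIR / f"{int(time.time())}.png"
            sct.shot(mon=1, output=str(name))
            time.sleep(INTERVAL_SECONDS)

    total = sum(f.stat().st_size for f in SNAPSHOT_DIR.glob("*.png"))
    print(f"{SNAPSHOT_COUNT} snapshots, {total / 1_048_576:.1f} MiB on disk")

Extrapolate that one-minute demo to three months of uptime and the storage and privacy exposure add up quickly.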

...

But that's just the tip of the iceberg. Microsoft openly admits that Recall will be taking screenshots of your passwords and private data:

"Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry."

...

Arguably, the worst part about this is that it will be on by default once you activate your device. Microsoft states:

        On by default

A user going by the name of "Alex von Kitchen" summarised the issues quite well: https://aus.social/@Dangerous_beans/112477798730314983


Original Submission

posted by janrinok on Saturday May 25, @02:51AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Ampere Computing today introduced its roadmap for the coming years, including new CPUs and collaborations with third parties. In particular, the company said it would launch its all-new 256-core AmpereOne processor next year, made on TSMC's N3 process technology. Also, Ampere is teaming up with Qualcomm to build AI inference servers with the company's accelerators. Apparently, Ampere is also looking at integrating third-party UCIe-compatible chiplets into its own platforms.

Ampere has begun shipping 192-core AmpereOne processors with an eight-channel DDR5 memory subsystem it introduced a year ago. Later this year, the company plans to introduce 192-core AmpereOne CPUs with a 12-channel DDR5 memory subsystem, requiring a brand-new platform.

Next year, the company will use this platform for its 256-core AmpereOne CPU, which will be made using one of TSMC's N3 fabrication processes. The company does not disclose whether the new processor will also feature a new microarchitecture, though it looks like it will continue to feature 2 MB of L2 cache per core.
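
As a back-of-envelope illustration (pure arithmetic from the figures above, not a disclosed specification), keeping 2 MB of private L2 per core implies a very large aggregate cache:

    # Back-of-envelope arithmetic from the article's figures; the cache
    # layout of the 256-core part is not confirmed, this merely scales
    # the 2 MB-per-core figure across both core counts mentioned.
    L2_PER_CORE_MB = 2
    for cores in (192, 256):
        print(f"{cores} cores -> {cores * L2_PER_CORE_MB} MB aggregate L2")
    # 192 cores -> 384 MB aggregate L2
    # 256 cores -> 512 MB aggregate L2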

"We are extending our product family to include a new 256-core product that delivers 40% more performance than any other CPU in the market," said Renee James, chief executive of Ampere. "It is not just about cores. It is about what you can do with the platform. We have several new features that enable efficient performance, memory, caching, and AI compute." 

The company says that its 256-core CPU will use the same cooling system as its existing offerings, which implies that its thermal design power will remain in the 350-watt ballpark. 

While Ampere can certainly address many general-purpose cloud instances, its capabilities for AI are fairly limited. The company itself says that its 128-core AmpereOne CPU with its two 128-bit vector units per core (and supporting INT8, INT16, FP16, and BFloat16 formats) can offer performance comparable to Nvidia's A10 GPU, albeit at lower power. [...] So, it teamed up with Qualcomm, and the two companies plan to build platforms for LLM inferencing based on Ampere's CPUs and Qualcomm's Cloud AI 100 Ultra accelerators.


Original Submission
