
posted by janrinok on Tuesday May 28, @10:09PM

Arthur T Knackerbracket has processed the following story:

A software maker serving more than 10,000 courtrooms throughout the world hosted an application update containing a hidden backdoor that maintained persistent communication with a malicious website, researchers reported Thursday, in the latest episode of a supply-chain attack.

The software, known as the JAVS Viewer 8, is a component of the JAVS Suite 8, an application package courtrooms use to record, play back, and manage audio and video from proceedings. Its maker, Louisville, Kentucky-based Justice AV Solutions, says its products are used in more than 10,000 courtrooms throughout the US and 11 other countries. The company has been in business for 35 years.

Researchers from security firm Rapid7 reported that a version of the JAVS Viewer 8 available for download on javs.com contained a backdoor that gave an unknown threat actor persistent access to infected devices. The malicious download, planted inside an executable file that installs the JAVS Viewer version 8.3.7, was available no later than April 1, when a post on X (formerly Twitter) reported it. It’s unclear when the backdoored version was removed from the company’s download page. JAVS representatives didn’t immediately respond to questions sent by email.

“Users who have version 8.3.7 of the JAVS Viewer executable installed are at high risk and should take immediate action,” Rapid7 researchers Ipek Solak, Thomas Elkins, Evan McCann, Matthew Smith, Jake McMahon, Tyler McGraw, Ryan Emmons, Stephen Fewer, and John Fenninger wrote. “This version contains a backdoored installer that allows attackers to gain full control of affected systems.”

The installer file was titled JAVS Viewer Setup 8.3.7.250-1.exe. When executed, it copied the binary file fffmpeg.exe to the file path C:\Program Files (x86)\JAVS\Viewer 8\. To bypass security warnings, the installer was digitally signed, but with a signature issued to an entity called “Vanguard Tech Limited” rather than to “Justice AV Solutions Inc.,” the signing entity used to authenticate legitimate JAVS software.

The researchers said fffmpeg.exe also downloaded the file chrome_installer.exe from the IP address 45.120.177.178. chrome_installer.exe went on to execute a binary and several Python scripts responsible for stealing passwords saved in browsers. fffmpeg.exe is associated with a known malware family called GateDoor/Rustdoor and had already been flagged by 30 endpoint protection engines.

The number of detections had grown to 38 at the time this post went live.
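For administrators who need to triage machines, a sweep for the reported artifacts is straightforward. Below is a minimal sketch in Python; the hash constant is a placeholder rather than a real indicator, so substitute the values from Rapid7's advisory, and verify the installer's signer separately (for example with PowerShell's Get-AuthenticodeSignature):

    import hashlib
    from pathlib import Path

    # Path of the planted binary, per the Rapid7 report. The hash below is a
    # PLACEHOLDER, not the real IOC; substitute the SHA-256 from the advisory.
    SUSPECT_PATH = Path(r"C:\Program Files (x86)\JAVS\Viewer 8\fffmpeg.exe")
    KNOWN_BAD_SHA256 = "replace-with-published-ioc-hash"

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if SUSPECT_PATH.exists():
        found = sha256_of(SUSPECT_PATH)
        print(f"fffmpeg.exe present, sha256={found}")
        if found == KNOWN_BAD_SHA256:
            print("Matches the published IOC; treat this host as compromised.")
        else:
            print("File present but hash differs; check it against the advisory.")
    else:
        print("No fffmpeg.exe found at the JAVS Viewer 8 path.")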


Original Submission

posted by janrinok on Tuesday May 28, @05:20PM
from the future-iron-man dept.

https://www.bbc.com/news/articles/c4nnjpjzryeo
https://www.standard.co.uk/news/health/jordan-marotta-bionic-hero-arm-iron-man-boy-new-york-b1159991.html
https://openbionics.com/

A five-year-old boy who was born without a left hand has become the youngest in the world to get a bionic Hero Arm, making him "feel like a superhero".

The custom-made, 3D printed prosthetic is produced by Bristol-based Open Bionics, which was founded in 2014 and launched four clinics in America in the last year.

Jordan, of Long Island, New York state, is now the youngest ever owner of one of the firm's Hero Arms.

The prosthetic uses special sensors that detect muscular contractions and turn them into bionic hand movements.
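That is the standard myoelectric approach: electrodes against the skin pick up the electrical activity of the residual muscles, and a controller maps contraction intensity to grip commands. Purely as an illustrative sketch of the idea, not Open Bionics' actual firmware (the sensor function and thresholds here are invented), the control loop amounts to something like:

    import random

    # Invented thresholds on a normalized 0..1 muscle-activity envelope.
    OPEN_THRESHOLD = 0.3    # a light contraction opens the hand
    CLOSE_THRESHOLD = 0.7   # a strong contraction closes the grip

    def read_emg_envelope() -> float:
        """Stand-in for a real EMG sensor read; returns activity in 0..1."""
        return random.random()

    def decide(level: float) -> str:
        if level >= CLOSE_THRESHOLD:
            return "close grip"
        if level >= OPEN_THRESHOLD:
            return "open hand"
        return "hold position"

    for _ in range(5):
        level = read_emg_envelope()
        print(f"muscle activity {level:.2f} -> {decide(level)}")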

Most children with Hero Arms are aged seven or older, but the firm said Jordan's size for his age and his high IQ, which made him easy to teach to use the Hero Arm, meant he could have one sooner.

[...] Open Bionics describes itself as the only company in the world making multi-articulating hands small and light enough for children as young as Jordan.


Original Submission

posted by janrinok on Tuesday May 28, @12:34PM

Arthur T Knackerbracket has processed the following story:

There's common agreement that generative artificial intelligence (AI) tools can help people save time and boost productivity. But while these technologies make it easy to run code or produce reports quickly, the backend work to build and sustain large language models (LLMs) may need more human labor than the effort saved up front. Plus, many tasks may not necessarily require the firepower of AI when standard automation will do. 

That's the word from Peter Cappelli, a management professor at the University of Pennsylvania's Wharton School, who spoke at a recent MIT event. On a cumulative basis, generative AI and LLMs may create more work for people than they take away. LLMs are complicated to implement, and "it turns out there are many things generative AI could do that we don't really need doing," said Cappelli.

While AI is hyped as a game-changing technology, "projections from the tech side are often spectacularly wrong," he pointed out. "In fact, most of the technology forecasts about work have been wrong over time." He said the imminent wave of driverless trucks and cars, predicted in 2018, is an example of rosy projections that have yet to come true. 

Broad visions of technology-driven transformation often get tripped up in the gritty details. Proponents of autonomous vehicles promoted what "driverless trucks could do, rather than what needs to be done, and what is required for clearing regulations -- the insurance issues, the software issues, and all those issues." Plus, Cappelli added: "If you look at their actual work, truck drivers do lots of things other than just driving trucks, even on long-haul trucking."

A similar analogy can be drawn to using generative AI for software development and business. Programmers "spend a majority of their time doing things that don't have anything to do with computer programming," he said. "They're talking to people, they're negotiating budgets, and all that kind of stuff. Even on the programming side, not all of that is actually programming."  

The technological possibilities of innovation are intriguing but rollout tends to be slowed by realities on the ground. In the case of generative AI, any labor-saving and productivity benefits may be outweighed by the amount of backend work needed to build and sustain LLMs and algorithms. 

Both generative and operational AI "generate new work," Cappelli pointed out. "People have to manage databases, they have to organize materials, they have to resolve these problems of dueling reports, validity, and those sorts of things. It's going to generate a lot of new tasks, somebody is going to have to do those."

He said operational AI that's been in place for some time is still a work in progress. "Machine learning with numbers has been markedly underused. Some part of this has been database management questions. It takes a lot of effort just to put the data together so you can analyze it. Data is often in different silos in different organizations, which are politically difficult and just technically difficult to put together."  

Cappelli cited several issues that must be overcome in the move toward generative AI and LLMs.

Cappelli suggested the most useful generative AI application in the near term is sifting through data stores and delivering analysis to support decision-making. "We are awash in data right now that we haven't been able to analyze ourselves," he said. "It's going to be way better at doing that than we are." Along with database management, "somebody's got to worry about guardrails and data pollution issues."


Original Submission

posted by janrinok on Tuesday May 28, @07:43AM

Arthur T Knackerbracket has processed the following story:

Splash a few drops of water on a pan and, if the pan is hot enough, the water will sizzle and the droplets will seem to roll and float, hovering above the surface.

The temperature at which this phenomenon, called the Leidenfrost effect, occurs is predictable, usually happening above 230 degrees Celsius. The team of Jiangtao Cheng, associate professor in the Virginia Tech Department of Mechanical Engineering, has discovered a method to create the aquatic levitation at a much lower temperature, and the results have been published in Nature Physics.

Alongside first author and Ph.D. student Wenge Huang, Cheng's team collaborated with Oak Ridge National Lab and Dalian University of Technology for sections of the research.

The discovery has great potential in heat-transfer applications such as cooling industrial machines and cleaning fouling from heat-exchanger surfaces. It could also help prevent damage, and even disaster, in nuclear machinery.

Currently, more than 90 licensed operable nuclear reactors in the U.S. power tens of millions of homes, anchor local communities, and account for about half of the country's clean electricity production. Stabilizing and cooling those reactors takes resources, and heat transfer is crucial to normal operations.

For three centuries, the Leidenfrost effect has been a well-known phenomenon among physicists that establishes the temperature at which water droplets hover on a bed of their own vapor. While it has been widely documented to start at 230 degrees Celsius, Cheng and his team have pushed that limit much lower.

The effect occurs because there are two different states of water living together. If we could see the water at the droplet level, we would observe that not all of a droplet boils at the surface, only part of it. The heat vaporizes the bottom, but the energy doesn't travel through the entire droplet. The liquid portion above the vapor is receiving less energy because much of it is used to boil the bottom. That liquid portion remains intact, and this is what we see floating on its own layer of vapor. This has been referred to since its discovery in the 18th century as the Leidenfrost effect, named for German physician Johann Gottlob Leidenfrost.

That onset temperature is well above the 100 degree Celsius boiling point of water because the surface must be hot enough to instantly form a vapor layer. Too cool, and the droplets don't hover. Too hot, and the heat vaporizes the entire droplet.

The traditional measurement of the Leidenfrost effect assumes that the heated surface is flat, which causes the heat to hit the water droplets uniformly. Working in the Virginia Tech Fluid Physics Lab, Cheng's team has found a way to lower the starting point of the effect by producing a surface covered with micropillars.

"Like the papillae on a lotus leaf, micropillars do more than decorate the surface," said Cheng. "They give the surface new properties."

The micropillars designed by Cheng's team are 0.08 millimeters tall, roughly the width of a human hair, and arranged in a regular pattern 0.12 millimeters apart. A single water droplet covers 100 or more of them. These tiny pillars press into the droplet, carrying heat into its interior and making it boil more quickly.
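As a quick plausibility check on those figures (my arithmetic, not the paper's): at a 0.12 mm pitch, each pillar occupies a 0.12 x 0.12 mm cell, so 100 pillars need only about 1.4 square millimeters of contact area, a patch less than 0.7 mm in radius and well within an ordinary droplet's footprint:

    import math

    pitch_mm = 0.12              # pillar spacing given in the article
    pillars_under_droplet = 100  # pillars under one droplet, per the article

    cell_area = pitch_mm ** 2                          # unit cell per pillar, mm^2
    contact_area = pillars_under_droplet * cell_area   # total contact area, mm^2
    radius = math.sqrt(contact_area / math.pi)         # equivalent circular patch

    print(f"contact area ~ {contact_area:.2f} mm^2, radius ~ {radius:.2f} mm")
    # prints: contact area ~ 1.44 mm^2, radius ~ 0.68 mm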

Compared with a flat surface, the fin-array-like micropillars press more heat into the water, and the speed of boiling can be tuned by changing the height of the pillars. As a result, microdroplets levitate and jump off the surface within milliseconds, at temperatures well below the traditional 230 degree Celsius trigger point.

When the textured surface was heated, the team discovered that the temperature at which the floating effect was achieved was significantly lower than that of a flat surface, starting at 130 degrees Celsius.

Not only is this a novel discovery for the understanding of the Leidenfrost effect, it is a twist on the limits previously imagined. A 2021 study from Emory University found that the properties of water caused the Leidenfrost effect to fail when the temperature of the heated surface drops to 140 degrees Celsius. Using the micropillars created by Cheng's team, the effect is sustained even 10 degrees below that.

"We thought the micropillars would change the behaviors of this well-known phenomenon, but our results defied even our own imaginations," said Cheng. "The observed bubble-droplet interactions are a big discovery for boiling heat transfer."

The Leidenfrost effect is more than an intriguing phenomenon to watch, it is also a critical point in heat transfer. When water boils, it is most efficiently removing heat from a surface. In applications such as machine cooling, this means that adapting a hot surface to the textured approach presented by Cheng's team gets heat out more quickly, lowering the possibility of damage caused when a machine gets too hot.

“Our research can prevent disasters such as vapor explosions, which pose significant threats to industrial heat transfer equipment," said Huang. "Vapor explosions occur when vapor bubbles within a liquid rapidly expand due to the presence of an intense heat source nearby. One example of where this risk is particularly pertinent is in nuclear plants, where the surface structure of heat exchangers can influence vapor bubble growth and potentially trigger such explosions. Through our theoretical exploration in the paper, we investigate how surface structure affects the growth mode of vapor bubbles, providing valuable insights into controlling and mitigating the risk of vapor explosions."

More information: Wenge Huang et al, Low-temperature Leidenfrost-like jumping of sessile droplets on microstructured surfaces, Nature Physics (2024). DOI: 10.1038/s41567-024-02522-z

Journal information: Nature Physics


Original Submission

posted by janrinok on Tuesday May 28, @03:10AM

Elon's New Supercomputer

https://www.straitstimes.com/world/united-states/musk-plans-largest-ever-supercomputer-for-xai-start-up-report

https://www.theinformation.com/articles/musk-plans-xai-supercomputer-dubbed-gigafactory-of-compute

https://en.wikipedia.org/wiki/Tesla_Dojo

Another day, and Elon wants to do something new. Now he is going to build the world's largest supercomputer, ready next fall (2025). His AI company is going to be the main customer, but I guess his other ventures, from cars to rockets, could use some computational power too.

And apparently he is not just going to be bigger than the rest; he is going to build it massively bigger, at least four times bigger than the top computers today.

Renting supercomputing power from other companies has apparently become so expensive that it's cheaper and better to just build your own: a Gigafactory of Compute.

The previous one for Tesla, the Tesla Dojo, was apparently not enough.


Musk Plans Largest-ever Supercomputer, Report Says

Musk plans largest-ever supercomputer, report says - Taipei Times:

[...] The planned supercomputer would be "at least four times the size of the biggest GPU clusters that exist today," such as those used by Meta Platforms Inc to train its AI models, Musk was quoted as saying during a presentation to investors this month.

Since OpenAI's generative AI tool ChatGPT exploded on the scene in 2022, the technology has been an area of fierce competition between tech giants Microsoft Corp and Google Inc, as well as Meta and start-ups like Anthropic and Stability AI Inc.

Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

His company xAI is developing a chatbot named Grok, which can access social media platform X, also owned by Musk, in real time.

Earlier this year, Musk said training the Grok 2 model took about 20,000 Nvidia H100 GPUs, adding that the Grok 3 model and beyond would require 100,000 Nvidia H100 units.
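Those numbers hang together arithmetically. Taking Meta's publicly described H100 training clusters of roughly 24,576 GPUs as the "biggest clusters that exist today" baseline (my assumption; the article doesn't name a figure), four times that size lands almost exactly on the 100,000 H100s cited for Grok 3:

    # Back-of-envelope check. The 24,576 baseline is an assumption drawn from
    # Meta's publicly described H100 clusters, not a figure from this article.
    meta_cluster_gpus = 24_576
    planned_gpus = 4 * meta_cluster_gpus
    print(planned_gpus)       # 98304, roughly the 100,000 H100s cited for Grok 3
    print(100_000 // 20_000)  # Grok 3 vs Grok 2 GPU budget: a 5x jump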

In related news, Tesla shareholders are being urged by a major proxy advisory firm to reject a proposed US$56 billion pay package for Musk, in a blow to the electric-vehicle manufacturer's board.

Glass Lewis & Co made its recommendation in a report released on Saturday, citing the "excessive size" of the pay deal and the dilutive effect upon exercise.

"Mr. Musk's slate of extraordinarily time-consuming projects unrelated to the company was well-documented before the 2018 grant, and only expanded with his high-profile purchase of the company now known as X," Glass Lewis said.

The recommendation to large institutional investors might sway their vote over Musk's pay at the vehicle manufacturer's annual meeting on June 13. If the proposal is rejected, the CEO might make good on threats to develop products outside of Tesla.


Original Submission #1 | Original Submission #2

posted by janrinok on Monday May 27, @10:25PM

Arthur T Knackerbracket has processed the following story:

Hey Google, can you spare a few hundred million to keep Rupert Murdoch’s yacht afloat? That’s essentially what some legislators are demanding with their harebrained schemes to force tech companies to fund journalism.

It is no secret that the journalism business is in trouble these days. News organizations are failing and journalists are being laid off in record numbers. There have been precious few attempts at carefully thinking through this situation and exploring alternative business models. The current state of the art thinking seems to be either (1) a secretive hedge fund buying up newspapers, selling off the pieces and sucking out any remaining cash, (2) replacing competent journalists with terrible AI-written content or (3) putting all the good reporting behind a paywall so that disinformation peddlers get to spread nonsense to the vast majority of the public for free.

Then, there’s the legislative side. Some legislators have (rightly!) determined that the death of journalism isn’t great for the future of democracy. But, so far, their solutions have been incredibly problematic and dangerous. These efforts have been pushed by the likes of Rupert Murdoch, whose loud and proud support for “free market capitalism” crumbled to dust the second his own news business started failing, leading him to demand government handouts to cover his failures in the market. The private equity folks buying up newspapers (mainly Alden Capital) jumped into the game as well, demanding that the government force Google and Meta to subsidize their strip-mining of the journalism field.

The end result has mostly been disastrous link taxes, which were pioneered in Europe a decade ago. They failed massively before being revived more recently in Australia and Canada, where they have also failed (despite people pretending they have succeeded).

For no good reason, the US Congress and California’s legislature are still considering their own versions of this disastrous policy that has proven (1) terrible for journalism and (2) even worse for the open web.

Recently, California Senator Steve Glazer offered up an alternative approach, SB 1327 that is getting a fair bit of attention. Instead of taxing links like all those other proposals, it would directly tax the digital advertising business model and use that tax to create a fund for journalism. Specifically, it would apply a tax on what it refers to (in a dystopian Orwellian way) as a “data extraction transaction.” It refers to the tax as a “data extraction mitigation fee” and that tax would be used to provide credits for “qualified” media entities.

I’ve seen very mixed opinions on this. It’s not surprising that some folks are embracing this as a potential path to funding journalism. Casey Newton described it as a “better way for platforms to fund journalism.”

And, I mean, when compared to link taxes, it is potentially marginally better (but also, with some very scary potential side effects). The always thoughtful Brandon Silverman (who created CrowdTangle and has worked to increase transparency from tech companies) also endorses the bill as “a potential path forward.”

But I tend to agree much more with journalism professor Jeff Jarvis, who highlights the fundamental problems of the bill and the framework it creates. As I’ve pointed out with link taxes, the oft-ignored point of a tax on something is to get less of it. You tax something bad because that tax decreases how much of it is out there. And, as Jarvis points out, this is basically a tax on information.

Furthermore, Jarvis rightly points out that Glazer’s bill taxes the attention users give to internet companies while explicitly carving out the attention users give to other types of media companies. This sets up a problematically tiered system for when attention gets taxed and when it doesn’t.

Indeed, the entire framing of the bill seems to suggest that data and advertising are a sort of “pollution” that needs to be taxed in order to minimize it. And that seems particularly troublesome.

As Jarvis also notes, the true beneficiaries of a law like this would still be those rapacious hedge funds that have bought up a bunch of news orgs [...]


Original Submission

posted by janrinok on Monday May 27, @05:39PM
from the cheap-steel dept.

"Researchers from the University of Cambridge have developed a method to produce very low emission concrete at scale -- an innovation that could be transformative in the transition to net zero." reports ScienceDaily

The method, which the researchers say is "an absolute miracle" [do they take us for savages?], uses the electrically powered arc furnaces used for steel recycling to simultaneously recycle cement, the carbon-hungry component of concrete.
...
The Cambridge researchers found that used cement is an effective substitute for lime flux, which is used in steel recycling to remove impurities and normally ends up as a waste product known as slag. But by replacing lime with used cement, the end product is recycled cement that can be used to make new concrete.

The cement recycling method developed by the Cambridge researchers, reported in the journal Nature, does not add any significant costs to concrete or steel production and significantly reduces emissions from both concrete and steel, due to the reduced need for lime flux.
...
Recent tests carried out by the Materials Processing Institute, a partner in the project, showed that recycled cement can be produced at scale in an electric arc furnace (EAF), the first time this has been achieved. Eventually, this method could produce zero emission cement, if the EAF was powered by renewable energy.
...
"I had a vague idea from previous work that if it were possible to crush old concrete, taking out the sand and stones, heating the cement would remove the water, and then it would form clinker again," said first author Dr Cyrille Dunant, also from the Department of Engineering. "A bath of liquid metal would help this chemical reaction along, and an electric arc furnace, used to recycle steel, felt like a strong possibility. We had to try."
...
"We found the combination of cement clinker and iron oxide is an excellent steelmaking slag because it foams and it flows well," said Dunant. "And if you get the balance right and cool the slag quickly enough, you end up with reactivated cement, without adding any cost to the steelmaking process."

The cement made through this recycling process contains higher levels of iron oxide than conventional cement, but the researchers say this has little effect on performance.

DOI: 10.1038/s41586-024-07338-8 (free access)

4:33min vid


Original Submission

posted by janrinok on Monday May 27, @12:52PM
from the breaking-oem-monopolies dept.

Several sites are reporting on Qualcomm's increasing Linux support. The tide is turning, and Microsoft's monopoly on OEMs, at least the non-x86 ones, might be weakening now that full Linux support is expected on modern hardware architectures:

Here's the thing. In the Linux world, ARM has had something like a 15-year head start over Microsoft's own often anemic ARM efforts, thanks to the Raspberry Pi and single-board computers making the platform a good choice for more than just basic web-surfers.

Collectives like Pine64 have been building laptops with first-class citizen Linux support for years. (They're admittedly not fast but they offer a good ecosystem to develop on.)

And then we got fast ARM laptops from Apple, which smoked what was already out there but came with the side effect of a Linux experience that is still somewhat immature, despite the strides already made.

This may be a game-changer.

Qualcomm is making good progress on adapting its new Snapdragon X Elite laptop CPU for Linux use. The mobile SoC manufacturer revealed that it has already laid a lot of the groundwork to get the Snapdragon X Elite running Linux. However, Qualcomm is far from done, as there's still a lot of development work needed to bring the X Elite to a fully operational state under Linux. Upcoming Linux kernels should enable full support for all the chip's features.

Qualcomm prides itself on its Linux enablement work and has prioritized Linux enablement in all of its previous Snapdragon laptop CPUs, typically announcing Linux support one or two days after launch. The Snapdragon X Elite continues that pattern, with Linux enablement being announced the very next day after its original October 23, 2023 debut.

Tom's Hardware, Qualcomm goes where Apple won't, readies official Linux support for Snapdragon X Elite.

It seems that the Asahi Linux project has also done great work on the M-series chips, despite getting no help from Apple. On that front, it is Apple that still has to get up to speed.

Previously,
(2024) Desktop GNU/Linux Surpasses 4% Market Share


Original Submission

posted by hubie on Monday May 27, @06:02AM
from the slowly-making-progress dept.

[Ed. note: Some of the links in the Ars article point to the Office of the Revisor of Statutes web site. The links did not resolve for the submitter nor this editor, but they are included below in the event that it is a temporary problem with the web site.]

Ars Technica is reporting on a Minnesota law passed this week. According to the article:

Minnesota this week eliminated two laws that made it harder for cities and towns to build their own broadband networks. The state-imposed restrictions were repealed in an omnibus commerce policy bill [N.B., this link is not valid (Geo-Blocked, perhaps? I'm not in MN). I retained it as it is in TFA. See link to the MN House journal above] signed on Tuesday by Gov. Tim Walz, a Democrat.

Minnesota was previously one of about 20 states that imposed significant restrictions on municipal broadband. The number can differ depending on who's counting because of disagreements over what counts as a significant restriction. But the list has gotten smaller in recent years because states including Arkansas, Colorado, and Washington repealed laws that hindered municipal broadband.

The Minnesota bill enacted this week struck down a requirement that municipal telecommunications networks be approved in an election with 65 percent of the vote. The law is over a century old, the Institute for Local Self-Reliance's Community Broadband Network Initiative wrote yesterday.

"Though intended to regulate telephone service, the way the law had been interpreted after the invention of the Internet was to lump broadband in with telephone service thereby imposing that super-majority threshold to the building of broadband networks," the broadband advocacy group said.

The Minnesota omnibus bill also changed a law that let municipalities build broadband networks, but only if no private providers offer service or will offer service "in the reasonably foreseeable future." That restriction had been in effect since at least the year 2000.

The caveat that prevented municipalities from competing against private providers was eliminated from the law when this week's omnibus bill was passed. As a result, the law now lets cities and towns "improve, construct, extend, and maintain facilities for Internet access and other communications purposes" even if private ISPs already offer service.

I sure wish I could get municipal broadband. How about you Soylentils? Do you have municipal broadband? Does your ISP have competition at all? Abusive terms of service? Data caps?


Original Submission

posted by janrinok on Monday May 27, @01:16AM

Arthur T Knackerbracket has processed the following story:

South Korea's president has described the global semiconductor industry as "a field where all-out national warfare is underway" as he announced a $19 billion program to diversify the nation's silicon sector.

In remarks presented on Thursday at a government economic review meeting, President Yoon Suk Yeol called for South Korea to "open a new future for the semiconductor industry."

"Our semiconductors have dominated the world in the memory field over the past 30 years," he declared, before lamenting that "our fabless market share still remains in the one percent range, and foundries that manufacture system semiconductors are unable to narrow the gap with leading companies such as TSMC."

"In the future, the success or failure of the semiconductor industry will be determined by system semiconductors, which account for two thirds of the entire market," he predicted, calling for his nation "to bet on system semiconductors, which are constantly expanding beyond CPUs and GPUs to AI semiconductors."

To make that happen, South Korea has created a $19 billion program to fund construction of chipmaking mega-clusters – especially the electrical and transport infrastructure they need. Provision of water resources for chipmaking has also been fast-tracked.

The plan will also see a "mini-fab" created, so that small and medium-sized fabless chip firms have a resource they can use to get their products off the drawing board. They will also be helped by a fund that President Yoon said will help to turn them into global enterprises.

The president noted that government funds flowing to chipmakers could be perceived as "a tax cut for large corporations or a tax cut for the rich." He rebutted that notion by arguing "the semiconductor industry is the most important and sure foundation for making our people's lives richer and making our economy take off."

"Semiconductors are the livelihood of the people, and all support for the semiconductor industry is for the benefit of the people," he argued, adding that government investment will pay for itself handsomely over time.

[...] President Yoon raised many solid points. The United States and European Union have thrown tens of billions of dollars and euros respectively at IC manufacturers, while South Korea's chip champs – Samsung Electronics and SK hynix – are indeed monsters of memory as they collectively hold over 70 percent of the market for DRAM and NAND flash. But beyond Samsung's modest Exynos SoC operation (which can't even satisfy demand for its own Galaxy smartphones), South Korea is not home to a notable manufacturer of high-value processors.

Samsung and SK hynix have also made enormous bets on factories to produce more memory – some on the peninsula and others stateside (where they could attract funds from Uncle Sam).

South Korea's new funding package is tiny compared to the sums its chip champions are spending, so it's unlikely it will divert their efforts notably. Nor will it deter them continuing to target the so-hot-right-now memory market, in which demand for DDR5 and related variants – plus high bandwidth memory (HBM) – is currently rampant. Indeed, SK hynix has already found buyers for all the HBM it can make in 2024 and most of the chips it will produce in 2025.

Memory-centric analyst firm TrendForce recently worried out loud about AI-fuelled demand for HBM skewing manufacturing investments away from DRAM, and maybe causing a shortage of the latter in years to come.


Original Submission

posted by janrinok on Sunday May 26, @08:31PM
from the this-one-is-for-the-hunters dept.

Parasitic worms infect 6 after bear meat served at family reunion:

Six family members caught a rare parasitic worm infection after sharing a meal that included black bear meat, which was initially served rare after being stored frozen for more than a month.

Two of the people reported only eating vegetables at the meal, so it's likely that the infected meat contaminated these sides at some point.

The worm infection, called trichinellosis, is rarely reported in the United States, according to a new report of the case published Thursday (May 23) by the Centers for Disease Control and Prevention (CDC). Between 2016 and 2022, only 35 probable and confirmed cases of the disease were recorded. "Bear meat was the suspected or confirmed source of infection in the majority of those outbreaks," the report noted.

Trichinellosis occurs when people inadvertently consume larvae of a roundworm in the Trichinella genus. The worm commonly infects bears, wild boars, wildcats, foxes, wolves, seals and walruses. People typically become infected after consuming raw or undercooked meat from infected animals.

Historically, people in the U.S. sometimes contracted the infection from raw or undercooked commercial pork products, but modern regulations and cooking guidelines have lowered this risk.

The newly reported case took place in 2022, when a 29-year-old man in Minnesota was hospitalized with a fever, severe muscle aches and pains, and swelling around the eyes. He was also found to have a high count of immune cells called eosinophils, a condition known as eosinophilia that is a sign of infection.

Within a span of about two weeks, the man had sought medical attention for his symptoms four times and was hospitalized twice. During the second hospitalization, he reported having consumed bear meat, and the medical team started him on medication for parasitic worms, just in case. They later confirmed he was carrying antibodies against Trichinella worms, and an investigation was launched to check for more cases.

About a week before he got sick, the Minnesota man had met up with nine family members in South Dakota. They'd shared a meal that included kabobs made with black bear (Ursus americanus) meat, which was originally harvested in Canada by one of the attending family members. It had been frozen for 45 days before being thawed, cooked and served with vegetables.

"The hunting outfitter had recommended freezing the meat to kill parasites," the CDC report notes. But some Trichinella species can survive being frozen. (This includes Trichinellanativa, which turned out to be the species likely involved in this case.)

"The meat was initially inadvertently served rare, reportedly because the meat was dark in color, and it was difficult for the family members to visually ascertain the level of doneness," the report noted. Some family members noticed the meat was underdone while eating it, and it was then cooked a bit more before being served again.


Original Submission

posted by hubie on Sunday May 26, @03:47PM

https://www.businessinsider.com/google-search-ai-overviews-glue-keep-cheese-pizza-2024-5

Archive link: https://archive.is/pkn6w

Google's new search feature, AI Overviews, seems to be going awry.

The tool, which gives AI-generated summaries of search results, appeared to instruct a user to put glue on pizza when they searched "cheese not sticking to pizza."

A screenshot of the summary it generated, shared on X, shows it responded with "cheese can slide off pizza for a number of reasons," and that the user could try adding "about ⅛ cup of non-toxic glue to the sauce to give it more tackiness."


Original Submission

posted by martyb on Sunday May 26, @11:00AM
from the maybe-Dracula-was-onto-something? dept.

Proteins in blood could give cancer warning seven years earlier - Cancer Research UK - Cancer News:

Proteins linked to cancer can start appearing in people's blood more than seven years before they're diagnosed, our funded researchers have found. In the future, it's possible doctors could use these early warning signs to find and treat cancer much earlier than they're able to today.

Across two studies, researchers at Oxford Population Health identified 618 proteins linked to 19 different types of cancer, including 107 proteins in a group of people whose blood was collected at least seven years before they were diagnosed.

The findings suggest that these proteins could be involved at the very earliest stages of cancer. Intercepting them could give us a way to stop the disease developing altogether.

"This research brings us closer to being able to prevent cancer with targeted drugs – once thought impossible but now much more attainable," explained Dr Karl Smith-Byrne, Senior Molecular Epidemiologist at Oxford Population Health, who worked on both papers.

For now, though, we need to do further research. The team want to find out more about the roles these proteins play in cancer development, how we can use tests to spot the most important ones, and which drugs we can use to stop them driving cancer.

[...] In the first study, scientists analysed 44,000 blood samples collected and stored by UK Biobank, including over 4,900 samples from people who were later diagnosed with cancer.

Their analysis of 1,463 proteins in each sample revealed 107 that changed at least seven years before a cancer diagnosis and 182 that changed at least three years before a cancer diagnosis.

In the second study, the scientists looked at genetic data from over 300,000 cancer cases to do a deep dive into which blood proteins were involved in cancer development and could be targeted by new treatments.

This time, they found 40 proteins in the blood that influence someone's risk of getting nine different types of cancer. While altering these proteins may increase or decrease the chances of someone developing cancer, more research is needed to make sure targeting them with drugs doesn't cause unintended side effects.


Original Submission

posted by hubie on Sunday May 26, @07:13AM
from the Quality-with-a-capital-Q dept.

The BBC is running a podcast on "Archive on 4" called "Turning 50: Zen and the Art of Motorcycle Maintenance" - https://www.bbc.co.uk/sounds/play/m001zfqh

I remember greatly enjoying that book in the 1970s; it made a real impression and changed how I thought about certain things. This podcast was a great refresher. It even includes original interviews with Robert Pirsig (the author) and others close to the creation of the book.

Some of the backstory behind the book was eye-opening (though other parts were easy enough to work out just by reading it). For example, Pirsig flogged his original manuscript to many, many publishers before he got a nibble. He also gives a lot of credit to a young editor who convinced him to change from (iirc) first to third person (via an unnamed observer), resulting in the eventual great success of the book.

Of course, the podcast can't resist revisiting the oft-quoted beer can shim story...or am I confusing that with a recent post on a motorcycle maintenance forum (grin)?


Original Submission

posted by hubie on Sunday May 26, @02:32AM
from the nothing-to-see-here dept.

Arthur T Knackerbracket has processed the following story:

The EU is concerned that Bing’s AI features could impact elections, while the UK’s CMA has decided not to investigate Microsoft’s partnership with Mistral AI.

The EU has Microsoft on its regulatory radar, as it has sent the company a legally binding request for information about Bing’s generative AI features.

The European Commission said this request for information is based on suspicions that Bing may have breached the Digital Services Act (DSA) due to risks linked to generative AI. These risks include AI ‘hallucinations’, the viral spread of deepfakes and the “automated manipulation of services that can mislead voters”.

[...] Meanwhile, the UK’s Competition and Markets Authority (CMA) has opted not to investigate Microsoft’s partnership with the start-up Mistral AI.

Microsoft backed the French unicorn earlier this year as part of a “multi-year partnership” to boost its Azure cloud computing platform with AI. But the CMA had concerns around whether the agreements between the two companies qualified as a merger deal.

As part of that effort, the CMA looked into the minority investment deals agreed by Microsoft and Mistral. The regulator had concerns that the links between the two companies could impact competition within the UK.

But in a brief statement released today (17 May), the CMA decided that Microsoft’s partnership with Mistral AI “does not qualify for investigation” under the merger provisions in the UK.


Original Submission
