
posted by janrinok on Wednesday May 29, @10:07PM   Printer-friendly
from the well-wood-you-believe-it dept.

Abstract:

Silica glass is brittle, heavy, and non-biodegradable, yet suitable alternatives have proven difficult to find.

Transparent wood, made by infusing polymers into wood, shows promise but is hindered by the limited availability of wood in China and the fire risks associated with its use. This study explores the potential of bamboo, which has a shorter growth cycle, as a valuable resource for developing flame-retardant, smoke-suppressing, and superhydrophobic transparent bamboo. A 3-layered flame-retardant barrier, composed of a top silane layer, an intermediate layer of SiO₂ formed through hydrolysis-condensation of Na₂SiO₃ on the surface, and an inner layer of Na₂SiO₃, has been confirmed to be effective in reducing heat release, slowing flame spread, and inhibiting the release of combustible volatiles, toxic smoke, and CO.

Compared to natural bamboo and other congeneric transparent products, the transparent bamboo displays remarkable superiority, with the majority of parameters being notably lower by an entire order of magnitude.

It achieves a long ignition time of 116 s, low total heat release (0.7 MJ/m²), low total smoke production (0.063 m²), and low peak CO concentration (0.008 kg/kg). Moreover, when used as a substrate for perovskite solar cells, the transparent bamboo displays the potential to act as a light management layer, leading to a marked efficiency enhancement of 15.29%. The excellent features of transparent bamboo make it an enticing choice for future advancements in flame-retardant glasses and optical devices.

Bamboo, often referred to as "the second forest", boasts a rapid growth and regeneration rate, allowing it to reach maturity and be utilized as a building material within 4 to 7 years of growth. With an output 4 times higher than wood per acre, bamboo is recognized for its exceptional efficiency. In terms of chemical composition, bamboo shares similarities with wood, mainly consisting of lignin, cellulose, and hemicellulose. Furthermore, the internal hierarchical structure of bamboo closely resembles that of wood, featuring high porosity and permeability because of neatly arranged vertical channels.

This characteristic suggests the potential use of bamboo in the production of transparent composite materials. Transparent bamboo offers 3 distinct advantages over traditional silica glass. Firstly, the abundant and renewable nature of bamboo feedstock aligns with environmental sustainability goals. Secondly, transparent bamboo exhibits high light transmittance and haze, enabling privacy while facilitating the entry of natural light indoors. Lastly, the low density and the excellent temperature- and humidity-regulating ability of the bamboo template further position it as a promising alternative to conventional glass.


Original Submission

posted by janrinok on Wednesday May 29, @05:20PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The competition to produce the world's most advanced chips is fierce, and TSMC's product roadmap promises that the battle for supremacy will be intense. First up is its performance-optimized N3P node, set to enter mass production in the second half of 2024; it will be the company's most advanced node for a while.

Next year, however, TSMC will introduce two production nodes that will enter high-volume manufacturing in the second half of 2025, promising to accelerate the advantages of N3P. These nodes are N3X, a 3nm-class process, and N2, a 2nm-class process.

N3X is tailored for high-performance computing applications, with a maximum voltage of 1.2V. According to research compiled by AnandTech, N3X chips can either reduce power consumption by 7% (by lowering Vdd from 1.0V to 0.9V), increase performance by 5%, or increase transistor density by around 10%.

N2 uses gate-all-around (GAA) nanosheet transistors – a first for TSMC – and features exceptional low Vdd performance that is designed for mobile and wearable applications. In addition, N2's ultra-thin stacked nanosheets deliver a new level of energy efficient computing for HPC, TSMC says. Backside power rail will also be added to boost performance even further.

N2 technology will come with TSMC NanoFlex, a design-technology co-optimization that provides designers with flexibility in N2 standard cells, with short cells emphasizing small area and greater power efficiency, and tall cells maximizing performance. Customers are able to optimize the combination of short and tall cells within the same design block.

In 2026, TSMC will introduce two more nodes: N2P (2nm-class) and A16 (1.6nm-class).

N2P is expected to deliver 5-10% lower power or 5-10% higher performance compared with the original N2. However, contrary to prior announcements, N2P will not incorporate a backside power delivery network, using conventional power delivery mechanisms instead. This means the integration of such advanced power delivery will shift to future-generation nodes, including A16.


Original Submission

posted by hubie on Wednesday May 29, @12:34PM   Printer-friendly

https://steveblank.com/2024/05/16/secret-history-when-kodak-went-to-war-with-polaroid/

Kodak and Polaroid, the two most famous camera companies of the 20th century, had a great partnership for 20+ years. Then, in an inexplicable turnabout, Kodak decided to destroy Polaroid's business. To this day, every story of why Kodak went to war with Polaroid is wrong.

The real reason can be found in the highly classified world of overhead reconnaissance satellites.

Here's the real story.


Original Submission

posted by hubie on Wednesday May 29, @07:48AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

More than 190 nations agreed Friday on a new treaty to combat so-called biopiracy and regulate patents stemming from genetic resources such as medicinal plants, particularly ones whose uses owe a debt to traditional knowledge.

After lengthy negotiations, delegates approved to cheers and applause the "first WIPO Treaty to address the interface between intellectual property, genetic resources and traditional knowledge", the UN's World Intellectual Property Organization said in a statement.

[...] Genetic resources are increasingly used by companies in everything from cosmetics to seeds, medicines, biotechnology and food supplements.

They have enabled considerable progress in health, climate and food security, according to the United Nations.

[...] The treaty text says patent applicants will be required to disclose where the genetic resources used in an invention came from, and the indigenous people who provided the associated traditional knowledge.

The goal is to combat biopiracy by ensuring that an invention is genuinely new, and that the countries and local communities concerned agree with the use of their genetic resources, such as plant species cultivated over time, and the traditional knowledge surrounding them.

While natural genetic resources—such as those found in medicinal plants, agricultural crops and animal breeds—cannot be directly protected as intellectual property, inventions developed using them can be patented.

As it is currently not mandatory to publish the origin of innovations, many developing countries are concerned that patents are being granted that circumvent the rights of indigenous people.

Antony Scott Taubman set up WIPO's traditional knowledge division in 2001 but no longer works with the agency.

"I wouldn't go so far as to say it's revolutionary," he said of the treaty.

"Conceptually what we're looking at here is a recognition that when I apply for a patent, it's not purely a technical step... it recognizes that I have liabilities," he told AFP.

Brazilian ambassador Guilherme de Aguiar Patriota, who has chaired the talks, hailed the new treaty early Friday as a "very carefully balanced outcome" of the talks.

"It constitutes the best possible compromise and a carefully calibrated solution, which seeks to bridge and to balance a variety of interests, some very passionately held and assiduously expressed and defended over the course of decades."

[...] It took years of negotiations to reduce 5,000 pages of documentation on the subject down to the agreement.


Original Submission

posted by janrinok on Wednesday May 29, @02:55AM   Printer-friendly
from the lies-and-other-statistics dept.

https://www.straitstimes.com/singapore/plant-based-meat-substitutes-might-be-bad-for-diabetics-s-pore-study

The results showed that "contrary to our research hypothesis, we failed to substantiate any clear benefits for PBMD (Plant-Based Meat Diet) on cardiometabolic health compared with the corresponding ABMD (Animal-Based Meat Diet)", the team said.

However, one finding might be significant for the 9.5 per cent of the population here with diabetes. The study found that the group on ABMD had better glycaemic control.

https://www.sciencedirect.com/science/article/pii/S0002916524003964#fig2

However, time in range was significantly higher in the ABMD group than in the PBMD group [ABMD median: 94.1% (Q1: 87.2%, Q3: 96.7%); PBMD: 86.5% (81.7%, 89.4%); P = 0.041]. This is shown in Figure 2, where the PBMD group had higher glucose concentration peaks and a lower proportion of time in range during the full-feeding period. No significant differences were found in other glycemic control and variability-related parameters during this full-feeding period.
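
For readers unfamiliar with the metric: "time in range" is simply the percentage of continuous glucose monitor (CGM) readings that fall within a target band. Here is a minimal sketch in Python, assuming the common consensus band of 3.9-10.0 mmol/L (70-180 mg/dL); the study's exact cutoffs may differ:

    # time_in_range.py -- illustrative sketch only; thresholds are assumptions.
    def time_in_range(readings_mmol, lo=3.9, hi=10.0):
        """Percentage of CGM readings falling inside [lo, hi] mmol/L."""
        if not readings_mmol:
            raise ValueError("no readings supplied")
        hits = sum(lo <= g <= hi for g in readings_mmol)
        return 100.0 * hits / len(readings_mmol)

    # Hypothetical readings sampled over one day:
    readings = [5.2, 6.8, 9.1, 10.4, 11.2, 8.3, 7.0, 5.9, 4.1, 3.7]
    print(f"time in range: {time_in_range(readings):.1f}%")  # -> 70.0%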


Original Submission

posted by janrinok on Tuesday May 28, @10:09PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

A software maker serving more than 10,000 courtrooms throughout the world hosted an application update containing a hidden backdoor that maintained persistent communication with a malicious website, researchers reported Thursday, in the latest episode of a supply-chain attack.

The software, known as the JAVS Viewer 8, is a component of the JAVS Suite 8, an application package courtrooms use to record, play back, and manage audio and video from proceedings. Its maker, Louisville, Kentucky-based Justice AV Solutions, says its products are used in more than 10,000 courtrooms throughout the US and 11 other countries. The company has been in business for 35 years.

Researchers from security firm Rapid7 reported that a version of the JAVS Viewer 8 available for download on javs.com contained a backdoor that gave an unknown threat actor persistent access to infected devices. The malicious download, planted inside an executable file that installs the JAVS Viewer version 8.3.7, was available no later than April 1, when a post on X (formerly Twitter) reported it. It’s unclear when the backdoored version was removed from the company’s download page. JAVS representatives didn’t immediately respond to questions sent by email.

“Users who have version 8.3.7 of the JAVS Viewer executable installed are at high risk and should take immediate action,” Rapid7 researchers Ipek Solak, Thomas Elkins, Evan McCann, Matthew Smith, Jake McMahon, Tyler McGraw, Ryan Emmons, Stephen Fewer, and John Fenninger wrote. “This version contains a backdoored installer that allows attackers to gain full control of affected systems.”

The installer file was titled JAVS Viewer Setup 8.3.7.250-1.exe. When executed, it copied the binary file fffmpeg.exe to the file path C:\Program Files (x86)\JAVS\Viewer 8\. To bypass security warnings, the installer was digitally signed, but with a signature issued to an entity called “Vanguard Tech Limited” rather than to “Justice AV Solutions Inc.,” the signing entity used to authenticate legitimate JAVS software.

The researchers said fffmpeg.exe also downloaded the file chrome_installer.exe from the IP address 45.120.177.178. chrome_installer.exe went on to execute a binary and several Python scripts that were responsible for stealing the passwords saved in browsers. fffmpeg.exe is associated with a known malware family called GateDoor/Rustdoor. The exe file was already flagged by 30 endpoint protection engines.

The number of detections had grown to 38 at the time this post went live.
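
For admins who want a quick first pass at triage, here is a minimal Python sketch built only from the indicators quoted above (the install path of fffmpeg.exe, the chrome_installer.exe payload, and the download address). The temp-directory search locations are assumptions, not confirmed drop points, and a host that matches should be treated as fully compromised rather than merely cleaned:

    # ioc_check.py -- rough triage sketch, not an official JAVS or Rapid7 tool.
    import os

    # File dropped by the backdoored 8.3.7 installer, per Rapid7:
    SUSPECT_BINARY = r"C:\Program Files (x86)\JAVS\Viewer 8\fffmpeg.exe"
    # Secondary payload name and download address reported in the article:
    SUSPECT_PAYLOAD = "chrome_installer.exe"
    C2_ADDRESS = "45.120.177.178"

    def check_host():
        findings = []
        if os.path.exists(SUSPECT_BINARY):
            findings.append(f"suspect binary present: {SUSPECT_BINARY}")
        # Temp directories are assumed (not confirmed) payload locations:
        for root in (os.environ.get("TEMP", ""), r"C:\Windows\Temp"):
            candidate = os.path.join(root, SUSPECT_PAYLOAD)
            if root and os.path.exists(candidate):
                findings.append(f"suspect payload present: {candidate}")
        return findings

    if __name__ == "__main__":
        hits = check_host()
        print("\n".join(hits) if hits else "no article-listed files found")
        print(f"also review proxy/firewall logs for traffic to {C2_ADDRESS}")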


Original Submission

posted by janrinok on Tuesday May 28, @05:20PM   Printer-friendly
from the future-iron-man dept.

https://www.bbc.com/news/articles/c4nnjpjzryeo
https://www.standard.co.uk/news/health/jordan-marotta-bionic-hero-arm-iron-man-boy-new-york-b1159991.html
https://openbionics.com/

A five-year-old boy who was born without a left hand has become the youngest in the world to get a bionic Hero Arm, making him "feel like a superhero".

The custom-made, 3D printed prosthetic is produced by Bristol-based Open Bionics, which was founded in 2014 and launched four clinics in America in the last year.

Jordan, of Long Island, New York state, is now the youngest ever owner of one of the firm's Hero Arms.

The prosthetic uses special sensors which detect muscular contractions and turn them into bionic hand movements.

Most children with Hero Arms are aged seven or above, but the firm said Jordan's size for his age and his high IQ – meaning he was easy to teach how to use the Hero Arm – meant he could have one sooner.

[...] Open Bionics describes itself as the only company in the world making multi-articulating hands small and light enough for children as young as Jordan.


Original Submission

posted by janrinok on Tuesday May 28, @12:34PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

There's common agreement that generative artificial intelligence (AI) tools can help people save time and boost productivity. But while these technologies make it easy to run code or produce reports quickly, the backend work to build and sustain large language models (LLMs) may need more human labor than the effort saved up front. Plus, many tasks may not necessarily require the firepower of AI when standard automation will do. 

That's the word from Peter Cappelli, management professor at the University of Pennsylvania's Wharton School, who spoke at a recent MIT event. On a cumulative basis, generative AI and LLMs may create more work for people than they alleviate. LLMs are complicated to implement, and "it turns out there are many things generative AI could do that we don't really need doing," said Cappelli.

While AI is hyped as a game-changing technology, "projections from the tech side are often spectacularly wrong," he pointed out. "In fact, most of the technology forecasts about work have been wrong over time." He said the imminent wave of driverless trucks and cars, predicted in 2018, is an example of rosy projections that have yet to come true. 

Broad visions of technology-driven transformation often get tripped up in the gritty details. Proponents of autonomous vehicles promoted what "driverless trucks could do, rather than what needs to be done, and what is required for clearing regulations -- the insurance issues, the software issues, and all those issues." Plus, Cappelli added: "If you look at their actual work, truck drivers do lots of things other than just driving trucks, even on long-haul trucking."

A similar analogy can be drawn to using generative AI for software development and business. Programmers "spend a majority of their time doing things that don't have anything to do with computer programming," he said. "They're talking to people, they're negotiating budgets, and all that kind of stuff. Even on the programming side, not all of that is actually programming."  

The technological possibilities of innovation are intriguing but rollout tends to be slowed by realities on the ground. In the case of generative AI, any labor-saving and productivity benefits may be outweighed by the amount of backend work needed to build and sustain LLMs and algorithms. 

Both generative and operational AI "generate new work," Cappelli pointed out. "People have to manage databases, they have to organize materials, they have to resolve these problems of dueling reports, validity, and those sorts of things. It's going to generate a lot of new tasks, somebody is going to have to do those."

He said operational AI that's been in place for some time is still a work in progress. "Machine learning with numbers has been markedly underused. Some part of this has been database management questions. It takes a lot of effort just to put the data together so you can analyze it. Data is often in different silos in different organizations, which are politically difficult and just technically difficult to put together."  

Cappelli cited several issues in the move toward generative AI and LLMs that must be overcome.

Cappelli suggested the most useful generative AI application in the near term is sifting through data stores and delivering analysis to support decision-making processes. "We are awash with data right now that we haven't been able to analyze ourselves," he said. "It's going to be way better at doing that than we are." Along with database management, "somebody's got to worry about guardrails and data pollution issues."


Original Submission

posted by janrinok on Tuesday May 28, @07:43AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Splash a few drops of water on a hot pan and, if the pan is hot enough, the water will sizzle and the droplets will seem to roll and float, hovering above the surface.

The temperature at which this phenomenon, called the Leidenfrost effect, occurs is predictable, usually happening above 230 degrees Celsius. The team of Jiangtao Cheng, associate professor in the Virginia Tech Department of Mechanical Engineering, has discovered a method to create the aquatic levitation at a much lower temperature, and the results have been published in Nature Physics.

Alongside first author and Ph.D. student Wenge Huang, Cheng's team collaborated with Oak Ridge National Lab and Dalian University of Technology for sections of the research.

The discovery has great potential in heat transfer applications, such as cooling industrial machines and cleaning fouled surfaces in heat exchangers. It could also help prevent damage, and even disaster, in nuclear machinery.

Currently, there are more than 90 licensed operable nuclear reactors in the U.S. that power tens of millions of homes, anchor local communities, and actually account for half of the country's clean energy electricity production. It requires resources to stabilize and cool those reactors, and heat transfer is crucial for normal operations.

For three centuries, the Leidenfrost effect has been a well-known phenomenon among physicists that establishes the temperature at which water droplets hover on a bed of their own vapor. While it has been widely documented to start at 230 degrees Celsius, Cheng and his team have pushed that limit much lower.

The effect occurs because there are two different states of water living together. If we could see the water at the droplet level, we would observe that not all of a droplet boils at the surface, only part of it. The heat vaporizes the bottom, but the energy doesn't travel through the entire droplet. The liquid portion above the vapor is receiving less energy because much of it is used to boil the bottom. That liquid portion remains intact, and this is what we see floating on its own layer of vapor. This has been referred to since its discovery in the 18th century as the Leidenfrost effect, named for German physician Johann Gottlob Leidenfrost.

That hot temperature is well above the 100 degree Celsius boiling point of water because the heat must be high enough to instantly form a vapor layer. Too low, and the droplets don't hover. Too high, and the heat will vaporize the entire droplet.

The traditional measurement of the Leidenfrost effect assumes that the heated surface is flat, which causes the heat to hit the water droplets uniformly. Working in the Virginia Tech Fluid Physics Lab, Cheng's team has found a way to lower the starting point of the effect by producing a surface covered with micropillars.

"Like the papillae on a lotus leaf, micropillars do more than decorate the surface," said Cheng. "They give the surface new properties."

The micropillars designed by Cheng's team are 0.08 millimeters tall, roughly the width of a human hair. They are arranged in a regular pattern, 0.12 millimeters apart, and a single water droplet encompasses 100 or more of them. These tiny pillars press into the droplet, releasing heat into its interior and making it boil more quickly.
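
Those figures pass a quick back-of-the-envelope check. Assuming a square grid (the article does not state the layout), 100 pillars at a 0.12 mm pitch imply a contact patch roughly 1.2 mm across, consistent with a millimeter-scale sessile droplet:

    # Rough geometry check on the article's micropillar figures (Python).
    pitch_mm = 0.12        # pillar spacing given in the article
    n_pillars = 100        # pillars under one droplet, per the article

    side = n_pillars ** 0.5             # assume a 10 x 10 square patch
    contact_width_mm = side * pitch_mm
    print(f"droplet contact patch ~ {contact_width_mm:.1f} mm across")  # ~1.2 mm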

Whereas the Leidenfrost effect is traditionally held to trigger at 230 degrees Celsius, the fin-array-like micropillars press more heat into the water than a flat surface does. This causes microdroplets to levitate and jump off the surface within milliseconds at lower temperatures, and the speed of boiling can be controlled by changing the height of the pillars.

When the textured surface was heated, the team discovered that the temperature at which the floating effect was achieved was significantly lower than that of a flat surface, starting at 130 degrees Celsius.

Not only is this a novel discovery for the understanding of the Leidenfrost effect, it is a twist on the limits previously imagined. A 2021 study from Emory University found that the properties of water actually caused the Leidenfrost effect to fail when the temperature of the heated surface drops to 140 degrees Celsius. Using the micropillars created by Cheng's team, the effect is sustained even 10 degrees below that.

"We thought the micropillars would change the behaviors of this well-known phenomenon, but our results defied even our own imaginations," said Cheng. "The observed bubble-droplet interactions are a big discovery for boiling heat transfer."

The Leidenfrost effect is more than an intriguing phenomenon to watch, it is also a critical point in heat transfer. When water boils, it is most efficiently removing heat from a surface. In applications such as machine cooling, this means that adapting a hot surface to the textured approach presented by Cheng's team gets heat out more quickly, lowering the possibility of damage caused when a machine gets too hot.

"Our research can prevent disasters such as vapor explosions, which pose significant threats to industrial heat transfer equipment," said Huang. "Vapor explosions occur when vapor bubbles within a liquid rapidly expand due to the present of intense heat source nearby. One example of where this risk is particularly pertinent is in nuclear plants, where the surface structure of heat exchangers can influence vapor bubble growth and potentially trigger such explosions. Through our theoretical exploration in the paper, we investigate how surface structure affects the growth mode of vapor bubbles, providing valuable insights into controlling and mitigating the risk of vapor explosions."

More information: Wenge Huang et al., Low-temperature Leidenfrost-like jumping of sessile droplets on microstructured surfaces, Nature Physics (2024). DOI: 10.1038/s41567-024-02522-z

Journal information: Nature Physics


Original Submission

posted by janrinok on Tuesday May 28, @03:10AM   Printer-friendly

Elon's New Supercomputer

https://www.straitstimes.com/world/united-states/musk-plans-largest-ever-supercomputer-for-xai-start-up-report

https://www.theinformation.com/articles/musk-plans-xai-supercomputer-dubbed-gigafactory-of-compute

https://en.wikipedia.org/wiki/Tesla_Dojo

Another day and Elon wants to do something new. Now he is going to build the world's largest supercomputer, ready next fall (2025). His AI company is going to be the main customer, but I guess his other ventures, from cars to rockets, could use some computational power too.

So he is apparently not just going to be bigger than the rest. He is going to build it massively bigger. As in at least four times bigger than the top computers today.

Renting supercomputing power from other companies has apparently become so expensive that it's cheaper and better to just build your own. A Gigafactory of Compute.

The previous one for Tesla, the Tesla Dojo, was apparently not enough.


Musk Plans Largest-ever Supercomputer, Report Says

Musk plans largest-ever supercomputer, report says - Taipei Times:

[...] The planned supercomputer would be "at least four times the size of the biggest GPU clusters that exist today," such as those used by Meta Platforms Inc to train its AI models, Musk was quoted as saying during a presentation to investors this month.

Since OpenAI's generative AI tool ChatGPT exploded on the scene in 2022, the technology has been an area of fierce competition between tech giants Microsoft Corp and Google Inc, as well as Meta and start-ups like Anthropic and Stability AI Inc.

Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

His company xAI is developing a chatbot named Grok, which can access social media platform X, also owned by Musk, in real time.

Earlier this year, Musk said training the Grok 2 model took about 20,000 Nvidia H100 GPUs, adding that the Grok 3 model and beyond would require 100,000 Nvidia H100 units.

In related news, Tesla shareholders are being urged by a major proxy advisory firm to reject a proposed US$56 billion pay package for Musk, in a blow to the electric-vehicle manufacturer's board.

Glass Lewis & Co made its recommendation in a report released on Saturday, citing the "excessive size" of the pay deal and the dilutive effect upon exercise.

"Mr. Musk's slate of extraordinarily time-consuming projects unrelated to the company was well-documented before the 2018 grant, and only expanded with his high-profile purchase of the company now known as X," Glass Lewis said.

The recommendation to large institutional investors might sway their vote over Musk's pay at the vehicle manufacturer's annual meeting on June 13. If the proposal is rejected, the CEO might make good on threats to develop products outside of Tesla.


Original Submission #1 | Original Submission #2

posted by janrinok on Monday May 27, @10:25PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Hey Google, can you spare a few hundred million to keep Rupert Murdoch’s yacht afloat? That’s essentially what some legislators are demanding with their harebrained schemes to force tech companies to fund journalism.

It is no secret that the journalism business is in trouble these days. News organizations are failing and journalists are being laid off in record numbers. There have been precious few attempts at carefully thinking through this situation and exploring alternative business models. The current state-of-the-art thinking seems to be either (1) a secretive hedge fund buying up newspapers, selling off the pieces and sucking out any remaining cash, (2) replacing competent journalists with terrible AI-written content, or (3) putting all the good reporting behind a paywall so that disinformation peddlers get to spread nonsense to the vast majority of the public for free.

Then, there’s the legislative side. Some legislators have (rightly!) determined that the death of journalism isn’t great for the future of democracy. But, so far, their solutions have been incredibly problematic and dangerous. Pushed by the likes of Rupert Murdoch, whose loud and proud support for “free market capitalism” crumbled to dust the second his own news business started failing, leading him to demand government handouts for his own failures in the market. The private equity folks buying up newspapers (mainly Alden Capital) jumped into the game as well, demanding that the government force Google and Meta to subsidize their strip-mining of the journalism field.

The end result has mostly been disastrous link taxes, which were pioneered in Europe a decade ago. They failed massively before being revived more recently in Australia and Canada, where they have also failed (despite people pretending they have succeeded).

For no good reason, the US Congress and California’s legislature are still considering their own versions of this disastrous policy that has proven (1) terrible for journalism and (2) even worse for the open web.

Recently, California Senator Steve Glazer offered up an alternative approach, SB 1327, which is getting a fair bit of attention. Instead of taxing links like all those other proposals, it would directly tax the digital advertising business model and use that tax to create a fund for journalism. Specifically, it would apply a tax on what it refers to (in a dystopian, Orwellian way) as a "data extraction transaction." It refers to the tax as a "data extraction mitigation fee," and that tax would be used to provide credits for "qualified" media entities.

I’ve seen very mixed opinions on this. It’s not surprising that some folks are embracing this as a potential path to funding journalism. Casey Newton described it as a “better way for platforms to fund journalism.”

And, I mean, when compared to link taxes, it is potentially marginally better (but also, with some very scary potential side effects). The always thoughtful Brandon Silverman (who created CrowdTangle and has worked to increase transparency from tech companies) also endorses the bill as “a potential path forward.”

But I tend to agree much more with journalism professor Jeff Jarvis, who highlights the fundamental problems of the bill and the framework it creates. As I've pointed out with link taxes, the oft-ignored point of a tax on something is to get less of it. You tax something bad because that tax decreases how much of it is out there. And, as Jarvis points out, this is basically a tax on information.

Furthermore, Jarvis rightly points out that Glazer's bill treats the attention users give to internet companies as something unique, while explicitly carving out the attention users give to other types of media companies. This sets up a problematically tiered system for when attention gets taxed and when it doesn't.

Indeed, the entire framing of the bill seems to suggest that data and advertising are a sort of "pollution" that needs to be taxed in order to minimize it. And that seems particularly troublesome.

As Jarvis also notes, the true beneficiaries of a law like this would still be those rapacious hedge funds that have bought up a bunch of news orgs [...]


Original Submission

posted by janrinok on Monday May 27, @05:39PM   Printer-friendly
from the cheap-steel dept.

"Researchers from the University of Cambridge have developed a method to produce very low emission concrete at scale -- an innovation that could be transformative in the transition to net zero." reports ScienceDaily

The method, which the researchers say is "an absolute miracle," [are we taken as savages here?] uses the electric arc furnaces employed in steel recycling to simultaneously recycle cement, the carbon-hungry component of concrete.
...
The Cambridge researchers found that used cement is an effective substitute for lime flux, which is used in steel recycling to remove impurities and normally ends up as a waste product known as slag. But by replacing lime with used cement, the end product is recycled cement that can be used to make new concrete.

The cement recycling method developed by the Cambridge researchers, reported in the journal Nature, does not add any significant costs to concrete or steel production and significantly reduces emissions from both concrete and steel, due to the reduced need for lime flux.
...
Recent tests carried out by the Materials Processing Institute, a partner in the project, showed that recycled cement can be produced at scale in an electric arc furnace (EAF), the first time this has been achieved. Eventually, this method could produce zero emission cement, if the EAF was powered by renewable energy.
...
"I had a vague idea from previous work that if it were possible to crush old concrete, taking out the sand and stones, heating the cement would remove the water, and then it would form clinker again," said first author Dr Cyrille Dunant, also from the Department of Engineering. "A bath of liquid metal would help this chemical reaction along, and an electric arc furnace, used to recycle steel, felt like a strong possibility. We had to try."
...
"We found the combination of cement clinker and iron oxide is an excellent steelmaking slag because it foams and it flows well," said Dunant. "And if you get the balance right and cool the slag quickly enough, you end up with reactivated cement, without adding any cost to the steelmaking process."

The cement made through this recycling process contains higher levels of iron oxide than conventional cement, but the researchers say this has little effect on performance.

DOI: 10.1038/s41586-024-07338-8 (free access)

4:33min vid


Original Submission

posted by janrinok on Monday May 27, @12:52PM   Printer-friendly
from the breaking-oem-monopolies dept.

Several sites are reporting on Qualcomm's increasing Linux support. The tide is turning, and the Microsoft monopoly on OEMs, at least the non-x86 ones, might be weakening now that full Linux support is expected on modern hardware architectures:

Here's the thing. In the Linux world, ARM has had something like a 15-year head start over Microsoft's own often anemic ARM efforts, thanks to the Raspberry Pi and single-board computers making the platform a good choice for more than just basic web-surfers.

Collectives like Pine64 have been building laptops with first-class citizen Linux support for years. (They're admittedly not fast but they offer a good ecosystem to develop on.)

And then, we got fast ARM laptops from Apple, which smoked what was already out there but came with the side effect of a Linux experience that is still somewhat immature, despite the strides already made.

This may be a game-changer.

Qualcomm is making good progress on adapting its new Snapdragon X Elite laptop CPU for Linux use. The mobile SoC manufacturer revealed that it has already laid a lot of the groundwork to get the Snapdragon X Elite running Linux operating systems. However, Qualcomm is far from done, as there's still a lot of development work needed to get the X Elite into a fully operational state under Linux. Upcoming Linux kernels should enable full support for all the chip's features.

Qualcomm prides itself on its Linux enablement work and has prioritized Linux enablement in all of its previous Snapdragon laptop CPUs, typically announcing Linux support one or two days after launch. The Snapdragon X Elite continues that pattern, with Linux enablement being announced the very next day after its original October 23, 2023 debut.

Tom's Hardware, Qualcomm goes where Apple won't, readies official Linux support for Snapdragon X Elite.

It seems that the Asahi Linux project has also done great work on the M-series chips, despite Apple; there, it is Apple that still has to get up to speed.

Previously,
(2024) Desktop GNU/Linux Surpasses 4% Market Share


Original Submission

posted by hubie on Monday May 27, @06:02AM   Printer-friendly
from the slowly-making-progress dept.

[Ed. note: Some of the links in the Ars article point to the Office of the Revisor of Statutes web site. The links did not resolve for the submitter nor this editor, but they are included below in the event that it is a temporary problem with the web site.]

Ars Technica is reporting on a Minnesota law passed this week which, according to the article:

Minnesota this week eliminated two laws that made it harder for cities and towns to build their own broadband networks. The state-imposed restrictions were repealed in an omnibus commerce policy bill [N.B., this link is not valid (Geo-Blocked, perhaps? I'm not in MN). I retained it as it is in TFA. See link to the MN House journal above] signed on Tuesday by Gov. Tim Walz, a Democrat.

Minnesota was previously one of about 20 states that imposed significant restrictions on municipal broadband. The number can differ depending on who's counting because of disagreements over what counts as a significant restriction. But the list has gotten smaller in recent years because states including Arkansas, Colorado, and Washington repealed laws that hindered municipal broadband.

The Minnesota bill enacted this week struck down a requirement that municipal telecommunications networks be approved in an election with 65 percent of the vote. The law is over a century old, the Institute for Local Self-Reliance's Community Broadband Network Initiative wrote yesterday.

"Though intended to regulate telephone service, the way the law had been interpreted after the invention of the Internet was to lump broadband in with telephone service thereby imposing that super-majority threshold to the building of broadband networks," the broadband advocacy group said.

The Minnesota omnibus bill also changed a law that let municipalities build broadband networks, but only if no private providers offer service or will offer service "in the reasonably foreseeable future." That restriction had been in effect since at least the year 2000.

The caveat that prevented municipalities from competing against private providers was eliminated from the law when this week's omnibus bill was passed. As a result, the law now lets cities and towns "improve, construct, extend, and maintain facilities for Internet access and other communications purposes" even if private ISPs already offer service.

I sure wish I could get municipal broadband. How about you, Soylentils? Do you have municipal broadband? Does your ISP have competition at all? Abusive terms of service? Data caps?


Original Submission

posted by janrinok on Monday May 27, @01:16AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

South Korea's president has described the global semiconductor industry as "a field where all-out national warfare is underway" as he announced a $19 billion program to diversify the nation's silicon sector.

In remarks presented on Thursday at a government economic review meeting, President Yoon Suk Yeol called for South Korea to "open a new future for the semiconductor industry."

"Our semiconductors have dominated the world in the memory field over the past 30 years," he declared, before lamenting that "our fabless market share still remains in the one percent range, and foundries that manufacture system semiconductors are unable to narrow the gap with leading companies such as TSMC."

"In the future, the success or failure of the semiconductor industry will be determined by system semiconductors, which account for two thirds of the entire market," he predicted, calling for his nation "to bet on system semiconductors, which are constantly expanding beyond CPUs and GPUs to AI semiconductors."

To make that happen, South Korea has created a $19 billion program to fund construction of chipmaking mega-clusters – especially the electrical and transport infrastructure they need. Provision of water resources for chipmaking has also been fast-tracked.

The plan will also see a "mini-fab" created, so that small and medium-sized fabless chip firms have a resource they can use to get their products off the drawing board. They will also be helped by a fund that President Yoon said will help to turn them into global enterprises.

The president noted that government funds flowing to chipmakers could be perceived as "a tax cut for large corporations or a tax cut for the rich." He rebutted that notion by arguing "the semiconductor industry is the most important and sure foundation for making our people's lives richer and making our economy take off."

"Semiconductors are the livelihood of the people, and all support for the semiconductor industry is for the benefit of the people," he argued, adding that government investment will pay for itself handsomely over time.

[...] President Yoon raised many solid points. The United States and European Union have thrown tens of billions of dollars and euros respectively at IC manufacturers, while South Korea's chip champs – Samsung Electronics and SK hynix – are indeed monsters of memory as they collectively hold over 70 percent of the market for DRAM and NAND flash. But beyond Samsung's modest Exynos SoC operation (which can't even satisfy demand for its own Galaxy smartphones), South Korea is not home to a notable manufacturer of high-value processors.

Samsung and SK hynix have also made enormous bets on factories to produce more memory – some on the peninsula and others stateside (where they could attract funds from Uncle Sam).

South Korea's new funding package is tiny compared to the sums its chip champions are spending, so it's unlikely to divert their efforts notably. Nor will it deter them from continuing to target the so-hot-right-now memory market, in which demand for DDR5 and related variants – plus high bandwidth memory (HBM) – is currently rampant. Indeed, SK hynix has already found buyers for all the HBM it can make in 2024 and most of the chips it will produce in 2025.

Memory-centric analyst firm TrendForce recently worried out loud about AI-fuelled demand for HBM skewing manufacturing investments away from DRAM, and maybe causing a shortage of the latter in years to come.


Original Submission
