posted by janrinok on Saturday April 22 2023, @08:34PM   Printer-friendly

Researchers have created a sturdy, lightweight material made from sugar and wood-derived powders that disintegrates on-demand:

Sturdy, degradable materials made from plants and other non-petroleum sources have come a long way in recent years. For example, cornstarch-based packing peanuts disappear simply by dousing them in water, and some utensils are based on polymers synthesized from plant sugars. But those packing peanuts can't be used to protect anything wet, and plant-derived polymers still take a long time to break down. One potential alternative is a new type of rigid material designed from isomalt, which is a sugar alcohol rather than a polymer. With isomalt, bakers can create breathtaking, but brittle, structures for desserts, and then dissolve them away quickly in water. So, Scott Phillips and colleagues wanted to boost the sturdiness of isomalt with natural additives to create a robust material that degrades on-demand.

The researchers heated isomalt to a liquid-like state and mixed in either cellulose, cellulose and sawdust, or wood flour to produce three different materials. Then, using commercial plastics manufacturing equipment, they extruded the materials into small pellets and molded them into various objects, including balls, a dodecahedron, a chess piece and flower-shaped saucers. All of the tested additives doubled the strength of isomalt, creating materials that were harder than plastics, including poly(ethylene terephthalate) (known as PET) and poly(vinyl chloride) (known as PVC), but were still lightweight. In experiments, samples dissolved in water within minutes, while saucers made of the material and coated with a food-grade shellac and cellulose acetate withstood being immersed in water for up to seven days. However, once the saucers were broken and the coating cracked, they rapidly disintegrated in water. The team also repeatedly crushed, dissolved and recycled both coated and uncoated objects into new ones that were still as strong as the original items.

The researchers say that the material could be used for food-service items and temporary décor, and then crushed and sprayed with water to fall apart. But even if such items were simply tossed into the trash or somehow got into the environment, the slightest crack in the coating would start their collapse into sugars and the plant-based additives, which the researchers say might be good for soil.

There is also a video version of this press release.


Original Submission

posted by hubie on Saturday April 22 2023, @03:49PM   Printer-friendly
from the also-an-oyster-and-cigar-bar-next-door dept.

The winery's layout and finishes suggest the wine-making was as much a spectacle for ancient Roman elites as a practical operation:

Archaeologists have discovered the remains of an 1,800-year-old winery at the Villa of the Quintilii outside of Rome. By the team's measure, the winery was designed as much for the spectacle of wine-making as the practice itself.

Decorated rooms around the winery appear to have hosted guests who would observe the wine-making process, the researchers found, and the finishes (including marble floors) seem installed for appearances over practicality. The team's research is published in Antiquity.

"Agricultural labor was romanticized by the ruling classes of many ancient cultures, especially as it was often the source of both their wealth and status," said Emlyn Dodd, a researcher at the Institute of Classical Studies at the University of London, in an Antiquity release.

Journal Reference:
Emlyn Dodd, Giuliana Galli and Riccardo Frontoni, The spectacle of production: a Roman imperial winery at the Villa of the Quintilii, Rome [open], Antiquity, 97, 2023. DOI: https://doi.org/10.15184/aqy.2023.18


Original Submission

posted by hubie on Saturday April 22 2023, @11:04AM   Printer-friendly

Apple's WaveOne purchase heralds a new era in smart-streaming of AR and video:

Apple's surprise purchase at the end of last month of WaveOne, a California-based startup that develops content-aware AI algorithms for video compression, showcases an important shift in how video signals are streamed to our devices. In the near term, Cupertino's purchase will likely lead to smart video-compression tools in Apple's video-creation products and in the development of its much-discussed augmented-reality headset.

However, Apple isn't alone. Startups in the AI video codec space are likely to prove acquisition targets for other companies trying to keep up.

[...] AI codecs, having been developed over the course of decades, use machine-learning algorithms to analyze and understand the visual content of a video, identify redundancies and nonfunctional data, and compress the video in a more efficient way. They use learning-based techniques instead of manually designed tools for encoding and can use different ways to measure encoding quality beyond traditional distortion measures. Recent advancements, like attention mechanisms, help them understand the data better and optimize visual quality.

During the first half of the 2010s, Netflix and a California-based company called Harmonic helped to spearhead a movement of what's called "content-aware" encoding. CAE, as Harmonic calls it, uses AI to analyze and identify the most important parts of a video scene, and to allocate more bits to those parts for better visual quality, while reducing the bit rate for less important parts of the scene.

Content-aware video compression adjusts an encoder for different resolutions of encoding, adjusts the bit rate according to content, and adjusts the quality score—the perceived quality of a compressed video compared to the original uncompressed video. All those things can be done by neural encoders as well.
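As a rough illustration of the bit-allocation idea (a minimal sketch, not Harmonic's or WaveOne's actual algorithm; the scene names and importance scores below are hypothetical), a content-aware encoder can be thought of as splitting a fixed bit budget across scenes in proportion to how much each scene's perceived quality benefits from extra bits:

    # Hypothetical sketch of content-aware bit allocation, not a real codec.
    # In practice a trained model produces the importance scores; here they are given.
    def allocate_bitrate(scenes, total_kbps):
        """Split a fixed bit budget across scenes in proportion to their importance."""
        total_score = sum(s["importance"] for s in scenes)
        return {s["name"]: total_kbps * s["importance"] / total_score for s in scenes}

    scenes = [
        {"name": "talking_head", "importance": 1.0},   # low motion, low detail
        {"name": "sports_action", "importance": 3.0},  # high motion, fine detail
        {"name": "end_credits", "importance": 0.5},    # mostly static text
    ]
    print(allocate_bitrate(scenes, total_kbps=4500))
    # {'talking_head': 1000.0, 'sports_action': 3000.0, 'end_credits': 500.0}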

[...] WaveOne has shown success in neural-network compression of still images. In one comparison, WaveOne's reconstructions of images were 5 to 10 times as likely as those from conventional codecs to be chosen by a group of independent users.

But the temporal correlation in video is much stronger than the spatial correlation in an image, and you must encode the temporal domain extremely efficiently to beat the state of the art.

"At the moment, the neural video encoders are not there yet," said Yiannis Andreopoulos, a professor of data and signal processing at University College London and chief technology officer at iSize Technologies.

[...] Nonetheless, the industry appears to be moving toward combining AI with conventional codecs—rather than relying on full neural-network compression.

[...] For the time being, "AI and conventional technologies will work in tandem," said Andreopoulos, in part, he said, because conventional encoders are interpretable and can be debugged. Neural networks are famously obscure "black boxes." Whether in the very long term neural encoding will beat traditional encoding, Andreopoulos added, is still an open question.


Original Submission

posted by hubie on Saturday April 22 2023, @06:16AM   Printer-friendly
from the east-bound-and-down dept.

Kodiak Robotics will haul freight autonomously for Tyson Foods:

Autonomous trucking startup Kodiak Robotics is partnering with truckload carrier C.R. England to autonomously ship Tyson Foods products between Dallas and San Antonio, Texas.

A human safety operator will be present in the one dedicated truck Kodiak is allocating to this pilot. Deliveries will begin this month, according to the company.

The pilot program is the latest in Kodiak's growing string of paid partnerships with major carriers, and it further demonstrates the startup's potential path to sustainability and even profitability once it removes the human safety driver from operations.

A spokesperson for Kodiak said the company aims to remove the safety operator within the next couple of years.

[...] Kodiak says the partnership is not only emblematic of how human-driven trucks and autonomous trucks can work together, but it also provides a use case for autonomy as a solution for moving perishable products in a timely manner.

[...] As part of the partnership, C.R. England is also joining Kodiak's Partner Development Program, which is Kodiak's way of working with carriers to help establish autonomous freight operations and, hopefully, integrate Kodiak's self-driving system into their fleet.

"Our intent is to be a 'one-stop shop' for customers, whether they need their freight moved autonomously or not," said England.


Original Submission

posted by hubie on Saturday April 22 2023, @01:37AM   Printer-friendly

Create an AI agent that works from a set of goals:

To get good output from ChatGPT or another LLM, you usually have to feed it several prompts. But what if you could just give your AI bot a set of fairly broad goals at the start of a session and then sit back while it generates its own set of tasks to fulfill those goals? That's the idea behind Auto-GPT, a new open-source tool that uses the OpenAI API (same LLM as ChatGPT) to prompt itself, based on your initial input.
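The core loop is simple to picture. Below is a minimal sketch of the idea rather than Auto-GPT's actual code: it assumes the pre-1.0 openai Python client, an OPENAI_API_KEY in the environment, and a made-up prompt format, and it simply asks the model for the next task toward the goals while feeding recent results back in:

    import openai  # assumes the pre-1.0 openai client; reads OPENAI_API_KEY from the environment

    GOALS = ["Research mechanical keyboards", "Summarize the findings in a short report"]

    history = []
    for step in range(5):  # cap the number of iterations so the loop cannot run away
        prompt = (
            "You are an autonomous agent working toward these goals:\n"
            + "\n".join(f"- {g}" for g in GOALS)
            + "\nResults so far:\n"
            + "\n".join(history[-3:])
            + "\nPropose the single next task and carry it out, briefly."
        )
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        result = reply["choices"][0]["message"]["content"]
        history.append(result)
        print(f"Step {step + 1}:\n{result}\n")

The real tool layers memory and tool use on top of a loop like this and can spend real API credits as it runs, which is why the guide below stresses putting a usage limit on your OpenAI account.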

We've already seen a number of Twitter users talk about how they are using Auto-GPT for everything from creating marketing plans to analyzing market data for investments to preparing topics for a podcast. Based on our hands-on experience, we can't say that it always works well (we asked it to write a Windows 11 how-to and the result was awful), but it's early days and some tasks may work better than others.

If you want to try Auto-GPT on your computer, it is easy to install. There are a few sticky points in the process, but we've found ways to work with or around them in this condensed guide on how to create your own Auto-GPT AI to help you with your goals.

[...] You may or may not need to add payment information to your OpenAI account. By default, the system will give you a certain amount of free credits. In Editor-in-Chief Avram Piltch's case, it was $18 worth of free credit that he was able to use without entering any payment methods. You may not get as much free credit or may need to add a payment method to your OpenAI account to proceed.

The article has a step-by-step guide for getting up and running on a Windows machine. If you choose to add a payment method, make sure to put a limit on how much money it can charge.

Let the chaos begin!


Original Submission

posted by hubie on Friday April 21 2023, @10:49PM   Printer-friendly
from the argument-full-of-holes dept.

A punctured bone fragment predates eyed needles in Western Europe by about 15,000 years:

An animal bone fragment full of human-made pits hints at how prehistoric people in Western Europe may have crafted clothing.

The nearly 40,000-year-old artifact probably served as a punch board for leatherwork, researchers report April 12 in Science Advances. They suggest that the bone fragment rested beneath animal hide while an artisan pricked holes in the material, possibly for seams. If so, it's the earliest-known tool of its kind and predates eyed needles in the region by about 15,000 years.

Found at an archaeological site south of Barcelona, the roughly 11-centimeter-long bone fragment contains 28 punctures scattered across one flat side, with 10 of them aligned and fairly evenly spaced.

The marks don't seem to have been a notation system or decoration, given that some holes are hard to see and the bone fragment wasn't otherwise shaped, says archaeologist Luc Doyon of the University of Bordeaux in France. He thought leatherwork could have made the marks. But it wasn't until he visited a cobbler shop and saw one of the artisan's tools that the hypothesis solidified, guiding Doyon's next steps.

[...] Scientists knew that humans wore clothing long before the oldest-known eyed needles existed (SN: 4/20/10). "What [the new finding] tells us is that the first modern humans who lived in Europe had the technology in their toolkit for making fitted clothes," Doyon says.

Journal Reference:
Luc Doyon, Thomas Faure, Montserrat Sanz, et al., A 39,600-year-old leather punch board from Canyars, Gavà, Spain [open], Sci. Adv., 12, 2023. DOI: https://doi.org/10.1126/sciadv.adg0834


Original Submission

posted by janrinok on Friday April 21 2023, @08:07PM   Printer-friendly
from the got-kefir? dept.

Ancient protein evidence shows milk consumption was a powerful cultural adaptation that stimulated human expansion onto the highland Tibetan Plateau:

The Tibetan Plateau, known as the "third pole", or "roof of the world", is one of the most inhospitable environments on Earth. While positive natural selection at several genomic loci enabled early Tibetans to better adapt to high elevations, obtaining sufficient food from the resource-poor highlands would have remained a challenge.

Now, a new study in the journal Science Advances reveals that dairy was a key component of early human diets on the Tibetan Plateau. The study reports ancient proteins from the dental calculus of 40 human individuals from 15 sites across the interior plateau.

[...] Ancient protein evidence indicates that dairy products were consumed by diverse populations, including females and males, adults and children, as well as individuals from both elite and non-elite burial contexts. Additionally, prehistoric Tibetan highlanders made use of the dairy products of goats, sheep, and possibly cattle and yak. Early pastoralists in western Tibet seem to have had a preference for goat milk.

"The adoption of dairy pastoralism helped to revolutionize people's ability to occupy much of the plateau, particularly the vast areas too extreme for crop cultivation," says Prof. Nicole Boivin, senior author of the study.

[...] "We were excited to observe an incredibly clear pattern," says Li Tang. "All our milk peptides came from ancient individuals in the western and northern steppes, where growing crops is extremely difficult. However, we did not detect any milk proteins from the southern-central and south-eastern valleys, where more farmable land is available."

Surprisingly, all the individuals with evidence for milk consumption were recovered from sites higher than 3700 meters above sea level (masl); almost half were above 4000 masl, with the highest at the extreme altitude of 4654 masl.

"It is clear that dairying was crucial in supporting early pastoralist occupation of the highlands," notes Prof. Shargan Wangdue. "Ruminant animals could convert the energy locked in alpine pastures into nutritional milk and meat, and this fueled the expansion of human populations into some of the world's most extreme environments," Li Tang concludes.

Journal Reference:

Li Tang, Shevan Wilkin, Kristine Korzow Richter, et al., Palaeoproteomic evidence reveals dairying supported prehistoric occupation of the highland Tibetan Plateau [open], Sci. Adv., 2023. DOI: https://doi.org/10.1126/sciadv.adf0345


Original Submission

posted by janrinok on Friday April 21 2023, @05:23PM   Printer-friendly
from the more-good-news-for-your-children-and-yourself dept.

It looks like the Paris Agreement is as dead as the fried chicken at my local deli.

In Paris in 2015, the world agreed to limit the global temperature rise to 1.5 degrees Celsius. The latest report of the EU's Climate Change Service shows (summary pdf) that this target has been royally breached, at least for Europe: temperatures there, averaged over the last five years, have increased by 2.2 degrees Celsius.

Europe, at least, has a climate change service to measure these things. As for the rest of the world, an extrapolation of the pattern shown in Figure 1c here indicates that demand for swimming pools and flood insurance will grow there too.

To illustrate the complexity of the problem, the heatwave in mid-July of 2022 was caused by hot air from the Sahara moving into Europe, driving temperatures above 40 degrees Celsius. By mid-August, a stationary high-pressure system with clear skies and weak winds took hold, and caused a second heatwave, which was made worse due to the soil being dried out by the mid-July event, and no rains since.

Events above the Sahara might have come into play a second time here. Increasing temperatures lead to stronger evaporation over the sea, while the land heats up more. This results in a stronger temperature gradient, which draws rains deeper inland: heavier summer rainfalls are now reported in the central Sahara, with formerly dry valleys being put under four meters of water. This leaves less Saharan dust in the atmosphere, which shields the land less from solar radiation: the EU's report mentions that 2022 surface solar radiation was the highest in a 40-year record, and part of a positive trend.

To end on a positive note, the EU ain't doing so bad compared to Greenland: three different heatwaves in 2022, and an average September temperature more than 8 degrees Celsius above normal.


Original Submission

posted by janrinok on Friday April 21 2023, @02:39PM   Printer-friendly

The Fermi bubbles may have started life as jets of high-energy charged particles:

Bubbles of radiation billowing from the galactic center may have started as a stream of electrons and their antimatter counterparts, positrons, new observations suggest. An excess of positrons zipping past Earth suggests that the bubbles are the result of a burp from our galaxy's supermassive black hole after a meal millions of years ago.

For over a decade, scientists have known about bubbles of gas, or Fermi bubbles, extending above and below the Milky Way's center (SN: 11/9/10). Other observations have since spotted the bubbles in microwave radiation and X-rays (SN: 12/9/20). But astronomers still aren't quite sure how they formed.

A jet of high-energy electrons and positrons, emitted by the supermassive black hole in one big burst, could explain the bubbles' multi-wavelength light, physicist Ilias Cholis reported April 18 at the American Physical Society meeting.

In the initial burst, most of the particles would have been launched along jets aimed perpendicular to the galaxy's disk. As the particles interacted with other galactic matter, they would lose energy and cause the emission of different wavelengths of light.

Those jets would have been aimed away from Earth, so those particles can never be detected. But some of the particles could have escaped along the galactic disk, perpendicular to the bubbles, and ended up passing Earth. "It could be that just now, some of those positrons are hitting us," says Cholis, of Oakland University in Rochester, Mich.

So Cholis and Iason Krommydas of Rice University in Houston analyzed positrons detected by the Alpha Magnetic Spectrometer on the International Space Station. The pair found an excess of positrons whose present-day energies could correspond to a burst of activity from the galactic center between 3 million and 10 million years ago, right around when the Fermi bubbles are thought to have formed, Cholis said at the meeting.

The result, Cholis said, supports the idea that the Fermi bubbles came from a time when the galaxy's central black hole was busier than it is today.

Journal Reference:
Have we found the counterpart signal of the Fermi bubbles at the cosmic-ray positrons?, Bulletin of the American Physical Society, APR23 Session U13.1: https://meetings.aps.org/Meeting/APR23/Session/U13.1


Original Submission

posted by janrinok on Friday April 21 2023, @11:51AM   Printer-friendly

U.S. government imposes record fine on Seagate for violating sanctions against Huawei:

Seagate has been hit with a massive $300 million fine by the U.S. Department of Commerce [PDF] for violating export control restrictions imposed on Huawei in 2020. According to the department, Seagate shipped millions of hard drives to Huawei in 2020 – 2021, becoming the sole supplier of HDDs to the company while its rivals Toshiba and Western Digital declined to work with the conglomerate.

Seagate shipped 7.4 million hard drives to Huawei on 429 occasions between August 2020 and September 2021 without obtaining an export license from the U.S. Department of Commerce's Bureau of Industry and Security. Those drives were worth around $1.104 billion back then, a significant sum for Seagate, whose revenue totaled $10.681 billion in 2021.

To settle the matter, Seagate has agreed to pay the $300 million fine in quarterly installments of $15 million over five years starting in October 2023. The civil penalty of $300 million is more than double the estimated net profits that Seagate made from the alleged illegal exports to or involving Huawei, according to BIS. In fact, $300 million is a record fine for BIS.

"Today's action is the consequence: the largest standalone administrative resolution in our agency's history," said Matthew S. Axelrod, Assistant Secretary for Export Enforcement. "This settlement is a clarion call about the need for companies to comply rigorously with BIS export rules, as our enforcement team works to ensure both our national security and a level playing field."

As of mid-August 2020, the U.S. Department of Commerce's Bureau of Industry and Security mandated that any company planning to sell semiconductor hardware, software, equipment, or any other asset using American intellectual property to Huawei and its entities must obtain a special export license. The export controls on Huawei mostly pertain to semiconductors. However, Seagate's hard drives also fall under the export-controlled items category because they use controllers and memory designed with electronic design automation tools developed by American companies and produced using U.S.-made equipment.

These export licenses were subject to a presumption of denial policy, meaning they were difficult to obtain. However, multiple companies were granted appropriate licenses between 2020 and 2022, which allowed Huawei to acquire various products that were developed or manufactured in the United States.

Seagate did not apply for an appropriate license and said in September 2020 that its drives could be shipped to Huawei without a license, an opinion that was not shared by its rival Western Digital. Since Huawei was not supposed to get HDDs at all, Republican senator Roger Wicker wondered in mid-2021 how exactly the sanctioned company obtained such storage devices and whether the three global makers of hard drives had complied with the export rules.

As it turned out, although Toshiba and Western Digital ceased to sell HDDs to Huawei, Seagate continued to do so. In fact, the company became Huawei's exclusive hard drive supplier and even signed a three-year Strategic Cooperation Agreement, and then a Long-Term Agreement to purchase over five million HDDs, with the Chinese conglomerate in 2021.


Original Submission

posted by janrinok on Friday April 21 2023, @09:06AM   Printer-friendly

The Hyena code is able to handle amounts of data that make GPT-style technology run out of memory and fail:

In a paper published in March, artificial intelligence (AI) scientists at Stanford University and Canada's MILA institute for AI proposed a technology that could be far more efficient than GPT-4 -- or anything like it -- at gobbling vast amounts of data and transforming it into an answer.

Known as Hyena, the technology is able to achieve equivalent accuracy on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code is able to handle amounts of text that make GPT-style technology simply run out of memory and fail.

"Our promising results at the sub-billion parameter scale suggest that attention may not be all we need," write the authors. That remark refers to the title of the landmark 2017 AI paper 'Attention Is All You Need'. In that paper, Google scientist Ashish Vaswani and colleagues introduced the world to Google's Transformer AI program. The Transformer became the basis for every one of the recent large language models.

But the Transformer has a big flaw. It uses something called "attention," where the computer program takes the information in one group of symbols, such as words, and moves that information to a new group of symbols, such as the answer you see from ChatGPT, which is the output.

That attention operation -- the essential tool of all large language programs, including ChatGPT and GPT-4 -- has "quadratic" computational complexity (see the Wikipedia entry on "time complexity"). That complexity means the amount of time it takes for ChatGPT to produce an answer increases as the square of the amount of data it is fed as input.

At some point, if there is too much data -- too many words in the prompt, or too many strings of conversations over hours and hours of chatting with the program -- then either the program gets bogged down providing an answer, or it must be given more and more GPU chips to run faster and faster, leading to a surge in computing requirements.
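To make that scaling concrete, here is a minimal sketch of single-head scaled dot-product attention in NumPy (an illustration of the standard operation, not OpenAI's implementation): the score matrix holds one entry for every pair of the n input positions, so memory and work grow with n squared.

    import numpy as np

    def naive_attention(Q, K, V):
        """Single-head scaled dot-product attention for a sequence of n tokens.
        Q, K, V are (n, d) arrays; the scores array is (n, n) -- the quadratic term."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # (n, n): n * n entries
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
        return weights @ V                               # (n, d) output

    n, d = 1024, 64
    x = np.random.randn(n, d)
    out = naive_attention(x, x, x)   # doubling n quadruples the size of the score matrix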

In the new paper, 'Hyena Hierarchy: Towards Larger Convolutional Language Models', posted on the arXiv pre-print server, lead author Michael Poli of Stanford and his colleagues propose to replace the Transformer's attention function with something sub-quadratic, namely Hyena.

[...] The paper's contributing authors include luminaries of the AI world, such as Yoshua Bengio, MILA's scientific director, who received the Turing Award, computing's equivalent of the Nobel Prize, in 2019. Bengio is widely credited with developing the attention mechanism long before Vaswani and team adapted it for the Transformer.

Also among the authors is Stanford University computer science associate professor Christopher Ré, who has helped in recent years to advance the notion of AI as "software 2.0".

To find a sub-quadratic alternative to attention, Poli and team set about studying how the attention mechanism is doing what it does, to see if that work could be done more efficiently.

A recent practice in AI science, known as mechanistic interpretability, is yielding insights about what is going on deep inside a neural network, inside the computational "circuits" of attention. You can think of it as taking apart software the way you would take apart a clock or a PC to see its parts and figure out how it operates.

One work cited by Poli and team is a set of experiments by researcher Nelson Elhage of AI startup Anthropic. Those experiments take apart the Transformer programs to see what attention is doing.

In essence, what Elhage and team found is that attention functions at its most basic level by very simple computer operations, such as copying a word from recent input and pasting it into the output.

For example, if one starts to type into a large language model program such as ChatGPT a sentence from Harry Potter and the Sorcerer's Stone, such as "Mr. Dursley was the director of a firm called Grunnings...", just typing "D-u-r-s", the start of the name, might be enough to prompt the program to complete the name "Dursley" because it has seen the name in a prior sentence of Sorcerer's Stone. The system is able to copy from memory the record of the characters "l-e-y" to autocomplete the sentence.
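That copy-and-paste behaviour can be caricatured in a few lines of Python (a toy lookup for illustration, not the learned circuit Elhage and team actually dissected): when the most recent token has appeared earlier in the context, predict whatever followed it last time.

    def copy_continuation(tokens):
        """Toy version of attention's copy behaviour: if the last token occurred
        earlier in the sequence, return the token that followed that occurrence."""
        last = tokens[-1]
        for i in range(len(tokens) - 2, -1, -1):   # scan backwards through the context
            if tokens[i] == last:                  # earlier occurrence of the current token
                return tokens[i + 1]               # predict the token that followed it
        return None

    context = "Mr Dursley was the director of a firm called Grunnings . Mr".split()
    print(copy_continuation(context))   # -> 'Dursley'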

However, the attention operation runs into the quadratic complexity problem as the number of words grows and grows. More words require more of what are known as "weights," or parameters, to run the attention operation.

As the authors write: "The Transformer block is a powerful tool for sequence modeling, but it is not without its limitations. One of the most notable is the computational cost, which grows rapidly as the length of the input sequence increases."

While the technical details of ChatGPT and GPT-4 haven't been disclosed by OpenAI, it is believed they may have a trillion or more such parameters. Running these parameters requires more GPU chips from Nvidia, thus driving up the compute cost.

To reduce that quadratic compute cost, Poli and team replace the attention operation with what's called a "convolution", which is one of the oldest operations in AI programs, refined back in the 1980s. A convolution is just a filter that can pick out items in data, be it the pixels in a digital photo or the words in a sentence.

Poli and team do a kind of mash-up: they take work done by Stanford researcher Daniel Y. Fu and team to apply convolutional filters to sequences of words, and they combine that with work by scholar David Romero and colleagues at the Vrije Universiteit Amsterdam that lets the program change filter size on the fly. That ability to flexibly adapt cuts down on the number of costly parameters, or weights, the program needs to have.

The result of the mash-up is that a convolution can be applied to an unlimited amount of text without requiring more and more parameters in order to copy more and more data. It's an "attention-free" approach, as the authors put it.
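For contrast with the attention sketch above, here is a minimal causal 1D convolution over a token sequence (an illustration of a plain convolution, not the Hyena operator itself, which combines long implicitly parametrized filters with gating): the filter has a fixed number of weights, and the work grows only linearly with the sequence length.

    import numpy as np

    def causal_conv1d(x, kernel):
        """Slide a fixed kernel over a (n, d) sequence, mixing each position with
        the k-1 positions before it. Cost is O(n * k * d), linear in n."""
        n, d = x.shape
        k = len(kernel)
        padded = np.vstack([np.zeros((k - 1, d)), x])   # causal left-padding
        out = np.zeros_like(x)
        for t in range(n):
            window = padded[t:t + k]                    # the k most recent positions
            out[t] = (kernel[:, None] * window).sum(axis=0)
        return out

    n, d = 1024, 64
    x = np.random.randn(n, d)
    kernel = np.array([0.1, 0.3, 0.6])                  # 3 weights, independent of n
    y = causal_conv1d(x, kernel)                        # doubling n only doubles the work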


Original Submission

posted by hubie on Friday April 21 2023, @06:22AM   Printer-friendly
from the streamlining-processes dept.

Proposed emissions from a Mississippi Chevron plant could raise locals' cancer risk to 250,000 times the acceptable level, and a community group is fighting back:

We need climate action. But just because something gets grouped under the umbrella of things that theoretically combat climate change doesn't mean it's actually good for the planet or people. In an alarming example, production of certain alternative "climate-friendly" fuels could lead to dangerous, cancer-causing emissions.

A Chevron scheme to make new plastic-based fuels, approved by the Environmental Protection Agency, could carry a 1-in-4 lifetime cancer risk for residents near the company's refinery in Pascagoula, Mississippi. A February joint report from ProPublica and the Guardian brought the problem to light. Now, a community group is fighting back against the plan, suing the EPA for approving it in the first place, as first reported by ProPublica and the Guardian in a follow-up report on Tuesday.

Cherokee Concerned Citizens, an organization that represents a roughly 130-home subdivision less than two miles from Chevron's Pascagoula refinery, filed its suit with the Washington, D.C. Circuit Court of Appeals on April 7. The petition demands that the court review and revisit the EPA's rubber-stamping of the Chevron proposal.

[...] Last year, the EPA greenlit Chevron's plan to emit some unnamed, truly gnarly, cancer-causing chemicals at a refinery in Pascagoula. The approval fell under an effort described as fast-tracking the review of "climate-friendly new chemicals." Chevron proposed turning plastics into novel fuels, and the EPA hopped on board, in accordance with a Biden Administration policy to prioritize developing replacements for standard fossil fuels.

By opting to "streamline the review" of certain alternative fuels, the agency wrote it could help "displace current, higher greenhouse gas emitting transportation fuels," in a January 2022 press release. But also, through that "streamlining," the EPA appears to have pushed aside some major concerns.

[...] That 1-in-4 risk is about 250,000 times higher than the 1-in-1 million acceptable cancer risk threshold that the EPA generally applies when considering harm to the public. Another chemical listed in the approval document as P-21-0150 carries a lifetime cancer risk estimate of 1-in-8,333 for those exposed to fugitive air emissions — also far above the EPA's acceptable risk threshold. [...]

[...] For some reason though, despite its own internal risk cut-offs and federal regulation surrounding new chemical approvals, the EPA allowed Chevron to move forward without any further testing or a clear mitigation plan in place.

It's hard to say, specifically, what these EPA-approved compounds are because in the single relevant agency document obtained by ProPublica and the Guardian, chemical names are blacked out. However, the substances in question are all plastic-based fuels, as outlined in another, related document. Though obtuse, their approval seems to stem from a recently renewed national program to promote biofuel development, through a loophole that allows for fuels derived from waste.

[...] Nonetheless, the Biden Administration's push for more "biofuels" and re-upped Renewable Fuel Standard makes wide allowances for any fuel source that comes from trash—apparently regardless of the possible fallout.


Original Submission

posted by hubie on Friday April 21 2023, @03:34AM   Printer-friendly

Chinese tech giant claims better performance than competing GPUs:

Chinese social media, cloud, and entertainment giant Tencent on Monday revealed that it has started mass production of a homebrew video transcoding accelerator.

The announcement comes nearly two years after the company unveiled a trio of custom chips designed to accelerate everything from streaming video to networking and artificial intelligence workloads.

In a post published on WeChat, Tencent Cloud revealed that "tens of thousands" of its Canghai chips, which are designed to offload video encode/decode for latency sensitive workloads, have been deployed internally to accelerate cloud gaming and live broadcasting.

Tencent says the Canghai chip can be paired with GPUs from a variety of vendors to support low-latency game streaming. When used for video transcoding, Tencent said a single node equipped with Canghai can deliver up to 1,024 video channels. We'll note that Nvidia made similar claims with the launch of its L4 GPUs last month. Without real-world benchmarks, it's hard to say how either firm's claims stack up.

[...] When it comes to spinning custom chips to improve the efficiency and economics of cloud computing, Amazon Web Services gets a lot of credit. The American e-tail giant and cloud titan has developed everything from custom CPUs and AI training and inference accelerators to smartNICs that offload many housekeeping workloads.

And while Google has developed an accelerator of its own, called the Tensor Processing Unit (TPU), most US cloud providers have largely stuck with commercially available parts from the likes of Intel, AMD, Ampere, Broadcom, or Nvidia, rather than designing their own.

However, in China, custom chips appear to be more prevalent, with development an imperative accelerated by US sanctions that mean some tech products can't be exported to the Middle Kingdom.


Original Submission

posted by hubie on Friday April 21 2023, @12:48AM   Printer-friendly
from the need-more-filament dept.

The company says it learned much from Terran-1's debut flight and is choosing to go bigger for its successor:

After its rocket failed to reach orbit last month, California-based Relativity Space doesn't want to dwell on the past. Instead, the company is leaping forward with its next launch vehicle, which promises to be bigger and better.

On Wednesday, Relativity Space announced its lessons learned from the launch of Terran-1, a 3D-printed, methane-fueled rocket that was set to break records on its first flight. The rocket took off from Cape Canaveral Space Force Station on March 22, but an engine failure prevented it from reaching orbit.

Shortly after stage separation, the rocket's engine did not reach full thrust, according to Relativity Space. The company shared key findings from the anomaly, detailing that the engine's main valves opened more slowly than expected, preventing the propellant from reaching the thrust chamber in time.

Terran-1 is 85% 3D-printed by mass and it's also powered by a liquid methane-oxygen propellant known as methalox. [...]

[...] Unlike its predecessor, Terran-R is designed to be a much larger 3D-printed, medium-to-heavy-lift orbital launch vehicle capable of carrying 33.5 metric tons to orbit. The rocket's first stage will be outfitted with 13 3D-printed Aeon engines, while its second stage will have a single methane-fueled engine.

Terran-R's design is focused on the reusability of its first stage rather than its second stage, with printed aluminum construction that would allow up to 20 re-flights. The plan is to land the rockets on drone ships stationed in the Atlantic Ocean, similar to how SpaceX lands its Falcon 9 first stage.

[...] "Terran 1 was like a concept car, redefining the boundaries of what is possible by developing many valuable brand-new technologies well ahead of their time," Ellis said.

It's a bold move for Relativity Space to move on to the next project despite Terran-1 not fulfilling its inaugural mission. But in this, the new commercial space race, it's important for companies to move quickly or risk being left behind.

Previously:
    With Eyes on Reuse, Relativity Plans Rapid Transition to Terran R Engines
    Relativity Space Announces Fully Reusable "Terran R" Rocket, Planned for 2024 Debut
    Relativity Space Selected to Launch Satellites for Telesat
    Aerospace Startup Making 3D-Printed Rockets Now Has a Launch Site at America's Busiest Spaceport


Original Submission

posted by hubie on Thursday April 20 2023, @10:03PM   Printer-friendly
from the what-did-the-article-say? dept.

Potentially good news for old machinists and over-the-hill heavy metal fans:

Five years ago, a team of researchers at the University of Rochester Medical Center (URMC) was able to regrow cochlear hair cells in mice for the first time. These hair cells are found in the cochlear region of the ear in all mammals. They sense sound vibrations, convert those into brain signals, and eventually allow a person to hear and understand the different sounds around them. The new study from URMC researchers sheds light on the underlying mechanism that allowed the ear hairs to regrow in mice.

"We know from our previous work that expression of an active growth gene, called ERBB2, was able to activate the growth of new hair cells (in mammals), but we didn't fully understand why. This new study tells us how that activation is happening—a significant advance toward the ultimate goal of generating new cochlear hair cells in mammals," said Patricia White, one of the study authors and a neuroscience professor at URMC.

https://www.zmescience.com/science/news-science/can-we-reverse-hearing-loss-yes-we-can-here-is-how-it-works/


Original Submission