

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...


posted by janrinok on Tuesday August 06, @08:38PM   Printer-friendly
from the what-could-possibly-go-wrong? dept.

Neuralink successfully implants its chip into a second patient's brain:

Neuralink's brain chip has been implanted into a second patient as part of early human trials, Elon Musk told podcast host Lex Fridman on Saturday. The company hasn't disclosed when the surgery took place or the name of the recipient, according to Reuters.

Musk said 400 of the electrodes on the second patient's brain are working out of 1,024 implanted. "I don't want to jinx it but it seems to have gone extremely well," he said. "There's a lot of signal, a lot of electrodes. It's working very well."

The device allows patients with spinal cord injuries to play video games, use the internet and control electronic devices using their thoughts alone. In May, the company announced that it was "accepting applications for the second participant" in trials following FDA approval.

The original Neuralink implant patient, Nolan Arbaugh, described the surgery as "super easy." In a demo, the company showed how Arbaugh was able to move a cursor around the screen of a laptop, pause an on-screen music device and play chess and Civilization VI.

Arbaugh himself participated in the marathon podcast with Musk and Fridman. He said that the device allows him to make anything happen on a computer screen just by thinking it, helping reduce his reliance on caregivers.

However, problems cropped up shortly after his surgery when some of the electrodes retracted from his brain. The issue was partly rectified later by modifying the algorithm to make the implants more sensitive. Neuralink told the FDA that in a second procedure, it would place the implant's threads deeper into the patient's brain to prevent them from moving as much as they did in Arbaugh's case.

[...] the company said it had over 1,000 volunteers for its second surgical trial. Musk said he expects Neuralink to implant its chips in up to eight more patients by the end of 2024.


Original Submission

posted by janrinok on Tuesday August 06, @03:52PM   Printer-friendly
from the time-marches-on dept.

Our resident shy submitter offers the following:

A nicely organized blog post at https://www.construction-physics.com/p/what-would-it-take-to-recreate-bell reviews the history of Bell Labs (going back into the late 1800s). It ends with a section that wonders if re-creating a research monster like Bell Labs (peak employment = 25,000 people) is possible today...or even needed. A sample from the middle:

Though it had many successes in the first 25 years of its life, the crowning achievement of Bell Labs research (and its strategy of leveraging early-stage scientific research to create new products) is undoubtedly its development of the transistor, along with its various derivatives (the MOSFET, the solar PV cell) and associated manufacturing technologies (including crystal pulling, zone melting, and diffusion furnaces). The transistor is a classic case of Bell Labs' strategy: wide research freedom, circumscribed by the requirement to produce things useful for the Bell System. The telephone network required enormous amounts of vacuum tubes [Bell Labs developed de Forest's tube into a useful amplifier] and mechanical relays to act as switches, but these were far from ideal components. ... Mervin Kelly, physicist and head of the Bell Labs vacuum tube department in the early 1930s (and later the president of Bell Labs), dreamed of replacing them with solid-state components with no moving parts. Advances in quantum mechanics, and novel materials known as semiconductors, suggested that such components might be possible.

Bell Labs had studied semiconductors since the early 1930s; Walter Brattain, who would eventually share the Nobel Prize for inventing the transistor, was hired in 1929 and had begun to study an early semiconductor device called the copper oxide rectifier. A Depression hiring freeze stymied more serious semiconductor efforts until 1936, when Mervin Kelly (now Bell Labs' director of research) was finally able to start building a more robust solid-state physics department and hired physicist William Shockley (the second of the three transistor inventors). While not giving Shockley any specific research tasks (indeed, the entire solid-state group had "unprecedented liberty to follow their own research noses as long as their work dovetailed with general company goals"), Kelly emphasized to Shockley the potential value of a solid-state component to replace tubes and mechanical relays.

The solid-state physicists continued their research over the next several years, studying the behavior of semiconductors and attempting to create a semiconductor amplifier. This research was interrupted by the war but resumed in 1945, the same year physicist John Bardeen was hired. Bardeen proved to be the catalyst the solid-state group needed, and over the next several years Bardeen, Brattain, and Shockley made progress in understanding semiconductor behavior. In December 1947, they unveiled their semiconductor amplifier: the transistor. By 1950, Western Electric was making 100 transistors a month for use in Bell System equipment. A few years later, in 1954, another Bell Labs solid-state research effort yielded the world's first silicon solar PV cell.

One of the kids in my neighborhood (1960s) became a physicist and worked at Bell Labs for years--seemed to really like it there. Anyone else have a connection to tell about?


Original Submission

posted by hubie on Tuesday August 06, @10:50AM   Printer-friendly
from the do-not-want dept.

Wired is running a story https://www.wired.com/story/cars-are-now-rolling-computers-so-how-long-will-they-get-updates-automakers-cant-say/ or https://archive.is/nAMkd about broken updates for in-car software. Starts out with a VW story:

In 2022, Jake Brown...bought a used 2017 Volkswagen Passat from a local dealership.
[...]
Brown had heard that some Volkswagens were having trouble with connected features for a reason that, because of Brown's telecommunications background, felt familiar to him: AT&T, which Volkswagen worked with to provide connectivity to the automaker's vehicles, had "sunsetted" its 3G service that year.
[...]
The 3G sunset left drivers of some Volkswagens, including a handful of models built between 2014 and 2019, unable to access Volkswagen's Car-Net service. Car-Net includes remote start, but also automated service notifications, emergency assistance, antitheft alerts, and remote automatic crash notifications, among other network-enabled features.

TFA goes on to say that several other manufacturers have the same problem--cars built with 3G connectivity have been timed out by the telcos--something completely beyond the control of the car manufacturers. A query by Wired to VW wasn't all that helpful--it seems there are already lawsuits about this and no solution in sight several years after 3G service ended. They do mention that VW expects 4G (currently shipping in their cars) to last until 2035 [but that may be wishful thinking--submitter].

TFA also compares the expected service life of phones (a few years, with updates for seven years) to that of cars, which average over 12 years on the road in the USA, with many lasting 20+.

Your AC wonders why there isn't an option to ignore the built-in wireless modem and either swap out the modem module or connect the car (through USB or another cable) to a current phone [but this may have security problems?]. Personally, I'm not interested in any of these connected cars--and this is just another nail in the coffin for me.


Original Submission

posted by mrpg on Tuesday August 06, @06:03AM   Printer-friendly
from the subzero dept.

Arthur T Knackerbracket has processed the following story:

[Image caption (artist's concept): Japanese researchers have discovered ice 0, a new type of ice that forms near the surface of water. Credit: SciTechDaily.com]

Ice is far more complex than most people realize, with science identifying over 20 different varieties formed under various combinations of pressure and temperature. The type we use to chill our drinks, known as ice I, is one of the few forms that occur naturally on Earth. Recently, researchers from Japan discovered another type: ice 0, an unusual form of ice that can initiate the formation of ice crystals in supercooled water.

The formation of ice near the surface of liquid water can start from tiny crystal precursors with a structure similar to a rare type of ice, known as ice 0. In a study recently published in Nature Communications, researchers from the Social Cooperation Research Department “Frost Protection Science,” at the Institute of Industrial Science, The University of Tokyo showed that these ice 0-like structures can cause a water droplet to freeze near its surface rather than at its core. This discovery resolves a longstanding puzzle and could help redefine our understanding of how ice forms.

Crystallization of ice, known as ice nucleation, usually happens heterogeneously, or in other words, at a solid surface. This is normally expected to happen at the surface of the water’s container, where liquid meets solid. However, this new research shows that ice crystallization can also occur just below the water’s surface, where it meets the air. Here, the ice nucleates around small precursors with the same characteristic ring-shaped structure as ice 0.

Reference: “Surface-induced water crystallisation driven by precursors formed in negative pressure regions” by Gang Sun, and Hajime Tanaka, 26 July 2024, Nature Communications.
  DOI: 10.1038/s41467-024-50188-1


Original Submission

posted by mrpg on Tuesday August 06, @01:18AM   Printer-friendly
from the cows-beware dept.

Arthur T Knackerbracket has processed the following story:

To address the climate crisis effectively, immediate action on methane emissions is essential. Methane has contributed about half the global warming we’ve experienced so far, and emissions are climbing rapidly. An international team of climate researchers writing today (July 30) in Frontiers in Science set out three imperatives to cut methane emissions and share a new tool to help us find the most cost-effective ways of doing so.

“The world has been rightly focused on carbon dioxide, which is the largest driver of climate change to date,” said Professor Drew Shindell of Duke University, lead author. “Methane seemed like something we could leave for later, but the world has warmed very rapidly over the past couple of decades, while we’ve failed to reduce our CO2 emissions. So that leaves us more desperate for ways to reduce the rate of warming rapidly, which methane can do.”

Methane is the second most potent greenhouse gas, but only about 2% of global climate finance goes towards cutting methane emissions. These emissions are also rising fast, due to a combination of emissions from fossil fuel production and increased emissions from wetlands, driven by the climate crisis. To slow the damage from climate change and make it possible to keep global warming below 2°C, we need to act immediately, following the Global Methane Pledge to reduce methane emissions by 30% from their 2020 level by 2030.

[...] Methane doesn’t accumulate in the atmosphere in the long term, so emissions reductions take effect more quickly. If we could cut all methane emissions tomorrow, in 30 years more than 90% of accumulated methane—but only around 25% of carbon dioxide—would have left the atmosphere.
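For a rough sense of where the 90% figure comes from, here is a back-of-the-envelope sketch. It assumes simple exponential decay with an atmospheric lifetime of roughly 12 years for methane (a commonly cited value, not a number from the paper); CO2 removal follows several much longer timescales and is not modeled here.

    import math

    # Back-of-the-envelope check, assuming methane decays exponentially
    # with an e-folding atmospheric lifetime of roughly 12 years (an
    # assumption; not a figure from the paper).
    lifetime_years = 12.0
    horizon_years = 30.0

    fraction_remaining = math.exp(-horizon_years / lifetime_years)  # ~0.08
    fraction_removed = 1.0 - fraction_remaining                     # ~0.92

    print(f"Methane removed after {horizon_years:.0f} years: {fraction_removed:.0%}")
    # Prints roughly 92%, consistent with the "more than 90%" figure above.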

Reference: “The methane imperative” 30 July 2024, Frontiers in Science.
  DOI: 10.3389/fsci.2024.1349770


Original Submission

posted by janrinok on Monday August 05, @08:34PM   Printer-friendly
from the SSITS-so-close-it-writes-its-own-dept dept.

http://www.righto.com/2024/08/space-shuttle-interim-teleprinter.html

The Space Shuttle contained a bulky printer so the astronauts could receive procedures, mission plans, weather reports, crew activity plans, and other documents. Needed for the first Shuttle launch in 1981, this printer was designed in just 7 months, built around an Army communications terminal. Unlike modern printers, the Shuttle's printer contains a spinning metal drum with raised characters, allowing it to rapidly print a line at a time.

This printer is known as the Space Shuttle Interim Teleprinter System. As the name "Interim" suggests, this printer was intended as a stop-gap measure, operating for a few flights until a better printer was operational. However, the teleprinter proved to be more reliable than its replacement, so it remained in use as a backup for over 50 flights, often printing thousands of lines per flight. This didn't come cheap: with a Shuttle flight costing $27,000 per pound, putting the 59-pound teleprinter in space cost over $1.5 million per flight.


Original Submission

posted by mrpg on Monday August 05, @03:44PM   Printer-friendly
from the hot-wheels dept.

Arthur T Knackerbracket has processed the following story:

In copper-containing materials called cuprates, superconductivity competes with two properties called magnetic spin and electric charge density wave (CDW) order. These properties reveal different parts of the electrons in the superconductor. Each electron possesses spin and charge.

In a regular metal, the spins cancel each other out and electrical charges are uniform across a material. However, the strong electron–electron interactions in high-temperature superconductors such as cuprates give rise to other possible states.

New research published in Nature Communications has examined materials where strong magnetic interaction causes some of the electron spins to order along stripes. This occurs when spin density waves (SDW) and CDWs lock together to form a stable long-range "stripe state" where the peaks and valleys of the two waves are aligned.

This state reinforces the stability of the SDW and CDW. This stripe state competes with and interrupts the superconducting phase. Now, however, researchers have found that short-range CDW can be compatible, rather than competitive, with superconductivity in cuprate materials. This finding runs counter to conventional scientific wisdom.

[...] The research also identified the possibility that short-range charge order may enable the formation and motion of vortices in the superconducting phase. This means researchers may be able to stabilize superconductivity at higher temperatures and magnetic fields by controlling or enhancing short-range charge order.

More information: J.-J. Wen et al, Enhanced charge density wave with mobile superconducting vortices in La1.885Sr0.115CuO4, Nature Communications (2023). DOI: 10.1038/s41467-023-36203-x


Original Submission

posted by janrinok on Monday August 05, @11:03AM   Printer-friendly

AI Images Exposed: Researchers Reveal Simple Method To Detect Deepfakes:

By using astronomical methods to analyze eye reflections, researchers can potentially detect deepfake images, though the technique includes some risk of inaccuracies.

In an era when anyone can create artificial intelligence (AI) images, the ability to detect fake pictures, particularly deepfakes of people, is becoming increasingly important. Now, scientists say the eyes may be the key to distinguishing deepfakes from real images.

New research presented at the Royal Astronomical Society's National Astronomy Meeting indicates that deepfakes can be identified by analyzing the reflections in human eyes, similar to how astronomers study pictures of galaxies. The study, led by University of Hull MSc student Adejumoke Owolabi, focuses on the consistency of light reflections in each eyeball. Discrepancies in these reflections often indicate a fake image.

"The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person," said Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull.

Researchers analyzed reflections of light on the eyeballs of people in real and AI-generated images. They then employed methods typically used in astronomy to quantify the reflections and checked for consistency between left and right eyeball reflections.

Fake images often lack consistency in the reflections between each eye, whereas real images generally show the same reflections in both eyes.

"To measure the shapes of galaxies, we analyze whether they're centrally compact, whether they're symmetric, and how smooth they are. We analyze the light distribution," said Pimbblet. "We detect the reflections in an automated way and run their morphological features through the CAS [concentration, asymmetry, smoothness] and Gini indices to compare similarity between left and right eyeballs.

"The findings show that deepfakes have some differences between the pair."

The Gini coefficient is normally used to measure how the light in an image of a galaxy is distributed among its pixels. This measurement is made by ordering the pixels that make up the image of a galaxy in ascending order by flux and then comparing the result to what would be expected from a perfectly even flux distribution. A Gini value of 0 corresponds to a galaxy whose light is evenly distributed across all of the image's pixels, while a value of 1 corresponds to a galaxy with all of its light concentrated in a single pixel.
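As a rough illustration of the idea (not the study's actual pipeline), the sketch below computes the standard galaxy-morphology form of the Gini index over pixel fluxes and compares the values for two eye patches. The step of locating and cropping the reflections is assumed to have already happened, and the arrays here are random stand-ins for real image data.

    import numpy as np

    def gini(flux):
        """Gini index of a set of pixel fluxes: 0 = perfectly even,
        1 = all flux concentrated in a single pixel."""
        x = np.sort(np.abs(np.asarray(flux, dtype=float)).ravel())
        n = x.size
        if n < 2 or x.mean() == 0:
            return 0.0
        i = np.arange(1, n + 1)
        return float(np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1)))

    # Hypothetical example: cropped reflection patches from the left and
    # right eye (random stand-ins here, not real reflections).
    rng = np.random.default_rng(0)
    left_eye = rng.random((20, 20))
    right_eye = rng.random((20, 20))

    # A large gap between the two indices would flag the kind of
    # left/right inconsistency the study associates with generated faces.
    print(abs(gini(left_eye) - gini(right_eye)))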

The team also tested CAS parameters, a tool originally developed by astronomers to measure the light distribution of galaxies to determine their morphology, but found it was not a successful predictor of fake eyes.

"It's important to note that this is not a silver bullet for detecting fake images," Pimbblet added. "There are false positives and false negatives; it's not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes."


Original Submission

posted by mrpg on Monday August 05, @06:11AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

When giant solar storms hit Earth, they trigger beautiful auroral displays high in Earth's atmosphere. There's a dark side to this solar activity, though. The "space weather" it sets off also threatens our technology. The potential for damage is why we need highly accurate predictions of just when these storms will impact our planet's magnetosphere.

To figure that out, scientists in England went to the source: specific places on the sun where these storms erupt. Those outbursts are called coronal mass ejections (CMEs). They're huge explosions of magnetically charged particles and gases from the sun. They travel through space and hit whatever is in their way, including planets.

When that cloud of charged particles hits our magnetic field, it sets off a chain reaction of events. Of course, it creates beautiful auroral displays—northern and southern lights that dance in the skies. But, they also slam into and can damage orbiting satellites, including all our telecommunications and navigation systems for planes, boats, and trains.

[...] The team found a very strong relationship between the critical height of the CME as it gets started and its true speed as it moves out. "This insight allows us to predict the CME's speed and, consequently, its arrival time on Earth, even before the CME has fully erupted," Ghandhi said.

Knowing the actual speed of the CME to a higher degree of accuracy will let solar physicists predict when it will hit Earth. That, in turn, will allow satellite operators, grid owners, space agencies, and others to prepare for the action and protect their assets.
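The arithmetic behind "speed sets arrival time" is simple if you assume a constant speed over the Sun-Earth distance; real CMEs accelerate or decelerate in the solar wind, which is what forecast models try to capture, so this is only a rough transit-time estimate.

    # Rough arrival-time arithmetic (assumption: constant speed from Sun
    # to Earth; real CMEs speed up or slow down in the solar wind).
    SUN_EARTH_KM = 1.496e8          # ~1 astronomical unit in kilometres

    def transit_hours(speed_km_s):
        return SUN_EARTH_KM / speed_km_s / 3600.0

    for speed in (500, 1000, 2000):  # km/s, a typical slow-to-fast CME range
        print(f"{speed:>5} km/s -> ~{transit_hours(speed) / 24:.1f} days")
    # 500 km/s ~3.5 days, 1000 km/s ~1.7 days, 2000 km/s ~0.9 days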

Journal information: Space Weather

More information: D. H. Boteler, A 21st Century View of the March 1989 Magnetic Storm, Space Weather (2019). DOI: 10.1029/2019SW002278


Original Submission

posted by janrinok on Monday August 05, @01:31AM   Printer-friendly

We should build a moon vault with ...

Scientists suggest building a biorepository on the moon, like an extraterrestrial Svalbard Global Seed Vault. If/when Earth collapses, how are the survivors going to make it to the moon to kickstart things? Or are we expecting benevolent xenomorphs to bring us back to life?

https://academic.oup.com/bioscience/advance-article/doi/10.1093/biosci/biae058/7715645?login=false
https://mashable.com/article/moon-lunar-repository-life-storage
https://www.theguardian.com/environment/article/2024/jul/31/scientists-propose-lunar-biorepository-as-backup-for-life-on-earth

Moon Ark: Scientists Propose Saving Earth's Species With a Lunar Biorepository:

Researchers propose a lunar biorepository to protect Earth's endangered species by utilizing the Moon's cold temperatures for long-term storage of biological samples. This initiative seeks to overcome Earth's natural and political risks by fostering global collaboration and developing new technologies for space transport and sample preservation.

Faced with the threat of extinction for numerous species, an international team of researchers has suggested a groundbreaking solution to safeguard the planet's biodiversity: a lunar biorepository. As outlined in a recent article in the journal BioScience, this plan involves establishing a passive, enduring storage facility on the moon for cryopreserved samples of Earth's most endangered animal species.

Led by Dr. Mary Hagedorn of the Smithsonian's National Zoo and Conservation Biology Institute, the team envisions taking advantage of the Moon's naturally cold temperatures, particularly in permanently shadowed regions near the poles, where temperatures remain consistently below –196 degrees Celsius. Such conditions are ideal for long-term storage of biological samples without the need for human intervention or power supplies, two factors that could threaten the resilience of Earth-based repositories. Other key advantages of a lunar facility include protection from Earth-based natural disasters, climate change, and geopolitical conflicts.

An initial focus in the development of a lunar biorepository would be on cryopreserving animal skin samples with fibroblast cells. The author team has already begun developing protocols using the Starry Goby (Asterropteryx semipunctata) as an exemplar species, with other species to follow. The authors also plan to "leverage the continental-scale sampling that is currently underway at the U.S. National Science Foundation's National Ecological Observatory Network (NEON)" as a source for future fibroblast cell development.

Challenges to be addressed include developing robust packaging for space transport, mitigating radiation effects, and establishing the complex international governance frameworks for the repository. The authors call for broad collaboration among nations, agencies, and international stakeholders to realize this decades-long program. The next steps include expanding partnerships, particularly with space research agencies, and conducting further testing on Earth and aboard the International Space Station.

Despite the challenges to be overcome, the authors highlight that the need for action is acute: "Because of myriad anthropogenic drivers, a high proportion of species and ecosystems face destabilization and extinction threats that are accelerating faster than our ability to save these species in their natural environment."

Journal Reference:
Hagedorn, Mary, Parenti, Lynne R, Craddock, Robert A, et al. Safeguarding Earth's biodiversity by creating a lunar biorepository [open], BioScience (DOI: 10.1093/biosci/biae058)


Original Submission #1 | Original Submission #2

posted by mrpg on Sunday August 04, @08:50PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The agency has been advancing optical communications, which use infrared light signals instead of the more conventional radio waves to transmit data. As part of these efforts, it recently conducted a series of flight tests that involved installing a laser terminal on the belly of a Pilatus PC-12 aircraft. This single-engine plane then proceeded to beam 4K video while soaring over Lake Erie to a ground station in Cleveland, Ohio.

From there, the video signal went on an epic journey, passing through NASA's White Sands facility in New Mexico before being fired off into space 22,000 miles away using infrared lasers towards an experimental satellite called the Laser Communications Relay Demonstration (LCRD). The LCRD then relayed the data to a special terminal aboard the ISS called ILLUMA-T, which beamed it back to Earth.

Despite this incredibly long distance, NASA says the laser link achieved transmission rates of over 900 Mbps. To put that into perspective, the average household internet in the US churns out 245 Mbps as of June 2024.


Original Submission

posted by hubie on Sunday August 04, @04:11PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

For many years, insecticide-treated bed nets and indoor spraying have been crucial and highly effective methods for controlling mosquitoes that spread malaria, a serious global health threat. These strategies have also, incidentally, helped to reduce populations of other unwanted household pests such as bed bugs, cockroaches, and flies.

Now, a new North Carolina State University study reviewing the academic literature on indoor pest control shows that as the household insects developed resistance to the insecticides targeting mosquitoes, the return of these bed bugs, cockroaches and flies into homes has led to community distrust and often abandonment of these treatments – and to rising rates of malaria.

In short, the bed nets and insecticide treatments that were so effective in preventing mosquito bites – and therefore malaria – are increasingly viewed as the causes of household pest resurgence.

“These insecticide-treated bed nets were not intended to kill household pests like bed bugs, but they were really good at it,” said Chris Hayes, an NC State Ph.D. student and co-corresponding author of a paper describing the work. “It’s what people really liked, but the insecticides are not working as effectively on household pests anymore.”

“Non-target effects are usually harmful, but in this case they were beneficial,” said Coby Schal, Blanton J. Whitmire Distinguished Professor of Entomology at NC State and co-corresponding author of the paper.

“The value to people wasn’t necessarily in reducing malaria, but was in killing other pests,” Hayes added. “There’s probably a link between the use of these nets and widespread insecticide resistance in these house pests, at least in Africa.”

[...] The researchers say that all hope is not lost, though.

“There are, ideally, two routes,” Schal said. “One would be a two-pronged approach with both mosquito treatment and a separate urban pest management treatment that targets pests. The other would be the discovery of new malaria-control tools that also target these household pests at the same time. For example, the bottom portion of a bed net could be a different chemistry that targets cockroaches and bed bugs.

“If you offer something in bed nets that suppresses pests, you might reduce the vilification of bed nets.”

Reference: “Review on the impacts of indoor vector control on domiciliary pests: good intentions challenged by harsh realities” by Christopher C. Hayes and Coby Schal, 1 July 2024, Proceedings of the Royal Society B. DOI: 10.1098/rspb.2024.0609


Original Submission

posted by hubie on Sunday August 04, @11:26AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Consider the drone: Although it is critical to national defense and prosperity, nearly all its components are made in China.

A country’s economic security—its ability to generate both national security and economic prosperity—is grounded in it having significant technological capabilities that outpace those of its adversaries and complement those of its allies. Though this is a principle well known throughout history, the move over the last few decades toward globalization and offshoring of technologically advanced industrial capacity has made ensuring a nation state's security and economic prosperity increasingly problematic. A broad span of technologies ranging from automation and secure communications to energy storage and vaccine design are the basis for wider economic prosperity—and high priorities for governments seeking to maintain national security. However, the necessary capabilities do not spring up overnight. They rely upon long decades of development, years of accumulated knowledge, and robust supply chains.

For the US and, especially, its allies in NATO, a particular problem has emerged: a “missing middle” in technology investment. Insufficient capital is allocated toward the maturation of breakthroughs in critical technologies to ensure that they can be deployed at scale. Investment is allocated either toward the rapid deployment of existing technologies or to scientific ideas that are decades away from delivering practical capability or significant economic impact (for example, quantum computers). But investment in scaling manufacturing technologies, learning while doing, and maturing of emerging technologies to contribute to a next-generation industrial base, is too often absent. Without this middle-ground commitment, the United States and its partners lack the production know-how that will be crucial for tomorrow’s batteries, the next generation of advanced computing, alternative solar photovoltaic cells, and active pharmaceutical ingredients.

While this once mattered only for economic prosperity, it is now a concern for national security too—especially given that China has built strong supply chains and other domestic capabilities that confer both economic security and significant geopolitical leverage.

Consider drone technology. Military doctrine has shifted toward battlefield technology that relies upon armies of small, relatively cheap products enabled by sophisticated software—from drones above the battlefield to autonomous boats to CubeSats in space.

Drones have played a central role in the war in Ukraine. First-person view (FPV) drones—those controlled by a pilot on the ground via a video stream—are often strapped with explosives to act as precision kamikaze munitions and have been essential to Ukraine's frontline defenses. While many foundational technologies for FPV drones were pioneered in the West, China now dominates the manufacturing of drone components and systems, which ultimately enables the country to have a significant influence on the outcome of the war.

[...] China’s manufacturing dominance has resulted in a domestic workforce with the experience to achieve process innovations and product improvements that have no equal in the West. And it has come with the sophisticated supply chains that support a wide range of today’s technological capabilities and serve as the foundations for the next generation. None of that was inevitable. For example, most drone electronics are integrated on printed circuit boards (PCBs), a technology that was developed in the UK and US. However, first-mover advantage was not converted into long-term economic or national security outcomes, and both countries have lost the PCB supply chain to China.

[...] China’s dominance in LiPo batteries for drones reflects its overall dominance in Li-ion manufacturing. China controls approximately 75% of global lithium-ion capacity—the anode, cathode, electrolyte, and separator subcomponents as well as the assembly into a single unit. It dominates the manufacture of each of these subcomponents, producing over 85% of anodes and over 70% of cathodes, electrolytes, and separators. China also controls the extraction and refinement of minerals needed to make these subcomponents.

[...] While the absence of the high-tech industrial capacity needed for economic security is easy to label, it is not simple to address. Doing so requires several interrelated elements, among them designing and incentivizing appropriate capital investments, creating and matching demand for a talented technology workforce, building robust industrial infrastructure, ensuring visibility into supply chains, and providing favorable financial and regulatory environments for on- and friend-shoring of production. This is a project that cannot be done by the public or the private sector alone. Nor is the US likely to accomplish it absent carefully crafted shared partnerships with allies and partners across both the Atlantic and the Pacific.

The opportunity to support today’s drones may have passed, but we do have the chance to build a strong industrial base to support tomorrow’s most critical technologies—not simply the eye-catching finished assemblies of autonomous vehicles, satellites, or robots but also their essential components. This will require attention to our manufacturing capabilities, our supply chains, and the materials that are the essential inputs. Alongside a shift in emphasis to our own domestic industrial base must come a willingness to plan and partner more effectively with allies and partners.

If we do so, we will transform decades of US and allied support for foundational science and technology into tomorrow’s industrial base vital for economic prosperity and national security. But to truly take advantage of this opportunity, we need to value and support our shared, long-term economic security. And this means rewarding patient investment in projects that take a decade or more, incentivizing high-capital industrial activity, and maintaining a determined focus on education and workforce development—all within a flexible regulatory framework.



Original Submission

posted by hubie on Sunday August 04, @06:38AM   Printer-friendly
from the pitching-to-contact-or-to-the-injured-list? dept.

I've written previously about how Statcast data is changing professional baseball, but the application of the data has caused at least one very adverse effect: being a pitcher in today's game is bad for your health.

Two of the ways to be an effective pitcher are to generate a lot of swings and misses, and to induce a lot of poor contact. Poor contact means balls that are hit with low exit velocities, or at very high or low launch angles, and these disproportionately result in outs. Statcast data shows that pitchers can achieve this by throwing at high velocities and with a lot of vertical or lateral movement on their pitches. The pitch movement is achieved by spinning the ball at a high rotation rate, and the Magnus effect creates a pressure gradient force across the baseball that deflects it away from its original trajectory. Fastballs tend to have backspin, which imparts an upward acceleration. However, curveballs spin forward and have a downward acceleration, and it's also possible to generate lateral movement. The direction and amount of movement on a pitch is also sometimes referred to as its shape.
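To see how spin-induced acceleration turns into "movement," here is a simplified sketch: treat the Magnus force as a roughly constant acceleration over the ball's flight, and the break is just one half the acceleration times the flight time squared. The numbers are illustrative assumptions, not Statcast measurements, and drag is ignored.

    # Illustrative pitch-break arithmetic (assumed numbers, not Statcast
    # data): treat the Magnus force as a constant acceleration over the
    # ball's flight and ignore drag.
    def break_inches(pitch_mph, magnus_accel_ft_s2):
        distance_ft = 60.5                  # pitching rubber to home plate
        v_ft_s = pitch_mph * 5280 / 3600    # mph -> ft/s
        t = distance_ft / v_ft_s            # flight time in seconds
        return 0.5 * magnus_accel_ft_s2 * t ** 2 * 12  # ft -> inches

    # A 95 mph fastball whose backspin adds ~15 ft/s^2 of upward
    # acceleration drops about 17 inches less than a spin-free ball would.
    print(round(break_inches(95, 15.0), 1))   # -> 17.0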

The desire for higher velocity and spin rates has led to the rise of "pitching labs" that develop training programs that are very effective at increasing arm strength, improving pitching mechanics, and raising the spin rate of pitches. This comes at a price, however, which is more stress on a pitcher's arm. Major League Baseball (MLB) teams have tried to account for this by allowing pitchers to throw fewer pitches per game and giving them more rest between outings. The added rest helps pitchers consistently throw with high velocity and spin rates, at least for a while. But all of this added stress seems to have a cumulative effect on a pitcher's elbow. The weakest point is often the ulnar collateral ligament (UCL), and a partially or completely torn UCL has become an increasingly common pitching injury.

Prior to the increased focus on pitch velocity and shape, high pitch counts were generally considered the biggest factor in UCL injuries. However, the data show an upward trend in fastball velocity in recent years corresponding with a large increase in elbow injuries. As this YouTube video from WIRED shows, throwing a fastball at the hardest velocities seen in MLB places an incredible amount of strain on a pitcher's elbow to the point that it exceeds what the UCL can withstand. Small tears form in the UCL from the forces needed to throw a pitch that hard, and the long-term effect of continuing to pitch under these conditions is often a ruptured ligament.

Several decades ago, a torn UCL was generally a career-ending injury. In 1974, Dodgers' pitcher Tommy John was the first baseball player to undergo a UCL reconstruction, which involves grafting a tendon in place of the UCL, taking the tendon from elsewhere in the body or from a donor. The procedure has become known as "Tommy John surgery" and has a high success rate, though with a long recovery time. However, continuing to pitch with high velocity and spin rates has led to the injury recurring a few years later and requiring a second surgery. There is also evidence that high spin rates place significant stress on the elbow and are correlated with arm injuries. MLB now imposes a pitch clock, limiting the amount of time a pitcher can rest between pitches. Although the pitch clock improves the pace of games, it has also been cited as a potential injury risk.

The obvious question is why pitchers would be willing to throw pitches at high velocities and spin rates knowing that the result is likely Tommy John surgery. The answer is that there are only so many spots on a major league roster available, and if one pitcher isn't willing to assume that risk, someone else will. The best starting pitchers get massive contracts that pay tens of millions of dollars per year, so there's a lot of money potentially available for those willing to accept the high risk of injuries. Even at lower levels, pitchers know that if they want to be successful, they need to be able to throw the ball hard. There has even been a large increase in youth pitchers having UCL injuries and undergoing Tommy John surgery. Some MLB pitchers like Josh Hader and Garrett Crochet have tried to impose their own limits on how teams can use them, a move that has been somewhat controversial.

Fortunately, the same data that allows us to link pitch velocity and spin rate with effectiveness may also offer a solution to reduce injuries. Tracking pitch velocity and spin rate can tell us how frequently pitchers are throwing the pitches that contribute most to UCL injuries. One proposal is to track the number of high-risk pitches thrown by each pitcher and impose a cap on a pitcher's innings in a season, progressively lowering that cap for pitchers who throw more high-risk pitches. Part of a pitcher's value to a team is their availability: if a pitcher is unavailable because they've reached their innings cap, they're less valuable, providing an incentive to reduce the number of pitches thrown at high velocity and perhaps high spin rates. The proposed rule in the linked article focuses on fastballs, but a similar strategy could be applied to other high-risk pitches.
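A minimal sketch of how such a rule might be scored is below; the velocity/spin thresholds and the cap numbers are my own illustrative assumptions, not figures from the linked proposal.

    # Sketch of the innings-cap idea (thresholds and cap numbers are
    # illustrative assumptions, not figures from the linked proposal).
    HIGH_RISK_MPH = 95.0
    HIGH_RISK_RPM = 2500.0
    BASE_CAP_INNINGS = 180
    MIN_CAP_INNINGS = 120

    def is_high_risk(pitch):
        return pitch["mph"] >= HIGH_RISK_MPH or pitch["rpm"] >= HIGH_RISK_RPM

    def innings_cap(pitches):
        """Lower the season cap in proportion to the share of high-risk pitches."""
        if not pitches:
            return BASE_CAP_INNINGS
        share = sum(is_high_risk(p) for p in pitches) / len(pitches)
        return round(BASE_CAP_INNINGS - share * (BASE_CAP_INNINGS - MIN_CAP_INNINGS))

    print(innings_cap([{"mph": 97, "rpm": 2400}, {"mph": 91, "rpm": 2600},
                       {"mph": 88, "rpm": 2100}]))   # 2 of 3 high-risk -> cap of 140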


Original Submission

posted by hubie on Sunday August 04, @01:55AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Do you have your VMware ESXi hypervisor joined to Active Directory? Well, the latest news from Microsoft serves as a reminder that you might not want to do that given the recently patched vulnerability that has security experts deeply concerned.

CVE-2024-37085 only carries a 6.8 CVSS rating, but has been used as a post-compromise technique by many of the world's most high-profile ransomware groups and their affiliates, including Black Basta, Akira, Medusa, and Octo Tempest/Scattered Spider.

The vulnerability allows attackers who have the necessary privileges to create AD groups – which isn't necessarily an AD admin – to gain full control of an ESXi hypervisor.

This is bad for obvious reasons. Having unfettered access to all running VMs and critical hosted servers offers attackers the ability to steal data, move laterally across the victim's network, or just cause chaos by ending processes and encrypting the file system.

The "how" of the exploit is what caused such a stir in cyber circles. There are three ways of exploiting CVE-2024-37085, but the underlying logic flaw in ESXi enabling them is what's attracted so much attention.

Essentially, if an attacker was able to add an AD group called "ESX Admins," any user added to it would by default be considered an admin.

That's it. That's the exploit.
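On the defensive side, one cheap audit is simply to check whether such a group already exists in your directory and who is in it. Below is a minimal sketch using the third-party ldap3 package; the server name, base DN, and credentials are placeholders, and the group name is the literal string the unpatched hypervisor treats as special. Finding an "ESX Admins" group that nobody remembers creating would be a strong indicator of the post-compromise activity described above.

    # Defensive audit sketch using the ldap3 package (server, base DN,
    # and credentials below are placeholders/assumptions).
    from ldap3 import Server, Connection, NTLM

    server = Server("dc01.example.local")
    conn = Connection(server, user="EXAMPLE\\auditor", password="...",
                      authentication=NTLM, auto_bind=True)

    # Any AD group with this exact name is treated as an ESXi admin group
    # by unpatched hypervisors.
    conn.search("DC=example,DC=local",
                "(&(objectClass=group)(cn=ESX Admins))",
                attributes=["member", "whenCreated"])

    for group in conn.entries:
        print("Found 'ESX Admins' group:", group.entry_dn)
        print("  created:", group.whenCreated)
        print("  members:", group.member.values)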

[...] Broadcom said in a security advisory that it already issued a patch for CVE-2024-37085 on June 25, but only updated Cloud Foundation as recently as July 23, which is perhaps why Microsoft's report only just went live.

Jake Williams, VP of research and development at Hunter Strategy and IANS faculty member, was critical of Broadcom's approach to security, especially with regard to the severity it assigned the vulnerability.

[...] "I can only conclude Broadcom is not serious about security. I don't know how you conclude anything else. Oh also, there are no patches planned for ESXi 7.0."

Many commentators have questioned why an organization would join their ESXi hosts to AD in the first place, despite it being a relatively common practice.

"Why are ESX servers joined with an active directory in the first place? Because it is convenient to manage admin access to servers using a centralized platform in large corporations," Dr Martin J Kraemer, security awareness advocate at KnowBe4, told The Register

"This is very common but also creates challenges. In many environments, the AD itself might run on a VM. Cold boot can be a nightmare. A chicken and egg problem. How can you start ESX without AD while AD runs on ESX? Admins must think about this. A well-known challenge.

[...] "Over the last year, we have seen ransomware actors targeting ESXi hypervisors to facilitate mass encryption impact in few clicks, demonstrating that ransomware operators are constantly innovating their attack techniques to increase impact on the organizations they target," it said.

Microsoft also said that ESXi hypervisors often fly further under the radar in security operations centers (SOCs) because security solutions often don't have the necessary visibility into ESXi, potentially allowing attackers to go undetected for longer periods of time.

Because of the destruction a successful ESXi attack could cause, attacks have risen sharply. In the past three years, the targeting of ESXi hypervisors has doubled.

[...] Microsoft recommends that all ESXi users install the available patches and scrub up their credential hygiene to prevent future attacks, as well as use a robust vulnerability scanner if they don't already.


Original Submission