Arthur T Knackerbracket has processed the following story:
Memory, or the ability to store information in a readily accessible way, is an essential operation in both computers and human brains. However, there are key differences in how they process information. While the human brain performs computations directly on stored data, computers must transfer data between a memory unit and a central processing unit (CPU). This inefficient separation, known as the von Neumann bottleneck, contributes to the rising energy costs of computers.
The von Neumann bottleneck is a fundamental limitation in computer architecture, named after the mathematician and physicist John von Neumann. It arises from the design of the von Neumann architecture, which uses a single bus for both data and instructions to be fetched from memory. This creates a communication bottleneck because the CPU can either retrieve data or instructions at any given time, but not both simultaneously. Consequently, the speed of data processing is constrained by the memory bandwidth, leading to inefficiencies and slower overall system performance. This bottleneck has driven the development of alternative architectures and optimization techniques to improve data throughput and computational speed.
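To see why the shared pathway dominates, consider a toy cycle count (a minimal sketch; every constant below is invented rather than measured):

# Toy model of a von Neumann machine: instruction fetches and data accesses
# share one bus, so they serialize. All cycle costs here are invented.
BUS_CYCLES = 10   # one transfer over the shared memory bus
ALU_CYCLES = 1    # the arithmetic itself

def toy_runtime(n_instructions: int, data_accesses_per_instr: int = 1) -> int:
    cycles = 0
    for _ in range(n_instructions):
        cycles += BUS_CYCLES                            # fetch the instruction
        cycles += BUS_CYCLES * data_accesses_per_instr  # move the operands
        cycles += ALU_CYCLES                            # do the actual work
    return cycles

print(toy_runtime(1_000))  # 21000 cycles for 1000 instructions; only 1000 are compute

In this toy model over 95 percent of the runtime is spent moving bits rather than computing on them, which is exactly the waste that computing directly in memory aims to eliminate.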
For over 50 years, researchers have been working on the concept of a memristor (memory resistor), an electronic component that can both compute and store data, much like a synapse. Aleksandra Radenovic of the Laboratory of Nanoscale Biology (LBEN) at EPFL’s School of Engineering set her sights on something even more ambitious: a functional nanofluidic memristive device that relies on ions, rather than electrons and their oppositely charged counterparts (holes). This approach would mimic the human brain’s way of processing information more closely and is therefore more energy-efficient.
Radenovic says, “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living organisms.”
“We have fabricated a new nanofluidic device for memory applications that is significantly more scalable and much more performant than previous attempts,” says LBEN postdoctoral researcher Théo Emmerich. “This has enabled us, for the very first time, to connect two such ‘artificial synapses’, paving the way for the design of brain-inspired liquid hardware.” The research has recently been published in Nature Electronics.
[...] The device was fabricated on a chip at EPFL’s Center of MicroNanoTechnology by creating a nanopore at the center of a silicon nitride membrane. The researchers added palladium and graphite layers to create nano-channels for ions. As a current flows through the chip, the ions percolate through the channels and converge at the pore, where their pressure creates a blister between the chip surface and the graphite. As the graphite layer is forced up by the blister, the device becomes more conductive, switching its memory state to ‘on’. Since the graphite layer stays lifted, even without a current, the device ‘remembers’ its previous state. A negative voltage puts the layers back into contact, resetting the memory to the ‘off’ state.
“Ion channels in the brain undergo structural changes inside a synapse, so this also mimics biology,” says LBEN PhD student Yunfei Teng, who worked on fabricating the devices – dubbed highly asymmetric channels (HACs) in reference to the shape of the ion flow toward the central pores.
Reference: “Nanofluidic logic with mechano–ionic memristive switches” by Theo Emmerich, Yunfei Teng, Nathan Ronceray, Edoardo Lopriore, Riccardo Chiesa, Andrey Chernev, Vasily Artemov, Massimiliano Di Ventra, Andras Kis and Aleksandra Radenovic, 19 March 2024, Nature Electronics. DOI: 10.1038/s41928-024-01137-9
[Ed. comment: This story has been updated with a recent decision from MLB]
What do Don Denkinger and Jim Joyce have in common? If you're a baseball fan, you might recognize them as umpires famous for missing a critical call late in a game on national TV. Before Major League Baseball (MLB) embraced video-assisted replay (VAR), which it resisted long after other sports like football had demonstrated that replays could be used successfully, there was no way to reverse the missed calls. Even after MLB finally allowed VAR to be used, by far the most frequent call in a game still cannot be reviewed: whether a pitch is a ball or strike.
The technology to track the flight of a baseball and reliably determine balls and strikes has been in use for a couple of decades. Systems like QuesTec, PITCHf/x, and Statcast can accurately track the flight of a baseball and determine whether its trajectory crossed the strike zone when it reached home plate. Statcast not only determines each pitch's horizontal and vertical location when it crossed the plate, but a plethora of other data like the pitcher's release point in three dimensions, the velocity when the pitch left the pitcher's hand, its spin axis and rate, the pitch's acceleration in three dimensions, and a classification of the pitch type. Despite the capability to accurately call balls and strikes automatically, MLB still relies solely on human umpires to make this call.
The horizontal location of the strike zone is identical for every pitch, requiring that some portion of the baseball pass above home plate. However, the vertical location is defined as being from the bottom of the hitter's kneecap to the midpoint between the top of the hitter's pants and the hitter's shoulders. This is affected by the hitter's height, body shape, and their batting stance. A hitter won't have exactly the same batting stance on any two pitches, so the actual strike zone varies slightly from pitch to pitch, even for the same hitter. This data is determined by Statcast while the pitch is in flight, and is recorded in the sz_bot (bottom of the strike zone in feet above ground) and sz_top (the top, with the same units) fields in Statcast data. The flight of the baseball is currently tracked by 12 Hawk-Eye cameras stationed throughout each stadium, five of which operate at 300 frames per second. The images from the different cameras can be used to pinpoint the location of the baseball within a few millimeters. The same type of camera is used for VAR in tennis matches to determine if a ball was out of bounds.
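As an illustration of how simple the automated call becomes once the tracking data exists, here is a minimal Python sketch using the sz_bot and sz_top fields described above. The plate_x/plate_z field names, the ball diameter, and the sample numbers are assumptions for illustration, and it checks only the two-dimensional location at the front of the plate (the depth issue is discussed below):

# Minimal sketch: did any part of the ball pass through the zone as it
# crossed the plate? Units are feet, matching the sz_bot/sz_top convention.
PLATE_HALF_WIDTH = (17 / 2) / 12   # home plate is 17 inches wide
BALL_RADIUS = (2.9 / 2) / 12       # assumed ball diameter of ~2.9 inches

def is_strike(plate_x: float, plate_z: float, sz_bot: float, sz_top: float) -> bool:
    # plate_x: horizontal offset from the plate's center (assumed field name)
    # plate_z: height above the ground at the plate (assumed field name)
    over_plate = abs(plate_x) <= PLATE_HALF_WIDTH + BALL_RADIUS
    in_vertical_zone = (sz_bot - BALL_RADIUS) <= plate_z <= (sz_top + BALL_RADIUS)
    return over_plate and in_vertical_zone

# A pitch 0.70 ft off center at 2.0 ft height, against a typical zone:
print(is_strike(0.70, 2.0, sz_bot=1.55, sz_top=3.40))  # True: just clips the corner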
When Don Denkinger mistakenly called Jorge Orta safe at first base in the ninth inning of game 6 of the 1985 World Series, known in St. Louis simply as "The Call", it was followed by a series of poor plays by the Cardinals that led them to blowing a 1-0 lead and losing the game. Although the Cardinals proceeded to get blown out 11-0 in game 7, Denkinger is often blamed for the Cardinals losing the series. Following the blown call, two St. Louis radio personalities doxxed Denkinger, who received hate mail and even death threats from irate fans. At the time, Cardinals manager Whitey Herzog was furious at Denkinger. After the series was over, Herzog became so dismayed by the harassment Denkinger received from St. Louis that he made public appearances with Denkinger to raise money for charity and try to get Cardinal fans to forgive the umpire.
Jim Joyce was also known for a missed force out at first base, this time on what should have been the last out of a perfect game for Armando Galarraga, an otherwise mediocre pitcher attempting to complete one of the rarest feats in all of baseball. The first base umpire generally watches to see whether the fielder's foot is on first base when the runner's foot touches the base, while listening for the sound of the ball popping into the fielder's glove. It's an extremely difficult call that umpires get correct a remarkably high percentage of the time. In this case, Joyce believed that the runner, Jason Donald, reached the base before the baseball arrived in the fielder's glove, and he called the runner safe. Galarraga retired the next hitter, but there was no way after the game to correct the blown call. After seeing a replay, Joyce held a press conference in which he tearfully admitted publicly that he blew the call and felt awful for costing Galarraga the perfect game.
Had MLB made use of the available technology, neither Denkinger nor Joyce would be remembered for missing calls. It's possible the Cardinals might have imploded and lost the World Series anyway. In the case of Joyce, the play would have been reviewed for a minute or two, the umpire would have raised his fist to signal an out, and the Detroit Tigers players and coaches would have run onto the field after a brief awkward pause to celebrate the perfect game. Denkinger and Joyce were excellent umpires who were well-respected by players and managers but are both mostly known for making a single bad call that could easily have been corrected with the proper VAR tools.
Despite the potential for technology to further assist umpires in getting calls correct, there is significant resistance to automatic balls and strikes. While the ball-tracking technology is widely accepted by tennis fans, there are concerns that baseball fans might see pitches that appear to be balls get called as strikes, and that the technology would be viewed as untrustworthy. Part of the issue is that the strike zone is actually a three-dimensional volume that is 17 inches wide and 17 inches deep. If the flight of the ball intersects any part of the zone, it's a strike. A pitch with a high rate of forward spin and a lot of vertical break could clip the bottom part of the zone at the front of home plate, be caught well below the batter's knees, and still correctly be called a strike.
Some fans are also reluctant to see the end of the skill of pitch framing, in which a catcher receives a pitch that's a ball but catches it in a manner that gives the illusion of it being a strike. The umpire is fooled into calling the pitch a strike anyway, giving an advantage to the pitcher. One estimate suggests that the best catchers were at one time able to save as many as 40 runs during a season with pitch framing, which is worth roughly four wins to the team. Some baseball purists have opposed using cameras to automatically call balls and strikes because it would put an end to pitch framing.
Instead of fully embracing robot umps to call balls and strikes, MLB intends to test a system of challenging balls and strikes at AAA this season, which is the highest level of minor league baseball. Teams will receive a certain number of challenges each game, where a ball or strike call can be reviewed and, if necessary, overturned. Part of the issue with fully embracing automatic balls and strikes is the need to determine how to set the "correct" strike zone. One option is to estimate it from the batter's height. The other is to determine it on every pitch based on the batter's stance, using the sz_top and sz_bot fields in Statcast data. If the strike zone were determined by the batter's stance on every pitch, a batter could use an exaggerated stance to make the strike zone artificially small, making it difficult to throw strikes. Although catchers would no longer be able to steal strikes with pitch framing, adjusting the strike zone for every pitch could allow hitters to steal balls.
[UPDATE] In his meeting with the Baseball Writers' Association of America prior to the All-Star Game, MLB Commissioner Rob Manfred said that "robot umpires" would probably be tested in spring training in 2025 and could be implemented in the 2026 season. "Robot umpires" are the colloquial term in baseball for an automated system for calling balls and strikes using computer vision to track the flight of the baseball and whether a pitch crosses the strike zone. The main concern for MLB seems to be how the strike zone will be determined, whether it will be estimated from each player's height, or if it will be based on a recent average of the size of each player's actual strike zone. One of the 2024 rule changes in the minor leagues was to define the automated strike zone from the median of recent pitches that were put into play, limiting the opportunity for hitters to manipulate the strike zone with an exaggerated crouch. Although fully automated ball and strike calls were tested in the minor leagues, this was replaced with a challenge system that is likely to be implemented in the major leagues in 2026. Human umpires will still call balls and strikes, and I anticipate that this system will mostly be used to challenge borderline pitches that could be strike three or ball four.
UPDATED An update to a product from infosec vendor CrowdStrike is bricking computers running Windows.
The Register has found numerous accounts of Windows 10 PCs crashing, displaying the Blue Screen of Death, then being unable to reboot.
"We're seeing BSOD Org wide that are being caused by csagent.sys, and it's taking down critical services. I'll open a ticket, but this is a big deal," wrote one user.
Forums report that CrowdStrike has issued an advisory with a URL that includes the text "Tech-Alert-Windows-crashes-related-to-Falcon-Sensor-2024-07-19" – but it's behind a regwall that only customers can access.
An apparent screenshot of that article reads "CrowdStrike is aware of reports of crashes on Windows hosts related to the Falcon Sensor. Symptoms include hosts experiencing a bugcheck\blue screen error related to the Falcon Sensor."
CrowdStrike's engineers are working on the issue.
Falcon Sensor is an agent that CrowdStrike claims "blocks attacks on your systems while capturing and recording activity as it happens to detect threats fast."
Right now, however, the sensor appears to be the threat.
This is a developing story and The Register will update it as new info comes to hand. ®
Updated at 0730 UTC to add Brody Nisbet, CrowdStrike's chief threat hunter, has confirmed the issue and on X posted the following:
There is a faulty channel file, so not quite an update. There is a workaround...
1. Boot Windows into Safe Mode or WRE.
2. Go to C:\Windows\System32\drivers\CrowdStrike
3. Locate and delete file matching "C-00000291*.sys"
4. Boot normally.
In a later post he wrote "That workaround won't help everyone though and I've no further actionable help to provide at the minute".
More to come as the situation evolves ...
In Australia, CrowdStrike IT outage hits airports, banks, supermarkets as emergency committee meets
A major network outage has affected several Australian institutions and businesses, including multiple airports, the Commonwealth Bank, Optus, Australia Post and Woolworths.
Disruption to air traffic control systems is being reported around the world. Preliminary reports say a computer glitch may be causing the problem. Issues have arisen in the US, Spain, Germany, Australia, and elsewhere, with authorities forced to cancel takeoffs and landings due to safety concerns.
The outage was first reported about midnight CET on Thursday night/Friday.
The failure may have been caused by a software update that locks Microsoft operating systems and is reportedly not restricted to airlines. Some banks, emergency services, broadcasters, and financial institutions are also said to have been affected.
Computers using Windows 10 OS are reportedly crashing and showing "the blue screen of death" (BSOD) after an update for a security product provided by the firm CrowdStrike. The company is reportedly working on resolving the issue.
Brody Nisbet, CrowdStrike's chief threat hunter, has offered a workaround to deal with what he called a "faulty channel file" related to the Falcon Sensor cybersecurity app.
Arthur T Knackerbracket has processed the following story:
Entri, which offers a tool for automatically connecting SaaS applications to custom domains via DNS configurations, said [PDF] in a lawsuit, filed in federal court in Virginia, USA, that GoDaddy initially embraced the success of Entri Connect before abruptly reversing course in favor of forcing folks to use GoDaddy's own DNS record update tool Domain Connect.
"Shortly after Entri Connect's launch, GoDaddy saw the value of Entri Connect and the two companies entered a partnership together," Entri's lawyers said in the complaint, submitted this month. "But as Entri grew in popularity, GoDaddy saw an opportunity to use its tremendous size to its advantage."
Entri alleges that GoDaddy changed its stance on the partnership late last year, first telling customers that Entri Connect could no longer be used to update GoDaddy-registered domains, and then updating its terms of use to block Entri from updating DNS settings.
GoDaddy also "implemented a series of technological measures designed to cause Entri Connect to malfunction when used by GoDaddy customers," the suit alleges.
In place of Entri Connect, GoDaddy pushed its own Domain Connect, which Entri alleges is far less sophisticated and harder to use than its own tool.
"While the Domain Connect protocol may currently be used to update DNS records with only four DNS providers actively using the protocol, Entri Connect may be used with more than forty," Entri said in the suit. The company also alleged that a GoDaddy representative admitted that "third-party software applications preferred to offer their end users Entri Connect over Domain Connect 80 percent of the time."
Entri began life in 2021 - well after Domain Connect was initiated by GoDaddy.
All of those allegations, says Entri, add up to GoDaddy abusing its market power to disadvantage a competitor, which it says is a violation of America's Sherman Antitrust Act.
"Customers of GoDaddy's domain registration services are being improperly denied access to the full suite of choices when it comes to automated domain configuration," Entri argued, adding that it has lost sales, configuration volume, and revenue as a result of GoDaddy's move.
Entri further alleges that GoDaddy has accused it of breaking the Computer Fraud and Abuse Act by accessing the registrar's systems, violating GoDaddy's API terms of use, and violating US trademark law. Along with standard requests for injunctive and monetary relief, Entri is asking a jury to declare it hasn't committed any of that wrongdoing.
GoDaddy told us it doesn't comment on pending litigation, [...] .
FTC Warns Gaming Companies Over Warranty Stickers:
The Federal Trade Commission has sent letters to eight companies, including leading makers of PC gaming rigs, warning them that their warranty language is a violation of the federal Magnuson-Moss Warranty Act (MMWA).
In a statement July 3rd, the Federal Trade Commission staff said statements that customers were required to use authorized service providers or manufacturer-supplied parts or risk voiding their warranty "may be standing in the way of consumers' right to repair products they have purchased." These "warning letters put companies on notice that restricting consumers' right to repair violates the law," said Samuel Levine, Director of the FTC's Bureau of Consumer Protection, in a published statement on the FTC website. "The Commission will continue our efforts to protect consumers' right to repair and independent dealers' right to compete."
Requiring consumers to use specified parts or service providers to keep their warranties intact is prohibited under the MMWA, unless warrantors provide the parts or services for free or receive a waiver from the FTC. The agency also warned that such statements may be considered deceptive business practices under the FTC Act. Letters issued to gaming hardware makers ASRock, Zotac, and Gigabyte, which market and sell gaming PCs, graphics chips, motherboards, and other accessories, specifically warned about the use of stickers stating that warranties are "void if removed."
In recent years the FTC has increased its scrutiny of companies' warranty-related practices and re-exerted its authority to enforce laws like the MMWA and other federal laws. It issued similar warnings to six companies in 2018 regarding MMWA violations. A study by PIRG of 50 home appliance makers that same year found that "the overwhelming majority (45) would void warranties due to independent or self-repair." Then, in 2022, the Commission issued orders requiring motorcycle maker Harley-Davidson and grill maker Weber to take multiple steps to correct violations of the MMWA, including ceasing to tell consumers that their warranties will be void if they use third-party services or parts, or that they should only use branded parts or authorized service providers. The FTC said it would seek civil penalties of up to $46,517 per violation in federal court.
The agency has also appealed to the public and businesses for stories of manufacturers forcing consumers to use authorized repair providers and threatening to void warranties for those that don't. The Commission has set up a special link for warranty or repair stories and said it wants to hear about consumer experiences across a wide range of products – from cars, kitchen appliances, and cell phones to grills and generators.
Arthur T Knackerbracket has processed the following story:
Law enforcement agencies from eight nations, led by Australia, have issued an advisory that details the tradecraft used by China-aligned threat actor APT40 – aka Kryptonite Panda, GINGHAM TYPHOON, Leviathan and Bronze Mohawk – and found it prioritizes developing exploits for newly found vulnerabilities and can target them within hours.
The advisory describes APT40 as a "state-sponsored cyber group" and the People's Republic of China (PRC) as that sponsor. The agencies that authored the advisory – which come from Australia, the US, Canada, New Zealand, Japan, South Korea, the UK, and Germany – believe APT40 "conducts malicious cyber operations for the PRC Ministry of State Security (MSS)."
[...] The advisory is the result, and suggests that APT40 "possesses the capability to rapidly transform and adapt exploit proof-of-concept(s) (POCs) of new vulnerabilities and immediately utilize them against target networks possessing the infrastructure of the associated vulnerability." The gang also watches networks of interest to look for unpatched targets.
"This regular reconnaissance postures the group to identify vulnerable, end-of-life or no longer maintained devices on networks of interest, and to rapidly deploy exploits," the advisory warns.
Those efforts yield results, because some systems have not been patched for problems identified as long ago as 2017. Some of the vulns APT40 targets are old news – Log4j (CVE-2021-44228), Atlassian Confluence (CVE-2021-26084), and Microsoft Exchange (CVE-2021-31207, CVE-2021-34523, CVE-2021-34473) are high on the hit list.
[...] "APT40 has embraced the global trend of using compromised devices, including small-office/home-office (SOHO) devices, as operational infrastructure and last-hop redirectors for its operations in Australia," the advisory observes. "Many of these SOHO devices are end-of-life or unpatched and offer a soft target for N-day exploitation."
[...] The advisory outlines mitigation tactics that are said to offer decent defences against APT40. They're not rocket science: logging, patch management, and network segmentation are all listed.
So are multifactor authentication, disabling unused network services, use of web application firewalls, least privilege access, and replacement of end-of-life equipment.
https://arstechnica.com/science/2024/07/will-space-based-solar-power-ever-make-sense/
Is space-based solar power a costly, risky pipe dream? Or is it a viable way to combat climate change? Although beaming solar power from space to Earth could ultimately involve transmitting gigawatts, the process could be made surprisingly safe and cost-effective, according to experts from Space Solar, the European Space Agency, and the University of Glasgow.
But we're going to need to move well beyond demonstration hardware and solve a number of engineering challenges if we want to develop that potential.
[...]
"The idea [has] been around for just over a century," said Nicol Caplin, deep space exploration scientist at the ESA, on a Physics World podcast. "The original concepts were indeed sci-fi. It's sort of rooted in science fiction, but then, since then, there's been a trend of interest coming and going."Researchers are scoping out multiple designs for space-based solar power. Matteo Ceriotti, senior lecturer in space systems engineering at the University of Glasgow, wrote in The Conversation that many designs have been proposed.
[...]
Using microwave technology, the solar array for an orbiting power station that generates a gigawatt of power would have to be over 1 square kilometer in size, according to a Nature article by senior reporter Elizabeth Gibney. "That's more than 100 times the size of the International Space Station, which took a decade to build." It would also need to be assembled robotically, since the orbiting facility would be uncrewed.
[...]
Space Solar is working on a satellite design called CASSIOPeiA, which Physics World describes as looking "like a spiral staircase, with the photovoltaic panels being the 'treads' and the microwave transmitters—rod-shaped dipoles—being the 'risers.'" It has a helical shape with no moving parts.
[...]
Ceriotti wrote that SPS-ALPHA, another design, has a large solar-collector structure that includes many heliostats – small, modular reflectors that can be moved individually. These concentrate sunlight onto separate power-generating modules, and the power is then transmitted back to Earth by yet another module.
[...] For microwave radiation from a space-based solar power installation, "the only known effect of those frequencies on humans or living things is tissue heating," Vijendran said. "If you were to stand in such a beam at that power level, it would be like standing in the... evening sun." Still, Caplin said that more research is needed to study the effects of these microwaves on humans, animals, plants, satellites, infrastructure, and the ionosphere.
Getting that across to the public may remain a challenge, however. "There's still a public perception issue to work through, and it's going to need strong engagement to bring this to market successfully," Adlen said.
[...]
Vijendran said he expects the cost of space-based solar power will eventually fall to a point where it is competitive with solar and wind power on Earth, which is below $50 per megawatt-hour. According to the Energy Information Administration's 2022 publication on this subject, both solar power and onshore wind cost around $20–$45 per megawatt-hour in 2021.
[...]
"The first major decision point would be to implement a... small-scale in-space demo mission for launch sometime around 2030," Vijendran said.Outside of the ESA, Caltech has demonstrated a lightweight prototype that converts sunlight to radio-frequency electrical power and transmits it as a beam. The university has been researching modular, foldable, ultralight space-based solar power equipment.
"My view is that much like the world of connectivity went from wired to wireless, so we're going to see the world of power move in a similar direction," Adlen said. International cooperation will be key to creating space-based solar power stations if projects like these move forward.
Arthur T Knackerbracket has processed the following story:
The latest Linux kernel is here, with relatively few new features but better support for several hardware platforms, including non-Intel kit.
Linus Torvalds announced kernel 6.10 this weekend and as usual it contains so many hundreds of changes that we can't summarize them all – for instance, the Kernel Newbies summary for this release has 636 bullet points.
The release means that the merge window is now open for proposed changes to go into kernel 6.11, which will probably appear around September. That means it is likely to be too late for both Ubuntu and Fedora's second releases of the year, so kernel 6.10 may be what you get around that time.
There are, as always, some fresh software features in the new release, of which maybe the most interesting is a new memory-management API call called mseal(). Modern CPUs allow blocks of memory to be marked as special in various ways – for example, as non-executable. AMD introduced the NX bit over 20 years ago as part of its x86-64 specification, and a couple of years later Intel added it to its implementation. The mseal() call protects these mappings: it makes them immutable for the life of that process. The patch was submitted by Google last year, and it's likely it will first be used by Chrome and Chromium-based browsers – but probably by other things later. The call reproduces settings which already exist in OpenBSD, as well as the XNU kernel used in multiple Apple OSes.
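Here is a minimal user-space sketch in Python of what sealing a mapping looks like. The syscall number (462) is an assumption based on current kernel tables, and there is no libc or Python wrapper yet, so treat this as illustrative rather than production code; it requires Linux 6.10 or later:

import ctypes
import mmap

SYS_mseal = 462  # assumed syscall number for mseal() (Linux 6.10+)
libc = ctypes.CDLL(None, use_errno=True)

size = mmap.PAGESIZE
buf = mmap.mmap(-1, size, prot=mmap.PROT_READ | mmap.PROT_WRITE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Seal the mapping: its protections become immutable for the life of the
# process, and it can no longer be unmapped, shrunk, or remapped.
if libc.syscall(SYS_mseal, ctypes.c_void_p(addr), ctypes.c_size_t(size), 0) != 0:
    raise OSError(ctypes.get_errno(), "mseal() failed (kernel too old?)")

# Any later attempt to change the mapping is refused with EPERM:
ret = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(size), mmap.PROT_READ)
print("mprotect on sealed memory:", "rejected" if ret != 0 else "allowed?!")

The point of the design is that an attacker who gains partial control of a process (such as a browser renderer) cannot flip a sealed non-executable region back to executable.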
Additionally, there are small improvements to various filesystems, including bcachefs, Btrfs, ext4, XFS, F2FS, EROFS, and OCFS2. There's support for a much wider range of compression algorithms for the kernel boot image.
However, for this release, more changes overall seem to be in the direction of improved hardware support, over a wide range of devices. On Linux's native x86 (increasingly, x86-64) architecture, this includes better support for hardware encryption, which among other things should deliver faster disk encryption. There's also better TPM2 chip support, improved power management and handling of dynamic CPU speeds. Multiple wired and wireless network drivers have been tuned, and there's support for various new models of CPU and GPU.
Arm support has been improved in multiple areas, both for server processors and for the CPUs and SoCs used in laptops, including Arm's Mali family of GPUs. If the Qualcomm Snapdragon-based Lenovo Thinkpad X13s appealed, notably as a Linux machine, then you might be interested in its inexpensive indirect ancestor too: Acer's Aspire 1 A114-61. This machine's hardware is now more or less fully supported. Although it was a 2021 model, you may be able to find a second-hand unit for $NOT_A_LOT if you fancy an Arm64 Linux laptop. The MIPI webcam sensor used in the X13s, as well as several Intel Thinkpad models, is now supported, too.
Other Arm-powered kit with new support includes several gaming handhelds, such as the Gameforce Chi, and several Anbernic devices. As we have noted previously when looking at SteamOS, gaming support is now a factor visibly driving improvements in Linux.
It's not just Arm: there's also improved support for RISC-V hardware, for instance the budget Milk-V Mars SBC. This extends to the still quite new support for Rust in the kernel. The revision of Rust supported in the kernel has been bumped to version 1.78.0. As we noted when Rust support was first added, whereas the kernel is usually built with GCC, Rust is usually compiled with LLVM, which mainly targets x86-64 and Arm. Now, kernel Rust can be used on RISC-V as well.
Fats from thin air: Startup makes butter using CO2 and water:
Bill Gates has thrown his weight – and his money – behind a Californian startup that believes it can make a rich, fatty spread akin to butter, using just carbon dioxide and hydrogen. And 'butter' is just the start, with milk, ice-cream, cheese, meat and tropical oils also in development.
The San Jose company, Savor, uses a thermochemical process to create its animal-like fat, which is free of the environmental footprint of both the dairy industry and plant-based alternatives.
"They started with the fact that all fats are made of varying chains of carbon and hydrogen atoms," Gates wrote in a blog post. "Then they set out to make those same carbon and hydrogen chains – without involving animals or plants. They ultimately developed a process that involves taking carbon dioxide from the air and hydrogen from water, heating them up, and oxidizing them to trigger the separation of fatty acids and then the formulation of fat."
Many of us know the stats – according to the United Nations Food and Agriculture Organization (FAO), livestock are responsible for 14.5% of all global greenhouse gas emissions, and animal-fat alternatives that use palm oil contribute to widespread deforestation and biodiversity loss – but also know how delicious dairy products are. So will Gates' enthusiastic support be enough to get people excited about butter made from CO2?
"The idea of switching to lab-made fats and oils may seem strange at first," Gates wrote. "But their potential to significantly reduce our carbon footprint is immense. By harnessing proven technologies and processes, we get one step closer to achieving our climate goals."
Savor's 'butter' is easily produced and scalable, but convincing people to swap out butter and other dairy products for 'experimental' foods will remain a challenge for the foreseeable future. Gates is hoping, however, that his support will do more than start a conversation.
"The big challenge is to drive down the price so that products like Savor's become affordable to the masses – either the same cost as animal fats or less," Gates wrote. "Savor has a good chance of success here, because the key steps of their fat-production process already work in other industries.
"The process doesn't release any greenhouse gases, and it uses no farmland and less than a thousandth of the water that traditional agriculture does," he added. "And most important, it tastes really good – like the real thing, because chemically it is."
Savor's research was published in the journal Nature Sustainability.
Source: Savor
Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers' computers when executed.
The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—were attempts to appear as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon's S3 cloud service. The fake files included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js.
[...] "We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days," researchers from Phylum, the security firm that spotted the packages, wrote. "This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time."
[...] In the past 17 months, threat actors backed by the North Korean government have targeted developers twice, one of those times using a zero-day vulnerability.
Phylum researchers provided a deep-dive analysis of how the concealment worked.
[...]
One of the most innovative methods in recent memory for concealing an open source backdoor was discovered in March, just weeks before it was to be included in a production release of the XZ Utils[...] The person or group responsible spent years working on the backdoor. Besides the sophistication of the concealment method, the entity devoted large amounts of time to producing high-quality code for open source projects in a successful effort to build trust with other developers.
In May, Phylum disrupted a separate campaign that backdoored a package available in PyPI that also used steganography, a technique that embeds secret code into images.
"In the last few years, we've seen a dramatic rise in the sophistication and volume of malicious packages published to open source ecosystems," Phylum researchers wrote. "Make no mistake, these attacks are successful. It is absolutely imperative that developers and security organizations alike are keenly aware of this fact and are deeply vigilant with regard to open source libraries they consume."
Related stories on SoylentNews:
Trojanized jQuery Packages Found on Npm, GitHub, and jsDelivr Code Repositories - 20240713
48 Malicious Npm Packages Found Deploying Reverse Shells on Developer Systems - 20231104
Open-Source Security: It's Too Easy to Upload 'Devastating' Malicious Packages, Warns Google - 20220504
Dev Corrupts NPM Libs 'Colors' and 'Faker' Breaking Thousands of Apps - 20220111
Malicious NPM Packages are Part of a Malware "Barrage" Hitting Repositories - 20211213
Heavily Used Node.js Package Has a Code Injection Vulnerability - 20210227
Discord-Stealing Malware Invades NPM Packages - 20210124
Here's how NPM Plans to Improve Security and Reliability in 2019 - 20181217
NPM Fails Worldwide With "ERR! 418 I'm a Teapot" Error - 20180530
Backdoored Python Library Caught Stealing SSH Credentials - 20180511
Arthur T Knackerbracket has processed the following story:
A research team from the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen has investigated the risk of fire on spacecraft in a recent study. The results show that fires on planned exploration missions, such as a flight to Mars, could spread significantly faster than, for example, on the International Space Station (ISS). This is due to the planned adjustment to a lower ambient pressure on spacecraft.
"A fire on board a spacecraft is one of the most dangerous scenarios in space missions," explains Dr. Florian Meyer, head of the Combustion Technology research group at ZARM. "There are hardly any options for getting to a safe place or escaping from a spacecraft. It is therefore crucial to understand the behavior of fires under these special conditions."
The ZARM research team has been conducting experiments on the propagation of fires in reduced gravity since 2016. The environmental conditions are similar to those on the ISS—with an oxygen level in the breathing air and an ambient pressure similar to that on Earth, as well as forced air circulation. These earlier experiments have shown that flames behave completely differently in weightlessness than on Earth.
A fire burns with a smaller flame and spreads more slowly, which means it can go unnoticed for a long time. However, it burns hotter and can therefore also ignite materials that are basically non-flammable on Earth. In addition, incomplete combustion can produce more toxic gases.
Future space missions are currently being planned with modified atmospheric conditions. The crew will be exposed to lower pressure. This offers two crucial advantages: The astronauts can prepare for an external mission more quickly and the spacecraft can be built lighter, i.e. with less mass, which saves fuel. The disadvantage: at lower pressure, the crew needs a higher proportion of oxygen in the breathing air—and this can have dangerous consequences in the event of a fire.
We know from various everyday situations, from lighting barbecue charcoal to fighting wildfires, that the speed of the air flow also has a strong influence on the spread of fire.
The current series of experiments on which the study is based was carried out under microgravity conditions in the Drop Tower Bremen. Florian Meyer and his team observed the propagation of flames after lighting acrylic glass foils and investigated how the fire reacts when one of the three parameters—ambient pressure, oxygen content and flow velocity—is changed in different proportions.
The results of the experiments are clear: although the lower pressure has a dampening effect, the fire-accelerating effects of the increased oxygen level in the air predominate. Increasing the oxygen level from 21% (as on the ISS) to the planned 35% for future space missions will cause a fire to spread three times faster. This means an enormous increase in the danger to the crew in case of a fire accident.
Dr. Meyer says, "Our results highlight critical factors that need to be considered when developing fire safety protocols for astronautic space missions. By understanding how flames spread under different atmospheric conditions, we can mitigate the risk of fire and improve the safety of the crew."
More information: Hans-Christoph Ries et al, Effect of oxygen concentration, pressure, and opposed flow velocity on the flame spread along thin PMMA sheets, Proceedings of the Combustion Institute (2024). DOI: 10.1016/j.proci.2024.105358
The growing reach of gesture-based user interfaces:
User interface (UI) design is currently experiencing a transition from traditional graphical user interfaces (GUIs) to systems designed to recognize a person's gestures and movements.
Hence, in this blog, we will discuss the possible implications of this groundbreaking transition in terms of user experience (UX) and the accessibility of modern interfaces. Likewise, we'll explore how developers adapt to the technological shift to deliver innovative solutions while outlining the challenges of adopting gesture-based interactions.
Gesture-based interactions are quickly becoming a standard and the technology is widely considered the future of UI. Therefore, modern devices and applications must adapt to meet the needs of their users. On top of that, recent data shows that 82% of users prefer apps with gesture-based controls.
The algorithms built into touch screen devices, such as smartphones, recognize a range of touch types, from scrolling to swiping. Because of this technology, users can now navigate applications with simple gestures like pinches or taps. A classic example of this is the navigation controls of Google Maps, which require the user to pinch the screen to zoom in or out, and swipe/drag to move to a different location.
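To sketch what such an algorithm does under the hood, here is a toy two-finger classifier in Python; the thresholds, the coordinate format, and the single start/end sampling are invented simplifications of what a real touch framework tracks continuously:

# Toy gesture classifier: if the distance between two fingers changes, it's
# a pinch (zoom); if both fingers translate together, it's a swipe/drag.
import math

def classify_two_finger_gesture(start, end, threshold=20.0):
    # start/end: ((x1, y1), (x2, y2)) finger positions in pixels
    d0 = math.dist(start[0], start[1])   # finger spread at the start
    d1 = math.dist(end[0], end[1])       # finger spread at the end
    if abs(d1 - d0) > threshold:
        return "pinch-out (zoom in)" if d1 > d0 else "pinch-in (zoom out)"
    # Fingers kept their spread but moved in parallel: a swipe/drag.
    dx = (end[0][0] - start[0][0] + end[1][0] - start[1][0]) / 2
    dy = (end[0][1] - start[0][1] + end[1][1] - start[1][1]) / 2
    if math.hypot(dx, dy) > threshold:
        return "swipe/drag"
    return "tap/no gesture"

print(classify_two_finger_gesture(((100, 300), (200, 300)),
                                  ((60, 300), (240, 300))))  # pinch-out (zoom in)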
[...] Enhancing user engagement is one of the key benefits of gesture-based interactions, allowing users to directly manipulate screen elements to quickly reach their goal. The direct nature of using gestures can create a better sense of connection when using an application, not only boosting user satisfaction but also increasing loyalty, ensuring the app has longevity.
Our Shy Submitter has provided the following story:
Scientific American is running an opinion piece claiming that the t-test originated with a scientist working at the Guinness Brewery in the early 1900s: https://www.scientificamerican.com/article/how-the-guinness-brewery-invented-the-most-important-statistical-method-in/
Near the start of the 20th century, Guinness had been in operation for almost 150 years and towered over its competitors as the world's largest brewery. Until then, quality control on its products had consisted of rough eyeballing and smell tests. But the demands of global expansion motivated Guinness leaders to revamp their approach to target consistency and industrial-grade rigor. The company hired a team of brainiacs and gave them latitude to pursue research questions in service of the perfect brew. The brewery became a hub of experimentation to answer an array of questions: Where do the best barley varieties grow? What is the ideal saccharine level in malt extract? How much did the latest ad campaign increase sales?
Amid the flurry of scientific energy, the team faced a persistent problem: interpreting its data in the face of small sample sizes. One challenge the brewers confronted involves hop flowers, essential ingredients in Guinness that impart a bitter flavor and act as a natural preservative. To assess the quality of hops, brewers measured the plants' soft resin content. Let's say they deemed 8 percent a good and typical value. Testing every flower in the crop wasn't economically viable, however. So they did what any good scientist would do and tested random samples of flowers.
The fine article goes on to illustrate the difference between the t-distribution and the normal distribution, and also explains why the method is often called the "Student" test.
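For the curious, the kind of one-sample test Gosset developed is a one-liner with SciPy; the hop-resin numbers below are invented for illustration:

# Did this small sample of hop flowers deviate from the 8% soft-resin target?
from scipy import stats

resin_pct = [7.6, 8.4, 7.9, 8.1, 7.3, 7.8]   # measured soft-resin content, invented
t_stat, p_value = stats.ttest_1samp(resin_pct, popmean=8.0)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value means this small sample gives no evidence the crop deviates
# from the 8% standard; Student's t distribution (rather than the normal)
# accounts for the extra uncertainty of estimating the variance from just n = 6.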
I wonder if it rubs off--can you drink some Guinness Stout and then pass your stat class final exam?
Arthur T Knackerbracket has processed the following story:
The closure affects fewer than 50 U.S. employees, but the impact on cybersecurity could be far more significant.
Kaspersky Lab, a Russian cybersecurity and antivirus software company, announced it will start shutting down all of its operations in the U.S. on July 20. The departure was inevitable after 12 of the company’s executives were hit with sanctions, and the company’s products were banned from sale in the U.S.
Kaspersky Lab told BleepingComputer of the pending closure and confirmed it would lay off all of its U.S.-based employees. Reportedly, the shutdown affects fewer than 50 employees in the U.S. The impact on cybersecurity could be much greater, since the company’s researchers have been responsible for stopping or slowing countless major security exploits.
The United States government has claimed that Kaspersky’s continued operations in the U.S. posed a significant privacy risk. Since Kaspersky is based in Russia, officials worry the Russian government could exploit the cybersecurity firm to collect and weaponize sensitive U.S. information.
In June, the Department of Commerce’s Bureau of Industry & Security (BIS) issued sanctions on Kaspersky. A Final Determination hearing resulted in Kaspersky being banned from providing any antivirus or cybersecurity solutions to anyone in the United States. Kaspersky’s customers in the U.S. have until September 29, 2024, to find alternative security and antivirus software.
Kaspersky told BleepingComputer that it had “carefully examined and evaluated the impact of the U.S. legal requirements and made this sad and difficult decision as business opportunities in the country are no longer viable.” After all, it’s hard to run a business that provides cybersecurity and antivirus solutions when you’re banned from doing so.
The BIS placed Kaspersky Lab and its U.K. holding company on the U.S. government’s Entity List because of their ties to Russia. This prevented Kaspersky from conducting business in the U.S. At the same time, a dozen members of Kaspersky’s board of executives and leadership were individually sanctioned.
These sanctions froze the executives’ U.S. assets and prevented access to them until the sanctions were lifted. While Kaspersky insisted the ban was based on theoretical concerns rather than evidence of wrongdoing, sources close to the matter have said otherwise. Russian backdoors into Kaspersky’s software are an “open secret,” they said, and a Commerce Department official stated the department believes it is more than just a theoretical threat.
https://arstechnica.com/gadgets/2024/07/report-apple-approves-epic-games-store-on-ios-in-europe/
It's been a whirlwind journey of stops and starts, but AppleInsider reports the Epic Games Store for iOS in the European Union has passed Apple's notarization process.
This paves the way for Epic CEO Tim Sweeney to realize his long-stated goal of launching an alternative game store on Apple's closed platform—at least in Europe.
[...] Apple's new policies allow for alternative app marketplaces but with some big caveats regarding the deal that app developers agree to. We discussed it in some detail earlier this year.
[...] Even after the shift, Apple is said to have rejected the Epic Games Store app twice. The rejections were over specific rules about the copy and shape of buttons within the app, though not about its primary function.
[...] After those rejections, Epic took to X to accuse Apple of rejecting the app in a way that was "arbitrary, obstructive, and in violation of the DMA." Epic claimed it followed Apple's suggested design conventions for the buttons and noted that the copy matched language it has been using in its store on other platforms for a long time.
Not long after, Apple went ahead and approved the app despite the disagreement over the copy and button designs. However, AppleInsider reported that Apple will still require Epic to change the copy and buttons later. Epic disputed that on X, and Sweeney offered his own take: