SoylentNews is people

posted by janrinok on Thursday July 18, @08:44PM   Printer-friendly
from the here-comes-the-sun dept.

Is space-based solar power a costly, risky pipe dream? Or is it a viable way to combat climate change? Although beaming solar power from space to Earth could ultimately involve transmitting gigawatts, the process could be made surprisingly safe and cost-effective, according to experts from Space Solar, the European Space Agency, and the University of Glasgow.

But we're going to need to move well beyond demonstration hardware and solve a number of engineering challenges if we want to develop that potential.

"The idea [has] been around for just over a century," said Nicol Caplin, deep space exploration scientist at the ESA, on a Physics World podcast. "The original concepts were indeed sci-fi. It's sort of rooted in science fiction, but then, since then, there's been a trend of interest coming and going."

Researchers are scoping out multiple designs for space-based solar power. Matteo Ceriotti, senior lecturer in space systems engineering at the University of Glasgow, wrote in The Conversation that many designs have been proposed.

Using microwave technology, the solar array for an orbiting power station that generates a gigawatt of power would have to be over 1 square kilometer in size, according to a Nature article by senior reporter Elizabeth Gibney. "That's more than 100 times the size of the International Space Station, which took a decade to build." It would also need to be assembled robotically, since the orbiting facility would be uncrewed.

Space Solar is working on a satellite design called CASSIOPeiA, which Physics World describes as looking "like a spiral staircase, with the photovoltaic panels being the 'treads' and the microwave transmitters—rod-shaped dipoles—being the 'risers.'" It has a helical shape with no moving parts.

Ceriotti wrote that SPS-ALPHA, another design, has a large solar-collector structure that includes many heliostats, which are modular small reflectors that can be moved individually. These concentrate sunlight onto separate power-generating modules, after which it's transmitted back to Earth by yet another module.
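As a sanity check on that square-kilometer figure, here is a rough back-of-envelope calculation of the collector area needed to deliver one gigawatt from orbit. The efficiency values are illustrative assumptions, not figures from the article:

```python
# Area needed to deliver 1 GW to the grid from an orbiting array.
SOLAR_FLUX = 1361.0   # W/m^2: solar constant above the atmosphere
PV_EFF = 0.30         # assumed photovoltaic conversion efficiency
LINK_EFF = 0.50       # assumed end-to-end microwave link efficiency

delivered_per_m2 = SOLAR_FLUX * PV_EFF * LINK_EFF  # ~200 W/m^2
area_km2 = 1e9 / delivered_per_m2 / 1e6            # ~5 km^2
print(f"{area_km2:.1f} km^2")
```

Under these assumptions the array comes out at a few square kilometers, consistent with "over 1 square kilometer"; more optimistic link efficiencies shrink it toward that lower bound.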

[...] For microwave radiation from a space-based solar power installation, "the only known effect of those frequencies on humans or living things is tissue heating," Vijendran said. "If you were to stand in such a beam at that power level, it would be like standing in the... evening sun." Still, Caplin said that more research is needed to study the effects of these microwaves on humans, animals, plants, satellites, infrastructure, and the ionosphere.

Getting that across to the public may remain a challenge, however. "There's still a public perception issue to work through, and it's going to need strong engagement to bring this to market successfully," Adlen said.

Vijendran said he expects the cost of space-based solar power will eventually fall to a point where it is competitive with solar and wind power on Earth, which is below $50 per megawatt-hour. According to the Energy Information Administration's 2022 publication on this subject, both solar power and onshore wind cost around $20–$45 per megawatt-hour in 2021.

"The first major decision point would be to implement a... small-scale in-space demo mission for launch sometime around 2030," Vijendran said.

Outside of the ESA, Caltech has demonstrated a lightweight prototype that converts sunlight to radio-frequency electrical power and transmits it as a beam. The university has been researching modular, foldable, ultralight space-based solar power equipment.

"My view is that much like the world of connectivity went from wired to wireless, so we're going to see the world of power move in a similar direction," Adlen said. International cooperation will be key to creating space-based solar power stations if projects like these move forward.

Original Submission

posted by janrinok on Thursday July 18, @04:03PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The latest Linux kernel is here, with relatively few new features but better support for several hardware platforms, including non-Intel kit.

Linus Torvalds announced kernel 6.10 this weekend and as usual it contains so many hundreds of changes that we can't summarize them all – for instance, the Kernel Newbies summary for this release has 636 bullet points.

The release means that the merge window is now open for proposed changes to go into kernel 6.11, which will probably appear around September. That means it is likely to be too late for both Ubuntu and Fedora's second releases of the year, so kernel 6.10 may be what you get around that time.

There are, as always, some fresh software features in the new release, of which maybe the most interesting is a new memory-management API call called mseal(). Modern CPUs allow blocks of memory to be marked as special in various ways – for example, as non-executable. AMD introduced the NX bit over 20 years ago as part of its x86-64 specification, and a couple of years later Intel added it to its implementation. The mseal() call protects these mappings: it makes them immutable for the life of that process. The patch was submitted by Google last year, and it's likely it will first be used by Chrome and Chromium-based browsers – but probably by other things later. The call reproduces settings which already exist in OpenBSD, as well as the XNU kernel used in multiple Apple OSes.
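Since mseal() is brand new, there is no glibc wrapper yet and a program has to issue the raw syscall. The sketch below (via ctypes, assuming an x86-64 Linux box where the syscall number is 462; check your kernel headers) seals an anonymous mapping and then shows that mprotect() on it is refused; on kernels before 6.10 the call simply reports as unsupported:

```python
import ctypes
import os

# mseal() landed in kernel 6.10; 462 is the x86-64 syscall number
# (an assumption to verify against your kernel headers).
SYS_mseal = 462
PAGE = os.sysconf("SC_PAGE_SIZE")

PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

def try_mseal():
    """Map a page, seal it, then try to alter its protection."""
    addr = libc.mmap(None, PAGE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
    if addr is None or addr == ctypes.c_void_p(-1).value:
        return "mmap failed"
    if libc.syscall(SYS_mseal, ctypes.c_void_p(addr),
                    ctypes.c_size_t(PAGE), 0) != 0:
        return "mseal unsupported"   # e.g. ENOSYS on pre-6.10 kernels
    # The mapping is now immutable: mprotect() must fail with EPERM.
    if libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(PAGE),
                     PROT_READ) != 0:
        return "sealed"
    return "not sealed"

print(try_mseal())
```

On a 6.10 kernel this prints "sealed"; the point of the API is exactly that no later call in the process, however compromised, can undo those memory protections.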

Additionally, there are small improvements to various filesystems, including bcachefs, Btrfs, ext4, XFS, F2FS, EROFS, and OCFS2. There's support for a much wider range of compression algorithms for the kernel boot image.

However, for this release, more changes overall seem to be in the direction of improved hardware support, over a wide range of devices. On Linux's native x86 (increasingly, x86-64) architecture, this includes better support for hardware encryption, which among other things should deliver faster disk encryption. There's also better TPM2 chip support, improved power management and handling of dynamic CPU speeds. Multiple wired and wireless network drivers have been tuned, and there's support for various new models of CPU and GPU.

Arm support has been improved in multiple areas, both for server processors and for the CPUs and SoCs used in laptops, including Arm's Mali family of GPUs. If the Qualcomm Snapdragon-based Lenovo Thinkpad X13s appealed to you as a Linux machine, you might also be interested in its inexpensive indirect ancestor, Acer's Aspire 1 A114-61. This machine's hardware is now more or less fully supported. Although it was a 2021 model, you may be able to find a second-hand unit for $NOT_A_LOT if you fancy an Arm64 Linux laptop. The MIPI webcam sensor used in the X13s, as well as in several Intel-based Thinkpad models, is now supported too.

Other Arm-powered kit with new support includes several gaming handhelds, such as the Gameforce Chi, and several Anbernic devices. As we have noted previously when looking at SteamOS, gaming support is now a factor visibly driving improvements in Linux.

It's not just Arm: there's also improved support for RISC-V hardware, for instance the budget Milk-V Mars SBC. This extends to the still quite new support for Rust in the kernel. The revision of Rust supported in the kernel has been bumped to version 1.78.0. As we noted when Rust support was first added, whereas the kernel is usually built with GCC, Rust is usually compiled with LLVM and that mainly targets x86-64 and Arm. Now, kernel Rust can be used in RISC-V as well.

Original Submission

posted by martyb on Thursday July 18, @11:16AM   Printer-friendly

Fats from thin air: Startup makes butter using CO2 and water:

Bill Gates has thrown his weight – and his money – behind a Californian startup that believes it can make a rich, fatty spread akin to butter, using just carbon dioxide and hydrogen. And 'butter' is just the start, with milk, ice-cream, cheese, meat and tropical oils also in development.

The San Jose company, Savor, uses a thermochemical process to create its animal-like fat, which is free of the environmental footprint of both the dairy industry and plant-based alternatives.

"They started with the fact that all fats are made of varying chains of carbon and hydrogen atoms," Gates wrote in a blog post. "Then they set out to make those same carbon and hydrogen chains – without involving animals or plants. They ultimately developed a process that involves taking carbon dioxide from the air and hydrogen from water, heating them up, and oxidizing them to trigger the separation of fatty acids and then the formulation of fat."

Many of us know the stats – according to the United Nations Food and Agriculture Organization (FAO), livestock are responsible for 14.5% of all global greenhouse gas emissions, and animal-fat alternatives that use palm oil contribute to widespread deforestation and biodiversity loss – but also know how delicious dairy products are. So will Gates' enthusiastic support be enough to get people excited about butter made from CO2?

"The idea of switching to lab-made fats and oils may seem strange at first," Gates wrote. "But their potential to significantly reduce our carbon footprint is immense. By harnessing proven technologies and processes, we get one step closer to achieving our climate goals."

Savor's 'butter' is easily produced and scalable, but convincing people to swap out butter and other dairy products for 'experimental' foods will remain a challenge for the foreseeable future. Gates is hoping, however, that his support will do more than start a conversation.

"The big challenge is to drive down the price so that products like Savor's become affordable to the masses – either the same cost as animal fats or less," Gates wrote. "Savor has a good chance of success here, because the key steps of their fat-production process already work in other industries.

"The process doesn't release any greenhouse gases, and it uses no farmland and less than a thousandth of the water that traditional agriculture does," he added. "And most important, it tastes really good – like the real thing, because chemically it is."

Savor's research was published in the journal Nature Sustainability.

Source: Savor

Original Submission

posted by janrinok on Thursday July 18, @06:34AM   Printer-friendly
from the validate-your-source,-Luke! dept.

Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers' computers when executed.

The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—were attempts to appear as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon's S3 cloud service. The fake files included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js.
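Both names are classic typosquats: almost identical to the legitimate package name, but not quite. A minimal sketch of the kind of name-similarity check a registry scanner might apply (an illustration only, not Phylum's actual detection logic):

```python
import difflib

# Known legitimate package names to compare against.
KNOWN_GOOD = ["aws-s3-object-multipart-copy"]

def looks_like_typosquat(name, threshold=0.85):
    """Flag names suspiciously close to, but not equal to,
    a known legitimate package name."""
    for good in KNOWN_GOOD:
        if name == good:
            return False          # the real package itself is fine
        if difflib.SequenceMatcher(None, name, good).ratio() >= threshold:
            return True           # near-miss: likely a typosquat
    return False

for pkg in ("img-aws-s3-object-multipart-copy",
            "legacyaws-s3-object-multipart-copy",
            "left-pad"):
    print(pkg, looks_like_typosquat(pkg))
```

Both malicious names score above 0.9 similarity against the real package, while an unrelated name does not. Real scanners combine checks like this with behavioral analysis of the package contents, since the payload here hid in an extra file rather than in the name.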

[...] "We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days," researchers from Phylum, the security firm that spotted the packages, wrote. "This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time."

[...] In the past 17 months, threat actors backed by the North Korean government have targeted developers twice, in one case using a zero-day vulnerability.

Phylum researchers provided a deep-dive analysis of how the concealment worked.

One of the most innovative methods in recent memory for concealing an open source backdoor was discovered in March, just weeks before it was to be included in a production release of XZ Utils.

[...] The person or group responsible spent years working on the backdoor. Besides the sophistication of the concealment method, the entity devoted large amounts of time to producing high-quality code for open source projects in a successful effort to build trust with other developers.

In May, Phylum disrupted a separate campaign that backdoored a package available in PyPI that also used steganography, a technique that embeds secret code into images.

"In the last few years, we've seen a dramatic rise in the sophistication and volume of malicious packages published to open source ecosystems," Phylum researchers wrote. "Make no mistake, these attacks are successful. It is absolutely imperative that developers and security organizations alike are keenly aware of this fact and are deeply vigilant with regard to open source libraries they consume."

Related stories on SoylentNews:
Trojanized jQuery Packages Found on Npm, GitHub, and jsDelivr Code Repositories - 20240713
48 Malicious Npm Packages Found Deploying Reverse Shells on Developer Systems - 20231104
Open-Source Security: It's Too Easy to Upload 'Devastating' Malicious Packages, Warns Google - 20220504
Dev Corrupts NPM Libs 'Colors' and 'Faker' Breaking Thousands of Apps - 20220111
Malicious NPM Packages are Part of a Malware "Barrage" Hitting Repositories - 20211213
Heavily Used Node.js Package Has a Code Injection Vulnerability - 20210227
Discord-Stealing Malware Invades NPM Packages - 20210124
Here's how NPM Plans to Improve Security and Reliability in 2019 - 20181217
NPM Fails Worldwide With "ERR! 418 I'm a Teapot" Error - 20180530
Backdoored Python Library Caught Stealing SSH Credentials - 20180511

Original Submission

posted by janrinok on Thursday July 18, @01:50AM   Printer-friendly
from the I'm-going-outside-to-have-a-smoke dept.

Arthur T Knackerbracket has processed the following story:

A research team from the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen has investigated the risk of fire on spacecraft in a recent study. The results show that fires on planned exploration missions, such as a flight to Mars, could spread significantly faster than, for example, on the International Space Station (ISS). This is due to the planned adjustment to a lower ambient pressure on spacecraft.

"A fire on board a spacecraft is one of the most dangerous scenarios in space missions," explains Dr. Florian Meyer, head of the Combustion Technology research group at ZARM. "There are hardly any options for getting to a safe place or escaping from a spacecraft. It is therefore crucial to understand the behavior of fires under these special conditions."

The ZARM research team has been conducting experiments on the propagation of fires in reduced gravity since 2016. The environmental conditions are similar to those on the ISS—with an oxygen level in the breathing air and an ambient pressure similar to that on Earth, as well as forced air circulation. These earlier experiments have shown that flames behave completely differently in weightlessness than on Earth.

A fire burns with a smaller flame and spreads more slowly, which means it can go unnoticed for a long time. However, it burns hotter and can therefore also ignite materials that are basically non-flammable on Earth. In addition, incomplete combustion can produce more toxic gases.

Future space missions are currently being planned with modified atmospheric conditions. The crew will be exposed to lower pressure. This offers two crucial advantages: The astronauts can prepare for an external mission more quickly and the spacecraft can be built lighter, i.e. with less mass, which saves fuel. The disadvantage: at lower pressure, the crew needs a higher proportion of oxygen in the breathing air—and this can have dangerous consequences in the event of a fire.

We know from everyday situations, from lighting barbecue charcoal to fighting wildfires, that the speed of the air flow also has a strong influence on how a fire spreads.

The current series of experiments on which the study is based was carried out under microgravity conditions in the Drop Tower Bremen. Florian Meyer and his team observed the propagation of flames after lighting acrylic glass foils and investigated how the fire reacts when one of the three parameters—ambient pressure, oxygen content and flow velocity—is changed in different proportions.

The results of the experiments are clear: although the lower pressure has a dampening effect, the fire-accelerating effect of the increased oxygen level predominates. Increasing the oxygen level from 21% (as on the ISS) to the planned 35% for future space missions causes a fire to spread three times faster, an enormous increase in the danger to the crew in the event of a fire.
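Those numbers imply a strongly nonlinear oxygen dependence. If, purely for illustration, flame-spread rate is modeled as a power law in the oxygen fraction (this is not the paper's fitted correlation), the reported tripling between 21% and 35% pins down the exponent:

```python
import math

# rate ~ X_O2 ** n; a 21% -> 35% oxygen increase triples the rate:
# 3 = (35/21) ** n, so n = ln 3 / ln(35/21)
n = math.log(3) / math.log(35 / 21)
print(f"implied exponent n = {n:.2f}")
```

An exponent above 2 means even modest oxygen enrichment is paid for more than quadratically in fire risk, which is why the atmosphere trade-off matters so much for mission design.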

Dr. Meyer says, "Our results highlight critical factors that need to be considered when developing fire safety protocols for astronautic space missions. By understanding how flames spread under different atmospheric conditions, we can mitigate the risk of fire and improve the safety of the crew."

More information: Hans-Christoph Ries et al, Effect of oxygen concentration, pressure, and opposed flow velocity on the flame spread along thin PMMA sheets, Proceedings of the Combustion Institute (2024). DOI: 10.1016/j.proci.2024.105358

Original Submission

posted by janrinok on Wednesday July 17, @09:03PM   Printer-friendly
from the there's-only-one-gesture-that's-needed dept.

The growing reach of gesture-based user interfaces:

User interface (UI) design is currently experiencing a transition from traditional graphical user interfaces (GUIs) to systems designed to recognize a person's gestures and movements.

Hence, in this blog, we will discuss the possible implications of this groundbreaking transition in terms of user experience (UX) and the accessibility of modern interfaces. Likewise, we'll explore how developers adapt to the technological shift to deliver innovative solutions while outlining the challenges of adopting gesture-based interactions.

Gesture-based interactions are quickly becoming a standard and the technology is widely considered the future of UI. Therefore, modern devices and applications must adapt to meet the needs of their users. On top of that, recent data shows that 82% of users prefer apps with gesture-based controls.

The algorithms built into touch-screen devices such as smartphones recognize a range of touch types, from scrolling to swiping. Thanks to this technology, users can navigate applications with simple gestures like pinches or taps. A classic example is the navigation in Google Maps, where the user pinches the screen to zoom in or out and swipes or drags to move to a different location.
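Underneath, a pinch gesture reduces to comparing the distance between two touch points over time. A toy version of that mapping, with hypothetical screen coordinates rather than any platform's actual touch API:

```python
import math

def pinch_scale(start_touches, end_touches):
    """Zoom factor implied by a two-finger pinch.

    Each argument is a pair of (x, y) screen coordinates for the two
    fingers; > 1 means zoom in (fingers spread), < 1 means zoom out.
    """
    d0 = math.dist(*start_touches)   # finger separation at gesture start
    d1 = math.dist(*end_touches)     # finger separation now
    return d1 / d0 if d0 else 1.0

# Fingers 100 px apart spreading to 200 px apart: zoom in by 2x.
print(pinch_scale([(100, 100), (200, 100)], [(50, 100), (250, 100)]))
```

Real gesture recognizers layer timing, velocity, and disambiguation (pinch vs. rotate vs. two-finger drag) on top of this ratio, but the core signal is just the changing distance.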

[...] Enhancing user engagement is one of the key benefits of gesture-based interactions, allowing users to directly manipulate screen elements to quickly reach their goal. The direct nature of using gestures can create a better sense of connection when using an application, not only boosting user satisfaction but also increasing loyalty, ensuring the app has longevity.

Original Submission

posted by janrinok on Wednesday July 17, @04:12PM   Printer-friendly
from the every-pint-the-same dept.

Our Shy Submitter has provided the following story:

Scientific American is running an opinion piece that traces the origin of the t-test to a scientist working at the Guinness Brewery in the early 1900s:

Near the start of the 20th century, Guinness had been in operation for almost 150 years and towered over its competitors as the world's largest brewery. Until then, quality control on its products had consisted of rough eyeballing and smell tests. But the demands of global expansion motivated Guinness leaders to revamp their approach to target consistency and industrial-grade rigor. The company hired a team of brainiacs and gave them latitude to pursue research questions in service of the perfect brew. The brewery became a hub of experimentation to answer an array of questions: Where do the best barley varieties grow? What is the ideal saccharine level in malt extract? How much did the latest ad campaign increase sales?

Amid the flurry of scientific energy, the team faced a persistent problem: interpreting its data in the face of small sample sizes. One challenge the brewers confronted involves hop flowers, essential ingredients in Guinness that impart a bitter flavor and act as a natural preservative. To assess the quality of hops, brewers measured the plants' soft resin content. Let's say they deemed 8 percent a good and typical value. Testing every flower in the crop wasn't economically viable, however. So they did what any good scientist would do and tested random samples of flowers.
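The statistic Gosset developed handles exactly this situation: a small sample tested against a reference mean. A sketch with made-up resin measurements (the 8 percent target is from the article; the five data points are hypothetical):

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(sample)
    mean = statistics.fmean(sample)
    s = statistics.stdev(sample)            # sample standard deviation
    return (mean - mu0) / (s / math.sqrt(n))

# Hypothetical soft-resin measurements (%) from five hop flowers,
# tested against the "good and typical" value of 8 percent.
resin = [7.6, 8.1, 7.9, 8.4, 7.2]
t_stat = one_sample_t(resin, 8.0)
print(f"t = {t_stat:.2f}")
```

With only five flowers, a t this small is well within chance variation, so the brewers would have no reason to reject the crop; Gosset's insight was that for small n the right reference curve is the heavier-tailed t-distribution, not the normal.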

The fine article goes on to illustrate the difference between the t-distribution and the normal distribution, and also explains why the method is often called "Student's" t-test.

I wonder if it rubs off: can you drink some Guinness Stout and then pass your stat class final exam?

Original Submission

posted by janrinok on Wednesday July 17, @11:25AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The closure affects fewer than 50 U.S. employees, but the impact on cybersecurity could be far more significant.

Kaspersky Lab, a Russian cybersecurity and antivirus software company, announced it will start shutting down all of its operations in the U.S. on July 20. The departure was inevitable after 12 of the company’s executives were hit with sanctions, and the company’s products were banned from sale in the U.S.

Kaspersky Lab told BleepingComputer of the pending closure and confirmed it would lay off all of its U.S.-based employees. Reportedly, the shutdown affects fewer than 50 employees in the U.S., but the impact on cybersecurity could be much greater, since the company's researchers have been responsible for stopping or slowing countless major security exploits.

The United States government has claimed that Kaspersky’s continued operations in the U.S. posed a significant privacy risk. Since Kaspersky is based in Russia, officials worry the Russian government could exploit the cybersecurity firm to collect and weaponize sensitive U.S. information.

In June, the Department of Commerce’s Bureau of Industry & Security (BIS) issued sanctions on Kaspersky. A Final Determination hearing resulted in Kaspersky being banned from providing any antivirus or cybersecurity solutions to anyone in the United States. Kaspersky’s customers in the U.S. have until September 29, 2024, to find alternative security and antivirus software.

Kaspersky told BleepingComputer that it had “carefully examined and evaluated the impact of the U.S. legal requirements and made this sad and difficult decision as business opportunities in the country are no longer viable.” After all, it’s hard to run a business that provides cybersecurity and antivirus solutions when you’re banned from doing so.

The BIS placed Kaspersky Lab and its U.K. holding company on the U.S. government’s Entity List because of their ties to Russia. This prevented Kaspersky from conducting business in the U.S. At the same time, a dozen members of Kaspersky’s board of executives and leadership were individually sanctioned.

These sanctions froze the executives’ U.S. assets and prevented access to them until the sanctions were lifted. While Kaspersky insisted the ban was based on theoretical concerns rather than evidence of wrongdoing, sources close to the matter have said otherwise. Russian backdoors into Kaspersky’s software are an “open secret,” they said, and a Commerce Department official stated the department believes it is more than just a theoretical threat.

Original Submission

posted by hubie on Wednesday July 17, @06:42AM   Printer-friendly
from the bleeping dept.

It's been a whirlwind journey of stops and starts, but AppleInsider reports the Epic Games Store for iOS in the European Union has passed Apple's notarization process.

This paves the way for Epic CEO Tim Sweeney to realize his long-stated goal of launching an alternative game store on Apple's closed platform—at least in Europe.

[...] Apple's new policies allow for alternative app marketplaces but with some big caveats regarding the deal that app developers agree to. We discussed it in some detail earlier this year.

[...] Even after the shift, Apple is said to have rejected the Epic Games Store app twice. The rejections were over specific rules about the copy and shape of buttons within the app, though not about its primary function.

[...] After those rejections, Epic took to X to accuse Apple of rejecting the app in a way that was "arbitrary, obstructive, and in violation of the DMA." Epic claimed it followed Apple's suggested design conventions for the buttons and noted that the copy matched language it has been using in its store on other platforms for a long time.

Not long after, Apple went ahead and approved the app despite the disagreement over the copy and button designs. However, AppleInsider reported that Apple will still require Epic to change the copy and buttons later. Epic disputed that on X, and Sweeney offered his own take:

Original Submission

posted by hubie on Wednesday July 17, @01:55AM   Printer-friendly
from the don't-expose-my-programming-language-to-water dept.

Why Rust is becoming the programming language of choice for many high-level developers:

Rust is revolutionizing high-performance Web service development with its memory safety, resource management, and speed. Initially used in operating systems and gaming engines, Rust now excels in web development, offering low-level control and high-level concurrency. Its advanced ownership model and robust type system eliminate memory errors at compile time, enhancing performance and reliability.

[...] Rust's popularity in the software development community continues to rise, with even the likes of Linus Torvalds giving the language his blessing, and announcing driver integration for major subsystems sometime in 2024.

So, it's clear Rust is 'one of the big boys' by now, but why exactly is it one of the most popular programming languages? Well, it's down to:

  • Memory safety without garbage collection
  • [...] Thread safety
  • [...] Performance
  • [...] Syntax innovations
  • [...] Tooling and ecosystem

These capabilities make Rust a popular option for enterprise-level applications, providing sufficient speeds to execute processes like Workday staff augmentation, customizing existing ERP software, and other demanding backend tasks.

The article goes on to describe specific features that make Rust popular and also discusses the key challenges to Rust adoption, namely learning curve and complexity.


Original Submission

posted by hubie on Tuesday July 16, @09:10PM   Printer-friendly
from the hollow-moon-cheese-or-nazis dept.

Scientists have for the first time discovered a cave on the Moon.

At least 100m deep, it could be an ideal place for humans to build a permanent base, they say.

It is just one in probably hundreds of caves hidden in an "underground, undiscovered world", according to the researchers.

Astronomers say they've found a possible way to get into caves under the Moon's surface on the Sea of Tranquillity.

[...] "These caves have been theorized for over 50 years, but it is the first time ever that we have demonstrated their existence."

The Moon's surface is dotted with pits, sometimes called skylights, which have been formed by lava tubes caving in.

"Although more than 200 pits have now been detected in various lunar geological settings and latitudes, it remains uncertain whether any of these openings could lead to extended cave conduits underground," write the researchers in their paper.

Time to regress to become cave dwellers again, just on another celestial body.

Original Submission

posted by hubie on Tuesday July 16, @04:23PM   Printer-friendly
from the I'm-sorry-[Bill]-I'm-afraid-I-can't-do-that dept.

Microsoft has withdrawn from its non-voting observer role on OpenAI's board, while Apple has opted not to take a similar position, report Axios and the Financial Times. The ChatGPT maker plans to update its business partners and investors through regular meetings instead of board representation. The development comes as regulators in the EU and US increase their scrutiny of Big Tech's investments in AI startups due to concerns about stifling competition.

Microsoft accepted a non-voting position on OpenAI's board in November following the ouster and reinstatement of OpenAI CEO Sam Altman.

Last week, Bloomberg reported that Apple's Phil Schiller, who leads the App Store and Apple Events, might join OpenAI's board in an observer role as part of an AI deal. However, the Financial Times now reports that Apple will not take up such a position, citing a person with direct knowledge of the matter. Apple did not immediately respond to our request for comment.

Microsoft remains a critical financial and technology resource for OpenAI, having invested over $10 billion in the company since early 2023.

While no official source has yet linked Microsoft's board withdrawal (and Apple's change of direction on a potential OpenAI board position) to regulatory scrutiny, it's unlikely to be a coincidence. Regulators in both the US and Europe are worried that Big Tech's heavy influence over fast-growing AI startups may unreasonably edge out competition and establish de facto monopolies over key technologies, stifling smaller competitors.

Even though Microsoft's financial ties run deep into OpenAI, as the Financial Times notes, the ChatGPT maker states: "While our partnership with Microsoft includes a multibillion dollar investment, OpenAI remains an entirely independent company governed by the OpenAI Nonprofit."

Original Submission

posted by hubie on Tuesday July 16, @11:40AM   Printer-friendly
from the IoT dept.

Mbed OS and its platform are shutting down in 2026, although rumor has it that almost all of the devs have already been downsized.

A couple of possible discussion points from the perspective of someone who used it for STM32:

It was one of those FOSS-but-not-really products that was completely corporate controlled and funded and written, but under a FOSS license. It never really gained any traction outside corporate. There is a winner-take-all mentality in microcontroller RTOS... why use Mbed if Zephyr supports 10x as much "stuff" out of the box? Also, given the primary source of funding, it really only practically functioned on ARM processors. Pragmatically it seems multiplatform RTOS are the only ones that survive long-term, single platform seems always doomed, a bit different than the desktop/laptop/phone market.

There was something of a product-tying thing going on with Pelion IoT cloud platform, which used to be free, but the free tier disappeared. It was pretty awesome for hobbyist use until they intentionally got rid of the hobbyists, presumably to "save money". However this seems to be a common pattern for decades, the devs who influence million dollar contracts during the day want to play with pirated/free versions at home at night, so arguably Pelion and thus Mbed shot themselves in their own foot.

I wonder how much C19 killed Mbed a couple of years later. With STM32 and other ARM microcontrollers unobtainable for a couple of years, there was no way to get hardware to run Mbed.

It was a bit memory-hungry; IIRC by the time you got a full IoT platform with auto-updates and telemetry over WiFi working on commodity dev board hardware, you were out of either flash, ram, or both so you couldn't run your app.

I have happy memories of being introduced to LwM2M protocol; it was an interesting innovation on MQTT but a little too "organized" for widespread use. Take MQTT and "compress" by turning all common (and uncommon) nouns and verbs into integers; kind of like the old Apollo spacecraft computer, kind of like a fixed compression standard.
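That integer-coding idea can be sketched in a few lines of Python. The object and resource IDs below are real OMA LwM2M registry values (3303 = Temperature, 5700 = Sensor Value), but the helpers and the MQTT topic string are illustrative only, not actual protocol code:

```python
# Contrast a free-form MQTT topic with an LwM2M-style integer path.
# LwM2M addresses every readable value as /object_id/instance_id/resource_id,
# with the IDs drawn from a fixed registry -- the "nouns as integers" idea.

MQTT_TOPIC = "home/livingroom/temperature/value"  # free-form, self-describing

OBJECT_TEMPERATURE = 3303     # OMA-registered Temperature object
RESOURCE_SENSOR_VALUE = 5700  # OMA-registered Sensor Value resource

def lwm2m_path(object_id: int, instance: int, resource: int) -> str:
    """Build the compact integer path used to address a resource."""
    return f"/{object_id}/{instance}/{resource}"

def parse_path(path: str) -> tuple:
    """Recover (object, instance, resource) IDs from a path string."""
    obj, inst, res = path.strip("/").split("/")
    return int(obj), int(inst), int(res)

path = lwm2m_path(OBJECT_TEMPERATURE, 0, RESOURCE_SENSOR_VALUE)
print(path)                               # /3303/0/5700
print(len(path) < len(MQTT_TOPIC))        # True: far fewer bytes on the wire
```

The cost of that compactness is exactly the "too organized" complaint: both ends must share the registry, whereas any MQTT client can invent a topic on the spot.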

A final interesting discussion point: tool manufacturers going out of business is a pretty strong signal the bubble is over. The permanent solution to "The S in IoT stands for security" may very well be the IoT industry drying up and blowing away, and this shutdown is a sign of the start of the end.

Anyone else have fond memories of MbedOS? I thought it was pretty awesome back in the day, although I switched to Zephyr years ago. Other contemporary microcontroller or IoT comments?

Original Submission

posted by hubie on Tuesday July 16, @06:55AM   Printer-friendly
from the arguing-balls-and-strikes-is-not-permitted dept.

What do Don Denkinger and Jim Joyce have in common? If you're a baseball fan, you might recognize them as umpires famous for missing a critical call late in a game on national TV. Before Major League Baseball (MLB) embraced video-assisted replay (VAR), which it resisted long after other sports like football had demonstrated that replays could be used successfully, there was no way to reverse the missed calls. Even after MLB finally allowed VAR, by far the most frequent call in a game still cannot be reviewed: whether a pitch is a ball or a strike.

The technology to track the flight of a baseball and reliably determine balls and strikes has been in use for a couple of decades. Systems like QuesTec, PITCHf/x, and Statcast can accurately track the flight of a baseball and determine whether its trajectory crossed the strike zone when it reached home plate. Statcast determines not only each pitch's horizontal and vertical location when it crossed the plate, but a plethora of other data: the pitcher's release point in three dimensions, the velocity when the pitch left the pitcher's hand, its spin axis and rate, the pitch's acceleration in three dimensions, and a classification of the pitch type. Despite the capability to accurately call balls and strikes automatically, MLB still relies solely on human umpires for this call.

The horizontal location of the strike zone is identical for every pitch, requiring that some portion of the baseball pass above home plate. However, the vertical location is defined as being from the bottom of the hitter's kneecap to the midpoint between the top of the hitter's pants and the hitter's shoulders. This is affected by the hitter's height, body shape, and their batting stance. A hitter won't have exactly the same batting stance on any two pitches, so the actual strike zone varies slightly from pitch to pitch, even for the same hitter. This data is determined by Statcast while the pitch is in flight, and is recorded in the sz_bot (bottom of the strike zone in feet above ground) and sz_top (the top, with the same units) fields in Statcast data. The flight of the baseball is currently tracked by 12 Hawk-Eye cameras stationed throughout each stadium, five of which operate at 300 frames per second. The images from the different cameras can be used to pinpoint the location of the baseball within a few millimeters. The same type of camera is used for VAR in tennis matches to determine if a ball was out of bounds.

When Don Denkinger mistakenly called Jorge Orta safe at first base in the ninth inning of game 6 of the 1985 World Series, known in St. Louis simply as "The Call", it was followed by a series of poor plays by the Cardinals that led to them blowing a 1-0 lead and losing the game. The Cardinals proceeded to get blown out 11-0 in game 7, but Denkinger is often blamed for the Cardinals losing the series. Following the blown call, two St. Louis radio personalities doxxed Denkinger, who received hate mail and even death threats from irate fans. At the time, Cardinals manager Whitey Herzog was furious at Denkinger. After the series, however, Herzog became so dismayed by the harassment Denkinger received from St. Louis that he made public appearances with Denkinger to raise money for charity and try to get Cardinals fans to forgive the umpire.

Jim Joyce was also known for a missed force out at first base, this time on what should have been the last out of a perfect game for Armando Galarraga, an otherwise mediocre pitcher attempting to complete one of the rarest feats in all of baseball. The first base umpire generally watches whether the fielder's foot is on first base and when the runner's foot touches the base, while listening for the sound of the ball popping into the fielder's glove. It's an extremely difficult call that umpires get correct a remarkably high percentage of the time. In this case, Joyce believed that the runner, Jason Donald, reached the base before the baseball arrived in the fielder's glove, and he called the runner safe. Galarraga retired the next hitter, but there was no way after the game to correct the blown call. After seeing a replay, Joyce held a press conference in which he tearfully admitted that he blew the call and felt awful for costing Galarraga the perfect game.

Had MLB made use of the available technology, neither Denkinger nor Joyce would be remembered for missing calls. It's possible the Cardinals might have imploded and lost the World Series anyway. In the case of Joyce, the play would have been reviewed for a minute or two, the umpire would have raised his fist to signal an out, and the Detroit Tigers players and coaches would have run onto the field after a brief awkward pause to celebrate the perfect game. Denkinger and Joyce were excellent umpires, well-respected by players and managers, but both are mostly known for making a single bad call that could easily have been corrected with the proper VAR tools.

Despite the potential for technology to further assist umpires in getting calls correct, there is significant resistance to automatic balls and strikes. While the ball-tracking technology is widely accepted by tennis fans, there are concerns that baseball fans might see pitches that appear to be balls get called as strikes, and that the technology would be viewed as untrustworthy. Part of the issue is that the strike zone is actually a three-dimensional volume that is 17 inches wide and 17 inches deep. If the flight of the ball intersects any part of the zone, it's a strike. For pitches with a high rate of forward spin and a lot of vertical break, it could clip the bottom part of the zone at the front of home plate, be caught well below the batter's knees, and still get called a strike.
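That front-of-zone effect can be sketched with a toy trajectory model (my own simplification for illustration, not how Hawk-Eye or Statcast actually computes it): sample the ball's height through the plate's 17-inch depth and call a strike if any sample falls inside the zone.

```python
# A breaking ball can clip the zone at the front edge of the plate and
# still be caught well below the knees. Step through the 17-inch depth
# of the zone and check the ball's height at each point.

PLATE_DEPTH_FT = 17 / 12

def crosses_zone(z_front: float, vz: float, az: float, speed_fps: float,
                 sz_bot: float, sz_top: float, samples: int = 20) -> bool:
    """Check ball height at sample points through the plate's depth.

    z_front:   height (ft) at the front edge of the plate
    vz, az:    vertical velocity (ft/s) and acceleration (ft/s^2)
    speed_fps: forward speed of the pitch (ft/s)
    """
    for i in range(samples + 1):
        t = (PLATE_DEPTH_FT * i / samples) / speed_fps  # time into the plate
        z = z_front + vz * t + 0.5 * az * t * t
        if sz_bot <= z <= sz_top:
            return True
    return False

# A sharply dropping pitch that just clips the bottom front of the zone:
print(crosses_zone(z_front=1.65, vz=-18.0, az=-40.0, speed_fps=120.0,
                   sz_bot=1.6, sz_top=3.4))   # True
# The same shape arriving two inches lower misses the zone entirely:
print(crosses_zone(z_front=1.45, vz=-18.0, az=-40.0, speed_fps=120.0,
                   sz_bot=1.6, sz_top=3.4))   # False
```

A pitch the first call counts as a strike would look like a clear ball by the time the catcher receives it, which is precisely the trust problem the article describes.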

Some fans are also reluctant to end the skill of pitch framing, in which a catcher receives a pitch that's a ball but catches it in a manner that gives the illusion of a strike. The umpire is fooled into calling the pitch a strike anyway, giving an advantage to the pitcher. One estimate suggests that the best catchers were at one time able to save as many as 40 runs during a season with pitch framing, worth roughly four wins to the team. Some baseball purists have opposed using cameras to automatically call balls and strikes because it would put an end to pitch framing.

Instead of fully embracing robot umps to call balls and strikes, MLB intends to test a system of challenging balls and strikes at AAA this season, the highest level of minor league baseball. Teams will receive a certain number of challenges each game, where a ball or strike call can be reviewed and, if necessary, overturned. Part of the issue with fully embracing automatic balls and strikes is the need to determine how to set the "correct" strike zone. One option is to estimate it from the batter's height. The other is to determine it on every pitch based on the batter's stance, using the sz_top and sz_bot fields in Statcast data. If the strike zone were determined by the batter's stance on every pitch, a batter could use an exaggerated stance to make the strike zone artificially small, making it difficult to throw strikes. Although catchers would no longer be able to steal strikes with pitch framing, adjusting the strike zone for every pitch could allow hitters to steal balls.
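The two candidate zone definitions could look something like this in Python. The height ratios here are hypothetical round numbers for illustration, not MLB's actual formula; the per-pitch variant simply reads the measured Statcast-style fields:

```python
# Two ways to set the vertical strike zone, both returning (bottom, top)
# in feet above the ground.

def zone_from_height(height_ft: float) -> tuple:
    """Fixed zone estimated from the batter's height.

    The 0.27/0.55 ratios are assumed for illustration only.
    """
    return 0.27 * height_ft, 0.55 * height_ft

def zone_from_stance(sz_bot: float, sz_top: float) -> tuple:
    """Per-pitch zone taken directly from the measured batting stance."""
    return sz_bot, sz_top

# A 6-foot batter's fixed zone vs. a measurement from a deep crouch:
print(zone_from_height(6.0))        # roughly knee-to-chest for this height
print(zone_from_stance(1.4, 2.9))   # a smaller zone from an exaggerated stance
```

The trade-off in the article falls out directly: the first function is gameable by no one but ignores real stances, while the second tracks the stance faithfully and is therefore gameable by the batter.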

Original Submission

posted by mrpg on Tuesday July 16, @02:11AM   Printer-friendly
from the snafu dept.

Out-of-control heat is making Earth more "weird":

For the 13th consecutive month, Earth's average monthly temperature has broken all previous records, continuing a streak that began in June 2023. Significantly, the European climate service Copernicus added that the world has been 1.5 degrees Celsius (2.7 degrees Fahrenheit) higher than pre-industrial levels for more than a year, pushing the planet up against the threshold established by the 2015 Paris climate agreement.

"We see increases in deadly heat waves and droughts, but also an increased experience of 'global weirding' — more extreme weather events producing conditions that are entirely new for communities."

"It's a stark warning that we are getting closer to this very important limit set by the Paris Agreement," Copernicus senior climate scientist Nicolas Julien told NPR. "The global temperature continues to increase. It has, at a rapid pace."

[...] "Along with this warming, we see increases in deadly heat waves and droughts, but also an increased experience of 'global weirding,'" Dr. Twila Moon, a climatologist and deputy lead scientist at NASA's National Snow and Ice Data Center, told Salon. Such weirding, she explained, encompasses "more extreme weather events producing conditions that are entirely new for communities, weather whiplash as folks may experience quick swings between hot and cold or drought and flood, and many challenges for crops, wildlife, recreation, and being able to plan for what we previously considered normal weather conditions."

[...] "In addition," Trenberth added, "increasing conflicts around the world (Sudan, Russia-Ukraine, Gaza-Israel, etc.) and increasing wildfires have meant that many emissions are not adequately counted but they nonetheless contribute substantially to well measured atmospheric concentrations. These all counter the considerable progress made in cutting emissions elsewhere."

Original Submission