

posted by janrinok on Friday October 06 2023, @10:48PM

Arthur T Knackerbracket has processed the following story:

AI chip startup Tenstorrent announced on Monday that it will use Samsung's foundry to manufacture its next generation of products, with both partners alluding to potential future RISC-V collaborations.

"Samsung Foundry's commitment to advancing semiconductor technology aligns with our vision for advancing RISC-V and AI and makes them an ideal partner to bring our AI chiplets to market," beamed Tenstorrent CEO Jim Keller.

Samsung's head of US Foundry business echoed those sentiments: "Samsung's advanced silicon manufacturing nodes will accelerate Tenstorrent's innovations in RISC-V and AI for datacenter and automotive solutions."

Tenstorrent hopes to become an alternative to Nvidia for AI hardware. It builds some of its products – such as its 2023 standalone ML computer, Black Hole – on RISC-V CPU cores. Sixteen of them, to be exact.

In June, Samsung announced it was an official member of the RISC-V Software Ecosystem, which develops software to run on the open processor architecture.

The current deal for next-gen products, however, has the Korean megalith manufacturing Tenstorrent's Quasar chiplet using its 4nm SF4X process.

[...] "We leave the decision to them where the chips get made," said [Tenstorrent vice president of strategy and corporate communications Bob] Grim. The veep noted that his current customers – LG Electronics and Hyundai – are both in Korea.

Samsung is a licensee of Arm processor designs – a rival to RISC-V. Working with Tenstorrent gives Samsung potential exposure to the open processor design that could help it to win more fabrication work from other RISC-V players.


Original Submission

posted by janrinok on Friday October 06 2023, @06:03PM
from the many-eyes-detect-bugs-after-35-years dept.

X.Org Hit By New Security Vulnerabilities - Two Date Back To 1988 With X11R2:

It was a decade ago that a security researcher commented that X.Org Server security was even "worse than it looks" and that the GLX code, for example, was "80,000 lines of sheer terror", with hundreds of bugs uncovered throughout the codebase. In 2023 new X.Org security vulnerabilities continue to be uncovered, two of which were made public today and date back to X11R2 code from the year 1988.

Made public today was CVE-2023-43785, an out-of-bounds memory access within libX11 code that has been around since 1996. A second libX11 flaw is stack exhaustion from infinite recursion within the PutSubImage() function of libX11... This vulnerability has been around since X11R2 in February of 1988.

A third libX11 vulnerability made public today is an integer overflow within XCreateImage() that leads to a heap overflow... That too has been around since X11R2 in 1988.
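The third flaw follows a classic C pattern: an image-size computation overflows a fixed-width integer, so a huge image ends up with a tiny allocation. A minimal Python sketch of that pattern, simulating 32-bit C arithmetic with a mask (illustrative only, not the actual libX11 code):

```python
# Sketch of the integer-overflow pattern behind image-size bugs like the
# XCreateImage() flaw: width * height * bytes_per_pixel is computed in a
# fixed-width integer, so attacker-chosen dimensions wrap around to a small
# number and a too-small buffer is allocated for a very large image.
# (Illustrative only -- not the actual libX11 code.)

UINT32_MAX = 0xFFFFFFFF

def image_buffer_size_32bit(width: int, height: int, bytes_per_pixel: int) -> int:
    """Mimic a C uint32 computation: the product wraps modulo 2**32."""
    return (width * height * bytes_per_pixel) & UINT32_MAX

# The true size needs ~16 GiB, but the wrapped value is 0: any subsequent
# write of real pixel data overflows the undersized heap buffer.
w, h, bpp = 0x10000, 0x10000, 4
true_size = w * h * bpp                           # 17_179_869_184 bytes
alloc_size = image_buffer_size_32bit(w, h, bpp)   # wraps to 0

print(true_size, alloc_size)
```

The fix in cases like this is to range-check dimensions (or do the multiplication in a wider type) before allocating.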

Two libXpm vulnerabilities were also disclosed today related to out-of-bounds reads and both of those date back to 1998.

Due to these issues coming to light, libX11 1.8.7 and libXpm 3.5.17 were released today with the necessary security fixes. More details on these latest X.Org security vulnerabilities via today's X.Org security advisory.


Original Submission

posted by janrinok on Friday October 06 2023, @01:17PM

Arthur T Knackerbracket has processed the following story:

Sandra Rivera is off as executive veep of Intel's Datacenter and AI group, and will instead be CEO of the x86 giant's now-soon-to-be-spun-off FPGA business.

On a call with investors late Tuesday, Intel boss Pat Gelsinger discussed the chipmaker's decision to hive off its Platform Solutions Group (PSG), arguing the move ought to provide the unit the autonomy it needs to compete more aggressively in the FPGA market.

"We haven't been managing it as well as we could have," Gelsinger said of PSG on the call. "We also see that we have the opportunity to execute more effectively in the lower margin, mid and low-end areas of the business."

For those who don't recall, Intel bought its way into the FPGA market in 2015 with the $16.7 billion acquisition of Altera. Since then we've seen a flurry of FPGAs from Intel for a variety of applications, particularly as of late. So far this year, the Xeon processor goliath has rolled out 11 products, including an update to its Agilex programmable array portfolio that we took a look at last month.

And also for those who don't know, FPGAs – or Field Programmable Gate Arrays – are chips packed with circuitry that can be configured as needed to perform specific tasks at relatively high speed in hardware. You can program FPGAs to handle stuff like glue logic, peripheral control, data processing on the line, and many more things. They are quite useful in solving application-specific problems, especially when they ship with accelerators and CPU cores already on the die.
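The configurability described above comes from the FPGA's basic building block, the lookup table (LUT). A toy Python model makes the idea concrete: "programming" the device means loading truth tables, after which the same structure behaves as whatever logic you configured (the truth-table format here is hypothetical, not any vendor's real bitstream):

```python
# A toy model of the basic FPGA building block: a k-input lookup table (LUT).
# Configuring the FPGA amounts to loading truth tables like these; the same
# silicon can then behave as AND, XOR, an adder stage, etc.
# (Hypothetical illustration, not any vendor's actual bitstream format.)

def make_lut(truth_table):
    """truth_table[i] is the output when the input bits encode integer i."""
    def lut(*bits):
        index = 0
        for b in bits:
            index = (index << 1) | b
        return truth_table[index]
    return lut

# Configure one 2-input LUT as AND, another as XOR -- same hardware, new table.
and_gate = make_lut([0, 0, 0, 1])
xor_gate = make_lut([0, 1, 1, 0])

print(and_gate(1, 1), xor_gate(1, 0))  # 1 1
```

Real FPGAs wire thousands of such LUTs together through a configurable routing fabric, which is what lets them implement application-specific pipelines in hardware.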

To lead the spin-off, Intel tapped Rivera, a 23-year company veteran who most recently took over Intel's datacenter group as part of an executive shakeup following Gelsinger's return to the fab giant in 2021, as CEO of the operation. Rivera is due to make the transition to the standalone PSG biz in January, and will continue to head up Intel's datacenter group until a replacement is found.

[...] While Intel is still working to fill out PSG's executive suite, Rivera will be joined by Shannon Poulin as chief operating officer. Poulin previously served as VP of PSG.

Looking ahead to 2024, Intel aims to bring in outside investors in preparation for an initial public offering within the next two to three years. But much like Softbank's Arm IPO last month, Intel says it'll retain a majority stake in PSG.

[...] Intel is due to report its Q3 financials on October 26. As we reported in July, the corp expects revenues to fall 13 percent year over year to between $12.9 and $13.9 billion during the past quarter.


Original Submission

posted by janrinok on Friday October 06 2023, @08:34AM

https://phys.org/news/2023-10-prehistoric-cosmic-airburst-advent-agriculture.html

Agriculture in Syria started with a bang 12,800 years ago as a fragmented comet slammed into the Earth's atmosphere. The explosion and subsequent environmental changes forced hunter-gatherers in the prehistoric settlement of Abu Hureyra to adopt agricultural practices to boost their chances for survival.

That's the assertion made by an international group of scientists in one of four related research papers, all appearing in the journal Science Open: Airbursts and Cratering Impacts. The papers are the latest results in the investigation of the Younger Dryas Impact Hypothesis, the idea that an anomalous cooling of the Earth almost 13 millennia ago was the result of a cosmic impact.

"In this general region, there was a change from more humid conditions that were forested and with diverse sources of food for hunter-gatherers, to drier, cooler conditions when they could no longer subsist only as hunter-gatherers," said Earth scientist James Kennett, a professor emeritus at UC Santa Barbara. The settlement at Abu Hureyra is famous among archaeologists for its evidence of the earliest known transition from foraging to farming. "The villagers started to cultivate barley, wheat and legumes," he noted. "This is what the evidence clearly shows."
...
In the 12,800-year-old layers corresponding to the shift between hunting and gathering and agriculture, the record at Abu Hureyra shows evidence of massive burning. The evidence includes a carbon-rich "black mat" layer with high concentrations of platinum, nanodiamonds and tiny metallic spherules that could only have been formed under extremely high temperatures—higher than any that could have been produced by man's technology at the time.

The airburst flattened trees and straw huts, splashing meltglass onto cereals and grains, as well as on the early buildings, tools and animal bones found in the mound—and most likely on people, too.

More information: Andrew M.T. Moore, James P. Kennett and Malcolm A. LeCompte et al. Abu Hureyra, Syria, Part 1: Shock-fractured quartz grains support 12,800-year-old cosmic airburst at the Younger Dryas onset. Airbursts and Cratering Impacts (2023) DOI: 10.14293/ACI.2023.0003

Andrew M.T. Moore, James P. Kennett and William M. Napier et al. Abu Hureyra, Syria, Part 2: Additional evidence supporting the catastrophic destruction of this prehistoric village by a cosmic airburst ~12,800 years ago. Airbursts and Cratering Impacts (2023) DOI: 10.14293/ACI.2023.0002

Andrew M.T. Moore, James P. Kennett and William M. Napier et al. Abu Hureyra, Syria, Part 3: Comet airbursts triggered major climate change 12,800 years ago that initiated the transition to agriculture. Airbursts and Cratering Impacts. (2023) DOI: 10.14293/ACI.2023.0004

Robert E. Hermes, Hans-Rudolf Wenk and James P. Kennett et al. Microstructures in shocked quartz: linking nuclear airbursts and meteorite impacts. Airbursts and Cratering Impacts (2023) DOI: 10.14293/ACI.2023.0001


Original Submission

posted by hubie on Friday October 06 2023, @03:51AM

Arthur T Knackerbracket has processed the following story:

Humans are increasingly settling in areas highly exposed to dangerous flooding, a study warned Wednesday, with China helping drive the rise in risky urban expansion into exposed areas.

The research, led by a World Bank economist, warns that settlement growth in flood zones has vastly outpaced growth in safe areas since 1985.

"In a time when human settlements should be adapting to climate change, many countries are actually rapidly increasing their exposure to floods," author Jun Rentschler told AFP.

The study analyzed 30 years of satellite imagery tracking the expansion of human settlement globally, along with flood maps.

[...] East Asia and the Pacific region are among the most exposed, driven particularly by urban expansion in China, as well as Vietnam and Bangladesh.

"In Vietnam, where almost one-third of the coastline is now built up, the safest and most productive locations are increasingly occupied," the authors wrote.

"Thus, new developments are disproportionately forced onto hazardous land and previously avoided areas, such as riverbeds or floodplains."

The analysis does not incorporate potential increases in flood risks caused by climate change, deforestation or changes to features such as riverbeds.

But Rentschler said there was little evidence flood zones were expanding at a rate similar to human settlement in known risk areas, suggesting settlement patterns remain the key factor for policymakers to address.

[...] Rentschler argues understanding the settlement trend should be the first step in shifting urbanization policies.

"This is where you want to start: before reducing risks, countries need to stop increasing it," he said.

"Local authorities can actually do much more to protect people and prevent future climate change impacts."

Journal Reference:
Rentschler, J., Avner, P., Marconcini, M. et al. Global evidence of rapid urban growth in flood zones since 1985. Nature 622, 87–92 (2023). https://doi.org/10.1038/s41586-023-06468-9


Original Submission

posted by hubie on Thursday October 05 2023, @11:03PM
from the face-of-the-future dept.

Arthur T Knackerbracket has processed the following story:

Last week the internet was abuzz with talk that Singapore's commercial Changi airport was no longer going to require passports for clearance at immigration. Although it is true the paper documentation will be replaced by biometric measures, it's not quite time to pack the document away.

The news came through as Singapore passed its Immigration Amendment Bill which, among other things, enables the use of end-to-end biometric clearance at airports and checkpoints, beginning in the first half of 2024.

"Singapore will be one of the first few countries in the world to introduce automated, passport-free immigration clearance," said minister for communications and information Josephine Teo in a wrap-up speech for the bill. Teo did concede that Dubai had such clearance for select enrolled travelers, but there was no assurance of other countries planning similar actions.

And therein lies one of the most important reasons passports will not yet go away.

[...] What travelers will see is an expansion of a program already taking form. Changi airport currently uses facial recognition software and automated clearance for some parts of immigration.

The plan is to expand to universal coverage, which Teo called one of the keys to the successful implementation of the New Clearance Concept (NCC).

"This requires a willingness to phase out traditional methods of identifying and authenticating travelers. The alternative of running two systems in parallel is not only costly but also cumbersome," said Teo.

[...] This collection and sharing of biometric information is what enables the passport-free immigration process – passenger and crew information will need to be disclosed to the airport operator to use for bag management, access control, gate boarding, duty-free purchases, as well as tracing individuals within the airport for security purposes.

The shared biometrics will serve as a "single token of authentication" across all touch points.

Members of Singapore's parliament have raised concerns about shifting to universal automated clearance, including data privacy, and managing technical glitches.

According to Teo, only Singaporean companies will be allowed ICA-related IT contracts, vendors will be given non-disclosure agreements, and employees of such firms must undergo security screening. Traveler data will be encrypted and transported through data exchange gateways.

As for who will protect the data, that role goes to CAG, with ICA auditing its compliance.

In case of disruptions that can't be handled by an uninterruptible power supply, off-duty officers will be called in to go back to analog.

And even though the ministry is pushing universal coverage, there will be some exceptions, such as those who are unable to provide certain biometrics or are less digitally literate. Teo promised their clearance can be done manually by immigration officers.


Original Submission

posted by hubie on Thursday October 05 2023, @06:21PM
from the fuel-your-wireless-audio-connection dept.

Some gas station owners are falling victim to a sophisticated scam. Scammers are using a cellphone's Bluetooth connection to hack the pump - and get gas for free:

Paying at the pump is for chumps when you can get gas for free. It's also illegal, but that didn't stop a Detroit man from stealing almost 800 gallons of gas at the Shell at Eight Mile and Wyoming.

[...] And when the clerks inside try to stop it - they can't.

"Every time we push Pump Three stop, it wasn't doing anything," [station owner Mo] said. "We have to shut off the whole pumps - we have emergency stops."

[...] But it's not just one guy, and this maneuver is not new, just re-surfacing.

Like at a Speedway station Downriver – in Riverview this month. In that case, they used a bait-and-switch. One guy distracted the clerk with a Cash App problem inside, while the other hacked the pump.

Originally spotted on Schneier on Security.


Original Submission

posted by hubie on Thursday October 05 2023, @01:37PM
from the I'll-probably-still-be-late dept.

Arthur T Knackerbracket has processed the following story:

Set your watches! Scientists have set the clock ticking for the development of a new generation of timepieces with accuracy of up to 1 second in 300 billion years or about 22 times the age of the universe.

Researchers working at the European XFEL X-ray laser have examined the potential of scandium as the basis for nuclear clocks, long seen as the next step forward in accuracy over the current generation of atomic clocks.

Most atomic clocks rely on oscillators such as caesium, which can oscillate at very reliable frequencies when excited by microwave radiation. For example, the US Department of Commerce's National Institute of Standards and Technology's NIST-F2 clock would neither gain nor lose one second in about 300 million years.

But scientists have held the ambition of going one step further by using the oscillation of the atomic nucleus – rather than the electron shell – to create the next level in timekeeping.

[...] Ralf Röhlsberger, researcher at Germany's Deutsches Elektronen-Synchrotron, was part of the team. He said the level of accuracy possible from a nuclear clock using scandium could be equivalent to one second in 300 billion years, according to a statement.

In other words, if your watch loses a second a year, it will be 9,512 years slow by the time a nuclear clock based on scandium is a second out.
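That comparison is easy to verify: the nuclear clock accrues one second of error over 300 billion years, during which a watch losing one second per year accrues 300 billion seconds. A quick sanity check in Python (assuming 365-day years, which is evidently what the article used):

```python
# Checking the article's arithmetic: a watch that loses 1 second per year,
# run for the 300 billion years it takes the scandium nuclear clock to
# drift by a single second, accumulates roughly 9,500 years of error.

SECONDS_PER_YEAR = 365 * 24 * 3600        # 31,536,000 (ignoring leap days)

clock_drift_horizon_years = 300e9          # nuclear clock: 1 s of error here
watch_error_seconds = clock_drift_horizon_years * 1.0   # 1 s lost per year
watch_error_years = watch_error_seconds / SECONDS_PER_YEAR

print(int(watch_error_years))  # ~9512
```

Using the astronomical year of 31,557,600 seconds instead gives about 9,506 years, so the quoted figure depends slightly on the year definition.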

Journal Reference:
Shvyd'ko, Y., Röhlsberger, R., Kocharovskaya, O. et al. Resonant X-ray excitation of the nuclear clock isomer 45Sc [open]. Nature (2023). https://doi.org/10.1038/s41586-023-06491-w


Original Submission

posted by Fnord666 on Thursday October 05 2023, @08:49AM
from the mutation-chooses-you dept.

https://arstechnica.com/health/2023/09/covid-anti-viral-drug-is-actively-helping-sars-cov-2-mutate-and-evolve/

With every new infection, the pandemic coronavirus gets new chances to mutate and adapt, creating opportunities for the virus to evolve new variants that are better at dodging our immune systems and making us sicker.

Anti-viral drugs, such as Paxlovid and remdesivir, aim to halt this incessant evolution in individual patients—shortening illnesses, snuffing out opportunities for mutation, and reducing transmission. But one antiviral appears to be backfiring—allowing SARS-CoV-2 more opportunities to mutate.

According to a new peer-reviewed study in the journal Nature, the anti-viral drug molnupiravir is linked to specific SARS-CoV-2 mutation signatures that sprang up in 2022 after the drug was introduced.
[...]
Beyond simply finding molnupiravir-linked mutations, the researchers noted concerning features of them. The researchers found evidence that some of the molnupiravir-linked mutations were under positive selection—that is, they increased in frequency, suggesting that they were advantageous to the virus in some way. They also noted that some viruses with molnupiravir-linked mutations were passed on from person to person in clusters, which suggested onward transmission of these drug-induced mutations.
[...]
When molnupiravir was authorized in the US, there was concern about its potential to cause mutations in people's DNA, rather than the virus. For this reason, it is not recommended for use in pregnant people. But the new data gives birth to more concern about mutations. And this risk is coupled with lackluster efficacy data. In a final data analysis for the Food and Drug Administration, molnupiravir's maker, Merck, combined two sets of trial data to come up with an estimate of just 30 percent efficacy at preventing hospitalization and death.
[...]
FDA advisers summarized the data at the time as "not overwhelmingly good" and "modest at best." They voted in favor of authorizing the drug in a narrow 13 to 10 vote. The drug has never gained a foothold in Europe. Earlier this year, the European Medicines Agency refused to issue marketing authorization for molnupiravir. Upon the EMA's rejection, Merck said it was confident of the drug's role in fighting COVID-19 and that it would appeal the decision. In addition to the US, molnupiravir is approved for use in over two dozen countries, including Australia, China, Japan, and the UK, though use of the drug has been scaled back in many places.

Journal Reference:
Sanderson, T., Hisner, R., Donovan-Banfield, I. et al. A molnupiravir-associated mutational signature in global SARS-CoV-2 genomes. Nature (2023). https://doi.org/10.1038/s41586-023-06649-6


Original Submission

posted by hubie on Thursday October 05 2023, @04:04AM
from the more-power dept.

Less than a year ago, NASCAR stated that it planned to implement hybrid engines in 2024. Although rumors indicate that this timeline may have changed, IndyCar has successfully tested the hybrid engines it will begin using in 2024, and NASCAR probably won't be too far behind. A hybrid engine simply means the car is powered from more than one source of energy, usually a combination of gasoline and electricity.

Formula 1 began using hybrid power units in 2014, which are powered both with gasoline and electric power, and use fuel more efficiently than cars without hybrid components. F1's experiences with hybrid power units and their mistakes could provide some guidance for how other racing series might switch to hybrid engines. Chain Bear provides an excellent discussion of how F1 power units work. They still contain an internal combustion engine, but the efficiency is increased and energy is recovered in a few ways.

F1 engines are turbocharged, meaning that energy from exhaust getting expelled is used for forced induction. This means that the air in the intake is compressed, and the combustion is more efficient than in a naturally aspirated engine. However, the turbo requires a high exhaust pressure, meaning that there is a lag between when the car accelerates and when the turbo can operate efficiently, which is known as turbo lag. One of the hybrid components is the MGU-K (K for kinetic energy), which captures energy through regenerative braking. Instead of energy being lost as heat during braking, the energy is used to charge the energy store, which is usually a capacitor or a battery. Another component is the MGU-H (H is for heat), which captures energy from the exhaust as it goes through the turbo, and can charge the energy store. The MGU-H can also put energy into the turbo during acceleration to avoid turbo lag.
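A back-of-the-envelope model shows how much a single braking zone is worth to the MGU-K. All numbers here (car mass, capture fraction, per-lap energy cap) are illustrative assumptions for the sketch, not official F1 figures:

```python
# Back-of-the-envelope sketch of MGU-K regenerative braking: during a braking
# event, part of the car's kinetic-energy change is diverted into the energy
# store instead of being dissipated as heat in the brakes.
# All numbers below are illustrative assumptions, not official F1 figures.

def kinetic_energy(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms ** 2

def mgu_k_recovery(mass_kg, v_in_ms, v_out_ms, capture_fraction, cap_joules):
    """Energy banked from one braking zone, limited by a per-lap cap."""
    released = kinetic_energy(mass_kg, v_in_ms) - kinetic_energy(mass_kg, v_out_ms)
    return min(released * capture_fraction, cap_joules)

# A heavy braking zone: ~320 km/h down to ~90 km/h in an ~800 kg car,
# assuming 30% of the released energy is recoverable.
e = mgu_k_recovery(800, 320 / 3.6, 90 / 3.6, 0.30, cap_joules=2e6)
print(f"{e / 1e3:.0f} kJ recovered")  # 873 kJ recovered
```

The point of the sketch is the dependence on speed difference: on an oval where cars barely lift, the released kinetic energy per corner is tiny, which is exactly the NASCAR problem discussed below.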

F1 last ran a points race on a true oval track at the 1960 Indianapolis 500, and has raced only on street circuits and road courses since then. These tracks usually have hard braking zones, providing frequent opportunities to capture energy during braking. Even without the MGU-H, F1 cars have many opportunities to capture energy during a lap.

NASCAR runs a few races each year on road courses, and some short ovals like Martinsville and Gateway also have hard braking zones. These tracks combine high speeds on relatively long straights with much slower speeds through corners with small radii and low banking. Regenerative braking would work well there. However, most other oval tracks do not require nearly as hard braking, limiting the opportunity to capture energy through regenerative braking. For hybrid engines to have an effect without hard braking zones, energy will need to be captured in other ways at these tracks. Despite the lack of a turbo, the obvious solution would seem to be capturing energy from exhaust heat while on throttle.

The problem with capturing energy from exhaust heat is that the MGU-H component of F1 power units is prohibitively expensive and complex, leading to them being removed in rules for 2026. This will result in decreased thermal efficiency, though F1 is planning to use fully sustainable fuels. NASCAR has also sought to cut costs for teams, meaning that adding a component like an MGU-H to capture energy while on throttle seems unlikely.

At least in the short term, racing series that have long races are likely to use hybrid engines instead of going fully electric. If nothing else, this will attract more OEMs to the sport. NASCAR currently has only three OEMs, which are Ford, Chevrolet, and Toyota. Despite frequent rumors of Dodge returning, this has yet to occur, and NASCAR has also failed to attract other new OEMs. It is also likely that hybrid engines would allow for increased overall power despite the internal combustion engine currently being limited to 670 horsepower, and more power may well improve the quality of racing on some tracks. Switching to hybrid engines is likely to benefit both NASCAR teams and fans, and would be a step closer to once again racing true stock cars. However, it's much less obvious how to go about capturing energy on oval tracks where there is little braking, meaning that hybrid engines might not make a meaningful impact at a large percentage of NASCAR tracks.

Given the cost and technical limitations, how can NASCAR actually make hybrid engines work and have a meaningful impact on most of their tracks? It might not be practical to boost power through hybrid engines on NASCAR's fastest superspeedways, where engine power is greatly restricted to keep speeds down and improve safety. But how can NASCAR make hybrid engines work for the other 30-32 races each year?


Original Submission

posted by hubie on Wednesday October 04 2023, @11:17PM

Reddit is removing ability to opt out of ad personalization based on your activity on the platform:

Reddit said Wednesday that the platform is revamping its privacy settings with an aim to make ad personalization and account visibility toggles consistent. Most notably though, it is removing the ability to opt out of ad personalization based on Reddit activity.

The company said that it will still have opt-out controls in "select countries" without specifying which ones. It mentioned in a blog post that users won't see more ads but they will see better-targeted ads following this change.

"Reddit requires very little personal information, and we like it that way. Our advertisers instead rely on on-platform activity—what communities you join, leave, upvotes, downvotes, and other signals—to get an idea of what you might be interested in," Reddit said.

The company is essentially removing the option to not track you based on whatever you do on Reddit.

[...] The company noted that ad-limiting controls may show you fewer ads from the mentioned categories if the toggles are turned off, but won't necessarily filter out all such ads. Reddit justified this by saying it uses manual tagging and machine learning to label ads, so the labeling is not 100% accurate.

[...] The social platform has made several changes to increase monetization. It infamously made changes to its data API terms that led to many third-party clients shutting down and subreddits protesting in retaliation. Last week, it rolled out a new creator rewards program to incentivize people to post more and better content on the platform. But it also introduced a change that made it easier for users to purchase Gold rewards.

In an interview with The Verge in June, Reddit CEO Steve Huffman responded to IPO rumors and said "Getting to breakeven is a priority for us in any climate."


Original Submission

posted by hubie on Wednesday October 04 2023, @06:32PM

Microsoft CEO warns of 'nightmare' future for AI if Google's search dominance continues

Microsoft CEO Satya Nadella warned on Monday of a "nightmare" scenario for the internet if Google's dominance in online search is allowed to continue, a situation, he said, that starts with searches on desktop and mobile but extends to the emerging battleground of artificial intelligence.

Nadella testified on Monday as part of the US government's sweeping antitrust trial against Google, now into its 14th day. He is the most senior tech executive yet to testify during the trial that focuses on the power of Google as the default search engine on mobile devices and browsers around the globe.

[...] even more worrisome, Nadella argued, is that the enormous amount of search data that is provided to Google through its default agreements can help Google train its AI models to be better than anyone else's — threatening to give Google an unassailable advantage in generative AI that would further entrench its power.

[...] In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. In Nadella's own meetings with publishers, he said that he now hears that Google "wants ... to write this check and we want you to match it." (Google didn't immediately respond to questions about those deals.)

The requests highlight concerns that "what is publicly available today [may not be] publicly available tomorrow" for AI training, according to the testimony.


Original Submission

posted by janrinok on Wednesday October 04 2023, @01:47PM

Arthur T Knackerbracket has processed the following story:

Most enterprises have gotten very mature at network and perimeter security, but are still juvenile in their understanding and workflow around open source provenance and software supply chain security. Hackers have shifted their attention towards not only the security of individual open source projects themselves, but the gaps between software artifacts: their transitive dependencies and the build systems they touch.

We need to fix this, and the way to do so is arguably not at the individual project level but rather at the level of the distribution.

“Basically open source got much more popular, and the front door got harder to break into so attackers are targeting the back door,” said Dan Lorenc, CEO and cofounder at Chainguard, in an interview. Bad actors, in other words, needn’t target your code. They can attack one of the dependencies you didn’t even know you had.

The cost of open source popularity is that a lot of the mechanisms of trust were never really built in at the outset. Linux (and other) distributions have played a critical role in the adoption of open source historically by doing a lot of the heavy lifting of packaging, building, and signing open source. Distros like Debian, Alpine, or Gentoo have well-deserved reputations as authorities, so users didn't have to trust all open source blindly and got some guardrail guarantees.

But the pace of new open source packages being introduced has far exceeded the ability of distros to keep up. Even a single popular registry (like npm for JavaScript) gets more than 10,000 new packages per day. This basic mismatch between the pace of new open source technology and the relatively glacial speed of the distros results in developers going outside of the distros. They’re installing packages to get the latest and greatest as fast as possible but losing trust guarantees in the process.

It’s not that distributions have intentionally slowed the pace of progress; rather, they have to balance update speed with distribution stability. Still, given developer impatience, the distributions need to figure out how to accelerate updates and thereby keep better pace with the rampant adoption and security upkeep of open source software.

The Common Vulnerability Scoring System (CVSS) and other signals, such as the OpenSSF Scorecard, offer great metrics on specific vulnerabilities and their severity. But modern operating system distributions ship with so many packages preinstalled that the average OS is flush with these vulnerabilities. If your car's check engine light were on all of the time, how would you know when you actually needed to see your mechanic? The prevalence of vulnerabilities across Linux distributions is so great that they've become easy to ignore.

Another problem arises when developers install open source outside of distros and package databases: modern security scanners all rely on package database metadata, so security vulnerabilities go undetected in open source that is installed outside of the distro or package database.
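To make the blind spot concrete, here is a toy illustration (not any real scanner's logic): a scanner that walks the package database will flag a vulnerable package it knows about, but a binary installed by hand never enters that database and is simply invisible. The package names, versions, and the placeholder CVE identifier below are all hypothetical.

```python
# Hypothetical package database, as a scanner might read it from
# e.g. a distro's installed-packages metadata.
package_db = {"openssl": "3.0.8", "zlib": "1.2.13"}

# Hypothetical vulnerability feed keyed by (package, affected version).
vuln_feed = {("openssl", "3.0.8"): "CVE-XXXX-YYYY"}

# curl was installed out-of-band (say, `make install`), so it never
# entered the package database at all.
binaries_on_disk = ["openssl", "zlib", "curl"]

def scan(db, feed):
    """Flag vulnerabilities only for packages the database knows about."""
    return [feed[(name, ver)] for name, ver in db.items() if (name, ver) in feed]

findings = scan(package_db, vuln_feed)
missed = [b for b in binaries_on_disk if b not in package_db]
print(findings)  # ['CVE-XXXX-YYYY'] -- the known openssl issue is caught
print(missed)    # ['curl'] -- installed outside the database, never scanned
```

The point is structural: no matter how good the vulnerability feed is, the scan only covers what the metadata covers.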

[...] We’ve seen great progress the past few years in better establishing the security of open source projects. From the previously mentioned SSDF framework, to Sigstore and SLSA, multiple complementary projects have created developer toolchains for establishing where open source comes from, whether it has been tampered with, and other more reliable trust signals. This range of concerns is frequently referred to as “provenance,” and these open source projects have been aggressively baked into the major programming language registries such as npm, Maven, and PyPI, with Kubernetes itself supporting software signing with Sigstore. Abstractions like eBPF and Cilium are also bringing software supply chain security visibility and enforcement closer to the Linux kernel.

[...] One particularly interesting technology to watch is Wolfi, an open source distro created and maintained by Chainguard, whose founders were cocreators of Sigstore and SLSA. Wolfi strips down the distro to its most essential components and introduces a novel rolling-release cadence so that only updated packages are available for download, and developers no longer need to download open source software outside of the distro.
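As a sketch of what consuming such a distro looks like in practice, a minimal container build on Wolfi might resemble the following (the package name is an illustrative assumption; consult Chainguard's documentation for current image tags and package names):

```dockerfile
# Start from the minimal Wolfi base image published by Chainguard.
FROM cgr.dev/chainguard/wolfi-base

# Pull software from the distro itself rather than curl-ing binaries in,
# so it stays visible to the package database and to scanners.
RUN apk add --no-cache python-3.12
```

The design point is that because the rolling release keeps packages current, there is less temptation to sidestep the distro, and everything installed this way remains scannable.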

This distro seeks to clear out all the nonessential packages, so that when you see a CVE or CVSS score you know it flags a real vulnerability rather than noise from packages you never use. With less code, fewer bugs, and fewer vulnerabilities, this slimming-down also lets Wolfi give its users richer severity-level data alongside CVSS scores, plus support for new versions of open source software packages. On its one-year anniversary, Wolfi supports 1,300 package configurations and has gained support from the scanners of the major container security players, such as Docker Scout, Grype, Snyk, Trivy, Wiz, and Prisma Cloud.

“Open source used to mean that you get a free copy of that source code forever,” says Lorenc. “Software doesn’t work like that anymore. You need a plan to constantly update every piece of software because of the rate of vulnerabilities being found. Software expires, and this is no longer a static problem, it’s dynamic.”


Original Submission

posted by janrinok on Wednesday October 04 2023, @09:03AM   Printer-friendly
from the no-free-lunch dept.

https://blog.google/technology/ai/an-update-on-web-publisher-controls/ [blog.google]

Today we're announcing Google-Extended, a new control that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of models that power those products. By using Google-Extended to control access to content on a site, a website administrator can choose whether to help these AI models become more accurate and capable over time.


User agent token: Google-Extended
Full user agent string: Google-Extended doesn't have a separate HTTP request user agent string. Crawling is done with existing Google user agent strings; the robots.txt user-agent token is used in a control capacity.

User-agent: Google-Extended
Disallow: /

Just like you could previously modify your robots.txt file to opt out of Google's web crawling, you are now supposed to be able to opt out of being used as language-model fodder for their AI. Adding those two lines should apparently do the trick. No word on whether other models will adhere to it.
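If you want to sanity-check that your robots.txt actually blocks the new token, Python's standard library can parse it for you. A minimal sketch (the example.com URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# The two-line opt-out from the article, as a hypothetical robots.txt.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Google-Extended is blocked everywhere on the site...
print(rp.can_fetch("Google-Extended", "https://example.com/article"))  # False
# ...while ordinary Googlebot crawling is untouched by this rule.
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

This also illustrates Google's design choice: the opt-out lives purely in the robots.txt user-agent token, not in a separate crawler, so search indexing and AI training are controlled independently.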

No word on whether they will remove content already gathered (don't bet on it), or on how opting out might be punished in the future. The current claim is that it will not affect ranking in the search engine. But that could always change.

I like how they phrase it as you "helping these AI models," so that by opting out you are the one being unhelpful. You help, and they reap all the rewards of your work. That sounds like a good deal ... right!?


Original Submission

posted by janrinok on Wednesday October 04 2023, @04:13AM   Printer-friendly
from the if-only-game-theory-modeled-real-life dept.

Game theory study shows that being uncooperative gives weaker parties the upper hand:

In a time of income inequality and ruthless politics, people with outsized power or an unrelenting willingness to browbeat others often seem to come out ahead.

New research from Dartmouth, however, shows that being uncooperative can help people on the weaker side of a power dynamic achieve a more equal outcome—and even inflict some loss on their abusive counterpart.

[...] Published in the latest issue of PNAS Nexus, the study takes a fresh look at what are known in game theory as "zero-determinant strategies" developed by renowned scientists William Press, now at the University of Texas at Austin, and the late Freeman Dyson at the Institute for Advanced Study in Princeton, N.J.

Zero-determinant strategies dictate that "extortionists" control situations to their advantage by becoming less and less cooperative—though just cooperative enough to keep the other party engaged—and by never being the first to concede when there's a stalemate. Theoretically, they will always outperform their opponent by demanding and receiving a larger share of what's at stake.

[...] "Unbending players who choose not to be extorted can resist by refusing to fully cooperate. They also give up part of their own payoff, but the extortioner loses even more," says Chen, who is now an assistant professor at the Beijing University of Posts and Telecommunications.

"Our work shows that when an extortioner is faced with an unbending player, their best response is to offer a fair split, thereby guaranteeing an equal payoff for both parties," she says. "In other words, fairness and cooperation can be cultivated and enforced by unbending players."

[...] "The empirical evidence to date suggests that people do engage in these extortionate behaviors, especially in asymmetric situations, and that the extorted party often tries to resist it, which is then costly to both parties," Hilbe says.

Journal Reference:
Xingru Chen, Feng Fu, Outlearning extortioners: unbending strategies can foster reciprocal fairness and cooperation, PNAS Nexus, Volume 2, Issue 6, June 2023, pgad176, https://doi.org/10.1093/pnasnexus/pgad176


Original Submission