SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop.


posted by janrinok on Sunday June 12 2022, @09:21PM   Printer-friendly
from the she's-so-heavy dept.

NASA's second mobile launcher is too heavy, years late, and pushing $1 billion:

Three years ago, NASA awarded a cost-plus contract to the engineering firm Bechtel for the design and construction of a large, mobile launch tower. The 118-meter tower will support the fueling and liftoff of a larger and more capable version of NASA's Space Launch System rocket that may make its debut during the second half of this decade.

When Bechtel won the contract for this mobile launcher, named ML-2, it was supposed to cost $383 million. But according to a scathing new report by NASA's inspector general, the project is already running years behind schedule, the launcher weighs too much, and the whole thing is hundreds of millions of dollars over budget. The new cost estimate for the project is $960 million.

"We found Bechtel's poor performance is the main reason for the significant projected cost increases," the report, signed by Inspector General Paul Martin, states. The report finds that Bechtel underestimated the project's scope and complexity. In turn, Bechtel officials sought to blame some of the project's cost increases on the COVID-19 pandemic.

As of this spring, NASA had already obligated $435.6 million to the project. However, despite these ample funding awards, as of May, design work for the massive launch tower was still incomplete, Martin reports. In fact, Bechtel now does not expect construction to begin until the end of calendar year 2022 at the earliest.

The report cites a litany of mistakes by the contractor, Bechtel, but does not spare NASA from criticism. For example, Martin said that NASA awarded the contract to Bechtel before the specifications for the Space Launch System rocket's upper stage were finalized. (The major upgrade to the rocket will come via a more powerful second stage, known as the Exploration Upper Stage, or EUS). This lack of final requirements to accommodate the EUS hindered design of the mobile launch tower, which must power and fuel the rocket on the ground.


Original Submission

posted by janrinok on Sunday June 12 2022, @04:38PM   Printer-friendly
from the you're-the-power-that-I-need-to-make-it-all-succeed dept.

Quantum computer succeeds where a classical algorithm fails:

[...] Google's quantum computing group [...] used a quantum computer as part of a system that can help us understand quantum systems in general, rather than just the quantum computer itself. And they show that, even on today's error-prone hardware, the system can outperform classical computers on the same problem.

To understand what the new work involves, it helps to step back and think about how we typically understand quantum systems. Since the behavior of these systems is probabilistic, we typically need to measure them repeatedly. The results of these measurements are then imported into a classical computer, which processes them to generate a statistical understanding of the system's behavior. With a quantum computer, by contrast, it is possible to mirror a quantum state using the qubits themselves, reproduce it as often as needed, and manipulate it as necessary. This offers a potentially more direct route to understanding the quantum system at issue.

[...] The first of these ideas describes some property of a quantum system involving an arbitrary number of items—like a quantum computer with n qubits. This is exactly the circumstance described above, where repeated measurements need to be made before a classical computer can reliably identify a property. By contrast, a quantum computer can store a copy of the system in its memory, allowing it to be repeatedly duplicated and processed.

These problems, the authors show, can be solved on a quantum computer in what's called polynomial time, where the number of qubits is raised to a constant power (denoted n^k). Using classical hardware, by contrast, the time scales as a constant raised to a power related to the number of qubits. As the number of qubits increases, the time needed for classical hardware rises much faster.
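To make the scaling gap concrete, here is a minimal sketch (not from the paper; the exponent k and base c are illustrative assumptions) comparing the two cost models as the qubit count n grows:

```python
def polynomial_cost(n: int, k: int = 3) -> int:
    """Quantum-side cost model: n raised to a constant power k (illustrative)."""
    return n ** k

def exponential_cost(n: int, c: int = 2) -> int:
    """Classical-side cost model: a constant c raised to the power n (illustrative)."""
    return c ** n

# The exponential curve quickly dwarfs the polynomial one.
for n in (4, 16, 64):
    print(n, polynomial_cost(n), exponential_cost(n))
```

Even for modest n, the exponential term dominates: at n = 64, n^3 is about 262 thousand while 2^n is already about 1.8 × 10^19.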

[...] The second task they identify is a quantum principal component analysis, where computers are used to identify the property that has the largest influence on the quantum system's behavior. This was chosen in part because this analysis is thought to be relatively insensitive to the noise introduced by errors in today's quantum processors. Mathematically, the team shows that the number of times you'd need to repeat the measurements for analysis on a classical system grows exponentially with the number of qubits. Using a quantum system, the analysis can be done with a constant number of repeats.

Journal Reference:
Hsin-Yuan Huang et al., Quantum advantage in learning from experiments, Science, 376, 6598, 2022. DOI: 10.1126/science.abn7293


Original Submission

posted by janrinok on Sunday June 12 2022, @11:54AM   Printer-friendly
from the caves-of-steel dept.

Researchers study society's readiness for AI ethical decision making:

With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, "is society ready for AI ethical decision making?" by studying human interaction with autonomous cars.

In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the car driver had to decide whether to crash the car into one group of people or another – the collision was unavoidable. The crash would cause severe harm to one group of people, but would save the lives of the other group. The subjects in the study had to rate the car driver's decision, when the driver was a human and also when the driver was AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.

In their second experiment, 563 human subjects responded to the researchers' questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment, there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. Their other scenario allowed the subjects to "vote" whether to allow the autonomous cars to make ethical decisions. [...]

The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. [...]

[...] "We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society's opinion," said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.

Journal Reference:
Johann Caro-Burnett & Shinji Kaneko, Is Society Ready for AI Ethical Decision Making? Lessons from a Study on Autonomous Cars, Journal of Behavioral and Experimental Economics, 2022. DOI: 10.1016/j.socec.2022.101881


Original Submission

posted by janrinok on Sunday June 12 2022, @07:12AM   Printer-friendly
from the install-a-new-and-different-cpu-to-patch dept.

MIT researchers uncover 'unpatchable' flaw in Apple M1 chips – TechCrunch:

Apple's M1 chips have an "unpatchable" hardware vulnerability that could allow attackers to break through its last line of security defenses, MIT researchers have discovered.

The vulnerability lies in a hardware-level security mechanism utilized in Apple M1 chips called pointer authentication codes, or PAC. This feature makes it much harder for an attacker to inject malicious code into a device's memory and provides a level of defense against buffer overflow exploits, a type of attack that forces memory to spill out to other locations on the chip.

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory, however, have created a novel hardware attack, which combines memory corruption and speculative execution attacks to sidestep the security feature. The attack shows that pointer authentication can be defeated without leaving a trace, and as it utilizes a hardware mechanism, no software patch can fix it.

The attack, appropriately called "Pacman," works by "guessing" a pointer authentication code (PAC), a cryptographic signature that confirms that an app hasn't been maliciously altered. This is done using speculative execution — a technique used by modern computer processors to speed up performance by speculatively guessing various lines of computation — to leak PAC verification results, while a hardware side-channel reveals whether or not the guess was correct.

What's more, since there are only so many possible values for the PAC, the researchers found that it's possible to try them all to find the right one.
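A back-of-the-envelope sketch shows why the PAC search space is small enough to brute-force. The figures below are assumptions for illustration, not values from the paper: on ARM64, the PAC is stored in the pointer bits left unused by the virtual address, so a 48-bit address space leaves roughly 16 bits for the signature:

```python
# Illustrative estimate of the PAC search space (assumed configuration,
# not taken from the PACMAN paper).
POINTER_BITS = 64
VIRTUAL_ADDRESS_BITS = 48   # typical ARM64 configuration (assumption)
TAG_BITS = 0                # assume top-byte-ignore is disabled

# The PAC occupies whatever pointer bits the address and tag don't use.
pac_bits = POINTER_BITS - VIRTUAL_ADDRESS_BITS - TAG_BITS
search_space = 2 ** pac_bits  # candidate values a brute-force must try

print(f"{pac_bits} PAC bits -> {search_space:,} possible values")
```

Tens of thousands of candidates is trivial to enumerate, which is why leaking the verification result via a side channel, one guess at a time, is practical.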

In a proof of concept, the researchers demonstrated that the attack even works against the kernel — the software core of a device's operating system — which has "massive implications for future security work on all ARM systems with pointer authentication enabled," says Joseph Ravichandran, a PhD student at MIT CSAIL and co-lead author of the research paper.

[Also Covered By]: Gizmodo

[Paper PDF]: PACMAN: Attacking ARM Pointer Authentication with Speculative Execution


Original Submission

posted by hubie on Sunday June 12 2022, @02:26AM   Printer-friendly
from the you-can-wrap-your-avocado-toast-in-cellulose dept.

Research team including Göttingen University studies kombucha cultures under extraterrestrial conditions:

An international research team including the University of Göttingen has investigated the chances of survival of kombucha cultures under Mars-like conditions. Kombucha is known as a drink, sometimes called tea fungus or mushroom tea, which is produced by fermenting sugared tea using kombucha cultures – a symbiotic culture of bacteria and yeast. Although the simulated Martian environment destroyed the microbial ecology of the kombucha cultures, surprisingly, a cellulose-producing bacterial species survived. The results were published in Frontiers in Microbiology.

[...] The results suggest that the cellulose produced by the bacteria is probably responsible for their survival in extraterrestrial conditions. This also provides the first evidence that bacterial cellulose could be a biomarker for extraterrestrial life and cellulose-based membranes or films could be a good biomaterial for protecting life and producing consumer goods in extraterrestrial settlements.

[...] Another focus was on investigations into changes in antibiotic resistance: the research team was able to show that the total number of antibiotic and metal resistance genes – meaning that these microorganisms might survive despite antibiotics or metals in the environment – were enriched in the exposed cultures. "This result shows that the difficulties associated with antibiotic resistance in medicine in space should be given special attention in the future," the scientists said.

Journal Reference:
Santana de Carvalho et al, The Space-Exposed Kombucha Microbial Community Member Komagataeibacter oboediens Showed Only Minor Changes in Its Genome After Reactivation on Earth, Frontiers in Microbiology, 2022. DOI: 10.3389/fmicb.2022.782175


Original Submission

posted by hubie on Saturday June 11 2022, @09:43PM   Printer-friendly
from the Ventura-upgrade-in-the-sunshine dept.

Apple's macOS Ventura leaves trusty 2015 MacBook Pro behind:

A new version of macOS means a new collection of Macs can no longer run Apple's latest desktop operating system. Perhaps most notably, the new macOS Ventura update won't be available for the 2015 MacBook Pro.

[...] Another notable change compared to the compatibility list for macOS Monterey is the end of major OS updates for Apple's 2013 Mac Pro (aka the "Trashcan"). But given the age of the machine, not to mention its much-derided design, I can't imagine as many will be mourning its passing. With its Ventura update, Apple is no longer offering updates for any pre-2017 Macs, which means it's offering up to five years of major macOS updates for these machines.

macOS Ventura is currently only available as a beta update for developers, but is due to launch as a public beta next month. Like previous major macOS updates, expect a full release this fall.

Obsolete Macs to be obsoleted!


Original Submission

posted by hubie on Saturday June 11 2022, @04:59PM   Printer-friendly
from the ice-cold-chili-peppers dept.

California-based Huy Fong Inc says the shortage is due to drought affecting its peppers – will it lead to battles in condiment aisles?

A looming Sriracha shortage has hot sauce lovers feeling fiery, after the maker of the popular condiment said it was suspending sales over the summer due to a shortage of chili peppers.

Southern California-based Huy Fong Inc confirmed that its beloved products, including Sriracha Hot Chili Sauce, Chili Garlic and Sambal Oelek, would be affected, according to Bloomberg.

In an April email to customers, the company described the pepper shortage as "severe" and related to the climate. The company sources its peppers from various farms in California, New Mexico and Mexico, and said that weather conditions were affecting the quality of the peppers and deepening the chili pepper shortage.


Original Submission

posted by hubie on Saturday June 11 2022, @12:14PM   Printer-friendly
from the anybody-got-a-razor? dept.

I was browsing my media and decided to rewatch this, as I hadn't looked at it in fifteen years or so.

I was mainly struck by the unalloyed optimism of pretty much everyone who contributed, including Linus, Richard Stallman, Eric Raymond, Alan Cox, Ted T'so, Eric Allman and many other original neckbeards (I use that appellation affectionately, and in a bunch of cases, literally).

In the 20-plus years since the film was released, much has changed.

I think much of the optimism embodied by RMS and the FSF has waned a good deal (and more's the pity). The complete reversal of Microsoft from Ballmer's "Free Software is communism" to Nadella's embrace of GNU/Linux in both Azure and WSL, the co-opting of Linux for Google/Android, and the aging and slow drift towards retirement/death/irrelevance of those who championed Free Software for nearly four decades have really hurt the movement, while boosting Open Source.

I think that refocusing on "free as in beer" instead of "free as in freedom" across the development community may have been inevitable as GNU/Linux (although I guess it could have been GNU/Hurd or one of the BSDs) became mainstream a couple decades after the commoditization of IBM PC-like hardware.

That got me thinking: where does that leave us, and "who are the new neckbeards that can carry the vision of Free Software into the middle of the century?" Are there really any such folks with the passion and drive to champion Free Software moving forward?

Or is Free Software (as originally defined and advocated for by RMS and the FSF) dying a slow death in favor of "Open Source" and more permissive licenses like MIT and Apache?

What will Open Source look like in 2050, 52 years after Bruce Perens and the OSI's Open Source definition?


Original Submission

posted by hubie on Saturday June 11 2022, @07:31AM   Printer-friendly
from the study-experts-assemble! dept.

NASA is assembling a team to gather data on unidentifiable events in the sky:

The team will gather data on "events in the sky that cannot be identified as aircraft or known natural phenomena -- from a scientific perspective," the agency said.

NASA said it was interested in UAPs from a security and safety perspective. There was no evidence UAPs are extraterrestrial in origin, NASA added. The study will begin this fall and is expected to take nine months.

"NASA believes that the tools of scientific discovery are powerful and apply here also," said Thomas Zurbuchen, the associate administrator of the Science Mission Directorate at NASA Headquarters in Washington, DC.

"We have access to a broad range of observations of Earth from space -- and that is the lifeblood of scientific inquiry. We have the tools and team who can help us improve our understanding of the unknown. That's the very definition of what science is. That's what we do."

NASA to Set Up Independent Study on Unidentified Aerial Phenomena:

NASA is commissioning a study team to start early in the fall to examine unidentified aerial phenomena (UAPs) – that is, observations of events in the sky that cannot be identified as aircraft or known natural phenomena – from a scientific perspective. The study will focus on identifying available data, how best to collect future data, and how NASA can use that data to move the scientific understanding of UAPs forward.
[...]
The study is expected to take about nine months to complete. It will secure the counsel of experts in the scientific, aeronautics, and data analytics communities to focus on how best to collect new data and improve observations of UAPs.

"Consistent with NASA's principles of openness, transparency, and scientific integrity, this report will be shared publicly," said Evans. "All of NASA's data is available to the public – we take that obligation seriously – and we make it easily accessible for anyone to see or study."

Statistically, we are probably not alone. But do you think that we will make contact with another 'intelligent' life form in the future or never?


Original Submission

posted by hubie on Saturday June 11 2022, @02:46AM   Printer-friendly
from the sharing-is-caring dept.

You may want to think twice before giving the parking attendant your Tesla-issued NFC card.

Last year, Tesla issued an update that made its vehicles easier to start after being unlocked with their NFC key cards. Now, a researcher has shown how the feature can be exploited to steal cars.

For years, drivers who used their Tesla NFC key card to unlock their cars had to place the card on the center console to begin driving. Following the update, which was reported here last August, drivers could operate their cars immediately after unlocking them with the card. The NFC card is one of three means for unlocking a Tesla; a key fob and a phone app are the other two.

Martin Herfurt, a security researcher in Austria, quickly noticed something odd about the new feature: Not only did it allow the car to automatically start within 130 seconds of being unlocked with the NFC card, but it also put the car in a state to accept entirely new keys—with no authentication required and zero indication given by the in-car display.
[...]
The official Tesla phone app doesn't permit keys to be enrolled unless it's connected to the owner's account, but despite this, Herfurt found that the vehicle gladly exchanges messages with any Bluetooth Low Energy, or BLE, device that's nearby. So the researcher built his own app, named Teslakee, that speaks VCSec, the same language that the official Tesla app uses to communicate with Tesla cars.

A malicious version of Teslakee that Herfurt designed for proof-of-concept purposes shows how easy it is for thieves to surreptitiously enroll their own key during the 130-second interval.

Related, but different BLE attack: New Bluetooth hack can unlock your Tesla—and all kinds of other devices


Original Submission

posted by hubie on Friday June 10 2022, @10:02PM   Printer-friendly
from the I'm-the-type-that-they-classify-as-quaint dept.

New Chip Can Process and Classify Nearly Two Billion Images per Second - Technology Org:

In traditional neural networks used for image recognition, the image of the target object is first formed on an image sensor, such as the digital camera in a smartphone. Then, the image sensor converts light into electrical signals, and ultimately into binary data, which can then be processed, analyzed, stored, and classified using computer chips. Speeding up these abilities is key to improving any number of applications, such as face recognition, automatically detecting text in photos, or helping self-driving cars recognize obstacles.

[...] The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule.

To address this limitation, [...] have removed the four main time-consuming culprits in the traditional computer chip: the conversion of optical to electrical signals, the need for converting the input data to binary format, a large memory module, and clock-based computations.

They have achieved this through direct processing of light received from the object of interest using an optical deep neural network implemented on a 9.3 square millimeter chip.

[...] "Our chip processes information through what we call 'computation-by-propagation,' meaning that, unlike clock-based systems, computations occur as light propagates through the chip," says Aflatouni. "We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology."

"When current computer chips process electrical signals they often run them through a Graphics Processing Unit, or GPU, which takes up space and energy," says Ashtiani. "Our chip does not need to store the information, eliminating the need for a large memory unit."

"And, by eliminating the memory unit that stores images, we are also increasing data privacy," Aflatouni says. "With chips that read image data directly, there is no need for photo storage and thus, a data leak does not occur."

[...] "We aren't the first to come up with technology that reads optical signals directly," says Geers, "but we are the first to create the complete system within a chip that is both compatible with existing technology and scalable to work with more complex data."

[...] "To understand just how fast this chip can process information, think of a typical frame rate for movies," he continues. "A movie usually plays between 24 and 120 frames per second. This chip will be able to process nearly 2 billion frames per second! For problems that require light speed computations, we now have a solution, but many of the applications may not be fathomable right now."
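The frame-rate comparison in the quote is easy to check. A minimal sketch (figures taken from the article's claim and the upper end of typical playback rates):

```python
# Simple arithmetic check of the claimed processing rate versus movie playback.
CHIP_FPS = 2_000_000_000   # ~2 billion frames per second, as claimed
MOVIE_FPS = 120            # upper end of typical movie frame rates

speedup = CHIP_FPS // MOVIE_FPS
print(f"~{speedup:,}x faster than 120 fps playback")
```

That works out to roughly 16.7 million times faster than the fastest common movie frame rate.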

Source: University of Pennsylvania


Original Submission

posted by janrinok on Friday June 10 2022, @07:18PM   Printer-friendly
from the burnin'-rubber dept.

Car tyres produce vastly more particle pollution than exhausts, tests show:

Almost 2,000 times more particle pollution is produced by tyre wear than is pumped out of the exhausts of modern cars, tests have shown.

The tyre particles pollute air, water and soil and contain a wide range of toxic organic compounds, including known carcinogens, the analysts say, suggesting tyre pollution could rapidly become a major issue for regulators.

Air pollution causes millions of early deaths a year globally. The requirement for better filters has meant particle emissions from tailpipes in developed countries are now much lower in new cars, with those in Europe far below the legal limit. However, the increasing weight of cars means more particles are being thrown off by tyres as they wear on the road.

The tests also revealed that tyres produce more than a trillion ultrafine particles, meaning particles smaller than 23 nanometres, for each kilometre driven. These are also emitted from exhausts and are of special concern to health, as their size means they can enter organs via the bloodstream. Particles below 23nm are hard to measure and are not currently regulated in either the EU or US.

"Tyres are rapidly eclipsing the tailpipe as a major source of emissions from vehicles," said Nick Molden, at Emissions Analytics, the leading independent emissions testing company that did the research. "Tailpipes are now so clean for pollutants that, if you were starting out afresh, you wouldn't even bother regulating them."

[...] Other recent research has suggested tyre particles are a major source of the microplastics polluting the oceans. A specific chemical used in tyres has been linked to salmon deaths in the US, and California proposed a ban this month.

"The US is more advanced in their thinking about [the impacts of tyre particles]," said Molden. "The European Union is behind the curve. Overall, it's early days, but this could be a big issue."


Original Submission

posted by janrinok on Friday June 10 2022, @04:31PM   Printer-friendly
from the shameless-self-interest dept.

'Make VPN Detection Tools Mandatory to Fight Geo-Piracy' – TorrentFreak:

The United States is actively exploring options to update copyright law to bring it into line with the current online environment.

Most recently, the Copyright Office is looking into the option of making certain standard technical measures (STMs) mandatory for online platforms. This could include upload filters to block pirated content from being reuploaded.

[...] Most copyright holders are supportive of the idea. They feel that without proper incentives, some online services will fail to address the piracy problem. Opponents of the idea, meanwhile, point out that it may lead to all sorts of problems and may negatively affect free expression.

Much of the discussion thus far has focused on tools and technologies that detect and filter copyright-infringing content. However, this week we spotted another submission that promotes a different type of measure, which isn't necessarily less controversial.

In a letter to the Copyright Office, GeoComply CEO Anna Sainsbury suggests that VPN detection tools can play an important role as well.

"As the U.S. Copyright Office explores potential technologies and solutions to include as part of the Standard Technical Measures under section 512, we respectfully suggest the inclusion of accurate and effective VPN detection tools to ensure the full protection of copyrighted works."

VPN detection tools are already widely used by major streaming services. They include Netflix, which was one of the pioneers on this front. The goal of these tools is to prevent 'geo-piracy', which is carried out by people pretending to be in a location that differs from where they actually are.

[...] The fact that VPNs can also be used for legitimate purposes does not prevent platforms from banning them outright.


Original Submission

posted by janrinok on Friday June 10 2022, @01:47PM   Printer-friendly
from the giving-in-to-demands-or-calling-his-bluff? dept.

Twitter reportedly will give Musk the full "firehose" of user data he demanded

https://arstechnica.com/tech-policy/2022/06/twitter-reportedly-will-give-musk-the-full-firehose-of-user-data-he-demanded/

Twitter now plans to comply with Elon Musk's demand for user data that he says is needed to determine whether the company's spam estimates are accurate, The Washington Post reported Wednesday.

"After a weeks-long impasse, Twitter's board plans to comply with Elon Musk's demands for internal data by offering access to its full 'firehose,' the massive stream of data comprising more than 500 million tweets posted each day, according to a person familiar with the company's thinking, who spoke on the condition of anonymity to describe the state of negotiations," the Post wrote.

Twitter declined comment on the Post report when contacted by Ars today but pointed to its statement from Monday that "Twitter has and will continue to cooperatively share information with Mr. Musk to consummate the transaction in accordance with the terms of the merger agreement."

Whether Twitter has to give all the user data to Musk is under dispute. The Post report comes two days after Musk's legal team sent a letter to Twitter claiming the company violated the merger agreement by refusing to provide the data behind its spam estimates.

Twitter set to comply with Elon Musk demand for data on fake accounts:

Elon Musk warned he might walk away from Twitter if it fails to provide the data on spam and fake accounts he seeks.

Twitter is preparing to comply with Elon Musk's demand for data on fake accounts, after the Tesla chief executive threatened to walk away from buying the business if it refused.


Original Submission #1 | Original Submission #2

posted by janrinok on Friday June 10 2022, @11:04AM   Printer-friendly
from the manifest-destiny dept.

Ad-block developers fear end is near for their extensions:

Seven months from now, assuming all goes as planned, Google Chrome will drop support for its legacy extension platform, known as Manifest v2 (Mv2). This is significant if you use a browser extension to, for instance, filter out certain kinds of content and safeguard your privacy.

Google's Chrome Web Store is supposed to stop accepting Mv2 extension submissions sometime this month. As of January 2023, Chrome will stop running extensions created using Mv2, with limited exceptions for enterprise versions of Chrome operating under corporate policy. And by June 2023, even enterprise versions of Chrome will prevent Mv2 extensions from running.

The anticipated result will be fewer extensions and less innovation, according to several extension developers.

Browser extensions such as Ghostery Privacy Ad Blocker, uBlock Origin, and Privacy Badger, along with scripting extensions including TamperMonkey, which are each designed to block adverts and other content and/or protect one's privacy online, are expected to function less effectively, if they can even make the transition from Mv2 to the new approach: Manifest v3.

"If you asked me if we can have a Manifest v3 version of Privacy Badger, my answer is yes, we can and we will," said Alexei Miagkov, senior staff technologist at the Electronic Frontier Foundation, in a phone interview with The Register. "But the problem is more insidious. It's that Manifest v3 caps the certain capabilities of extensions and cuts off innovation potential."

Google argues otherwise and maintains its platform renovation will meet developers' needs, including those making tools for content blocking and privacy. The internet titan, which declined to comment on the record, maintains that Mv3 aims to improve privacy by limiting extensions' access to sensitive data and that it has been working with extension developers to balance their needs with those of users.

Google points to past endorsements, such as remarks provided by Sofia Lindberg, tech lead of ad amelioration biz Eyeo, which makes Adblock Plus. "We've been very pleased with the close collaboration established between Google's Chrome Extensions Team and our own engineering team to ensure that ad-blocking extensions will still be available after Manifest v3 takes effect."

[...] Google began work on Manifest v3, the successor to Mv2, in late 2018, ostensibly to make extensions more secure, performant, and private. The company's extension platform renovation was necessary – because extension security problems were rampant – and immediately controversial. An ad company making security claims that, coincidentally, hinder user-deployed content and privacy defenses looks like self-interest.

And Mv3 remains the subject of ongoing debate as the extension platform capabilities and APIs continue to be hammered out. But it has been adopted, with some caveats, by other browser makers, including Apple and Mozilla. Makers of Chromium-based browsers inherit Mv3 and Microsoft has already endorsed the new spec.

Others building atop Chromium like Brave, Opera, and Vivaldi have indicated interest in continuing to support Mv2, though it's unclear whether that will be practical beyond June of next year. If Google removes the Mv2 code from Chromium, maintaining the code in a separate Chromium fork may prove to be too much trouble.


Original Submission