
Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...


posted by hubie on Tuesday August 20, @07:17PM

Customers uncertain as app remains downloadable after company's Chapter 7 filing:

Roku has finally axed the Redbox app from its platform. Redbox parent company Chicken Soup for the Soul Entertainment filed for Chapter 11 bankruptcy in June and moved to Chapter 7 in July, signaling the liquidation of its assets. However, the app has remained available but not fully functional in various places, leaving customers wondering if they will still be able to access content they bought. This development, however, mostly squashes any remaining hope of salvaging those purchases.

Redbox is best known for its iconic red kiosks where people could rent movie and TV (and, until 2019, video game) discs. But in an effort to keep up with the digital age, Redbox launched a streaming service in December 2017. At the time, Redbox promised "many" of the same new releases available at its kiosks but also "a growing collection" of other movies and shows. The company claimed that its on-demand streaming service was competitive because it had "newest-release movies" that subscription streaming services didn't have. The service offered streaming rentals as well as purchases.

[...] Roku's move suggests that Redbox customers will not be able to watch items they purchased. Barring an unlikely change—like someone swooping in to buy and resurrect Redbox—it's likely that other avenues for accessing the Redbox app will also go away soon.

[...] Since Redbox filed for bankruptcy, though, there has been some confusion and minimal communication about what will happen to Redbox's services. People online have asked if there's any way to watch content they purchased to own and/or get reimbursed. Some have even reported being surprised after learning that Redbox, owned by Chicken Soup since 2022, was undergoing bankruptcy procedures, pointing to limited updates from Redbox, Chicken Soup, and/or the media.

[...] As Chicken Soup sorts through its debts and liquidation, customers are left without guidance about what to do with their rental DVDs or how they can access movies/shows they purchased. But when it comes to purchases made via streaming services, it's more accurate to consider them rentals, despite them not being labeled as such and costing more than rentals with set time limits. As we've seen before, streaming companies can quickly yank away content that people feel that they paid to own, be it due to licensing disputes, mergers and acquisitions, or other business purposes. In this case, a company's failure has resulted in people no longer being able to access content they already paid for and presumed they'd be able to access for the long haul.

For some, the reality of what it means to "own" a streaming purchase, combined with the unreliability and turbulent nature of today's streaming industry, has strengthened the appeal of physical media. Somewhat ironically, though, Redbox shuttering meant the end of one of the last mainstream places to access DVDs.


Original Submission

posted by hubie on Tuesday August 20, @02:25PM
from the he's-more-machine-now-than-man dept.

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death.

His idea? Replace your body parts. All of them. Even your brain.

Jean Hébert, a new hire with the US Advanced Research Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around 'functional brain tissue replacement,' the idea of adding youthful tissue to people's brains.

https://www.technologyreview.com/2024/08/16/1096808/arpa-h-jean-hebert-wants-to-replace-your-brain/

See also: Ship of Theseus


Original Submission

posted by hubie on Tuesday August 20, @09:40AM
from the Gee,-Wilbur- dept.

The researchers set 20 horses a task consisting of three stages:

A new study showed the animals performed better than expected in a complex reward-based game.

Researchers found that when denied treats for not following the rules of the game, the horses were able to instantly switch strategies to get more rewards.

It shows the animals have the ability to think and plan ahead – something previously considered to be beyond their capacity, scientists from Nottingham Trent University (NTU) said.

[...] Dr Carrie Ijichi, a senior lecturer in equine science at NTU, said: "Horses are not natural geniuses, they are thought of as mediocre, but this study shows they're not average and are, in fact, more cognitively advanced than we give them credit for."

To understand more, the researchers set 20 horses a task consisting of three stages.

In the first stage, the animals touched a piece of card with their nose in order to get a treat.

But things became more complicated when a light was introduced and horses were only allowed a snack if they touched the card while the light was switched off.

The team found that the horses kept blindly touching the card, regardless of whether the light was on or off, and were rewarded for correct responses.

In the final stage of the game, a penalty was put in place where touching the card when the "stop" light was on resulted in a 10-second time-out.

But instead of indiscriminately touching the card, the team found that the horses were engaging with the rules – only making a move at the right time in order to receive their treat.

The researchers said this suggests that rather than failing to grasp the rules of the game, the horses had understood it the whole time but had found a way to play in the second stage that did not require much attention.

[...] The researchers said the findings, published in the journal Applied Animal Behaviour Science, suggest horses have the ability to form an internal model of the world around them to make decisions and predictions, a technique known as model-based learning.
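
To make that distinction concrete, here is a minimal, hypothetical Python sketch (not the study's actual analysis) contrasting a model-free learner, which only adjusts after being rewarded or punished, with a model-based learner, which consults an internal model of the stage-three rules and adapts immediately. All names and numbers are invented for illustration.

    import random

    # Toy illustration only -- not the study's model. A model-based learner uses
    # an internal model of the rules to predict outcomes; a model-free learner
    # just tracks how rewarding "touch" has been and needs punishing trials to adjust.

    REWARD, TIMEOUT = 1.0, -1.0  # treat the 10-second time-out as a negative payoff

    def outcome(light_on: bool, touch: bool) -> float:
        """Stage-three contingency: touching pays off only while the light is off."""
        if not touch:
            return 0.0
        return TIMEOUT if light_on else REWARD

    def model_based_touch(light_on: bool) -> bool:
        # Predict both actions with the internal model and pick the better one;
        # no relearning is needed when the penalty is introduced.
        return outcome(light_on, touch=True) > outcome(light_on, touch=False)

    # Model-free learner: a running value estimate of touching in each light state.
    values = {True: 0.0, False: 0.0}
    alpha = 0.2  # learning rate

    def model_free_touch(light_on: bool) -> bool:
        return values[light_on] >= 0.0

    random.seed(1)
    for _ in range(20):
        light_on = random.random() < 0.5
        if model_free_touch(light_on):
            r = outcome(light_on, touch=True)
            values[light_on] += alpha * (r - values[light_on])

    print("model-based: touch while the light is on?", model_based_touch(True))  # False
    print("model-free: estimated value of touching, light on:", round(values[True], 2))

On this toy reading, the horses' stage-two behaviour resembles the cheap model-free shortcut, while their instant switch once the penalty appeared is what points to the model-based account.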

It was previously thought that model-based learning was too complex for horses because they have an underdeveloped pre-frontal cortex, a part of the brain associated with strategic thinking.

Dr Ijichi said this suggests that the horses "must be using another area of the brain to achieve a similar result".

She said: "This teaches us that we shouldn't make assumptions about animal intelligence or sentience based on whether they are 'built' just like us."

Journal: https://doi.org/10.1016/j.applanim.2024.106339


Original Submission

posted by hubie on Tuesday August 20, @04:52AM
from the "best-practices"-means-don't-block-Google dept.

Arthur T Knackerbracket has processed the following story:

An update in Google Chrome's browser extension support is bad news for uBlock Origin.

According to PCWorld, Chrome's shift from Manifest V2 to V3 is deprecating certain features that the popular ad-blocker relies on. The Chrome update "aims to... improve the privacy, security, and performance of extensions" by changing the way it manages API requests. That means that, with the upcoming Chrome update, uBlock Origin will be automatically disabled.
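
For readers unfamiliar with the underlying change: Manifest V3 replaces the blocking webRequest mechanism that uBlock Origin relies on with Chrome's declarativeNetRequest API, in which filters are declared up front as rule lists that the browser evaluates, rather than being computed by extension code on every request. The Python sketch below writes one such rule in the documented declarativeNetRequest format; the filter pattern and file name are invented placeholders, not rules shipped by uBlock Origin or uBO Lite.

    import json

    # Sketch of a Manifest V3 content-blocking rule in Chrome's documented
    # declarativeNetRequest format. The filter pattern and output filename are
    # illustrative placeholders, not actual uBlock Origin rules.
    rule = {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||ads.example.com^",
            "resourceTypes": ["script", "image", "xmlhttprequest"],
        },
    }

    # An MV3 extension ships rules like this as a JSON file referenced from its
    # manifest.json under "declarative_net_request" -> "rule_resources"; the
    # browser, not the extension's own code, then decides what to block.
    with open("rules_1.json", "w", encoding="utf-8") as fh:
        json.dump([rule], fh, indent=2)

The practical consequence is the "less dynamic" behaviour described below: filtering moves from extension code running on every request to rule lists evaluated by the browser itself.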

[...] The popular ad-blocker, which has over 30 million users, reportedly still works. But a disclaimer at the top of its extension page says, "This extension may soon no longer be supported because it doesn't follow best practices for Chrome extensions."

Developer Raymond Hill, who makes uBlock Origin, has scrambled to deploy a fix and now offers uBlock Origin Lite, which is compliant with Manifest V3. It already has 200,000 users and still has standard ad-blocking capabilities, but it is less dynamic in the sense that it requires the user to allow or block permissions on a "per-site basis." In a GitHub post about the new extension, Hill explained that it isn't intended to be a replacement for the original.

"I consider uBO Lite to be too different from uBO to be an automatic replacement," said the developer. "You will have to explicitly find a replacement to uBO according to what you expect from a content blocker. uBO Lite may or may not fulfill your expectations."

uBlock Origin still works on other browsers, so you could always switch to a Chrome alternative like Firefox or Edge. But if you want to stick with Chrome, you have to play by Chrome's rules, and that means getting a different ad-blocker.


Original Submission

posted by janrinok on Tuesday August 20, @12:07AM
from the looks-like-no-hefeweizens-in-space dept.

Scientists are exploring how fermentation in microgravity affects various brewing properties:

Virtually every civilization throughout history has relied on fermentation not just for their booze, but for making everything from bread, to pickles, to yogurt. As humanity's technological knowledge expanded, we have adapted those same chemistry principles to pharmaceuticals and biofuels, among many other uses. And while it may not be the first necessity that comes to mind when planning for long-term living in a lunar base, or even on Mars, the process will be crucial to long-term mission success.

To explore how these concepts may change offworld, a team at the University of Florida's Institute of Food and Agricultural Sciences (UF/IFAS) first experimented with making beer in microgravity. Their results, published in the journal Beverages, indicate microgravity may not only speed up fermentation processes—it may also produce higher quality products.

[...] Getting a beer brewer's starter kit up to the International Space Station, however, isn't quite in the cards yet. Instead, the UF team led by undergraduate researcher Pedro Fernandez Mendoza created a tiny microgravity simulator here on Earth. After gathering locally grown barley and mashing it into wort (grain-derived sugary liquid necessary for beers and whiskey), Mendoza and colleagues portioned it out into six samples. They then added the yeast used in lagers, Saccharomyces pastorianus, to each tube before leaving three of them to act as controls. The other trio were placed in a clinostat—a tool capable of simulating microgravity conditions by constantly rotating its contents around a horizontal axis. Over the course of three days, the team then assessed their fermenting baby-beers at regular intervals on the basis of density, yeast counts, and yeast viability.

After three days, researchers were able to confirm one of their initial hypotheses that microgravity doesn't appear to harmfully affect fermentation. What's more, the fermentation process actually sped up in the clinostat samples as compared to their controls. But there was one additional, unexpected result—microgravity yeast may allow for even higher quality products than simply fermenting here on Earth. Although further investigation is needed, researchers think this might relate to a particular gene in yeast that oversees the levels of esters—fermentation byproducts responsible for both good and bad beer flavors.

Typically, the ratio of higher alcohols to esters in a lager ranges from 3:1 to 4:1, with higher ratios yielding a drier, less aromatic beer. The team recorded their control samples as having a ratio of 1.4:1, while their microgravity beer measured 4.6:1, implying the latter was "less aromatic by this measure." Meanwhile, two esters in particular, isoamyl acetate and 2-phenethyl acetate, showed "significant differences" between microgravity and controls. Higher concentrations of these esters produce a fruity, banana-like flavor in beers that many drinkers often consider undesirable. The microgravity brews showed a "multiple-fold decrease" in ester concentration compared to the standard samples.
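
As a quick illustration of the ratio being discussed, here is a tiny Python sketch. The concentration figures are invented placeholders chosen only to reproduce the 1.4:1 and 4.6:1 ratios quoted above; they are not measurements from the paper.

    # Higher-alcohol-to-ester ratio, the quality measure discussed above.
    # Concentrations are invented placeholders, not values from the study.
    def alcohol_to_ester_ratio(higher_alcohols_mg_l: float, esters_mg_l: float) -> float:
        return higher_alcohols_mg_l / esters_mg_l

    samples = {
        "control (normal gravity)": (70.0, 50.0),   # -> 1.4:1, like the control samples
        "clinostat (microgravity)": (69.0, 15.0),   # -> 4.6:1, like the rotated samples
    }

    for label, (alcohols, esters) in samples.items():
        ratio = alcohol_to_ester_ratio(alcohols, esters)
        style = "drier, less aromatic" if ratio >= 3.0 else "more aromatic"
        print(f"{label}: {ratio:.1f}:1 -> {style} (typical lagers: roughly 3:1 to 4:1)")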

"Depending upon the brewery, these compounds may be desirable; however, the presence of these compounds above a detection threshold would usually be considered a defect," the team writes. Given this, their microgravity results offered a final product "that would be considered higher quality due to the reduced esters.

Journal Reference: Pedro Fernandez Mendoza et al, Brewing Beer in Microgravity: The Effect on Rate, Yeast, and Volatile Compounds, Beverages (2024). DOI: 10.3390/beverages10020047


Original Submission

posted by janrinok on Monday August 19, @07:27PM

Arthur T Knackerbracket has processed the following story:

In a major step for the international Deep Underground Neutrino Experiment (DUNE), scientists have detected the first neutrinos using a DUNE prototype particle detector at the U.S. Department of Energy's Fermi National Accelerator Laboratory (Fermilab).

The revolutionary new technology at the heart of DUNE's new prototype detector is LArPix, an innovative end-to-end pixelated sensor and electronics system capable of imaging neutrino events in true-3D that was conceived, designed, and built by a team of Lawrence Berkeley National Laboratory (Berkeley Lab) physicists and engineers and installed at Fermilab earlier this year.

DUNE, currently under construction, will be the most comprehensive neutrino experiment in the world. It will enable scientists to explore new areas of neutrino research and possibly address some of the biggest physics mysteries in the universe, including searching for the origin of matter and learning more about supernovae and black hole formation.

Since DUNE will feature new designs and technology, scientists are testing prototype equipment and components in preparation for the final detector installation. In February, the DUNE team finished the installation of their latest prototype detector in the path of an existing neutrino beamline at Fermilab. On July 10, the team announced that they successfully recorded their first accelerator-produced neutrinos in the prototype detector, a step toward validating the design.

"This is a truly momentous milestone demonstrating the potential of this technology," said Louise Suter, a Fermilab scientist who coordinated the module installation. "It is fantastic to see this validation of the hard work put into designing, building, and installing the detector."

Berkeley Lab leads the engineering integration of the new neutrino detection system, part of DUNE's near detector complex that will be built on the Fermilab site. Its prototype—known as the 2×2 prototype because it has four modules arranged in a square—records particle tracks with liquid-argon time projection chambers.

"DUNE needed a liquid-argon TPC (LArTPC) detector that could tolerate a high-intensity environment, but this was thought to be impossible," said Dan Dwyer, the head of the Berkeley Lab's Neutrino Physics Group and the project's technical lead for the ND-LAr Consortium, which contributed key elements to the new system's design and fabrication. "With the invention of LArPix, our team at LBNL has made this dream a reality. The 2×2 Demonstrator now installed at DUNE combines our true-3D readout with high-coverage light detectors, producing a truly innovative particle detector."

Brooke Russell, formerly a Chamberlain Postdoctoral Fellow at Berkeley Lab and now the Neil and Jane Pappalardo Special Fellow in Physics at MIT, played a crucial role in the development of the 2×2 prototype, which she describes as "a first-of-its-kind detector, with more than 337,000 individual charge-sensitive pixels at roughly 4-millimeter granularity." Berkeley Lab led the design, construction, and testing of the end-to-end pixelated charge readout system during the COVID-19 pandemic.

"Operation of the 2×2 prototype in a neutrino beam will usher in a new era of high-fidelity, inherently 3D LArTPC images for neutrino interaction measurements," Russell said.

The final version of the DUNE near detector will feature 35 liquid argon modules, each larger than those in the prototype. The modules will help navigate the enormous flux of neutrinos expected at the near site.

The 2×2 prototype implements novel technologies that enable a new regime of detailed, cutting-edge neutrino imaging to handle the unique conditions in DUNE. It has a millimeter-sized pixel readout system, developed by a team at Berkeley Lab, that allows for high-precision 3D imaging on a large scale. This, coupled with its modular design, sets the prototype apart from previous neutrino detectors like ICARUS and MicroBooNE.

Now, the 2×2 prototype provides the first accelerator-neutrino data to be analyzed and published by the DUNE collaboration.

DUNE is split between two locations hundreds of miles apart: A beam of neutrinos originating at Fermilab, close to Chicago, will pass through a particle detector located on the Fermilab site, then travel 800 miles through the ground to several huge detectors at the Sanford Underground Research Facility (SURF) in South Dakota.

The DUNE detector at Fermilab will analyze the neutrino beam close to its origin, where the beam is extremely intense. Collaborators expect this near detector to record about 50 interactions per pulse, which will come every second, amounting to hundreds of millions of neutrino detections over DUNE's many expected years of operation. Scientists will also use DUNE to study neutrinos' antimatter counterpart, antineutrinos.

This unprecedented flux of accelerator-made neutrinos and antineutrinos will enable DUNE's ambitious science goals. Physicists will study the particles with DUNE's near and far detectors to learn more about how they change type as they travel, a phenomenon known as neutrino oscillation. By looking for differences between neutrino oscillations and antineutrino oscillations, physicists will seek evidence for a broken symmetry known as CP violation to determine whether neutrinos might be responsible for the prevalence of matter in our universe.

The DUNE collaboration is made up of more than 1,400 scientists and engineers from over 200 research institutions. Nearly 40 of these institutions work on the near detector. Specifically, the hardware development of the 2×2 prototype was led by the University of Bern in Switzerland, DOE's Fermilab, Berkeley Lab, and SLAC National Accelerator Laboratory, with significant contributions from many universities.


Original Submission

posted by janrinok on Monday August 19, @02:42PM

https://www.userlandia.com/home/iigs-mhz-myth

There are many legends in computer history. But a legend is nothing but a story. Someone tells it, someone else remembers it, and everybody passes it on. And the Apple IIGS has a legend all its own. Here, in Userlandia, we're going to bust some megahertz myths.

I love the Apple IIGS. It's the fabulous home computer you'd have to be crazy to hate. One look at its spec sheet will tell you why. The Ensoniq synthesizer chip brings 32 voices of polyphonic power to the desktop. Apple's Video Graphics Controller paints beautiful on-screen pictures from a palette of thousands of colors. Seven slots and seven ports provide plenty of potential for powerful peripherals. These ingredients make a great recipe for a succulent home computer. But you can't forget the most central ingredient: the central processing unit. It's a GTE 65SC816 clocked at 2.8 MHz—about 2.72 times faster than an Apple IIe. When the IIGS launched in September 1986 its contemporaries were systems like the Atari 1040ST, the Commodore Amiga 1000, and of course Apple's own Macintosh Plus. These machines all sported a Motorola 68000 clocked between 7 and 8 MHz. If I know anything about which number is bigger than the other number, I'd say that Motorola's CPU is faster.

"Now hold on there," you say! "Megahertz is just the clock speed of the chip—it says nothing about how many instructions are actually executed during those cycles, let alone the time spent reading and writing to RAM!" And you know what, that's true! The Apple II and Commodore 64 with their 6502 and 6510 CPUs clocked at 1 MHz could trade blows with Z80 powered computers running at three times the clock speed. And the IIGS had the 6502's 16-bit descendant: the 65C816. Steve Wozniak thought Western Design Center had something special with that chip.

And so the story begins...


Original Submission

posted by janrinok on Monday August 19, @10:01AM
from the pray-I-don't-alter-it-any-further dept.

Blocking the company's AI overviews also blocks its web crawler:

As the US government weighs its options following a landmark "monopolist" ruling against Google last week, online publications increasingly face a bleak future. (And this time, it's not just because of severely diminished ad revenue.) Bloomberg reports that their choice now boils down to allowing Google to use their published content to produce inline AI-generated search "answers" or losing visibility in the company's search engine.

The crux of the problem lies in the Googlebot, the crawler that scours and indexes the live web to produce the results you see when you enter search terms. If publishers block Google from using their content for the AI-produced answers you now see littered at the top of many search results, they also lose the privilege of including their web pages in the standard web results.

The catch-22 has led publications, rival search engines and AI startups to pin their hopes on the Justice Department. On Tuesday, The New York Times reported that the DOJ is considering asking a federal judge to break up parts of the company (spinning off sections like Chrome or Android). Other options it's reportedly weighing include forcing Google to share search data with competitors or relinquishing its default search-engine deals, like the $18 billion one it inked with Apple.

Google uses a separate crawler for its Gemini (formerly Bard) chatbot. But its main crawler covers both AI Overviews and standard searches, leaving web publishers with little (if any) leverage. If you let Google scrape your content for AI Overview answers, readers may consider that the end of the matter without bothering to visit your site (meaning zero revenue from those potential readers). But if you block the Googlebot, you lose search visibility, which likely means significantly less short-term income and a colossal loss of long-term competitive standing.

iFixit CEO Kyle Wiens told Bloomberg, "I can block ClaudeBot [Anthropic's crawler for its Claude chatbot] from indexing us without harming our business. But if I block Googlebot, we lose traffic and customers."
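
Wiens' choice can be made concrete with a small Python sketch using the standard library's robots.txt parser. The robots.txt content and URL below are hypothetical, written only to mirror the trade-off he describes: turning away Anthropic's crawler while continuing to admit Googlebot.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt mirroring the trade-off described above: refuse
    # ClaudeBot but keep admitting Googlebot, since blocking Googlebot would
    # also drop the site from ordinary search results.
    robots_txt = """\
    User-agent: ClaudeBot
    Disallow: /

    User-agent: Googlebot
    Allow: /
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    for agent in ("ClaudeBot", "Googlebot"):
        allowed = parser.can_fetch(agent, "https://example.com/guides/")
        print(f"{agent} may crawl /guides/: {allowed}")
    # ClaudeBot may crawl /guides/: False
    # Googlebot may crawl /guides/: True

The catch described above is that there is no equivalent line that keeps a page in ordinary results while keeping it out of AI Overviews; the same Googlebot crawler feeds both.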

[...] The ball is now in the Justice Department's court to figure out where Google — and, to an extent, the entire web — goes from here. Bloomberg's full story is worth a read.


Original Submission

posted by hubie on Monday August 19, @05:13AM

Experts studying material from event 66m years ago find signs to show how Chicxulub impact crater was formed:

When a massive space rock slammed into Earth 66m years ago, it wiped out huge swathes of life and ended the reign of the dinosaurs. Now scientists say they have new insights into what it was made from.

Experts studying material laid down at the time of the event say they have found tell-tale signs to support the idea the Chicxulub impact crater was produced by a carbon-rich, "C-type", asteroid that originally formed beyond the orbit of Jupiter.

Mario Fischer-Gödde, co-author of the research from the University of Cologne, said the team are now keen to look at deposits associated with an impact some suggest was behind a large extinction about 215m years ago.

"Maybe this way we could find out if C-type asteroid impacts would have a higher probability for causing mass extinction events on Earth," he said.

Writing in the journal Science, the researchers report how they studied different types, or isotopes, of ruthenium within a layer of material that settled over the globe after the impact 66m years ago.

"This layer contains traces of the remnants of the asteroid" said Fischer-Gödde.

The team chose to look at ruthenium because the metal is very rare in the Earth's crust.

"The ruthenium that we find in this layer, therefore, is almost 100% derived from the asteroid," said Fischer-Gödde, adding that offers scientists a way to determine the makeup, and hence type, of the impactor itself.

The team found samples of the layer from Denmark, Italy and Spain all showed the same ruthenium isotope composition.

Crucially, said Fischer-Gödde, the result is different to the composition generally found on Earth, ruling out a theory that the presence of ruthenium and other metals, such as osmium and platinum, is down to past eruptions of the Deccan Traps volcanoes.

The team also cast doubt on the possibility that the impactor was a comet, saying the ruthenium isotope composition of the samples is different to that of meteorites thought to be fragments of comets that have lost their ice.

[...] Fischer-Gödde said C-type asteroids can today be found in the asteroid belt that sits between Mars and Jupiter because, not long after the formation of the solar system, Jupiter migrated, scattering asteroids in the process.

As a result, he suggests the ill-fated space rock probably came from there.

"Maybe there was a collision of two asteroid bodies in the belt, and then this chunk kind of went on an Earth-crossing orbit. That could be one scenario," he said, although he noted there are other possibilities, including that it came from the Oort cloud that is thought to surround the solar system.

Journal Reference:
    Mario Fischer-Gödde, Jonas Tusch, Steven Goderis et al., Ruthenium isotopes show the Chicxulub impactor was a carbonaceous-type asteroid, Science (DOI: 10.1126/science.adk4868)


Original Submission

posted by janrinok on Monday August 19, @12:27AM

https://www.linkedin.com/pulse/reverse-engineering-patent-protection-cautionary-tale-harry-strange/

In 1983, the home video game console market crashed, causing many companies to fold. After the dust settled, Nintendo emerged as a phoenix from the flames with their iconic Famicom, known outside of Japan as the Nintendo Entertainment System (NES). The NES went on to sell in excess of 60 million units and brought about the third generation of console gaming. The success in 2016 of the NES Classic goes to show how popular the console and its games still are. While it may not have been obvious at the time, understanding the scope of IP rights may have helped to make this possible.

For the aspiring home console manufacturer in the early 1980s, there were really only two options when considering what microprocessor to use. The first option was the Zilog Z80, most notably used by the ColecoVision and the Sega Master System. The second option was the MOS Technology 6502, which was being used by the Atari 2600 (in the stripped-down form of the 6507), the Commodore 64, and numerous arcade games. When Masayuki Uemura, then head of Nintendo's R+D2 team, was presented with these options whilst developing the Famicom, he opted for the 6502. Although an official justification for this choice has never been given, a cursory glance at the patents for the two chips reveals that the decision may have been driven, at least in part, by patent protection.

When the NES finally hit US shores in October of 1985, very little was known about the technical specification of the console. All that was known was that the NES was powered by a hitherto unknown processor called the Ricoh 2A03. The most notable thing about the 2A03 was that its instruction set was almost identical to the 6502's. Put another way, if you could program for the 6502, you could program for the 2A03. But the 2A03 wasn't a 6502; if it had been, Nintendo would have had to get some kind of agreement from MOS Technology in order to use it. No such agreement was in place.


Original Submission

posted by janrinok on Sunday August 18, @07:42PM
from the is-it-better-in-Beijing dept.

A trade magazine https://www.automotivetestingtechnologyinternational.com/news/adas-cavs/mercedes-benz-granted-approval-to-test-l4-avs-in-china.html reports,

Mercedes-Benz has been approved to conduct Level 4 automated driving testing on designated roads and highways in Beijing, focusing on the research and development of multi-sensor perception and system performance for advanced autonomous driving systems. This initiative is part of the company's broader technology research efforts in the region, with the goal of exploring the integration of perception and control mechanisms in autonomous vehicles.
[...]
The company's L4 test vehicles are designed to handle most driving tasks independently, without the need for driver intervention. Equipped with an array of sensors and redundant systems for enhanced safety, the company says these vehicles are capable of executing maneuvers in busy urban environments, such as parking, making U-turns, navigating traffic circles and performing unprotected left turns.

On expressways, the vehicles can autonomously change lanes when the vehicle ahead slows down and can pass through toll stations. In extreme situations, the vehicles are programmed to follow a minimal risk strategy, safely stopping in a secure location, according to Mercedes.

For a refresher on the SAE self-driving level scheme, see the chart at https://www.sae.org/blog/sae-j3016-update

Kudos to Mercedes for jumping from Level 2 directly to L4. L3 requires that the driver stay alert and be prepared to take control at any time...something that humans are notoriously bad at doing.

One of my first reactions when I read the SAE scheme (years ago) was: while L3 is a logical development step (in terms of incremental technology improvements), in the real world it should never be allowed on public roads.


Original Submission

posted by hubie on Sunday August 18, @02:54PM
from the dystopia-is-now! dept.

https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/

Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing—and how the capability to deceive others remotely is getting dramatically easier over time.
[...]
The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free.

"Weird how all the major innovations coming out of tech lately are under the Fraud skill tree," wrote illustrator Corey Brickley in an X thread reacting to an example video of Deep-Live-Cam in action. In another post, he wrote, "Nice remember to establish code words with your parents everyone," referring to the potential for similar tools to be used for remote deception—and the concept of using a safe word, shared among friends and family, to establish your true identity.


Original Submission

posted by hubie on Sunday August 18, @10:10AM
from the waiting-to-see-what-the-catch-is dept.

Arthur T Knackerbracket has processed the following story:

In the realm of messaging apps and services, it's pretty easy to get lost in a sea of the same. Just about every service claims to be the most secure, the most user-friendly, and the most private. But are they… really?

The team behind a new messaging app/service reached out to me to introduce their product called Session. According to the Session site, "Session is an end-to-end encrypted messenger that minimizes sensitive metadata, designed and built for people who want absolute privacy and freedom from any form of surveillance."

Of course, I was skeptical, but when I installed the app and set it up, I realized I was dealing with something different. With Session, there's no phone number, account name, or footprint to be had. Session uses an onion routing network to ensure you leave no trace, so it's simply impossible for anyone to create a profile based on metadata or account information. All accounts are completely anonymous, and zero data is collected, which means there's absolutely nothing to leak.

[...] When you install the app, you create an account; the only thing associated with that account is the Account ID. Copy that ID and share it with anyone you'd like to chat with and get to communicating, knowing everything is secured with end-to-end encryption and none of your personal information is shared or saved.

[...] During account creation, you do have to enter a display name, but you're not required to use your real name. You can also choose between fast and slow notification modes. The fast mode uses Google's notification servers, and the slow mode means Session will occasionally check for new messages in the background. Session recommends using the fast mode, but if I had to guess, I would assume the slow mode to be the more private of the two.

[...] Session is still in its infancy, and few people know about this app/service. For anyone trying to escape the usual concerns that their information will be leaked, you can trust that this is less likely to happen with Session than with many other options. Even if your information was leaked, the only thing hackers could get would be your Account ID, as there's no other information tied to your account. And with all communication secured with E2E, even your chats would be hard to view.


Original Submission

posted by hubie on Sunday August 18, @05:26AM
from the livin'-on-the-edge dept.

Arthur T Knackerbracket has processed the following story:

Canonical recently announced a significant policy change regarding Linux adoption in the Ubuntu operating system. The Canonical Kernel Team (CKT), responsible for handling kernel-related issues for any Ubuntu release, will soon begin integrating the latest version of the Linux kernel, even if there is no final stable build out in the wild yet.

As the British company explains, Ubuntu follows a strict, time-based release schedule. Release dates are set six months in advance, and only in "extreme" circumstances can a delay occur. The most recent long-term support version of Ubuntu, 24.04 "Noble Numbat," was released in April 2024.

Meanwhile, developers working on the Linux kernel follow a "loosely time-based release process," with a new major kernel release occurring every two to three months. The actual release date for each new version is described as "fluid," meaning that project leader Linus Torvalds may adjust the upstream development process if a significant bug is discovered.

A stable release cadence is crucial for maintaining a reliable operating system, explains Canonical's Brett Grandbois. Ubuntu isn't just your weird uncle's experimental Linux OS used by hobbyists; it is officially available in multiple editions, including desktop environments, servers, cloud data centers, and IoT devices.

[...] "To provide users with the absolute latest in features and hardware support, Ubuntu will now ship the latest available version of the upstream Linux kernel at the specified Ubuntu release freeze date," Canonical stated, even if that kernel is still in Release Candidate (RC) status and some bugs remain to be resolved before the final release.

This new "aggressive kernel version commitment policy" carries risks, as RC releases are not considered final by Torvalds and his team for a reason. However, Canonical will need to manage these risks by providing official support for the specific Linux release included in the new Ubuntu version. Updating the kernel after the release is done isn't feasible either, as the Linux edition shipped with Ubuntu is a largely optimized kernel with specific features, patches and hardware support provided by Canonical and its OEM partners.


Original Submission

posted by hubie on Sunday August 18, @12:50AM
from the road-to-human-extinction-is-paved-with-good-intentions dept.

Bacteria and fungi are evolving to eat plastic but their impact will likely be limited to specific applications, researchers say:

Scientists in Germany have identified a type of fungi that is capable of breaking down synthetic plastics, offering a potential new weapon in the global fight against plastic pollution.

A team at the Leibniz Institute of Freshwater Ecology and Inland Fisheries in Berlin found that certain microfungi can survive exclusively on plastics, degrading them into simpler forms.

While this is a promising breakthrough, especially when it comes to tackling oceanic plastic pollution, experts cautioned that it is not a silver bullet.

Researchers observed that microfungi in Lake Stechlin in northeastern Germany can thrive on synthetic polymers without any other carbon source.

"The most surprising finding of our work is that our fungi could exclusively grow on some of the synthetic polymers and even form biomass," Hans-Peter Grossart, the lead researcher, told Reuters.

[...] Researchers said the fungi's ability to break down plastic may have evolved in response to the overwhelming presence of the synthetic material in their environment.

These fungi are particularly effective at breaking down polyurethane, a common material used in construction foam, among other products.

[...] While the discovery of plastic-eating fungi is a step forward, it is unlikely to solve the plastic pollution problem on its own.

Experts said the most effective way to tackle plastic pollution is to reduce the amount of material entering the environment.

[...] "Care must be taken with potential solutions of this sort, which could give the impression that we should worry less about plastic pollution because any plastic leaking into the environment will quickly, and ideally safely, degrade. Yet, for the vast majority of plastics, this is not the case," he said.

[...] The global production of plastic has skyrocketed from 1.7 million tonnes in 1950 to 400 million tonnes in 2022, according to Statista. And despite increased efforts, only nine per cent of plastic waste is recycled worldwide, reports the UN.

See also:
    • What Could Possibly Go Wrong? - TV Tropes
    • Etymology of "What could (possibly) go wrong?"


Original Submission