



Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92 (12.5%)

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...

Comments: 34 | Votes: 74

posted by janrinok on Saturday September 21, @08:22PM
from the where's-my-minerals? dept.

Potential agreement comes despite fears Beijing will choke critical minerals supplies in response:

The US and Japan are close to a deal to curb tech exports to China's chip industry despite alarm in Tokyo about Beijing's threat to retaliate against Japanese companies.

The White House wants to unveil new export controls before November's presidential election, including a measure forcing non-US companies to get licences to sell products to China that would help its tech sector.

Biden administration officials have spent months in intense talks with their counterparts in Japan — and the Netherlands — to establish complementary export control regimes that would mean Japanese and Dutch companies are not targeted by the US "foreign direct product rule".

People in Washington and Tokyo familiar with the talks said the US and Japan were now close to a breakthrough, although a Japanese official cautioned the situation remained "quite fragile" because of fears of Chinese retaliation.

[...] The US export controls are designed to close loopholes in existing rules and add restrictions that reflect the fast progress of Huawei and other Chinese groups in chip production over the past two years.

[...] China said it "firmly opposes the abuse of export controls" and urged "relevant countries" to abide by international economic and trade rules.

Also at ZeroHedge.

Original Submission

posted by hubie on Saturday September 21, @03:33PM
from the arguments-about-dresses dept.

A visual neuroscientist realized he saw green and blue differently to his wife. He designed an interactive site that has received over 1.5m visits:

It started with an argument over a blanket.

"I'm a visual neuroscientist, and my wife, Dr Marissé Masis-Solano, is an ophthalmologist," says Dr Patrick Mineault, designer of the viral web app ismy.blue. "We have this argument about a blanket in our house. I think it's unambiguously green and she thinks it's unambiguously blue."

Mineault, also a programmer, was fiddling with new AI-assisted coding tools, so he designed a simple colour discrimination test.

If you navigate to ismy.blue, you'll see the screen populated with a colour and will be prompted to select whether you think it's green or blue. The shades get more similar until the site tells you where on the spectrum you perceive green and blue in comparison with others who have taken the test.
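
For readers curious about the mechanics, a boundary test of this kind can be sketched as a simple bisection over hue. The Python below is an illustrative sketch only, not ismy.blue's actual code: the hue endpoints (green near 120°, blue near 240°), the step count, and the toy responder are all assumptions.

    import colorsys

    def estimate_boundary(respond, lo=120.0, hi=240.0, steps=12):
        """Bisect toward the observer's green/blue boundary.

        respond(hue) must return "green" or "blue" for a colour of the given
        hue in degrees, e.g. by showing the colour to a person and recording
        their answer.
        """
        for _ in range(steps):
            mid = (lo + hi) / 2.0
            if respond(mid) == "green":
                lo = mid   # still looks green: the observer's boundary is bluer
            else:
                hi = mid   # looks blue: the boundary is greener
        return (lo + hi) / 2.0

    def hue_to_rgb(hue):
        """Render a hue at full saturation and medium lightness as 8-bit RGB."""
        r, g, b = colorsys.hls_to_rgb(hue / 360.0, 0.5, 1.0)
        return tuple(round(c * 255) for c in (r, g, b))

    # Toy responder standing in for a person whose boundary sits at hue 175.
    boundary = estimate_boundary(lambda h: "green" if h < 175 else "blue")
    print(round(boundary, 1), hue_to_rgb(boundary))

Comparing the resulting boundary hue against the distribution of other people's boundaries is what produces the percentile the site reports.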

"I added this feature, which shows you the distribution, and that really clicked with people," says Mineault. "'Do we see the same colours?' is a question philosophers and scientists – everyone really – have asked themselves for thousands of years. People's perceptions are ineffable, and it's interesting to think that we have different views."

Apparently, my blue-green boundary is "bluer" than 78% of others, meaning my green is blue to most people. How can that be true?

Our brains are hard-wired to distinguish colours via retinal cells called cones, according to Julie Harris, professor of psychology at the University of St Andrews, who studies human visual processing. But how do we do more complex things like giving them names or recognising them from memory?

"Higher-level processing in terms of our ability to do things like name colours is much less clear," says Harris, and could involve both cognition and prior experience.

[...] Most differences in colour perception are physiological, like colour blindness, which affects one in 10 men and one in 100 women. Others, however, may be connected to aspects of culture or language.

The Sapir-Whorf hypothesis of linguistic relativity, popularised in the movie Arrival, suggests that language shapes the way we think, and even how we perceive the world. In the 1930s, Benjamin Lee Whorf argued that the world consisted of "a kaleidoscopic flux of impressions organised ... largely by the linguistic systems of our minds", pointing to, for instance, the Inuits' multiple words for "snow" as an example of differences in cultural perceptions.

Although this theory continues to be hotly debated throughout linguistics, psychology and philosophy, language does inform how we communicate ideas. There's no word for "blue" in ancient Greek, for example, which is why Homer described stormy seas as "wine-dark" in The Odyssey. By contrast, Russian has distinct words for light blue and dark blue. However, recent research suggests a greater vocabulary may only be beneficial for remembering colours and not for perceiving them.

Before you fight online about whether a particular shade is aqua or cyan, it's important to note that ismy.blue's results have limitations. The slightest variation in viewing conditions influences colour perception, which is why vision researchers take such care when designing experiments. Factors like the model of your phone or computer, its age, display settings, ambient light sources, time of day and even which colour is presented first in the test will all play a role in your responses.

Night modes in particular increase the redness of a device's screen, causing blues to appear greener. To see if this was influencing test results, Mineault separated the data into two groups: before or after 6pm. The effect was immediately apparent, especially on devices with built-in night modes.

So what's the point of ismy.blue if it's so variable? In the end, it's just entertainment. But if you'd like results with a little more equivalence, Mineault suggests doing the exercise with others on the same device, so that "everybody's in the same lighting and the same place".

[...] One question remains, though: what colour is the blanket?

"We've taken the test a bunch of times," says Mineault. "As soon as there's a little green in there, I call it green"; his wife sees blue.

The solution? Maybe just buy a new one.

See also:
    • Is my blue your blue?
    • Color blindness - Wikipedia


Original Submission

posted by hubie on Saturday September 21, @10:48AM
from the act-now-for-a-limited-time dept.

https://arstechnica.com/gadgets/2024/09/amazon-accused-of-using-false-and-misleading-sales-prices-to-sell-fire-tvs/

A lawsuit is seeking to penalize Amazon for allegedly providing "fake list prices and purported discounts" to mislead people into buying Fire TVs.

As reported by Seattle news organization KIRO 7, a lawsuit seeking class-action certification and filed in US District Court for the Western District of Washington on September 12 [PDF] claims that Amazon has been listing Fire TV and Fire TV bundles with "List Prices" that are higher than what the TVs have recently sold for, thus creating "misleading representation that customers are getting a 'Limited time deal.'" The lawsuit accuses Amazon of violating Washington's Consumer Protection Act.
[...]
Camelcamelcamel, which tracks Amazon prices, claims that the cheapest price of the TV on Amazon was $280 in July. The website also claims that the TV's average price is $330.59; the $300 or better deal seems to have been available on dates in August, September, October, November, and December of 2023, as well as in July, August, and September 2024. The TV was most recently sold at the $449.99 "List Price" in October 2023 and for short periods in July and August 2024, per Camelcamelcamel.
[...]
The lawsuit claims that in some cases, the List Price was only available for "an extremely short period, in some instances as short as literally one day."
[...]
Further, Amazon is accused of using these List Price tactics to "artificially" drive Fire TV demand, putting "upward pressure on the prices that" Amazon can charge for the smart TVs.

The legal document points to a similar 2021 case in California [PDF], where Amazon was sued for allegedly deceptive reference prices. It agreed to pay $2 million in penalties and restitution.

Other companies selling electronics have also been scrutinized for allegedly making products seem like they typically and/or recently have sold for more money. For example, Dell Australia received an AUD$10 million fine (about $6.49 million) for "making false and misleading representations on its website about discount prices for add-on computer monitors," per the Australian Competition & Consumer Commission.


Original Submission

posted by hubie on Saturday September 21, @06:05AM

https://www.inverse.com/entertainment/denis-villeneuve-rendezvous-with-rama-update

Denis Villeneuve can't stop making movies based on books. The Arrival and Blade Runner 2049 director has delivered two high-budget and polished Dune blockbusters in a row, while a third is on the way. He also has three other book adaptations in the works for when he's finished with Paul Atreides, and the director finally gave a promising update on one of those projects that could be the perfect follow-up to Dune.

In a conversation with Vanity Fair, Villeneuve addressed the three other book-based movies he has in development: Cleopatra, based on the biography by Stacy Schiff, Nuclear War: A Scenario, based on the nonfiction book by Annie Jacobsen, and Rendezvous With Rama, based on the sci-fi novel by Arthur C. Clarke. "I'm working on Rendezvous With Rama and that screenplay is slowly moving forward," he said.

Rendezvous with Rama isn't nearly as well known as Clarke's best-known novel, 2001: A Space Odyssey, but it's definitely well-suited to Villeneuve's mystical, epic style. The first book in a series, Rendezvous with Rama follows a group of human explorers in the distant future as they explore a mysterious alien spaceship that's hurtling towards the sun.


Original Submission

posted by hubie on Saturday September 21, @01:18AM

Google says it no longer auctions off ad space in the ways alleged:

A trial under way in federal court in Alexandria, Virginia, will determine if Google's ad tech stack constitutes an illegal monopoly. The first week has included a deep dive into exactly how Google's products work together to conduct behind-the-scenes electronic auctions that place ads in front of consumers in the blink of an eye.

Online advertising has rapidly evolved. Fifteen or so years ago, if you saw an internet display ad, there was a pretty good chance it featured people dancing over their enthusiasm for low mortgage rates, and those ads were foisted on you whether you were looking at real estate or searching for baseball scores.

Now, the algorithms that match ads to your interests are carefully calibrated, sometimes to an almost creepy extent.

Google, for its part, says it has invested billions of dollars to improve the quality of ads that consumers see, and ensure that advertisers can reach the consumers they're seeking.

The Justice Department contends that what Google has also done over the years is rig the automated auctions of ad sales to favor itself over other would-be players in the industry, and also deprived the publishing industry of hundreds of millions of dollars it would have received if the auctions were truly competitive.

[...] In the government's depiction, there are three distinct tools that interact to sell an ad and place it in front of a consumer. There are the ad servers used by publishers to sell space on their websites, particularly the rectangular ads that appear on the top and right-hand side of a web page. Ad networks are used by advertisers to buy ad space across an array of relevant websites.

And in between is the ad exchange, which matches the website publisher to the would-be advertiser by hosting an instant auction.

[...] For years, Google gave its ad exchange, called AdX, the first chance to match a publisher's proposed floor price. For instance, if a publisher wanted to sell a specific ad impression for a minimum of 50 cents, Google's software would give its own ad exchange the first chance to purchase. If Google's ad exchange bid 50 cents, it would win the auction, even if competing ad exchanges down the line were willing to pay more.
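
To make the dollars-and-cents effect of that "first look" concrete, here is a toy Python simulation. It is purely illustrative: the function names, bid values, and the single-round, highest-bid-wins clearing rule are assumptions, not a description of Google's actual auction code.

    def first_look_auction(floor, house_bid, rival_bids):
        """The house exchange may buy at the floor before rivals are consulted."""
        if house_bid >= floor:
            return "house_exchange", floor   # wins at the floor; rivals never compete
        eligible = {f"rival_{i}": b for i, b in enumerate(rival_bids) if b >= floor}
        if not eligible:
            return None, 0.0
        winner = max(eligible, key=eligible.get)
        return winner, eligible[winner]

    def open_auction(floor, house_bid, rival_bids):
        """Every exchange competes at once; the highest bid at or above the floor wins."""
        eligible = {"house_exchange": house_bid}
        eligible.update({f"rival_{i}": b for i, b in enumerate(rival_bids)})
        eligible = {k: v for k, v in eligible.items() if v >= floor}
        if not eligible:
            return None, 0.0
        winner = max(eligible, key=eligible.get)
        return winner, eligible[winner]

    # The article's 50-cent floor, with a rival exchange willing to pay 80 cents.
    print(first_look_auction(0.50, 0.50, [0.80]))   # ('house_exchange', 0.5)
    print(open_auction(0.50, 0.50, [0.80]))         # ('rival_0', 0.8)

Under the first-look rule the impression clears at 50 cents; in an open contest the publisher would have collected 80 cents, which is the kind of revenue gap the Justice Department alleges.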

Google said the system was necessary to ensure ads loaded quickly. If the computers entertained bids from every ad exchange, it would take too long.

Publishers, dissatisfied with this system, found a workaround to conduct the auctions outside of Google's purview, a process that became known as "header bidding." Internal Google documents introduced at trial described header bidding as an "existential threat" to Google's market share.

Google's response relied on its control of all three components of the process. If publishers conducted an auction outside Google's purview but they still used Google's publisher ad server, called DoubleClick For Publishers, that software forced the winning bid back into Google's Ad Exchange. If Google was willing to match the price that publishers had received under the header-bidding auction, Google would win the auction.

Professor Ramamoorthi Ravi, an expert at Carnegie Mellon University, said rules imposed by Google failed to maximize value for publishers and "seem to have been designed to advantage Google's own products."

[...] Google, for its part, says it hasn't run auctions this way since 2019, and that in the last five years Google's share of the display ad market has begun to erode. It says that tying its buy side, sell side and middleman products together helps them run seamlessly and quickly, and minimizes fraudulent ads or malware risks.

Google also says its innovations over the last 15 years fueled the improvements in matching online ads to consumer interests. Google says it was at the forefront of introducing "real-time bidding," which allowed an advertiser selling shoes, for instance, to be paired up with a consumer whose online profile indicated an interest in purchasing shoes.

Those innovations, according to Google, allowed publishers to sell their available ad space at a premium because the advertiser would know that the ad was going to the eyeballs of someone interested in their product or service.

The Justice Department says that even though Google no longer runs its auctions in the ways described, it helped Google maintain its monopoly in the ad tech market in the years leading up to 2019, and that its existing monopoly allows Google to keep up to 36 cents on the dollar of every ad purchase it brokers when the transaction runs through all of its various products.


Original Submission

posted by janrinok on Friday September 20, @08:34PM
from the fake-department-of-fake-liars dept.

https://arstechnica.com/information-technology/2024/09/due-to-ai-fakes-the-deep-doubt-era-is-here/

Given the flood of photorealistic AI-generated images washing over social media networks like X [arstechnica.com] and Facebook [404media.co] these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back [nytimes.com] decades—and analog media long before [wikipedia.org] that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

[...] Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend [bu.edu] years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

Doubt has been a political weapon since ancient times [populismstudies.org]. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

[...] In April, a panel of federal judges [arstechnica.com] highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials.

[...] Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential "cultural singularity [fastcompany.com]," a threshold where truth and fiction in media become indistinguishable.

[...] "Deep doubt" is a new term, but it's not a new idea. The erosion of trust in online information from synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of [theguardian.com] an upcoming "information apocalypse" due to deepfakes and questioned, "When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?"

[...] Throughout recorded history, historians and journalists have had to evaluate the reliability of sources [wm.edu] based on provenance, context, and the messenger's motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine if it's reliable, historians would evaluate the chain of custody, as well as check if other sources report the same information. They might also check the historical context to see if there is a contemporary historical record of that parchment existing. That requirement has not magically changed in the age of generative AI.

[...] You'll notice that our suggested counters to deep doubt above do not include watermarks, metadata, or AI detectors as ideal solutions. That's because trust does not inherently derive from the authority of a software tool. And while AI and deepfakes have dramatically accelerated the issue, bringing us to this new "deep doubt" era, the necessity of finding reliable sources of information about events you didn't witness firsthand is as old as history itself.

[...] It's likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine if a convincingly created media artifact was human or machine-generated solely by looking at one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled [thedailybeast.com] as AI-generated, creating ongoing pain for students in particular.

Throughout history, any form of recorded media, including ancient clay tablets, has been susceptible to forgeries [researchgate.net]. And since the invention of photography, we have never been able to fully trust a camera's output: the camera can lie [nytimes.com].

[...] Credible and reliable sourcing is our most critical tool in determining the value of information, and that's as true today as it was in 3000 BCE [wikipedia.org], when humans first began to create written records.


Original Submission

posted by hubie on Friday September 20, @03:49PM

https://cosmographia.substack.com/p/the-black-death-is-far-older-than

In 1338, among a scattering of obscure villages just to the west of Lake Issyk-Kul, Kyrgyzstan, people began dropping dead in droves. Among the many headstones found in the cemeteries of Kara-Djigach and Burana, one can read epitaphs such as "This is the grave of Kutluk. He died of the plague with his wife." Recently, ancient DNA exhumed from these sites has confirmed the presence of the plague bacterium Yersinia pestis, cause of the condition that became known as the Black Death. The strain detected in those remote graveyards of Central Asia has been identified as the most recent common ancestor of the plague that went on to kill as much as 60% of the Eurasian population in the great pandemic of the 14th century.

[...] In 2018, a team of researchers found ancient traces of the plague bacterium in 4,900-year-old remains in Sweden. A few years later, traces of the bacterium were found in a 5,000-year-old skull in Latvia. It was tentatively suggested that these finds correlate with the Neolithic Decline, and might explain the large die-off within these farming societies. However, the cases were isolated, with some of the infected buried alongside the uninfected, suggesting there wasn't an epidemic comparable to the Black Death outbreaks that would come in later millennia.

[...] Whether the Neolithic Decline was mostly, or in part, caused by the plague is still up for debate, but one thing is clear: humanity has been battling Yersinia pestis for a long, long time.


Original Submission

posted by hubie on Friday September 20, @11:03AM
from the one-step-for-AI-one-giant-leap-for-the-hype-train dept.

https://arstechnica.com/information-technology/2024/09/openais-new-reasoning-ai-models-are-here-o1-preview-and-o1-mini/

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and API users.
[...]
In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"
[...]
AI benchmarks are notoriously unreliable and easy to game; however, independent verification and experimentation from users will show the full extent of o1's advancements over time. On top of that, MIT Research showed earlier this year that some of OpenAI's benchmark claims it touted with GPT-4 last year were erroneous or exaggerated.

One of the examples of o1's abilities that OpenAI shared is perhaps the least consequential and impressive, but it's the most talked about due to a recurring meme where people ask LLMs to count the number of Rs in the word "strawberry." Due to tokenization, where the LLM processes words in data chunks called tokens, most LLMs are typically blind to character-by-character differences in words.
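
A quick way to see the tokenization point is to run the word through a tokenizer and look at the pieces the model actually receives. The sketch below assumes the third-party tiktoken package is installed and uses its "cl100k_base" encoding; the exact split varies by encoding.

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    pieces = [enc.decode([t]) for t in tokens]

    print(pieces)                    # a few multi-character chunks, not single letters
    print("strawberry".count("r"))   # 3 -- trivial when you can see the characters

    # The model is trained on the integer IDs in `tokens`, so a question like
    # "how many r's?" has to be inferred rather than read off letter by letter.
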
[...]
It's no secret that some people in tech have issues with anthropomorphizing AI models and using terms like "thinking" or "reasoning" to describe the synthesizing and processing operations that these neural network systems perform.

Just after the OpenAI o1 announcement, Hugging Face CEO Clement Delangue wrote, "Once again, an AI system is not 'thinking', it's 'processing', 'running predictions',... just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it's more clever than it is."


Original Submission

posted by martyb on Friday September 20, @06:16AM
from the is-there-such-a-thing-as-a-good-patent-troll dept.

Tell Congress: We Can't Afford More Bad Patents:

A key Senate Committee is about to vote on two bills that would bring back some of the worst patents and empower patent trolls.

The Patent Eligibility Restoration Act (PERA), S. 2140, would throw out crucial rules that ban patents on many abstract ideas. Courts will be ordered to approve patents on things like ordering food on a mobile phone or doing basic financial functions online. If PERA passes, the floodgates will open for these vague software patents that will be used to sue small companies and individuals. This bill even allows for a type of patent on human genes that the Supreme Court rightly disallowed in 2013.

A second bill, the PREVAIL Act, S. 2220, would sharply limit the public's right to challenge patents that never should have been granted in the first place.

Patent trolls—companies that have no product or service of their own, but simply make patent infringement demands on others—are a big problem. They've cost our economy billions of dollars. For a small company, a patent troll demand letter can be ruinous.

We took a big step towards fighting off patent trolls in 2014, when a landmark Supreme Court ruling, the Alice Corp. v. CLS Bank case, established that you can't get a patent by adding "on a computer" to an abstract idea. In 2012, Congress also expanded the ways that a patent can be challenged at the patent office.

These two bills, PERA and PREVAIL, would roll back both of those critical protections against patent trolls. We know that the bill sponsors, Sens. Thom Tillis (R-NC) and Chris Coons (D-DE) are pushing hard for these bills to move forward. We need your help to tell Congress that it's the wrong move.


Original Submission

posted by janrinok on Friday September 20, @01:31AM
from the I-spy-with-my-AI-eye dept.

https://www.businessinsider.com/larry-ellison-ai-surveillance-keep-citizens-on-their-best-behavior-2024-9 [paywalled]
https://arstechnica.com/information-technology/2024/09/omnipresent-ai-cameras-will-ensure-good-behavior-says-larry-ellison/

Larry Ellison (of Oracle) predicts a future of AI-enabled mass surveillance in which everyone lives in a panopticon, constantly watched and recorded by AI that reports all transgressions.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison's vision bears more than a passing resemblance to the cautionary world portrayed in George Orwell's prescient novel 1984. In Orwell's fiction, the totalitarian government of Oceania uses ubiquitous "telescreens" to monitor citizens constantly, creating a society where privacy no longer exists and independent thought becomes nearly impossible.

(Here's looking at you, kin?)


Original Submission

posted by janrinok on Thursday September 19, @08:45PM
from the Don't-ignore-the-observations dept.

https://www.earth.com/news/new-observations-disprove-big-bang-theory-universe-began-tired-light-theory/

We have been getting stories for a while about how JWST observations don't line up with the current Big Bang timelines. I'm certain there will be "Big Bang Band-Aid" theories at least until the current crop of astrophysicists who built their entire careers on the semi-biblical "In the Beginning..." account of where it all started have, themselves, died off. Meanwhile, there is also never a shortage of contrarian theories out there, and one of them is starting to get some support from the JWST observations of the "deep past" - which, maybe, isn't so deep after all.

Current theories for the redshift observed in more distant galaxies rely on the postulate: "photons travel at the speed of light and arrive unchanged at their destination, exactly when they left their source, from their perspective."

There are other theories. One, in particular, explains the observed redshifts with the idea that photons "get tired" on their journeys of billions of light years and lose a little frequency / gain a little wavelength along the way. JWST observations that are seeing mature galaxies back at, and before, the previously presumed start of "it all" may align better with the less well-developed tiring-photon theory than they do with the Big Bang. Not only does the "tired light" theory directly explain redshift, but the observations of wavelength shift with respect to galactic rotation seem to be lining up better with "tired light" than "Big Bang," too...

Around the same time, Fritz Zwicky, a well-known astronomer, came up with a different idea.

He proposed that the redshift we see in distant galaxies — basically a shift in the light spectrum towards red — might not be because those galaxies are speeding away.

Instead, he thought that the light photons from these galaxies could be losing energy, or "tiring out," as they travel through space.

This energy loss could make it look like the farther galaxies are moving away from us faster than they actually are.
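
Numerically, the idea is easy to sketch. A common parameterization of tired light has a photon lose energy at a rate proportional to its energy per unit distance, dE/dx = -(H0/c)E, which gives an observed redshift 1 + z = exp(H0 d / c). The Python below compares that with the low-redshift expanding-universe approximation z = H0 d / c; the value of H0 is an assumed 70 km/s/Mpc, and none of this is a claim about which model actually fits the JWST data.

    import math

    H0 = 70.0                # assumed Hubble constant, km/s per Mpc
    C = 299_792.458          # speed of light, km/s

    def tired_light_redshift(d_mpc):
        """Redshift from exponential photon energy loss over d_mpc megaparsecs."""
        return math.exp(H0 * d_mpc / C) - 1.0

    def hubble_law_redshift(d_mpc):
        """Low-redshift expanding-universe approximation, z = H0 * d / c."""
        return H0 * d_mpc / C

    for d in (100, 1000, 4000):
        print(d, round(tired_light_redshift(d), 3), round(hubble_law_redshift(d), 3))

The two agree for nearby galaxies and diverge at large distances, which is why distinguishing them hinges on exactly the kind of deep, high-redshift observations JWST is delivering.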

"[...] But the confidence of some astronomers in the Big Bang theory started to weaken when the powerful James Webb Space Telescope (JWST) saw first light."

What if the Universe isn't expanding at all, but instead is quite a bit bigger than we have been guessing it is?


Original Submission

posted by hubie on Thursday September 19, @04:01PM
from the burn-baby-burn dept.

A Tesla Semi's fiery crash on California's Interstate 80 turned into a high-stakes firefight, as emergency responders struggled to douse flames ignited by the vehicle's lithium-ion battery pack:

The National Transportation Safety Board (NTSB) reported that CAL FIRE had to use a jaw-dropping 50,000 gallons of water, alongside fire-retardant airdrops, to put out the blaze. The crash and subsequent fire shut down eastbound lanes of I-80 for a staggering 15 hours, as reported by Breitbart.

The Tesla electric big rig, driven by a Tesla employee, veered off the road on August 19, smashing into a traffic post and a tree before careening down a slope and igniting a post-crash inferno. Fortunately, no one was injured. However, the NTSB's report sheds light on the difficulty of extinguishing fires in electric vehicles. Tesla's infamous "thermal runaway" effect—the tendency of lithium-ion batteries to reignite hours after being "put out"—was a constant concern, but the semi's battery system stayed under control this time.

[...] The blaze and the hazardous materials response that followed created chaos along I-80, a key artery linking Northern California with Nevada. Traffic was rerouted, and the full shutdown stretched late into the evening, causing significant delays.


Original Submission

posted by janrinok on Thursday September 19, @11:18AM
from the what's-my-DNA-worth-anyway? dept.

Genetic information and ancestry reports of U.S. citizens were among the information stolen in the cyber attack:

23andMe proposes to compensate millions of customers affected by a data breach on the company's platform, offering $30 million as part of the settlement, along with providing users access to a security monitoring system.

The genetic testing service will pay the amount to approximately 6.4 million American users, according to a proposed class action settlement filed in the U.S. District Court for the Northern District of California on Sept. 12. Personal information was exposed last year after a hacker breached the website's security and posted critical user data for sale on the dark web.

[...] According to the settlement proposal, users will be sent a link where they can delete all information related to 23andMe.

[...] In an emailed statement to The Epoch Times, 23andMe Communications Director Andy Kill said that out of the $30 million aggregate amount, "roughly $25 million of the settlement and related legal expenses are expected to be covered by cyber insurance coverage."

Also at USA Today, Fox Business and The Verge.


Original Submission

posted by janrinok on Thursday September 19, @06:33AM

Pagers kill a dozen, injure thousands... Huh? Pagers?

If you know what a pager is, you're OLD. Or are a Hezbollah terrorist. According to the Washington Post (paywalled), Wall Street Journal, CNN, and just about every outlet, about a dozen people were killed and thousands reportedly injured.

See, kid, back in the stone age we didn't have supercomputers in our pockets acting as telephones; we only had telephones. They were a permanent part of a room. If you weren't home, nobody could call you. But if you were a physician, people needed to be able to call you. So they had "pagers," also called "beepers," that alerted you to call the office.

They're not supposed to blow up. This is James Bond stuff. Since the Israelis can listen in to every cell phone call in the area, Hezbollah needed a secure way to communicate, so they used pagers. But who loaded them with explosives? How? Pagers weren't big; the explosive must have been high tech.

What was 007's tech guy's name?

exploding pagers: actual cyber war?

I remember vague stories heard in the 90s about "viruses" that would take over your computer, then spin your hard drive so fast that it broke.
Then there was the history of stuxnet and the Iran uranium centrifuges.
Just now I saw this story about pagers (of Hezbollah members) exploding: https://www.bbc.com/news/articles/cd7xnelvpepo
I suspect a virus that does something to batteries, rather than traditional explosives.

if my suspicion is true... are we looking at a future where high-density batteries are too dangerous for regular people?


Original Submission #1 | Original Submission #2

posted by hubie on Thursday September 19, @01:43AM
from the just-like-frozen-concentrated-orange-juice dept.

The availability of large datasets which are used to train LLMs enabled their rapid development. Intense competition among organizations has made open-sourcing LLMs an attractive strategy that's leveled the competitive field:

Large Language Models (LLMs) have not only fascinated technologists and researchers but have also captivated the general public. Leading the charge, OpenAI ChatGPT has inspired the release of numerous open-source models. In this post, I explore the dynamics that are driving the commoditization of LLMs.

Low switching costs are a key factor supporting the commoditization of Large Language Models (LLMs). The simplicity of transitioning from one LLM to another is largely due to the use of a common language (English) for queries. This uniformity allows for minimal cost when switching, akin to navigating between different e-commerce websites. While LLM providers might use various APIs, these differences are not substantial enough to significantly raise switching costs.

In contrast, transitioning between different database systems involves considerable expense and complexity. It requires migrating data, updating configurations, managing traffic shifts, adapting to different query languages or dialects, and addressing performance issues. Adding long-term memory [4] to LLMs could increase their value to businesses at the cost of making it more expensive to switch providers. However, for uses that require only the basic functions of LLMs and do not need memory, the costs associated with switching remain minimal.
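
The switching-cost argument is easy to see in code: if an application depends only on a thin "English prompt in, text out" interface, changing providers is a configuration change rather than a migration. The sketch below is illustrative only; the endpoint URLs, payload shape, and class names are hypothetical placeholders, not any vendor's real API.

    import json
    import urllib.request
    from typing import Protocol

    class ChatModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class HTTPChatModel:
        """Adapter for a hypothetical JSON-over-HTTP text-completion endpoint."""

        def __init__(self, endpoint: str, api_key: str):
            self.endpoint = endpoint
            self.api_key = api_key

        def complete(self, prompt: str) -> str:
            body = json.dumps({"prompt": prompt}).encode()
            req = urllib.request.Request(
                self.endpoint,
                data=body,
                headers={
                    "Authorization": f"Bearer {self.api_key}",
                    "Content-Type": "application/json",
                },
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp).get("text", "")

    def summarize(model: ChatModel, document: str) -> str:
        # The prompt is plain English, so it is identical for every provider.
        return model.complete(f"Summarize in one sentence:\n\n{document}")

    # Switching providers is a one-line configuration change, not a rewrite:
    # model = HTTPChatModel("https://provider-a.example/v1/complete", "KEY_A")
    # model = HTTPChatModel("https://provider-b.example/v1/complete", "KEY_B")

Contrast that with swapping database engines, where data, query dialects, and performance tuning all have to move as well.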

[...] Open source models like Llama and Mistral allow multiple infrastructure providers to enter the market, enhancing competition and lowering the cost of AI services. These models also benefit from community-driven improvements, which in turn benefits the organizations that originally developed them.

Furthermore, open source LLMs serve as a foundation for future research, making experimentation more affordable and reducing the potential for differentiation among competing products. This mirrors the impact of Linux in the server industry, where its rise enabled a variety of providers to offer standardized server solutions at reduced costs, thereby commoditizing server technology.


Original Submission