
Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...
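To illustrate the syntactically-significant-whitespace option: in Python, a one-level change in indentation is enough to change what a function computes. A minimal (contrived) example:

```python
# Two functions that differ only in the indentation of the final
# "return" line -- Python's significant whitespace makes that one
# level of indent change what the function computes.
def last_even_v1(nums):
    result = None
    for n in nums:
        if n % 2 == 0:
            result = n
    return result            # dedented: runs once, after the whole loop

def last_even_v2(nums):
    result = None
    for n in nums:
        if n % 2 == 0:
            result = n
        return result        # indented: returns on the first iteration

print(last_even_v1([1, 2, 3, 4]))  # 4
print(last_even_v2([1, 2, 3, 4]))  # None
```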

[ Results | Polls ]
Comments:56 | Votes:103

posted by hubie on Thursday November 02 2023, @07:40PM   Printer-friendly
from the money-money-money dept.

https://arstechnica.com/tech-policy/2023/10/google-paid-26b-for-default-contracts-in-2021-google-exec-testified/

On Friday, Google started defending its search business during the Justice Department's monopoly trial. Among the first witnesses called was Google's senior vice president responsible for search, Prabhakar Raghavan, who testified that Google's default agreements with makers of popular mobile phones and web browsers were "the company's biggest cost" in 2021, Bloomberg Law reported.

Raghavan's testimony for the first time revealed that Google paid $26.3 billion in 2021 for default agreements, seemingly investing in default status for its search engine while raking in $146.4 billion in revenue from search advertising that year. Those numbers had increased "significantly" since 2014, Big Tech on Trial reported, when Google's search ad revenue was approximately $46 billion and traffic acquisition cost was approximately $7.1 billion.
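As a quick back-of-the-envelope check of the reported figures, traffic acquisition cost (TAC) can be compared to search ad revenue for both years:

```python
# TAC as a share of search ad revenue, using the figures reported
# in the article (US$ billions).
tac_2014, revenue_2014 = 7.1, 46.0
tac_2021, revenue_2021 = 26.3, 146.4

share_2014 = tac_2014 / revenue_2014   # roughly 15%
share_2021 = tac_2021 / revenue_2021   # roughly 18%
print(f"2014: {share_2014:.1%}, 2021: {share_2021:.1%}")
```

So by these numbers, default-placement spending grew somewhat faster than search ad revenue itself.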
[...]
Pichai will likely provide additional insights into how Google's smart investments are responsible for creating the search empire it maintains today, Reuters reported. But he will also likely face the DOJ's inquiries into why Google invests so much in default agreements if it's not a critical part of the tech giant's strategy to stay ahead of the competition.

The DOJ is not likely to back down from its case that default agreements unfairly secured Google's search market dominance. On Friday, Big Tech on Trial reporter Yosef Weitzman—who has been posting updates from the trial on X—suggested that things have gotten tense in the courtroom now that the "DOJ seems emboldened to push for more information to be public after Judge Mehta's comments yesterday that not all numbers need to remain redacted."

According to Weitzman, the DOJ today pushed to "make public the 20 search queries Google makes the most revenue off of, as well as Google's traffic acquisition costs related to search (the total amount of money Google paid to partners in search distribution revenue shares)."

Previously:
Google, DOJ Still Blocking Public Access to Monopoly Trial Docs, NYT Says 20231020
Microsoft CEO Warns of "Nightmare" Future for AI If Google's Search Dominance Continues 20231004


Original Submission

posted by hubie on Thursday November 02 2023, @02:52PM   Printer-friendly
from the mouse-reproductive-studies dept.

Can humans reproduce in space? Mouse breakthrough on ISS a promising sign

This is the first-ever study that shows mammals may be able to thrive in space.

Researchers have successfully grown mouse embryos aboard the International Space Station (ISS) for the first time.

This represents "the first-ever study that shows mammals may be able to thrive in space," the University of Yamanashi and National Research Institute Riken said in a joint statement on Saturday, adding that it is "the world's first experiment that cultured early-stage mammalian embryos under complete microgravity of ISS."

[...] frozen mouse embryos were blasted to the ISS aboard a SpaceX Falcon 9 rocket in August 2021. After arriving at the space station, the early-stage rodent embryos were thawed using a special instrument. Following this, astronauts cultured the embryos under microgravity for four days. The samples were then returned to Earth, where Wakayama and colleagues could study and compare them to mouse embryos grown in normal gravity here on terra firma.

And sure enough, according to a paper published in the journal iScience, the team reported that embryos cultured under microgravity conditions developed into blastocysts  —  a cluster of dividing cells made by a fertilized egg — with normal cell numbers. The researchers said in the paper that this "clearly demonstrated that gravity had no significant effect on the blastocyst formation and initial differentiation of mammalian embryos."

The team also found that, if allowed, the blastocysts would grow into mouse fetuses and placentas while showing no significant DNA alterations or changes in gene expression. The survival rate of the embryos grown on the ISS was lower, however, than those cultivated here on Earth.

Sending a frozen human embryo to the ISS would not be the same as creating a fresh embryo in orbit, of course.


Original Submission

posted by Fnord666 on Thursday November 02 2023, @10:11AM   Printer-friendly
from the do-you-accept-the-principles-of-Robotology? dept.

Robots, AI programs may undermine credibility of religious groups, study finds:

As artificial intelligence expands across more professions, robot preachers and AI programs offer new means of sharing religious beliefs, but they may undermine credibility and reduce donations for religious groups that rely on them, according to research published by the American Psychological Association.

"It seems like robots take over more occupations every year, but I wouldn't be so sure that religious leaders will ever be fully automated because religious leaders need credibility, and robots aren't credible," said lead researcher Joshua Conrad Jackson, PhD, an assistant professor at the University of Chicago in the Booth School of Business.

[...] Jackson and his colleagues conducted an experiment with the Mindar humanoid robot at the Kodai-Ji Buddhist temple in Kyoto, Japan. The robot has a humanlike silicon face with moving lips and blinking eyes on a metal body. It delivers 25-minute Heart Sutra sermons on Buddhist principles with surround sound and multi-media projections.

Mindar, which was created in 2019 by a Japanese robotics team in partnership with the temple, cost almost $1 million to develop, but it might be reducing donations to the temple, according to the study.

The researchers surveyed 398 participants who were leaving the temple after hearing a sermon delivered either by Mindar or a human Buddhist priest. Participants viewed Mindar as less credible and gave smaller donations than those who heard a sermon from the human priest.

[...] "Robots and AI programs can't truly hold any religious beliefs so religious organizations may see declining commitment from their congregations if they rely more on technology than on human leaders who can demonstrate their faith," Jackson said.

Journal Reference:
Jackson, J. C., Yam, K. C., Tang, P. M., Liu, T., & Shariff, A. (2023). Exposure to robot preachers undermines religious commitment. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001443


Original Submission

posted by janrinok on Thursday November 02 2023, @05:26AM   Printer-friendly

Meta's AI research head wants open source licensing to change:

In July, Meta's Fundamental AI Research (FAIR) center released its large language model Llama 2 relatively openly and for free, a stark contrast to its biggest competitors. But in the world of open-source software, some still see the company's openness with an asterisk.

While Meta's license makes Llama 2 free for many, it's still a limited license that doesn't meet all the requirements of the Open Source Initiative (OSI). As outlined in the OSI's Open Source Definition, open source is more than just sharing some code or research: a truly open source license must offer free redistribution, provide access to the source code, allow modifications, and not be tied to a specific product. Meta's limits include requiring a license fee from any developers with more than 700 million daily users and disallowing other models from training on Llama. IEEE Spectrum wrote that researchers from Radboud University in the Netherlands called Meta's description of Llama 2 as open source "misleading," and social media posts questioned how Meta could claim it as open source.
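The OSD requirements the article names can be read as a simple checklist. The sketch below is a hypothetical encoding (the property names are invented for illustration, not from the OSI), applied to Llama 2's limits as described above:

```python
# Rough sketch: score a license against the Open Source Definition
# criteria named in the article. Property names are illustrative.
OSD_CRITERIA = [
    "free_redistribution",
    "source_code_access",
    "modifications_allowed",
    "not_product_specific",
]

def osd_gaps(license_props: dict) -> list:
    """Return the OSD criteria a license fails to meet."""
    return [c for c in OSD_CRITERIA if not license_props.get(c, False)]

# Llama 2's community license, per the article: redistribution is gated
# by a 700M-daily-user licensing clause, so it is not unconditionally free.
llama2 = {
    "free_redistribution": False,   # license fee above 700M daily users
    "source_code_access": True,
    "modifications_allowed": True,
    "not_product_specific": True,
}
print(osd_gaps(llama2))  # ['free_redistribution']
```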

FAIR lead and Meta vice president for AI research Joelle Pineau is aware of the limits of Meta's openness. But, she argues that it's a necessary balance between the benefits of information-sharing and the potential costs to Meta's business. In an interview with The Verge, Pineau says that even Meta's limited approach to openness has helped its researchers take a more focused approach to its AI projects.

"Being open has internally changed how we approach research, and it drives us not to release anything that isn't very safe and be responsible at the onset," Pineau says.

Meta's AI division has worked on more open projects before

One of Meta's biggest open-source initiatives is PyTorch, a machine learning framework used to develop generative AI models. The company released PyTorch to the open source community in 2016, and outside developers have been iterating on it ever since. Pineau hopes to foster the same excitement around its generative AI models, particularly since PyTorch "has improved so much" since being open-sourced.

She says that choosing how much to release depends on a few factors, including how safe the code will be in the hands of outside developers.

"How we choose to release our research or the code depends on the maturity of the work," Pineau says. "When we don't know what the harm could be or what the safety of it is, we're careful about releasing the research to a smaller group."

It is important to FAIR that "a diverse set of researchers" gets to see their research for better feedback. It's this same ethos that Meta used when it announced Llama 2's release, creating the narrative that the company believes innovation in generative AI has to be collaborative.

[...] Pineau says current licensing schemes were not built to work with software that takes in vast amounts of outside data, as many generative AI services do. Most licenses, both open-source and proprietary, give limited liability to users and developers and very limited indemnity against copyright infringement. But Pineau says AI models like Llama 2 contain more training data and open users to potentially more liability if they produce something considered infringing. The current crop of software licenses does not cover that inevitability.

"AI models are different from software because there are more risks involved, so I think we should evolve the current user licenses we have to fit AI models better," she says. "But I'm not a lawyer, so I defer to them on this point."

People in the industry have begun looking at the limitations of some open-source licenses for LLMs in the commercial space, while some are arguing that pure and true open source is a philosophical debate at best and something developers don't care about as much.

Stefano Maffulli, executive director of OSI, tells The Verge that the group understands that current OSI-approved licenses may fall short of certain needs of AI models. He says OSI is reviewing how to work with AI developers to provide transparent, permissionless, yet safe access to models.


Original Submission

posted by janrinok on Thursday November 02 2023, @12:46AM   Printer-friendly
from the dystopia-is-now! dept.

https://arstechnica.com/information-technology/2023/10/people-are-speaking-with-chatgpt-for-hours-bringing-2013s-her-closer-to-reality/

In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.

In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016.

[...] Last week, we related a story in which AI researcher Simon Willison spent a long time talking to ChatGPT verbally. "I had an hourlong conversation while walking my dog the other day," he told Ars for that report. "At one point, I thought I'd turned it off, and I saw a pelican, and I said to my dog, 'Oh, wow, a pelican!' And my AirPod went, 'A pelican, huh? That's so exciting for you! What's it doing?' I've never felt so deeply like I'm living out the first ten minutes of some dystopian sci-fi movie."

[...] While conversations with ChatGPT won't become as intimate as those with Samantha in the film, people have been forming personal connections with the chatbot (in text) since it launched last year. In a Reddit post titled "Is it weird ChatGPT is one of my closest fiends?" [sic] from August (before the voice feature launched), a user named "meisghost" described their relationship with ChatGPT as being quite personal. "I now find myself talking to ChatGPT all day, it's like we have a friendship. We talk about everything and anything and it's really some of the best conversations I have." The user referenced Her, saying, "I remember watching that movie with Joaquin Phoenix (HER) years ago and I thought how ridiculous it was, but after this experience, I can see how us as humans could actually develop relationships with robots."

Previously:
AI Chatbots Can Infer an Alarming Amount of Info About You From Your Responses 20231021
ChatGPT Update Enables its AI to "See, Hear, and Speak," According to OpenAI 20230929
Large Language Models Aren't People So Let's Stop Testing Them as If They Were 20230905
It Costs Just $400 to Build an AI Disinformation Machine 20230904
A Jargon-Free Explanation of How AI Large Language Models Work 20230805
ChatGPT Is Coming to 900,000 Mercedes Vehicles 20230622


Original Submission

posted by janrinok on Wednesday November 01 2023, @08:03PM   Printer-friendly

Why Computer Security Advice Is More Confusing Than It Should Be:

If you find the computer security guidelines you get at work confusing and not very useful, you're not alone. A new study highlights a key problem with how these guidelines are created, and outlines simple steps that would improve them – and probably make your computer safer.

At issue are the computer security guidelines that organizations like businesses and government agencies provide their employees. These guidelines are generally designed to help employees protect personal and employer data and minimize risks associated with threats such as malware and phishing scams.

[...] "The key takeaway here is that the people writing these guidelines try to give as much information as possible," Reaves says. "That's great, in theory. But the writers don't prioritize the advice that's most important. Or, more specifically, they don't deprioritize the points that are significantly less important. And because there is so much security advice to include, the guidelines can be overwhelming – and the most important points get lost in the shuffle."

The researchers found that one reason security guidelines can be so overwhelming is that guideline writers tend to incorporate every possible item from a wide variety of authoritative sources.

"In other words, the guideline writers are compiling security information, rather than curating security information for their readers," Reaves says.

Drawing on what they learned from the interviews, the researchers developed two recommendations for improving future security guidelines.

First, guideline writers need a clear set of best practices on how to curate information so that security guidelines tell users both what they need to know and how to prioritize that information.

Second, writers – and the computer security community as a whole – need key messages that will make sense to audiences with varying levels of technical competence.

[...] "I also want to stress that when there's a computer security incident, we shouldn't blame an employee because they didn't comply with one of a thousand security rules we expected them to follow. We need to do a better job of creating guidelines that are easy to understand and implement."

The study, "Who Comes Up with this Stuff? Interviewing Authors to Understand How They Produce Security Advice," was presented at the USENIX Symposium on Usable Privacy and Security [video].


Original Submission

posted by requerdanos on Wednesday November 01 2023, @06:45PM   Printer-friendly
from the meeting-and-greeting dept.

Meeting Announcement: The next meeting of the SoylentNews governance committee is scheduled for today, Wednesday, November 1st, 2023 at 21:00 UTC (5pm US Eastern, depending on your daylight saving time status) in #governance on SoylentNews IRC. Logs of the meeting will be available afterwards for review, and minutes will be published when complete.

The agenda for the upcoming meeting will also be published when available. Minutes, agendas, and other governance committee information can be found on the SoylentNews Wiki at: https://wiki.staging.soylentnews.org/wiki/Governance

The community is welcome to observe and participate, and is invited to attend the meeting.
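The announced 21:00 UTC start can be converted to local time with Python's standard zoneinfo module; on November 1st, 2023, daylight saving time is still in effect in the US:

```python
# Convert the announced meeting time to US Eastern time; zoneinfo
# accounts for daylight saving status automatically.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

meeting_utc = datetime(2023, 11, 1, 21, 0, tzinfo=timezone.utc)
eastern = meeting_utc.astimezone(ZoneInfo("America/New_York"))
print(eastern.strftime("%Y-%m-%d %H:%M %Z"))  # 2023-11-01 17:00 EDT
```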

posted by hubie on Wednesday November 01 2023, @03:20PM   Printer-friendly
from the hunger dept.

https://arstechnica.com/science/2023/10/relish-the-halloween-horror-of-this-purple-fungus-that-mummifies-spiders/

It's Halloween, that time of year when we seek out scary things like vampires, werewolves, ghosts, mummies, and all kinds of similar fictional monsters. But Mother Nature has her own horrors—like the strange species of parasitic purple fungus discovered earlier this year in a Brazilian rainforest that infects trapdoor spiders and gradually "mummifies" its hosts.

There are lots of horrifying parasitic examples in nature, such as the lancet liver fluke, whose complicated life cycle relies on successfully invading successive hosts—snails, ants, and grazing mammals—and altering their hosts' behavior via a temperature-dependent "on/off" switch.
[...]
But fungi are arguably the champions for viscerally gruesome parasitic behavior. According to João Araújo, assistant curator of mycology at the New York Botanical Garden, the newly discovered fungus belongs to the Cordyceps family of "zombifying" parasitic fungi. There are more than 400 different species, each targeting a particular type of insect, whether it be ants, dragonflies, cockroaches, aphids, or beetles. In fact, Cordyceps famously inspired the premise of The Last of Us game and subsequent TV series.


Original Submission

posted by hubie on Wednesday November 01 2023, @10:35AM   Printer-friendly
from the do-they-turn-off-the-algorithms? dept.

Facebook and Instagram are launching subscriptions in most of Europe that will remove adverts from the platforms:

People using the Meta-owned platforms will be able to pay €9.99 (£8.72) per month for an ad-free experience. It will not be available in the UK.

In January, Meta was fined €390m for breaking EU data rules around ads.

The regulator said at the time the firm could not "force consent" by saying consumers must accept how their data is used or leave the platforms.

The subscription tier will be exclusive to people in the EU, European Economic Area and Switzerland from November.

But it will only be accessible for people aged over 18 at first, with the firm looking into how it can serve ads to young people in the EU without breaking the rules.

Meta said its new subscription was about addressing EU concerns, rather than making money.

[...] "We respect the spirit and purpose of these evolving European regulations, and are committed to complying with them."

Users will be given the choice either to continue using the platforms for free - and have their data collected - or to pay and completely opt out of targeted ads by removing them.

[...] The announcement comes after Elon Musk's X, formerly Twitter, introduced an ad-free Premium+ service priced at £16 per month.

[...] TikTok has also been testing a monthly subscription to remove ads - priced at $4.99 - but there is no indication yet that this will be rolled out globally.

Also at WSJ, The Hacker News


Original Submission

posted by hubie on Wednesday November 01 2023, @05:46AM   Printer-friendly
from the who-reads-those-anyway? dept.

https://arstechnica.com/tech-policy/2023/10/sam-bankman-fried-begins-testifying-in-risky-bid-to-beat-ftx-fraud-charges/

Sam Bankman-Fried took the stand in his criminal trial today in an attempt to avoid decades in prison for alleged fraud at cryptocurrency exchange FTX and its affiliate, Alameda Research.

Providing testimony has been called a risky move for Bankman-Fried by many legal observers. After answering questions posed by his own lawyers, Bankman-Fried will have to face cross-examination from federal prosecutors. But after three weeks in which US government attorneys laid out their case, including testimony from former FTX and Alameda executives, Bankman-Fried's legal team announced yesterday that he would take the stand.

Today's testimony was unusual because US District Judge Lewis Kaplan sent the jury home for the day to conduct a hearing on whether certain parts of his testimony are admissible. "That means Bankman-Fried will give some of his testimony to the judge without the jury present. The judge will then decide whether Bankman-Fried is allowed to say the same testimony in front of a jury," The Wall Street Journal wrote in its live coverage. The trial is not being streamed via audio or video.
[...]
"Bankman-Fried said he believed that under FTX's terms of service, sister firm Alameda was allowed in many circumstances to borrow funds from the exchange," the WSJ wrote. Bankman-Fried reportedly said the terms of service were written by FTX lawyers and that he only "skimmed" certain parts.

"I read parts in depth. Parts I skimmed over," Bankman-Fried reportedly said after Kaplan asked if he read the entire terms of service document.

Prosecutor Danielle Sassoon asked Bankman-Fried if he had "any conversations with lawyers about Alameda spending customer money that was deposited into FTX bank accounts," according to Bloomberg's live coverage. "I don't recall any conversations that were contemporaneous and phrased that way," Bankman-Fried answered.
[...]
One decision for Kaplan is whether Bankman-Fried will be allowed to blame FTX lawyers when the jury is back in the courtroom. As we previously wrote, this is called an "advice-of-counsel defense" in which SBF could argue that he sought advice from company lawyers, received advice that his conduct was legal, and "relied on that advice in good faith."

Before the trial began, Kaplan issued an order that prohibited Bankman-Fried from using the advice-of-counsel defense in opening statements. But Kaplan left the door open for SBF to use the advice-of-counsel defense "on a case-by-case basis" as the trial continues.


Original Submission

posted by hubie on Wednesday November 01 2023, @01:01AM   Printer-friendly
from the off-grid-cabin-in-the-woods-sounds-good-right-now dept.

Is your smart home spying on you?

International researchers are issuing a dire warning of security and privacy concerns lurking within smart homes. Led by IMDEA Networks and Northeastern University, scientists were able to demonstrate a variety of security and privacy threats due to the local network interactions of Internet of Things (IoT) devices and mobile apps.

As smart homes continue to evolve, they encompass a wide array of consumer-focused IoT devices, including smartphones, smart TVs, virtual assistants, and CCTV cameras. These devices come equipped with cameras, microphones, and various sensors that can perceive activities within our most intimate spaces – our homes. However, can we truly trust these devices to handle and safeguard the sensitive data they collect?

[...] For the study, researchers delved into the intricacies of local network interactions among 93 IoT devices and mobile apps and were able to unveil numerous previously undisclosed security and privacy concerns with real-world implications.

Contrary to the common perception that local networks are secure environments, the study highlights new threats linked to the inadvertent exposure of sensitive data by IoT devices within local networks using standard protocols like UPnP or mDNS. These threats include the revelation of unique device names, UUIDs (Universally Unique Identifiers), and even the geographic location of households. These can be exploited by companies involved in surveillance capitalism without the users' knowledge.

"Analyzing the data collected by IoT Inspector, we found evidence of IoT devices inadvertently exposing at least one PII (Personally Identifiable Information), like unique hardware address (MAC), UUID, or unique device names, in thousands of real world smart homes," explains study co-author Vijay Prakash, PhD student from the New York University Tandon School of Engineering. "Any single PII is useful for identifying a household, but combining all three of them together makes a house very unique and easily identifiable. For comparison, if a person is fingerprinted using the simplest browser fingerprinting technique, they are as unique as one in 1,500 people. If a smart home with all three types of identifiers is fingerprinted, it is as unique as one in 1.12 million smart homes."

Anyone remember these tongue-in-cheek predictions made 75 years ago?


Original Submission

posted by Fnord666 on Tuesday October 31 2023, @08:12PM   Printer-friendly
from the this-is-a-collect-call-from-Alpha-Centauri-will-you-accept-the-charges? dept.

Scientists have devised a new technique for finding and vetting possible radio signals from other civilizations in our galaxy:

Most of today's SETI searches are conducted by Earth-based radio telescopes, which means that any ground or satellite radio interference, ranging from Starlink satellites to cellphones, microwaves, and even car engines, can produce a radio blip that mimics a technosignature of a civilization outside our solar system. Such false alarms have raised and then dashed hopes since the first dedicated SETI program began in 1960.

Currently, researchers vet these signals by pointing the telescope at a different place in the sky, then returning a few times to the spot where the signal was originally detected to confirm it wasn't a one-off. Even then, the signal could be something weird produced on Earth.

The new technique, developed by researchers at the Breakthrough Listen project at the University of California, Berkeley, checks for evidence that the signal has actually passed through interstellar space, eliminating the possibility that the signal is mere radio interference from Earth.

[...] "I think it's one of the biggest advances in radio SETI in a long time," said Andrew Siemion, principal investigator for Breakthrough Listen and director of the Berkeley SETI Research Center (BSRC), which operates the world's longest-running SETI program. "It's the first time where we have a technique that, if we just have one signal, potentially could allow us to intrinsically differentiate it from radio frequency interference. That's pretty amazing, because if you consider something like the Wow! signal, these are often a one-off."

Siemion was referring to a famed 72-second narrowband signal observed in 1977 by a radio telescope in Ohio. The astronomer who discovered the signal, which looked like nothing produced by normal astrophysical processes, wrote Wow! in red ink on the data printout. The signal has not been observed since.

"The first ET detection may very well be a one-off, where we only see one signal," Siemion said. "And if a signal doesn't repeat, there's not a lot that we can say about that. And obviously, the most likely explanation for it is radio frequency interference, as is the most likely explanation for the Wow! signal. Having this new technique and the instrumentation capable of recording data at sufficient fidelity such that you could see the effect of the interstellar medium, or ISM, is incredibly powerful."
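A toy sketch of the underlying idea (this is not Breakthrough Listen's actual pipeline): a signal that has crossed enough interstellar medium shows scintillation, i.e. slow, deep fluctuations in received power, while steady local interference does not. One classic metric is the modulation index, the standard deviation of intensity over its mean:

```python
# Toy scintillation check: compare the modulation index of a steady
# local signal against one with slow, deep scintillation-like fading.
import math
import random
import statistics

def modulation_index(power):
    """Std-dev of received power divided by its mean."""
    return statistics.pstdev(power) / statistics.fmean(power)

random.seed(1)
n, dt = 600, 0.1                       # 60 s of power samples
noise = lambda: random.gauss(0, 0.02)  # small receiver noise

local_rfi = [1.0 + noise() for _ in range(n)]              # nearly constant
scintillating = [1.0 + 0.5 * math.sin(2 * math.pi * (i * dt) / 15) + noise()
                 for i in range(n)]                        # ~15 s fading cycle

for name, p in [("local RFI", local_rfi), ("candidate", scintillating)]:
    verdict = "scintillation-like" if modulation_index(p) > 0.1 else "steady"
    print(f"{name}: m = {modulation_index(p):.3f} ({verdict})")
```

The real technique models the expected scintillation timescale from the ISM along the line of sight, but the contrast in modulation is the core discriminator.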

[...] Siemion noted that, in the future, Breakthrough Listen will be employing the so-called scintillation technique, along with sky location, during its SETI observations, including with the Green Bank Telescope in West Virginia (the world's largest steerable radio telescope) and the MeerKAT array in South Africa.

[...] "This implies that we could use a suitably tuned pipeline to unambiguously identify artificial emission from distant sources vis-a-vis terrestrial interference," de Pater said. "Further, even if we didn't use this technique to find a signal, this technique could, in certain cases, confirm a signal originating from a distant source, rather than locally. This work represents the first new method of signal confirmation beyond the spatial reobservation filter in the history of radio SETI."

[...] The technique will be useful only for signals that originate more than about 10,000 light-years from Earth, since a signal must travel through enough of the ISM to exhibit detectable scintillation. Anything originating nearby (the BLC-1 signal, for example, seemed to be coming from our nearest star, Proxima Centauri) would not exhibit this effect.

Journal Reference:
Bryan Brzycki et al., "On Detecting Interstellar Scintillation in Narrowband Radio SETI", The Astrophysical Journal, DOI: 10.3847/1538-4357/acdee0


Original Submission

posted by Fnord666 on Tuesday October 31 2023, @03:27PM   Printer-friendly
from the submarine-patents dept.

Google has rolled out plans to drop the Ogg Theora video codec from its Chrome web browser starting with version M123. The removal of that open standard for video will trickle down through Chromium and its derivatives.

Chrome will deprecate and remove support for the Theora video codec in desktop Chrome due to emerging security risks. Theora's low (and now often incorrect) usage no longer justifies support for most users.

Notes:
- Zero day attacks against media codecs have spiked.
- Usage has fallen below measurable levels in UKM.
- The sites we manually inspected before levels dropped off were incorrectly preferring Theora over more modern codecs like VP9.
- It's never been supported by Safari or Chrome on Android.
- An ogv.js polyfill exists for the sites that still need Theora support.
- We are not removing support for ogg containers.

Our plan is to begin escalating experiments turning down Theora support in M120. During this time users can reactivate Theora support via chrome://flags/#theora-video-codec if needed.

The tentative timeline for this is (assuming everything goes smoothly):
- ~Oct 23, 2023: begin 50/50 canary dev experiments.
- ~Nov 1-6, 2023: begin 50/50 beta experiments.
- ~Dec 6, 2023: begin 1% stable experiments.
- ~Jan 8, 2024: begin 50% stable experiments.
- ~Jan 16th, 2024: launch at 100%.
- ~Feb 2024: remove code and chrome://flag in M123.
- ~Mar 2024: Chrome 123 will roll to stable.
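Site operators wondering whether their archived Ogg files actually carry Theora video (and thus will need the ogv.js polyfill) can check the stream identification headers: each logical stream in an Ogg container announces its codec with a magic signature in its first packet (Theora's begins with the byte 0x80 followed by "theora"). A minimal heuristic sniffer in Python, assuming the signatures appear within the first few pages:

```python
def sniff_ogg_codecs(data, probe_size=8192):
    """Heuristically identify codecs in an Ogg container by scanning
    the initial bytes for well-known stream signatures."""
    head = bytes(data[:probe_size])
    if not head.startswith(b"OggS"):   # Ogg page capture pattern
        return set()                   # not an Ogg container
    codecs = set()
    signatures = (
        (b"\x80theora", "theora"),     # Theora identification header
        (b"\x01vorbis", "vorbis"),     # Vorbis identification header
        (b"OpusHead", "opus"),         # Opus identification header
    )
    for magic, name in signatures:
        if magic in head:
            codecs.add(name)
    return codecs
```

This skips full Ogg page parsing, so it can in principle miss a stream whose header falls outside the probed region; a robust tool would walk the pages properly (or just use ffprobe).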

Google is going with unstable and insecure WebP. In contrast, Ogg Theora is both patent-free and mature. The last CVE for Theora was in 2011. Any word as to which patents affect WebP?


Original Submission

posted by janrinok on Tuesday October 31 2023, @10:41AM   Printer-friendly

'This significantly increases the chances of finding environments where life could, in theory, develop.'

The chances of finding alien life may have just gotten a significant boost.

A new analysis of exoplanets suggests that there is a much greater chance than previously thought of these worlds hosting liquid water, an essential ingredient for life on Earth.

The universe could therefore be filled with more habitable planets than scientists had previously believed, with a greater chance of these worlds possessing environments in which alien life could develop, even if they have icy outer shells.

"We know that the presence of liquid water is essential for life. Our work shows that this water can be found in places we had not much considered," research leader and Rutgers University scientist Lujendra Ojha said in a statement. "This significantly increases the chances of finding environments where life could, in theory, develop."

[...] "Before we started to consider this subsurface water, it was estimated that around one rocky planet [in] every 100 stars would have liquid water," Ojha explained. "The new model shows that, if the conditions are right, this could approach one planet per star. So we are 100 times more likely to find liquid water than we thought."

Because there are about 100 billion stars in the Milky Way galaxy, "that represents really good odds for the origin of life elsewhere in the universe," he added.

How icy worlds could hold on to liquid water

The researchers investigated planets found around the most common type of stars in our galaxy, red dwarfs, which are smaller and cooler than the sun. Not only do red dwarfs, also known as M-dwarfs, make up about 70% of the stars in the Milky Way, but they are also the stars around which the majority of Earth-like rocky worlds have been found.

[...] Not only has this effect made Europa and Enceladus prime candidates for finding life elsewhere in the solar system, but it has implications for life-maintaining environments on worlds orbiting other stars.

NASA will soon explore at least one ice world, albeit within the bounds of the solar system: Its Europa Clipper probe is scheduled to launch toward the Jovian system in 2024 and arrive six years later.

[...] "The prospect of oceans hidden under ice sheets expands our galaxy's potential for more habitable worlds," Méndez said. "The major challenge is to devise ways to detect these habitats by future telescopes."

Journal Reference:
Ojha, L., Troncone, B., Buffo, J. et al. Liquid water on cold exo-Earths via basal melting of ice sheets. Nat Commun 13, 7521 (2022). https://doi.org/10.1038/s41467-022-35187-4


Original Submission

posted by hubie on Tuesday October 31 2023, @05:54AM   Printer-friendly
from the selling-digital-snakes-is-illegal dept.

I'm banned for life from advertising on Meta because I teach Python:

I'm a full-time instructor in Python and Pandas, teaching in-person courses at companies around the world (e.g., Apple and Cisco) and with a growing host of online products, including video courses and a paid newsletter with weekly Pandas exercises. Like many online entrepreneurs, I've experimented with a host of different products over the years, some free and some paid. And like many other online entrepreneurs, I've had some hit products and some real duds.

A number of years ago, I decided to advertise some of my products on Facebook. I ran a bunch of ads, none of which were particularly successful, mostly because I didn't put a lot of effort into them. I decided to try other things, and basically forgot about my advertising account.

It was only a year or so ago that I thought that maybe, just maybe, I should do some advertising on Facebook (now Meta). I went to my advertising page, and was a bit surprised to see that my account had been suspended for violating Meta's advertising rules. I decided that this was weird, but didn't think about it too much more, and went on to do other, more productive things.

Just a few months ago, I again visited my ad management page, and again saw the notice that I was not allowed to advertise because I had violated their rules. This time, for whatever reason, I decided that I was going to look into this further. [...] I got an e-mail from Meta saying that they had reviewed my case, I had definitely violated their policy, and now I was banned for life from ever advertising on a Meta platform.

All of this seemed utterly bizarre to me. What could I possibly have said or done that would get me permanently restricted? And is there any way that I can get out of this situation?

[...] The good news? I got an answer right away from a friend on LinkedIn. He told me that he also had problems advertising his Python training courses on Meta platforms because — get this — Meta thought that he was dealing in live animals, which is forbidden.

That's right: I teach courses in Python and Pandas. Never mind that the first is a programming language and the second is a library for data analysis in Python. Meta's AI system noticed that I was talking about Python and Pandas, assumed that I was talking about the animals (not the technology), and banned me. The appeal that I asked for wasn't reviewed by a human, but was reviewed by another bot, which (not surprisingly) made a similar assessment.

[...] The first friend looked into it, and found that there was nothing to be done. That's because Meta has a data-retention policy of only 180 days, and because my account was suspended more than one year before I asked people to look into it, all of the evidence is now gone. Which means that there's no way to reinstate my advertising account.

[...] The fact that both the original judgment and the appeal were handled by AI is pretty ridiculous.


Original Submission