Arthur T Knackerbracket has processed the following story:
The Android 15 update didn’t include a battery charging limit feature that some had expected, but it got everyone talking about the best way to maximize your battery life. With seven-year update commitments from some Android phone manufacturers, the onus is now on us to stop our batteries from degrading too far.
What was most striking to us about the results of this poll was that less than 12% of you don’t use any battery management tricks at all. With battery degradation being a slow and subtle process, we assumed that many more people wouldn’t consider it. But hey, we should have given our tech-savvy readers more credit.
While it’s clear that you do indeed take battery health seriously, there isn’t a consensus amongst the responses on the best way to address it. Almost 33% of you follow the advice to only charge the battery to 80% full, while slightly more of you use adaptive charging. The latter approach intelligently slows the charging process based on usage patterns to reduce battery wear and extend its overall lifespan. This could include fast charging to around 80%, then slowly topping up from there when you’re not in a hurry to unplug. 7.6% of you say you use two or more of these tricks, which shows real dedication to getting the most out of your battery.
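For the curious, here is a minimal sketch of what that kind of adaptive-charging decision could look like in code. The 80% threshold and the "hours until the phone is needed" input are illustrative assumptions for this example, not any vendor's actual implementation.

```python
# Illustrative sketch of an adaptive-charging policy: fast charge to ~80%,
# then trickle-charge unless the phone will be needed soon. The thresholds
# below are assumptions for the example only.
def charging_rate(level_percent: float, hours_until_needed: float) -> str:
    if level_percent < 80:
        return "fast"      # quickly reach the ~80% comfort zone
    if hours_until_needed > 2:
        return "trickle"   # plenty of time left: top up slowly to limit wear
    return "fast"          # phone is needed soon: finish charging now

print(charging_rate(55, 7.0))   # fast
print(charging_rate(85, 7.0))   # trickle
print(charging_rate(90, 1.0))   # fast
```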
If you don’t use any battery management tricks and you’re keen to start now, it’s worth checking out the battery settings in your device. It only takes 30 seconds and could keep your battery healthy for another six months.
janrinok: I charge my 'phone when the battery is getting low, usually overnight, and in the morning it is 100%. I purchased my 'phone in 2016 and it is still using the original battery. It might (and I am not certain) just be beginning to lose a fraction of its charge over a 2 day period. Do you use a special charging cycle? How long does a battery normally last you? And do you replace the battery or just upgrade to the latest and greatest on the market?
Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn't Change That:
Some people just don't know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country – the codes that protect product, building, and environmental safety – and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.
They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes – like the National Electrical Code – can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.
[...] We've seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don't do it for the royalties. You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.
Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created "reading rooms" for some of their standards, and they show us that the SDOs' idea of "making available" is "making available as if it was 1999." The texts are not searchable, cannot be printed, downloaded, highlighted, or bookmarked for later viewing, and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that wasn't bad enough, these reading rooms are inaccessible to print-disabled people altogether.
It's a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it's a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share.
A brief, jargon-free explainer on the freer future of the social web:
Idealist nerds have a long history of giving terribly confusing names to potentially revolutionary technology. So it goes with the Fediverse, a portmanteau of "Federation" and "Universe," and the potential future of the social internet. But what does that mean?
Put simply, the Fediverse is the collective name for a bunch of different social networks and platforms that are connected to one another. Users on any of these services can follow users on any other one and respond to, like, and share posts.
There are a lot of articles and websites that explain this concept in detail, but most of them get bogged down in technical language pretty quickly. I'd like to avoid that, so here's my good faith attempt to explain what the Fediverse is in plain English.
First, though, let's talk about email.
Email is decentralized (and why that matters for the Fediverse)
Anyone with an email address can email anyone else. Gmail users, for example, aren't limited to talking with other Gmail users—they can send messages to Outlook users, Yahoo Mail users, and even people who are running their own email servers in their basement. Basically, anyone with an email address can write anyone else with an email address. To put it another way, email is decentralized.
There is no one company or institution that is in charge of email—there are many different email providers, all of which are compatible with each other. This is because email is an open protocol, one that anyone who wants to can build a service for.
The largest social media networks do not work this way right now. You can't follow an X user via Facebook, for example, or subscribe to a Reddit community from Tumblr. That's why all of those websites are full of screenshots from the other ones—people want to share posts from other sites but there's no good way to do so. That's a problem the Fediverse seeks to remedy.
The Fediverse is an attempt to make social networks more like email—that is, to allow users on different services to follow and interact with each other anywhere they want, without signing up for a million different accounts.
[...] This is the promise of the Fediverse: You use whatever social network you want to use and connect with people on whatever social network they want to use. And there are a few other perks. When I quit using Twitter a couple years ago (before it became X), I left all of my followers behind. That's not how it works with the Fediverse: You can switch from one service to another and take your followers with you. That's the kind of freedom you can't get from a centralized system.
A number of companies and enthusiasts are working on other ways to connect with the Fediverse. Wordpress offers a plugin that allows bloggers to share their posts, for example—replies show up as comments. Flipboard, the news reading app, recently added the option to follow Fediverse users from within the app, and email newsletter platform Ghost is also working on similar functionality. And there are hacks to connect other, non-Fediverse networks—you can connect Bluesky to the Fediverse with a bit of work, for example.
[...] All of this is possible because the Fediverse is based on an open protocol that anyone can build on. The hope is that, over time, more services will offer integrations and social networking will become as open as email. Is that what will happen for sure? I don't know. And the Fediverse, like anything that exists on the internet, has its share of problems. Moderation, for example, is a huge challenge, and bigger platforms moving into the space could make it harder.
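If you're curious what that open protocol looks like up close, here is a minimal sketch that looks up a Fediverse account via WebFinger and then fetches its public ActivityPub actor document. The handle below is a placeholder, and the snippet assumes a Mastodon-compatible server that answers unauthenticated requests.

```python
import requests

# Minimal sketch: discover a Fediverse account via WebFinger (RFC 7033) and
# fetch its ActivityPub actor document. The handle is a placeholder.
handle = "someuser@mastodon.social"       # hypothetical account
user, server = handle.split("@")

# Step 1: WebFinger maps the human-readable handle to an actor URL.
wf = requests.get(
    f"https://{server}/.well-known/webfinger",
    params={"resource": f"acct:{handle}"},
    timeout=10,
).json()
actor_url = next(
    link["href"] for link in wf["links"]
    if link.get("rel") == "self" and link.get("type") == "application/activity+json"
)

# Step 2: fetch the actor document itself (profile, inbox, outbox, followers).
actor = requests.get(
    actor_url, headers={"Accept": "application/activity+json"}, timeout=10
).json()
print(actor["preferredUsername"], actor["inbox"], actor["outbox"])
```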
I'm only scratching the surface with this explanation—there's so much more I could dig into. For the most part, though, when you hear "the Fediverse" you'll now know what it means: a series of social networks and platforms that are connected to each other. You'll hopefully hear a lot more about it in the years to come.
It's been a month and a half since Darl McBride kicked the bucket (who?), and nary a mention in the press. But then, perhaps most Linux followers today were not alive or old enough to have experienced Mr. McBride's assault on Linux, which could very well have ended its life as Open Source. Of course I'm talking about way back in the Stone Age when SCO sued IBM, Red Hat, Novell, and others for ownership of the Linux kernel. Those of us who were around followed the now-defunct Groklaw for the latest dirt on this legal entanglement that is now for the most part forgotten.
From the wikipedia link:
McBride has been controversial in the information technology industry for his role as the CEO of SCO in asserting broad claims of intellectual property ownership of the various UNIX operating systems derivatives developed by IBM under a license originally granted by AT&T Corporation. Open source, free software and Linux developers and supporters, and the computer industry at large have been outspoken and highly critical and skeptical of McBride and SCO's claims in these areas.
Ty Mattingly, a former Novell Executive Vice President and co-worker of McBride was quoted as saying, "Congratulations. In a few short months you've dethroned Bill Gates as the most hated man in the industry."[6] McBride claimed he received death threats as a result of the SCO-IBM lawsuits, and had a package of worms mailed to his home, prompting him to carry a firearm and to employ multiple bodyguards.[4] During an interview, when asked about the popularity of the lawsuit against IBM, McBride answered: "We're either right or we're not. If we're wrong, we deserve people throwing rocks at us."[7]
Under McBride's leadership, SCO saw a surge in stock price from under $2 in March 2003 to over $20 just six months later. Following several adverse rulings issued by the United States District Court in Utah, SCO's stock value dropped to under $1. On April 27, 2007, NASDAQ served notice that the company would be delisted if SCO's stock price did not increase above $1 for a minimum of 10 consecutive days over the course of 180 business days, ending October 22, 2007.[8]
These prices are a joke, right?
Google is finally selling refurbished Pixel phones on its own web store like many other phone makers do. This is great to see because the last thing this world needs is another 500 grams of e-waste in a landfill when it can be repaired and resold. It would be even better if Google were serious about it.
I love the idea of buying a refurbished phone. You get a warranty from the manufacturer and you can trust that anything wrong with the hardware was addressed by someone who knows what they are doing so it works like it is supposed to work. You don't get that when you buy a used phone, so a lot of folks will spend a little more for a phone that was made whole once returned.
There's a right way to do it, and then there is the way Google is doing it.
A look at the phones currently for sale highlights the problems with Google's strategy. We see the Pixel 6a, Pixel 6, Pixel 6 Pro, Pixel 7, and Pixel 7 Pro. You might already notice the first issue: three of these five phones will not get updated to Android 16.
The Pixel 6 series phones will stop seeing platform updates in October 2024, or within days after this article was written. They will still get security updates until 2026, which I think are more important than version updates, but most people don't see it that way. Buying a "good as new" phone directly from Google and not having access to some of the new features we see just a year later with the new OS upgrade will sour a lot of buyers on the program.
[...] The other, and potentially bigger, problem is the prices. I wouldn't buy a refurbished phone from Google for the amount it is asking because it's too much.
You can buy a brand new Pixel 7 Pro cheaper than what Google is asking for a refurbished Pixel 6 Pro. Brand new, still in the wrapper, and a model year newer. Oh, and it will be updated to Android 16, too. If you want to spend $539, buy the Pixel 7 Pro for $399 and some good earbuds instead of spending it on a refurbished phone that's a year older.
It's easy to laugh and think this is just Google being Google again, but this is a serious problem. By pricing these phones so high, people aren't going to buy them. Google isn't going to keep them forever, and in the end, they end up being stripped of any material that's easy to get while the rest goes into a landfill.
[...] Google can afford to sell these refurbs dirt cheap and it gets even more eyeballs on Google's services so Google makes even more money selling ads. I don't know why the company won't do it, but it should.
The advance was incremental at best. So why did so many think it was a breakthrough?
There's little doubt that some of the most important pillars of modern cryptography will tumble spectacularly once quantum computing, now in its infancy, matures sufficiently. Some experts say that could be in the next couple decades. Others say it could take longer. No one knows.
The uncertainty leaves a giant vacuum that can be filled with alarmist pronouncements that the world is close to seeing the downfall of cryptography as we know it. The false pronouncements can take on a life of their own as they're repeated by marketers looking to peddle post-quantum cryptography snake oil and journalists tricked into thinking the findings are real. And a new episode of exaggerated research has been playing out for the past few weeks.
The last time the PQC—short for post-quantum cryptography—hype train gained this much traction was in early 2023, when scientists presented findings that claimed, at long last, to put the quantum-enabled cracking of the widely used RSA encryption scheme within reach. The claims were repeated over and over, just as claims about the research released in September have been for the past three weeks.
A few weeks after the 2023 paper came to light, a more mundane truth emerged that had escaped the notice of all those claiming the research represented the imminent demise of RSA—the research relied on Schnorr's algorithm (not to be confused with Shor's algorithm). The algorithm, based on a 2021 analysis by cryptographer Peter Schnorr, had been widely debunked two years earlier. Specifically, critics said, there was no evidence supporting the authors' claims that Schnorr's algorithm achieved polynomial time, as opposed to the glacial pace of subexponential time achieved with classical algorithms.
Once it became well-known that the validity of the 2023 paper rested solely on Schnorr's algorithm, that research was also debunked.
Three weeks ago, panic erupted again when the South China Morning Post reported that scientists in that country had discovered a "breakthrough" in quantum computing attacks that posed a "real and substantial threat" to "military-grade encryption." The news outlet quoted paper co-author Wang Chao of Shanghai University as saying, "This is the first time that a real quantum computer has posed a real and substantial threat to multiple full-scale SPN [substitution–permutation networks] structured algorithms in use today."
Among the many problems with the article was its failure to link to the paper—reportedly published in September in the Chinese-language academic publication Chinese Journal of Computers—at all. Citing Wang, the article said that the paper wasn't being published for the time being "due to the sensitivity of the topic." Since then, the South China Morning Post article has been quietly revised to remove the "military-grade encryption" reference.
With no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn't published in September, as the news article reported, but it was written by the same researchers and referenced the "D-Wave Advantage"—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title.
Some of the follow-on articles bought the misinformation hook, line, and sinker, repeating incorrectly that the fall of RSA was upon us. People got that idea because the May paper claimed to have used a D-Wave system to factor a 50-bit RSA integer. Other publications correctly debunked the claims in the South China Morning Post but mistakenly cited the May paper and noted the inconsistencies between what it claimed and what the news outlet reported.
Over the weekend, someone unearthed the correct paper, which, as it turns out, had been available on the Chinese Journal of Computers website the whole time. Most of the paper is written in Chinese, but the abstract, fortunately, is in English. It reports using a D-Wave-enabled quantum annealer to find "integral distinguishers up to 9-rounds" in the encryption algorithms known as PRESENT, GIFT-64, and RECTANGLE. All three are symmetric encryption algorithms built on a substitution-permutation network (SPN) structure.
"This marks the first practical attack on multiple full-scale SPN structure symmetric cipher algorithms using a real quantum computer," the paper states. "Additionally, this is the first instance where quantum computing attacks on multiple SPN structure symmetric cipher algorithms have achieved the performance of the traditional mathematical methods."
The main contribution in the September paper is the process the researchers used to find integral distinguishers in up to nine rounds of the three previously mentioned algorithms.
[...] The paper makes no reference to AES or RSA and never claims to break anything. Instead, it describes a way to use D-Wave-enabled quantum annealing to find the integral distinguisher. Classical attacks have had the optimized capability to find the same integral distinguishers for years. David Jao, a professor specializing in PQC at the University of Waterloo in Canada, likened the research to finding a new lock-picking technique. The end result is the same, but the method is new.
[...] This isn't the first time the South China Morning Post has fueled undue panic about the imminent fall of widely used encryption algorithms. Last year's hype train, mentioned earlier in this article, was touched off by coverage by the same publication that claimed researchers found a factorization method that could break a 2,048-bit RSA key using a quantum system with just 372 qubits. People who follow PQC should be especially wary when seeking news there.
The coverage of the September paper is especially overblown because symmetric encryption, unlike RSA and its asymmetric siblings, is widely believed to be safe from quantum computing, as long as key sizes are sufficient. PQC experts are confident that AES-256 will resist all known quantum attacks.
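As a back-of-the-envelope illustration of why that confidence holds (my own arithmetic, not from the article): Grover's algorithm offers at most a quadratic speedup for brute-force key search, so an n-bit symmetric key retains roughly n/2 bits of effective security against a quantum attacker.

```python
# Grover's algorithm gives at most a quadratic speedup for unstructured
# key search, so an n-bit symmetric key keeps roughly n/2 bits of
# effective security. AES-256 still leaves on the order of 2^128 work.
for key_bits in (128, 256):
    effective_bits = key_bits // 2          # ~square root of the search space
    print(f"AES-{key_bits}: ~2^{key_bits} classical trials, "
          f"~2^{effective_bits} Grover iterations")
```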
[...] As a reminder, current estimates are that quantum cracking of a single 2048-bit RSA key would require a computer with 20 million qubits running in superposition for about eight hours. For context, quantum computers maxed out at 433 qubits in 2022 and 1,000 qubits last year. (A qubit is a basic unit of quantum computing, analogous to the binary bit in classical computing. Comparisons between qubits in true quantum systems and quantum annealers aren't uniform.) So even when quantum computing matures sufficiently to break vulnerable algorithms, it could take decades or longer before the majority of keys are cracked.
The upshot of this latest episode is that while quantum computing will almost undoubtedly topple many of the most widely used forms of encryption used today, that calamitous event won't happen anytime soon. It's important that industries and researchers move swiftly to devise quantum-resistant algorithms and implement them widely. At the same time, people should take steps not to get steamrolled by the PQC hype train.
More follow up on this story with a good explanation of what was actually achieved.
AT&T obtained subsidies for duplicate users and non-users, will pay $2.3 million:
AT&T improperly obtained money from a government-run broadband discount program by submitting duplicate requests and by claiming subsidies for thousands of subscribers who weren't using AT&T's service. AT&T obtained funding based on false certifications it made under penalty of perjury.
AT&T on Friday agreed to pay $2.3 million in a consent decree with the Federal Communications Commission's Enforcement Bureau. That includes a civil penalty of $1,921,068 and a repayment of $378,922 to the US Treasury.
The settlement fully resolves the FCC investigation into AT&T's apparent violations, the consent decree said. "AT&T admits for the purpose of this Consent Decree and for Commission civil enforcement purposes" that the findings described by the FCC "contain a true and accurate description of the facts underlying the Investigation," the document said.
In addition to the civil penalty and repayment, AT&T agreed to a compliance plan designed to prevent further violations. AT&T last week reported quarterly revenue of $30.2 billion.
AT&T made the excessive reimbursement claims to the Emergency Broadband Benefit Program (EBB), which the US formed in response to the COVID-19 pandemic, and to the EBB's successor program, the Affordable Connectivity Program (ACP). The FCC said its rules "are vital to protecting these Programs and their resources from waste, fraud, and abuse."
We contacted AT&T today and asked for an explanation of what caused the violations. Instead, AT&T provided Ars with a statement that praised itself for participating in the federal discount programs.
"When the federal government acted during the COVID-19 pandemic to stand up the Emergency Broadband Benefit program, and then the Affordable Connectivity Program, we quickly implemented both programs to provide more low-cost Internet options for our customers. We take compliance with federal programs like these seriously and appreciate the collaboration with the FCC to reach a solution on this matter," AT&T said.
A Russian court claims that Google currently owes it 2,000,000,000,000,000,000,000,000,000,000,000,000 rubles -- 2 undecillion, if you will, a number with 36 trailing zeros.
This is due to a fine of 100,000 rubles per day that has been growing exponentially since sometime in 2021, imposed because Google blocked various Russian channels on YouTube.
The slight problem is that the fine is now so large that it is many, many times the entire world's combined GDP. I don't think they should expect to get paid anytime soon.
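As a rough sense of scale (my own arithmetic, not from any report): getting from a 100,000-ruble starting fine to roughly 2 x 10^36 rubles takes on the order of a hundred doublings of the penalty, whatever the actual compounding schedule was.

```python
import math

# Rough scale check: how many doublings turn a 100,000-ruble penalty into
# ~2 undecillion (2 * 10**36) rubles? The compounding schedule is not
# specified here; this only shows the order of magnitude involved.
base_fine = 100_000        # rubles per day, per the court order
total = 2 * 10 ** 36       # the reported figure
doublings = math.log2(total / base_fine)
print(f"about {doublings:.0f} doublings")   # ~104
```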
Arthur T Knackerbracket has processed the following story:
Scientists in China have pulled off a remarkable feat worthy of Victor Frankenstein: reviving pigs’ brains up to 50 minutes after a complete loss of blood circulation. The macabre accomplishment could someday lead to advances in keeping people’s brains intact and healthy for longer while resuscitating them.
[...] Past studies have suggested that liver function plays a key role in how well the rest of the body does during a cardiac arrest. People with pre-existing liver disease, for instance, seem to have a higher risk of dying from a cardiac arrest. So the researchers, based primarily at Sun Yat-Sen University, decided to test whether keeping the livers of Tibetan minipigs functionally alive could have a positive effect on their brains’ survivability after resuscitation.
All of the pigs had blood flow to their brains stopped, but some were hooked up to a life support system that kept their liver’s circulation going. The scientists then tried to revive the pigs’ brains after a certain period of time using the same life support system. Afterward, the pigs were euthanized and compared to a control group of pigs that had their blood flow left alone.
When the pigs had blood flow to both organs shut down, their brains were substantially more damaged following resuscitation, the researchers found. But the brains of pigs that had their livers supported tended to fare much better, with fewer signs of injury and a restoration of electrical activity that lasted up to six hours. The researchers were also able to restore brain activity in these pigs up to 50 minutes after blood flow to the brain was stopped.
[...] Of course, this doesn’t mean that scientists can now return anyone back from the dead perfectly intact with just a little boost to their liver. There are many damaging bodily changes that occur soon after a cardiac arrest, not just those in the brain and liver. And certainly more research will have to be done to confirm the team’s conclusions that the liver is critical to restored brain function. But if this work does continue to pay off, it could someday lead to practical interventions that improve the odds of successful resuscitation in people.
Journal Reference: DOI: https://doi.org/10.1038/s44321-024-00140-z
Arthur T Knackerbracket has processed the following story:
If you believe Mark Zuckerberg, Meta's AI large language model (LLM) Llama 3 is open source.
It's not, despite what he says. The Open Source Initiative (OSI) spells it out in the Open Source Definition, and Llama 3's license – with clauses on litigation and branding – flunks it on several grounds.
Meta, unfortunately, is far from unique in wanting to claim that some of its software and models are open source. Indeed, the concept has its own name: open washing.
This is a deceptive practice in which companies or organizations present their products, services, or processes as "open" when they are not truly open in the spirit of transparency, access to information, participation, and knowledge sharing. This term is modeled after "greenwashing" and was coined by Michelle Thorne, an internet and climate policy scholar, in 2009.
With the rise of AI, open washing has become commonplace, as shown in a recent study. Andreas Liesenfeld and Mark Dingemanse of Radboud University's Center for Language Studies surveyed 45 text and text-to-image models that claim to be open. The pair found that while a handful of lesser-known LLMs, such as AllenAI's OLMo and BigScience Workshop + HuggingFace's BloomZ, could be considered open, most are not. Would it surprise you to know that according to the study, the big-name ones from Google, Meta, and Microsoft aren't? I didn't think so.
But why do companies do this? Once upon a time, companies avoided open source like the plague. Steve Ballmer famously proclaimed in 2001 that "Linux is a cancer," because: "The way the license is written, if you use any open source software, you have to make the rest of your software open source." But that was a long time ago. Today, open source is seen as a good thing. Open washing enables companies to capitalize on the positive perception of open source and open practices without actually committing to them. This can help improve their public image and appeal to consumers who value transparency and openness.
[...] That's not to say all the big-name AI companies are lying about their open source street cred. For example, IBM's Granite 3.0 LLMs really are open source under the Apache 2 license.
Why is this important? Why do people like me insist that we properly use the term open source? It's not like, after all, the OSI is a government or regulatory organization. It's not. It's just a nonprofit that has created some very useful guidelines.
[...] If we need to check every license for every bit of code, "developers are going to go to legal reviews every time you want to use a new library. Companies are going to be scared to publish things on the internet if they're not clear about the liabilities they're encountering when that source code becomes public."
Lorenc continued: "You might think this is only a big company problem, but it's not. It's a shared problem. Everybody who uses open source is going to be affected by this. It could cause entire projects to stop working. Security bugs aren't going to get fixed. Maintenance is going to get a lot harder. We must act together to preserve and defend the definition of open source. Otherwise, the lawyers are going to have to come back. No one wants the lawyers to come back."
I must add that I know a lot of IP lawyers. They do not need or want these headaches. Real open source licenses make life easier for everyone: businesses, programmers, and lawyers. Introducing "open except for someone who might compete with us" or "open except for someone who might deploy the code on a cloud" is just asking for trouble.
In the end, open washing will dirty the legal, business, and development work for everyone. Including, ironically, the shortsighted companies now supporting this approach. After all, almost all their work, especially in AI, is ultimately based on open source.
From the Hollywood Reporter: Apple is turning the classic computer game Oregon Trail into a big budget action-comedy movie.
Grab your wagons and oxen, and get ready to ford a river: A movie adaptation of the popular grade school computer game Oregon Trail is in development at Apple.
The studio landed the film pitch, still in early development, that has Will Speck and Josh Gordon attached to direct and produce. EGOT winners Benj Pasek and Justin Paul will provide original music and produce via their Ampersand production banner. Sources tell The Hollywood Reporter that the movie will feature a couple of original musical numbers in the vein of Barbie.
Sounds like a good day to die of dysentery.
The other thing is that in order to learn, children need to have fun. But they have fun by really being pushed to explore and create and make new things that are personally meaningful. So you need open-ended environments that allow children to explore and express themselves.
(This Inventor Is Molding Tomorrow's Inventors, IEEE Spectrum)
IEEE Spectrum has a (short) sit-down with Marina Umaschi Bers, co-creator of the ScratchJr programming language and the KIBO robotics kits. Both tools are aimed at teaching kids to code and develop STEAM skills from a very young age. Other examples of such tools are the mTiny robot toy and the Micro:bit computer.
Being able to code is a new literacy, remarks Professor Bers, and like with reading and writing, we need to adapt our learning tools to children's developmental ages. One idea here is that of a "coding playground" where it's not about following step-by-step plans, but about inventing games, pretend play and creating anything children can imagine. She is currently working on a project to bring such a playground outside: integrating motors, sensors and other devices in physical playgrounds, "to bolster computational thinking through play".
Given how fast complicated toys are being thrown aside by young children, in contrast to a simple ball -- or a meccano set, for the engineering types -- I have my doubts.
At least one of the tools mentioned above is aimed at toddlers. So put aside your model steam locomotive, oh fellow 'lentil, and advise me: from what age do you think we should try to steer kids into "coding" or "developing", and which tools should or could be used for this?
Feel free to wax nostalgic about toys of days past, of course, in this, one of your favorite playgrounds: the soylentnews comment editor.
A new study reveals it would take far longer than the lifespan of our universe for a typing monkey to randomly produce Shakespeare. So, while the Infinite Monkey Theorem is true, it is also somewhat misleading.
A monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of Shakespeare purely by chance, according to the Infinite Monkey Theorem.
This widely known thought experiment is used to help us understand the principles of probability and randomness, and how chance can lead to unexpected outcomes. The idea has been referenced in pop culture from "The Simpsons" to "The Hitchhiker's Guide to the Galaxy" and on TikTok.
However, a new study reveals it would take an unbelievably huge amount of time—far longer than the lifespan of our universe—for a typing monkey to randomly produce Shakespeare. So, while the theorem is true, it is also somewhat misleading.
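To put rough numbers on that, here is a back-of-the-envelope sketch. The 30-key typewriter and one keystroke per second are my own illustrative assumptions, not the paper's exact model.

```python
# Back-of-the-envelope: a specific n-character phrase shows up roughly once
# per KEYS**n keystrokes when keys are hit uniformly at random (ignoring
# small overlap corrections). Keyboard size and speed are assumptions.
KEYS = 30                          # assumed keys on the typewriter
RATE = 1                           # assumed keystrokes per second
phrase = "to be or not to be"      # 18 characters

expected_keystrokes = KEYS ** len(phrase)
years = expected_keystrokes / RATE / (60 * 60 * 24 * 365.25)
print(f"~{years:.1e} years on average for '{phrase}'")   # ~1e19 years
# For comparison, the universe is roughly 1.4e10 years old.
```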
More information: Stephen Woodcock et al, A numerical evaluation of the Finite Monkeys Theorem, Franklin Open (2024). DOI: 10.1016/j.fraope.2024.100171
[Source]: University of Technology, Sydney
[Journal Ref]: A numerical evaluation of the Finite Monkeys Theorem
Voyager 1 Ghosts NASA, Forcing Use of Backup Radio Dormant Since 1981:
Voyager 1 can't seem to catch a break. The interstellar traveler recently recovered from a thruster glitch that nearly ended its mission, and now NASA's aging probe has stopped sending data to ground control due to an unknown issue.
On Monday, NASA revealed that Voyager 1 recently experienced a brief pause in communication after turning off one of its radio transmitters. The space agency is now relying on a second radio transmitter that hasn't been used since 1981 to communicate with Voyager 1 until engineers can figure out the underlying issue behind the glitch.
The flight team behind the mission first realized something was amiss with Voyager 1's communication when the spacecraft failed to respond to a command. On October 16, the team used NASA's Deep Space Network (DSN)—a global array of giant radio antennas—to beam instructions to Voyager 1, directing it to turn on one of its heaters.
Voyager 1 should've sent back engineering data for the team to determine how the spacecraft responded to the command. This process normally takes a couple of days, as the command takes about 23 hours to travel more than 15 billion miles (24 billion kilometers) to the spacecraft and another 23 hours for the flight team to receive a signal back. Instead, the command seems to have triggered the spacecraft's fault protection system, which autonomously responds to onboard issues affecting the mission.
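That figure checks out with simple arithmetic (my own quick check, not NASA's): at the speed of light, a signal needs roughly 22 hours to cover about 24 billion kilometers.

```python
# Quick sanity check of the quoted one-way signal time to Voyager 1.
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 24e9                  # ~24 billion km, per the article
one_way_hours = distance_km / SPEED_OF_LIGHT_KM_S / 3600
print(f"~{one_way_hours:.1f} hours each way")   # ~22 hours, close to the ~23 quoted
```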
The spacecraft's fault protection system lowered the rate at which its radio transmitter was sending back data to use less power, according to NASA. However, while conserving the spacecraft's power, this mode also changes the X-band radio signal, a frequency range within the electromagnetic spectrum that the DSN's antennas listen for.
The flight team was able to locate the signal a day later but then, on October 19, communication with Voyager 1 stopped entirely. Voyager 1's fault protection system appeared to have been triggered twice more, and it turned off the X-band transmitter altogether. The spacecraft switched to a second radio transmitter called the S-band, which uses less power but transmits a significantly fainter signal. Voyager 1's S-band transmitter hadn't been used since 1981, and the flight team was not sure its signal could be detected due to the spacecraft being much farther away today than it was 43 years ago.
Still, the team of engineers didn't want to risk sending another signal to the X-band transmitter, so they decided to give the fainter S-band a shot. On October 22, the team sent a command to confirm whether the S-band transmitter was working and was finally able to reconnect with Voyager 1 two days later. NASA engineers are currently working to determine what may have triggered the spacecraft's fault protection system as they work to resolve the issue.
Voyager 1 launched in 1977, less than a month after its twin probe, Voyager 2, began its journey to space. The spacecraft took a faster route, exiting the asteroid belt earlier than its twin and making close encounters with Jupiter and Saturn. Along the way, it discovered two Jovian moons, Thebe and Metis, as well as five new moons and a new ring called the G-ring around Saturn. Voyager 1 ventured into interstellar space in August 2012, becoming the first spacecraft to cross the boundary of our solar system.
The spacecraft has been flying for 47 years, and all that time in deep space has taken a toll on the interstellar probe. NASA engineers have had to come up with creative ways to keep the iconic mission alive. The team recently switched to a different set of thrusters than the one the spacecraft had been relying on, which became clogged with silicon dioxide over the years, using a delicate procedure to preserve Voyager 1's power. Earlier this year, the team of engineers also fixed a communication glitch that had been causing Voyager 1 to transmit gibberish to ground control.
Voyager 1 is no spring chicken, and its upkeep has not been an easy task over the years, especially from billions of miles away. But all-in-all, humanity's long-standing interstellar probe is well worth the effort.
Arthur T Knackerbracket has processed the following story:
After countless years pondering the idea, the FCC in 2022 announced that it would start politely asking the nation’s lumbering telecom monopolies to affix a sort of “nutrition label” onto broadband connections. The labels will clearly disclose the speed and latency (ping) of your connection, any hidden fees users will encounter, and whether the connection comes with usage caps or “overage fees.”
Initially just a voluntary measure, bigger ISPs had to start using the labels back in April. Smaller ISPs had to start using them as of October 10. In most instances they’re supposed to look something like this [image].
As far as regulatory efforts go, it’s not the worst idea. Transparency is lacking in broadband land, and U.S. broadband and cable companies have a 30+ year history of ripping off consumers with an absolute cavalcade of weird restrictions, fees, surcharges, and connection limitations.
Here’s the thing though: transparently knowing you’re being ripped off doesn’t necessarily stop you from being ripped off. A huge number of Americans live under a broadband monopoly or duopoly, meaning they have no other choice in broadband access. As such, Comcast or AT&T or Verizon can rip you off, and you have absolutely no alternative options that allow you to vote with your wallet.
That wouldn’t be as much of a problem if U.S. federal regulators had any interest in reining in regional telecom monopoly power, but they don’t. In fact, members of both parties are historically incapable of even admitting monopoly harm exists. Democrats are notably better at at least trying to do something, even if that something often winds up being decorative regulatory theater.
The other problem: with the help of a corrupt Supreme Court, telecoms and their Republican and libertarian besties are currently engaged in an effort to dismantle what’s left of the FCC’s consumer protection authority under the pretense this unleashes “free market innovation.” It, of course, doesn’t; regional monopolies like Comcast just double down on all of their worst impulses, unchecked.
If successful, even fairly basic efforts like this one won’t be spared, as the FCC won’t have the authority to enforce much of anything.
It’s all very demonstrative of a U.S. telecom industry that’s been broken by monopoly power, a lack of competition, and regulatory capture. As a result, even the most basic attempts at consumer protection are constantly undermined by folks who’ve dressed up greed as some elaborate and intellectual ethos.