Arthur T Knackerbracket has processed the following story:
Much of mathematics is driven by intuition, by a deep-rooted sense of what should be true. But sometimes instinct can lead a mathematician astray. Early evidence might not represent the bigger picture; a statement might seem obvious, only for some hidden subtlety to reveal itself.
Unexpectedly, three mathematicians have now shown that a well-known hypothesis in probability theory called the bunkbed conjecture falls into this category. The conjecture — which is about the different ways you can navigate the mathematical mazes called graphs when they’re stacked on top of each other like bunk beds — seemed natural, even self-evident. “Anything our brain tells us suggests the conjecture should be true,” said Maria Chudnovsky, a graph theorist at Princeton University who was not involved in the new work.
[...] In the mid-1980s, a Dutch physicist named Pieter Kasteleyn wanted to mathematically prove an assertion about how liquids flow throughout porous solids. His work led him to pose the bunkbed conjecture.
[...] The bunkbed conjecture says that the probability of finding the path on the bottom bunk is always greater than or equal to the probability of finding the path that jumps to the top bunk. It doesn’t matter what graph you start with, or how many vertical posts you draw between the bunks, or which starting and ending vertices you choose.
For decades, mathematicians thought this had to be true. Their intuition told them that moving around on just one bunk should be easier than moving between two — that the extra vertical jump required to get from the lower to the upper bunk should significantly limit the number of available paths.
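For readers who want to poke at the setup themselves, here is a minimal Monte Carlo sketch in Python. It assumes the standard formulation in which each edge of the bunkbed graph survives independently with probability 1/2; the base graph, post placement, and vertex choices are illustrative, and a simulation like this can only estimate the two probabilities, not settle the conjecture (the published counterexample is far too large, and its margin far too tiny, to find this way).

```python
# Monte Carlo sketch of the bunkbed setup. Assumption (standard
# formulation, not spelled out above): every edge is kept independently
# with probability 1/2, and we compare P(u connected to v, both on the
# bottom bunk) against P(u connected to v's mirror image v' up top).
import random
from collections import deque

def make_bunkbed(base_edges, n, posts):
    """Bottom copy on vertices 0..n-1, top copy on n..2n-1, plus posts."""
    edges = [(a, b) for a, b in base_edges]          # bottom bunk
    edges += [(a + n, b + n) for a, b in base_edges]  # top bunk
    edges += [(p, p + n) for p in posts]              # vertical posts
    return edges

def connected(u, v, edges, kept):
    """BFS from u to v over the surviving edges only."""
    adj = {}
    for (a, b), keep in zip(edges, kept):
        if keep:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

base = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle as the base graph
n = 4
edges = make_bunkbed(base, n, posts=[0, 2])

u, v, trials = 0, 1, 100_000
bottom = top = 0
for _ in range(trials):
    kept = [random.random() < 0.5 for _ in edges]
    bottom += connected(u, v, edges, kept)    # v on the bottom bunk
    top += connected(u, v + n, edges, kept)   # v's mirror on the top bunk
print(f"P(u ~ v)  ≈ {bottom / trials:.4f}")
print(f"P(u ~ v') ≈ {top / trials:.4f}")
```

On small graphs like this one, the bottom-bunk probability does come out at least as large, which is exactly why the conjecture looked so safe for so long.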
[...] In June, Lawrence Hollom of the University of Cambridge disproved a version of the bunkbed problem in a different context. Instead of dealing with graphs, this formulation of the conjecture asked about objects called hypergraphs. In a hypergraph, an edge is no longer defined as the connection between a pair of vertices, but rather as the connection between any number of vertices.
[...] In the meantime, Pak says, it’s clear that mathematicians need to engage in a more active discussion about the nature of mathematical proof. He and his colleagues ultimately didn’t have to rely on controversial computational methods; they were able to disprove the conjecture with total certainty. But as computer- and AI-based lines of attack become more common in mathematics research, some mathematicians are debating whether the field’s norms will eventually have to change. “It’s a philosophical question,” Alon said. “How do we view proofs that are only true with high probability?”
Arthur T Knackerbracket has processed the following story:
Intel posted a $16.6 billion loss in the third quarter – the largest in the silicon veteran's history – as it booked more than $18 billion in restructuring and impairment-related charges.
While the loss was obviously not what Intel wanted, revenues for the quarter came in at the upper end of forecasts at $13.3 billion – up four percent from last quarter, though still down six percent from last year.
Executives expect Intel to continue to rebound in the fourth quarter, forecasting revenues of between $13.3 and $14.3 billion – a decrease of between 7.1 and 13.6 percent year over year. In spite of this, the prospect of another quarter of flat to positive sequential revenue growth was enough to send the embattled chipmaker's share price skyward.
Intel's share price surged by up to 15 percent in after-hours trading on what investors saw as a positive outlook. However, the biz has fallen a long way from its glory days, and faces numerous challenges, including ongoing restructuring costs that are expected to weigh on profitability again in the fourth quarter.
During the third quarter, Intel faced in excess of $18.5 billion in charges associated in part with its plan to cut $10 billion in annual spending – announced amid mass layoffs last quarter.
As you might expect, cutting 15,000 staff by the end of the year will save Intel a boatload of cash in the future. But in the short term it must write many severance and early retirement checks.
According to CEO Pat Gelsinger, the bulk of the layoffs occurred during the quarter – but even this only accounted for $2.2 billion of the charges. Another $528 million of restructuring-related costs were somewhat vaguely attributed to "non-cash charges."
The largest of the losses were instead driven by the decision to write off $9.9 billion worth of deferred tax assets accumulated over the past three years of losses, CFO David Zinsner explained.
Chipzilla also faced $3.1 billion of impairment charges related to Intel 7 manufacturing equipment – which cannot be used for more advanced process nodes like Intel 18A that rely on more modern extreme ultraviolet lithography equipment.
Intel also booked a $2.9 billion charge associated with the "impairment of goodwill for certain reporting units," which was largely attributed to autonomous driving tech unit Mobileye.
Combined, these factors contributed to a $16.6 billion quarterly loss. Zinsner warned more red ink will come in Q4.
Ever the optimist, Gelsinger couldn't help but put a positive spin on the ordeal. He declared that Intel is well on its way to "completing what will be one of the most seminal restructurings in the history [of the company]. The steps that we took in our financial restructuring this quarter was very critical to be able to bring us to a point where we can say we have capacity to drive long-term shareholder return."
During the third quarter, Intel saw modest revenue growth across its various divisions – at least on a sequential basis. Intel Products – which includes Client, Datacenter, and Networking – grew 3.3 percent as a whole while Intel foundry was up 2.2 percent compared to last quarter.
However, compared to this time last year, revenues remain depressed, with Product and Foundry down 2 percent and 8 percent respectively – showing that the chipmaker is still a long way from recovery.
Digging a little deeper we see that this decline isn't entirely uniform. While client computing revenues were down seven percent year over year to $7.3 billion in Q3, datacenter sales grew by nine percent compared to last year, bringing in $3.3 billion.
Intel's networking group also saw modest gains during the quarter, with revenues up four percent year on year to $1.5 billion.
When Microsoft launched its Copilot+ AI PC initiative over the summer, one of the flagship features was Recall, which would log months' worth of your PC usage with the stated goal of helping you remember things you did and find them again. But if you've heard of Recall, it's probably because of the problems that surfaced in preview builds of Windows before the feature could launch: It stored all of its data in plaintext, and it was relatively trivial for other users on the PC (or for malicious software) to access the database and screenshots, potentially exposing huge amounts of user data.
[...] Microsoft has officially announced that the Recall preview is being delayed yet again and that it will begin rolling out to testers in December.
"We are committed to delivering a secure and trusted experience with Recall. To ensure we deliver on these important updates, we're taking additional time to refine the experience before previewing it with Windows Insiders," said Microsoft Windows Insider Senior Program Manager Brandon LeBlanc in a statement provided to The Verge.
[...] When it does start to roll out, Recall will still require a Copilot+ PC, which gets some AI-related features not available to typical Windows 11 PCs. To meet the Copilot+ requirements, PCs must have at least 16GB of RAM and 256GB of storage, plus a neural processing unit (NPU) that can perform at least 40 trillion operations per second (TOPS).
"This is the dystopian nightmare that we've kind of entered in":
A former jockey who was left paralyzed from the waist down after a horse riding accident was able to walk again thanks to a cutting-edge piece of robotic tech: a $100,000 ReWalk Personal exoskeleton.
When one of its small parts malfunctioned, however, the entire device stopped working. Desperate to regain his mobility, he reached out to the manufacturer, Lifeward, for repairs. But it turned him away, claiming his exoskeleton was too old, 404 Media reports.
"After 371,091 steps my exoskeleton is being retired after 10 years of unbelievable physical therapy," Michael Straight posted on Facebook earlier this month. "The reasons why it has stopped is a pathetic excuse for a bad company to try and make more money."
According to Straight, the issue was caused by a piece of wiring that had come loose from the battery that powered a wristwatch used to control the exoskeleton. This would cost peanuts for Lifeward to fix up, but it refused to service anything more than five years old, Straight said.
[...] As this infuriating case shows, advanced medical devices can change the lives of people living with severe disabilities — but the flipside is that they also make their owners dependent on the whims of the devices' manufacturers, who often operate in ruthless self-interest.
[...] That some of these manufacturers can come and go isn't the point, though. As 404 notes, the issue is the nefarious practices that many of them use to make their devices difficult to fix without their help.
[...] "This is the dystopian nightmare that we've kind of entered in, where the manufacturer perspective on products is that their responsibility completely ends when it hands it over to a customer," Nathan Proctor, head of the right to repair project at the US Public Interest Research Group, told 404. "That's not good enough for a device like this, but it's also the same thing we see up and down with every single product."
"People need to be able to fix things, there needs to be a plan in place," he added. "A $100,000 product you can only use as long as the battery lasts, that's enraging."
ProPublica is reporting on a number of ad networks posting fake/deceptive/scam ads on Meta properties.
Reporting highlights from TFA:
- Deceptive Political Ads: Eight deceptive advertising networks have placed over 160,000 election and social issues ads across more than 340 Facebook pages in English and Spanish.
- Harmed Users: Some of the people who clicked on ads were unwittingly signed up for monthly credit card charges or lost health coverage, among other consequences.
- Spotty Enforcement: Meta removed some ads after first approving them, but it failed to catch others with similar or identical content — or to stop networks from launching new pages and ads.
More from TFA:
In December, the verified Facebook page of Adam Klotz, a Fox News meteorologist, started running strange video ads.
Some featured the distinctive voice of former President Donald Trump promising "$6,400 with your name on it, no payback required" just for clicking the ad and filling out a form.
In other ads with the same offer, President Joe Biden's well-known cadence assured viewers that "this isn't a loan with strings attached."
There was no free cash. The audio was generated by AI. People who clicked were taken to a form asking for their personal information, which was sold to telemarketers who could target them for legitimate offers — or scams.
[...]
Klotz's page had been co-opted by a sprawling ad account network that has operated on Facebook for years, churning out roughly 100,000 misleading election and social issues ads despite Meta's stated commitment to crack down on harmful content, according to an investigation and analysis by ProPublica and Columbia Journalism School's Tow Center for Digital Journalism, as well as research by the Tech Transparency Project, a nonpartisan nonprofit that researches large tech platforms. The organizations combined data and shared their analyses. TTP's report was produced independently of ProPublica and Tow's investigation and was shared with ProPublica prior to publication.
The network, which uses the name Patriot Democracy on many of its ad accounts, is one of eight deceptive Meta advertising operations identified by ProPublica and Tow. These networks have collectively controlled more than 340 Facebook pages, as well as associated Instagram and Messenger accounts. Most were created by the advertising networks, with some pages masquerading as government entities. Others were verified pages of people with public roles, like Klotz, who had been hacked. The networks have placed more than 160,000 election and social issues ads on these pages in English and Spanish. Meta showed the ads to users nearly 900 million times across Facebook and Instagram.
[...]
Most of these networks are run by lead-generation companies, which gather and sell people's personal information. People who clicked on some of these ads were unwittingly signed up for monthly credit card charges, among many other schemes. Some, for example, were conned by an unscrupulous insurance agent into changing their Affordable Care Act health plans. While the agent earns a commission, the people who are scammed can lose their health insurance or face unexpected tax bills because of the switch.
The ads run by the networks employ tactics that Meta has banned, including the undisclosed use of deepfake audio and video of national political figures and promoting misleading claims about government programs to bait people into sharing personal information. Thousands of ads illegally displayed copies of state and county seals and the images of governors to trick users. "The State has recently approved that Illinois residents under the age of 89 may now qualify for up to $35,000 of Funeral Expense Insurance to cover any and all end-of-life expenses!" read one deceptive ad featuring a photo of Gov. JB Pritzker and the Illinois state seal.
There's much, much more, so please do read TFA.
What say you Soylentils? What's the solution to this sort of thing? My solution was to leave Facebook (in 2014) and never return.
Arthur T Knackerbracket has processed the following story:
The Android 15 update didn’t include a battery charging limit feature that some had expected, but it got everyone talking about the best way to maximize your battery life. With seven-year update commitments from some Android phone manufacturers, the onus is now on us to stop our batteries from degrading too far.
What was most striking to us about the results of this poll was that less than 12% of you don’t use any battery management tricks at all. With battery degradation being a slow and subtle process, we assumed that many more people wouldn’t consider it. But hey, we should have given our tech-savvy readers more credit.
While it’s clear that you do indeed take battery health seriously, there isn’t a consensus amongst the responses on the best way to address it. Almost 33% of you follow the advice to only charge the battery to 80% full, while slightly more of you use adaptive charging. The latter approach intelligently slows the charging process based on usage patterns to reduce battery wear and extend its overall lifespan. This could include fast charging to around 80%, then slowly topping up from there when you’re not in a hurry to unplug. 7.6% of you suggest you use two or more of these tricks, which is a real dedication to getting the most out of your battery.
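For the curious, here is a toy sketch of that adaptive-charging logic in Python. It is purely illustrative, assuming made-up rates and a known unplug time; real implementations learn usage patterns and control the charger at the hardware level, so treat this as a sketch of the idea, not any vendor's algorithm.

```python
# Toy model of the adaptive-charging idea described above: fast-charge
# below 80%, then spread the final top-up across the time remaining
# until the user's usual unplug time. All numbers are illustrative
# assumptions, not any vendor's actual algorithm.

def charging_rate(level_pct, minutes_until_unplug, fast_rate=1.0):
    """Charge rate in percent per minute for the current state."""
    if level_pct < 80:
        return fast_rate                      # fast-charge below 80%
    if minutes_until_unplug <= 0:
        return fast_rate                      # needed now: finish fast
    remaining = 100 - level_pct
    return remaining / minutes_until_unplug   # trickle to hit 100% on time

# At 80% with four hours until the morning alarm, the phone trickles at
# 20/240 ≈ 0.083%/min instead of stressing the cell at full speed.
print(charging_rate(80, 240))
```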
If you don’t use any battery management tricks and you’re keen to start now, it’s worth checking out the battery settings in your device. It only takes 30 seconds and could keep your battery healthy for another six months.
janrinok: I charge my 'phone when the battery is getting low, usually overnight, and in the morning it is 100%. I purchased my 'phone in 2016 and it is still using the original battery. It might (and I am not certain) just be beginning to lose a fraction of its charge over a 2 day period. Do you use a special charging cycle? How long does a battery normally last you? And do you replace the battery or just upgrade to the latest and greatest on the market?
Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn't Change That:
Some people just don't know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country – the codes that protect product, building, and environmental safety – and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York, to Missouri, to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.
They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes – like the National Electrical Code – can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.
[...] We've seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don't do it for the royalties. You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.
Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created "reading rooms" for some of their standards, and they show us that the SDOs' idea of "making available" is "making available as if it was 1999." The texts are not searchable, cannot be printed, downloaded, highlighted, or bookmarked for later viewing, and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that wasn't bad enough, these reading rooms are inaccessible to print-disabled people altogether.
It's a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it's a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share.
A brief, jargon-free explainer on the freer future of the social web:
Idealist nerds have a long history of giving terribly confusing names to potentially revolutionary technology. So it goes with Fediverse, a portmanteau of "Federation" and "Universe," and the potential future of the social internet. But what does that mean?
Put simply, the Fediverse is the collective name for a bunch of different social networks and platforms that are connected to one another. Users on any of these services can follow users on any other one and respond to, like, and share posts.
There are a lot of articles and websites that explain this concept in detail, but most of them get bogged down in technical language pretty quickly. I'd like to avoid that, so here's my good faith attempt to explain what the Fediverse is in plain English.
First, though, let's talk about email.
Email is decentralized (and why that matters for the Fediverse)
Anyone with an email address can email anyone else. Gmail users, for example, aren't limited to talking with other Gmail users—they can send messages to Outlook users, Yahoo Mail users, and even people who are running their own email servers in their basement. Basically, anyone with an email address can write anyone else with an email address. To put it another way, email is decentralized.
There is no one company or institution that is in charge of email—there are many different email providers, all of which are compatible with each other. This is because email is an open protocol, one that anyone who wants to can build a service for.
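That decentralization is visible in code. The sketch below, which assumes the third-party dnspython package, shows how delivering mail to user@domain starts with a public DNS lookup of the domain's MX records; any domain owner can publish one and run their own server, with no central gatekeeper consulted.

```python
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def mail_servers(domain):
    """Return a domain's mail servers, sorted by MX preference."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((r.preference, str(r.exchange)) for r in answers)

# Gmail is just another domain publishing MX records in public DNS;
# a basement server's domain would resolve the exact same way.
print(mail_servers("gmail.com"))
```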
The largest social media networks do not work this way right now. You can't follow an X user via Facebook, for example, or subscribe to a Reddit community from Tumblr. That's why all of those websites are full of screenshots from the other ones—people want to share posts from other sites but there's no good way to do so. That's a problem the Fediverse seeks to remedy.
The Fediverse is an attempt to make social networks more like email—that is, to allow users on different services to follow and interact with each other anywhere they want, without signing up for a million different accounts.
[...] This is the promise of the Fediverse: You use whatever social network you want to use and connect with people on whatever social network they want to use. And there are a few other perks. When I quit using Twitter a couple years ago (before it became X), I left all of my followers behind. That's not how it works with the Fediverse: You can switch from one service to another and take your followers with you. That's the kind of freedom you can't get from a centralized system.
A number of companies and enthusiasts are working on other ways to connect with the Fediverse. WordPress offers a plugin that allows bloggers to share their posts, for example—replies show up as comments. Flipboard, the news reading app, recently added the option to follow Fediverse users from within the app, and email newsletter platform Ghost is also working on similar functionality. And there are hacks to connect other, non-Fediverse networks—you can connect Bluesky to the Fediverse with a bit of work, for example.
[...] All of this is possible because the Fediverse is based on an open protocol that anyone can build on. The hope is that, over time, more services will offer integrations and social networking will become as open as email. Is that what will happen for sure? I don't know. And the Fediverse, like anything that exists on the internet, has its share of problems. Moderation, for example, is a huge challenge, and bigger platforms moving into the space could make it harder.
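For the technically inclined: the open protocol in question is ActivityPub, and account discovery relies on a companion standard called WebFinger. Here is a minimal sketch of how a user@domain handle resolves to a profile; the handle in the example is hypothetical, so substitute a real Fediverse account to try it.

```python
# Minimal Fediverse account lookup. WebFinger turns user@domain into a
# profile URL; requesting that URL as ActivityPub JSON returns the
# "actor" document with the user's inbox, outbox, followers, etc.
import json
import urllib.request

def fetch_json(url, accept):
    req = urllib.request.Request(url, headers={"Accept": accept})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def resolve_actor(handle):
    """Resolve 'user@example.social' to its ActivityPub actor document."""
    _, domain = handle.split("@")
    webfinger = fetch_json(
        f"https://{domain}/.well-known/webfinger?resource=acct:{handle}",
        accept="application/jrd+json",
    )
    # WebFinger returns a list of links; the actor document is the one
    # typed as ActivityPub JSON.
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("type") == "application/activity+json"
    )
    return fetch_json(actor_url, accept="application/activity+json")

actor = resolve_actor("someone@example.social")  # hypothetical handle
print(actor.get("inbox"), actor.get("followers"))
```

Because every server speaks the same protocol, the service you signed up with is just one door into the same shared network, which is the whole point.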
I'm only scratching the surface with this explanation—there's so much more I could dig into. For the most part, though, when you hear "the Fediverse" you'll now know what it means: a series of social networks and platforms that are connected to each other. You'll hopefully hear a lot more about it in the years to come.
It's been a month and a half since Darl McBride kicked the bucket (who?), and nary a mention in the press. But then, perhaps most Linux followers today were not alive or old enough to have experienced Mr. McBride's assault on Linux that could very well have ended its life as Open Source. Of course I'm talking about way back in the Stone Age when SCO sued IBM, Red Hat, Novell, and others for ownership of the Linux kernel. Those of us who were around followed the now-defunct Groklaw for the latest dirt on this legal entanglement that is now for the most part forgotten.
From the wikipedia link:
McBride has been controversial in the information technology industry for his role as the CEO of SCO in asserting broad claims of intellectual property ownership of the various UNIX operating systems derivatives developed by IBM under a license originally granted by AT&T Corporation. Open source, free software and Linux developers and supporters, and the computer industry at large have been outspoken and highly critical and skeptical of McBride and SCO's claims in these areas.
Ty Mattingly, a former Novell Executive Vice President and co-worker of McBride was quoted as saying, "Congratulations. In a few short months you've dethroned Bill Gates as the most hated man in the industry."[6] McBride claimed he received death threats as a result of the SCO-IBM lawsuits, and had a package of worms mailed to his home, prompting him to carry a firearm and to employ multiple bodyguards.[4] During an interview, when asked about the popularity of the lawsuit against IBM, McBride answered: "We're either right or we're not. If we're wrong, we deserve people throwing rocks at us."[7]
Under McBride's leadership, SCO saw a surge in stock price from under $2 in March 2003 to over $20 just six months later. Following several adverse rulings issued by the United States District Court in Utah, SCO's stock value dropped to under $1. On April 27, 2007, NASDAQ served notice that the company would be delisted if SCO's stock price did not increase above $1 for a minimum of 10 consecutive days over the course of 180 business days, ending October 22, 2007.[8]
These prices are a joke, right?
Google is finally selling refurbished Pixel phones on its own web store like many other phone makers do. This is great to see because the last thing this world needs is another 500 grams of e-waste in a landfill when it can be repaired and resold. It would be even better if Google were serious about it.
I love the idea of buying a refurbished phone. You get a warranty from the manufacturer and you can trust that anything wrong with the hardware was addressed by someone who knows what they are doing so it works like it is supposed to work. You don't get that when you buy a used phone, so a lot of folks will spend a little more for a phone that was made whole once returned.
There's a right way to do it, and then there is the way Google is doing it.
A look at the phones currently for sale highlights the problems with Google's strategy. We see the Pixel 6a, Pixel 6, Pixel 6 Pro, Pixel 7, and Pixel 7 Pro. You might already notice the first issue: three of these five phones will not get updated to Android 16.
The Pixel 6 series phones will stop seeing platform updates in October 2024, or within days after this article was written. They will still get security updates until 2026, which I think are more important than version updates, but most people don't see it that way. Buying a "good as new" phone directly from Google and not having access to some of the new features we see just a year later with the new OS upgrade will sour a lot of buyers on the program.
[...] The other, and potentially bigger, problem is the prices. I wouldn't buy a refurbished phone from Google for the amount it is asking because it's too much.
You can buy a brand new Pixel 7 Pro cheaper than what Google is asking for a refurbished Pixel 6 Pro. Brand new, still in the wrapper, and a model year newer. Oh, and it will be updated to Android 16, too. If you want to spend $539, buy the Pixel 7 Pro for $399 and some good earbuds instead of spending it on a refurbished phone that's a year older.
It's easy to laugh and think this is just Google being Google again, but this is a serious problem. By pricing these phones so high, people aren't going to buy them. Google isn't going to keep them forever, and in the end, they end up being stripped of any material that's easy to get while the rest goes into a landfill.
[...] Google can afford to sell these refurbs dirt cheap and it gets even more eyeballs on Google's services so Google makes even more money selling ads. I don't know why the company won't do it, but it should.
The advance was incremental at best. So why did so many think it was a breakthrough?
There's little doubt that some of the most important pillars of modern cryptography will tumble spectacularly once quantum computing, now in its infancy, matures sufficiently. Some experts say that could be in the next couple decades. Others say it could take longer. No one knows.
The uncertainty leaves a giant vacuum that can be filled with alarmist pronouncements that the world is close to seeing the downfall of cryptography as we know it. The false pronouncements can take on a life of their own as they're repeated by marketers looking to peddle post-quantum cryptography snake oil and journalists tricked into thinking the findings are real. And a new episode of exaggerated research has been playing out for the past few weeks.
The last time the PQC—short for post-quantum cryptography—hype train gained this much traction was in early 2023, when scientists presented findings that claimed, at long last, to put the quantum-enabled cracking of the widely used RSA encryption scheme within reach. The claims were repeated over and over, just as claims about research released in September have for the past three weeks.
A few weeks after the 2023 paper came to light, a more mundane truth emerged that had escaped the notice of all those claiming the research represented the imminent demise of RSA—the research relied on Schnorr's algorithm (not to be confused with Shor's algorithm). The algorithm, based on a 2021 analysis by cryptographer Peter Schnorr, had been widely debunked two years earlier. Specifically, critics said, there was no evidence supporting the authors' claims of Schnorr's algorithm achieving polynomial time, as opposed to the glacial pace of subexponential time achieved with classical algorithms.
Once it became well-known that the validity of the 2023 paper rested solely on Schnorr's algorithm, that research was also debunked.
Three weeks ago, panic erupted again when the South China Morning Post reported that scientists in that country had discovered a "breakthrough" in quantum computing attacks that posed a "real and substantial threat" to "military-grade encryption." The news outlet quoted paper co-author Wang Chao of Shanghai University as saying, "This is the first time that a real quantum computer has posed a real and substantial threat to multiple full-scale SPN [substitution–permutation networks] structured algorithms in use today."
Among the many problems with the article was its failure to link to the paper—reportedly published in September in the Chinese-language academic publication Chinese Journal of Computers—at all. Citing Wang, the article said that the paper wasn't being published for the time being "due to the sensitivity of the topic." Since then, the South China Morning Post article has been quietly revised to remove the "military-grade encryption" reference.
With no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn't published in September, as the news article reported, but it was written by the same researchers and referenced the "D-Wave Advantage"—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title.
Some of the follow-on articles bought the misinformation hook, line, and sinker, repeating incorrectly that the fall of RSA was upon us. People got that idea because the May paper claimed to have used a D-Wave system to factor a 50-bit RSA integer. Other publications correctly debunked the claims in the South China Morning Post but mistakenly cited the May paper and noted the inconsistencies between what it claimed and what the news outlet reported.
Over the weekend, someone unearthed the correct paper, which, as it turns out, had been available on the Chinese Journal of Computers website the whole time. Most of the paper is written in Chinese, but the abstract, fortunately, was written in English. It reports using a D-Wave-enabled quantum annealer to find "integral distinguishers up to 9-rounds" in the encryption algorithms known as PRESENT, GIFT-64, and RECTANGLE. All three are symmetric encryption algorithms built on a substitution-permutation network (SPN) structure.
"This marks the first practical attack on multiple full-scale SPN structure symmetric cipher algorithms using a real quantum computer," the paper states. "Additionally, this is the first instance where quantum computing attacks on multiple SPN structure symmetric cipher algorithms have achieved the performance of the traditional mathematical methods."
The main contribution in the September paper is the process the researchers used to find integral distinguishers in up to nine rounds of the three previously mentioned algorithms.
[...] The paper makes no reference to AES or RSA and never claims to break anything. Instead, it describes a way to use D-Wave-enabled quantum annealing to find the integral distinguisher. Classical attacks have been able to find the same integral distinguishers for years. David Jao, a professor specializing in PQC at the University of Waterloo in Canada, likened the research to finding a new lock-picking technique. The end result is the same, but the method is new.
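To make the lock-picking analogy concrete: a D-Wave annealer is a machine for minimizing QUBO (quadratic unconstrained binary optimization) objectives, and the researchers' contribution was an encoding of distinguisher search into that form. The toy sketch below is not the paper's encoding; it just shows the shape of the problem such machines solve, using classical brute force in place of annealing.

```python
# A QUBO asks: which binary vector x minimizes the energy x^T Q x?
# D-Wave hardware anneals toward that minimum; this toy solver simply
# enumerates every assignment, which only works for tiny instances.
from itertools import product

def qubo_energy(x, Q):
    """Energy x^T Q x for a binary vector x and upper-triangular Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Try every bit assignment and return the lowest-energy one."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

# Toy 3-variable instance: diagonal entries are per-bit biases,
# off-diagonal entries are couplings between pairs of bits.
Q = [[-1,  2,  0],
     [ 0, -1,  2],
     [ 0,  0, -1]]
print(brute_force_qubo(Q))  # -> (1, 0, 1), the lowest-energy assignment
```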
[...] This isn't the first time the South China Morning Post has fueled undue panic about the imminent fall of widely used encryption algorithms. Last year's hype train, mentioned earlier in this article, was touched off by coverage by the same publication that claimed researchers found a factorization method that could break a 2,048-bit RSA key using a quantum system with just 372 qubits. People who follow PQC should be especially wary when seeking news there.
The coverage of the September paper is especially overblown because symmetric encryption, unlike RSA and its asymmetric siblings, is widely believed to be safe from quantum computing, as long as key sizes are sufficient. The best known quantum attack on symmetric ciphers, Grover's algorithm, only halves the effective key length, which is why PQC experts are confident that AES-256 will resist all known quantum attacks.
[...] As a reminder, current estimates are that quantum cracking of a single 2048-bit RSA key would require a computer with 20 million qubits running in superposition for about eight hours. For context, quantum computers maxed out at 433 qubits in 2022 and 1,000 qubits last year. (A qubit is a basic unit of quantum computing, analogous to the binary bit in classical computing. Comparisons between qubits in true quantum systems and quantum annealers aren't uniform.) So even when quantum computing matures sufficiently to break vulnerable algorithms, it could take decades or longer before the majority of keys are cracked.
The upshot of this latest episode is that while quantum computing will almost undoubtedly topple many of the most widely used forms of encryption used today, that calamitous event won't happen anytime soon. It's important that industries and researchers move swiftly to devise quantum-resistant algorithms and implement them widely. At the same time, people should take steps not to get steamrolled by the PQC hype train.
More follow up on this story with a good explanation of what was actually achieved.
AT&T obtained subsidies for duplicate users and non-users, will pay $2.3 million:
AT&T improperly obtained money from a government-run broadband discount program by submitting duplicate requests and by claiming subsidies for thousands of subscribers who weren't using AT&T's service. AT&T obtained funding based on false certifications it made under penalty of perjury.
AT&T on Friday agreed to pay $2.3 million in a consent decree with the Federal Communications Commission's Enforcement Bureau. That includes a civil penalty of $1,921,068 and a repayment of $378,922 to the US Treasury.
The settlement fully resolves the FCC investigation into AT&T's apparent violations, the consent decree said. "AT&T admits for the purpose of this Consent Decree and for Commission civil enforcement purposes" that the findings described by the FCC "contain a true and accurate description of the facts underlying the Investigation," the document said.
In addition to the civil penalty and repayment, AT&T agreed to a compliance plan designed to prevent further violations. AT&T last week reported quarterly revenue of $30.2 billion.
AT&T made the excessive reimbursement claims to the Emergency Broadband Benefit Program (EBB), which the US formed in response to the COVID-19 pandemic, and to the EBB's successor program, the Affordable Connectivity Program (ACP). The FCC said its rules "are vital to protecting these Programs and their resources from waste, fraud, and abuse."
We contacted AT&T today and asked for an explanation of what caused the violations. Instead, AT&T provided Ars with a statement that praised itself for participating in the federal discount programs.
"When the federal government acted during the COVID-19 pandemic to stand up the Emergency Broadband Benefit program, and then the Affordable Connectivity Program, we quickly implemented both programs to provide more low-cost Internet options for our customers. We take compliance with federal programs like these seriously and appreciate the collaboration with the FCC to reach a solution on this matter," AT&T said.
A Russian court claims that Google currently owes them 2,000,000,000,000,000,000,000,000,000,000,000,000 rubles – 2 undecillion, if you will, a number with 36 zeros.
This is due to a fine of 100,000 rubles per day that has been growing exponentially since sometime in 2021, imposed over Google blocking various Russian channels from YouTube.
The slight problem is that the fine is now so large that it is many, many times the entire world's combined GDP. I don't think they should expect to get paid anytime soon.
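A back-of-the-envelope calculation shows how quickly doubling reaches undecillion territory. The sketch below assumes the widely reported scheme of a 100,000-ruble daily fine that doubles every week it goes unpaid; the exact schedule is an assumption, but under it, roughly two years of doubling is all it takes.

```python
# Assumed scheme (widely reported): a 100,000-ruble daily fine that
# doubles every week it goes unpaid. How long until the running total
# passes 2 undecillion (2 * 10**36) rubles?
DAILY_START = 100_000
TARGET = 2 * 10**36

total, daily, weeks = 0, DAILY_START, 0
while total < TARGET:
    total += daily * 7   # accrue one week at the current daily rate
    daily *= 2           # the daily rate doubles each week
    weeks += 1

print(f"{weeks} weeks (~{weeks / 52:.1f} years) to pass 2 undecillion rubles")
```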
Arthur T Knackerbracket has processed the following story:
Scientists in China have pulled off a remarkable feat worthy of Victor Frankenstein: reviving pigs’ brains up to 50 minutes after a complete loss of blood circulation. The macabre accomplishment could someday lead to advances in keeping people’s brains intact and healthy for longer while resuscitating them.
[...] Past studies have suggested that liver function plays a key role in how well the rest of the body does during a cardiac arrest. People with pre-existing liver disease, for instance, seem to have a higher risk of dying from a cardiac arrest. So the researchers, based primarily at Sun Yat-Sen University, decided to test whether keeping the livers of Tibetan minipigs functionally alive could have a positive effect on their brains’ survivability after resuscitation.
All of the pigs had blood flow to their brains stopped, but some were hooked up to a life support system that kept their liver’s circulation going. The scientists then tried to revive the pigs’ brains after a certain period of time using the same life support system. Afterward, the pigs were euthanized and compared to a control group of pigs that had their blood flow left alone.
When the pigs had blood flow to both organs shut down, their brains were substantially more damaged following resuscitation, the researchers found. But the brains of pigs that had their livers supported tended to fare much better, with fewer signs of injury and a restoration of electrical activity that lasted up to six hours. The researchers were also able to restore brain activity in these pigs up to 50 minutes after blood flow to the brain was stopped.
[...] Of course, this doesn’t mean that scientists can now return anyone back from the dead perfectly intact with just a little boost to their liver. There are many damaging bodily changes that occur soon after a cardiac arrest, not just those in the brain and liver. And certainly more research will have to be done to confirm the team’s conclusions that the liver is critical to restored brain function. But if this work does continue to pay off, it could someday lead to practical interventions that improve the odds of successful resuscitation in people.
Journal Reference: DOI: https://doi.org/10.1038/s44321-024-00140-z
Arthur T Knackerbracket has processed the following story:
If you believe Mark Zuckerberg, Meta's AI large language model (LLM) Llama 3 is open source.
It's not, despite what he says. The Open Source Initiative (OSI) spells it out in the Open Source Definition, and Llama 3's license – with clauses on litigation and branding – flunks it on several grounds.
Meta, unfortunately, is far from unique in wanting to claim that some of its software and models are open source. Indeed, the concept has its own name: open washing.
This is a deceptive practice in which companies or organizations present their products, services, or processes as "open" when they are not truly open in the spirit of transparency, access to information, participation, and knowledge sharing. This term is modeled after "greenwashing" and was coined by Michelle Thorne, an internet and climate policy scholar, in 2009.
With the rise of AI, open washing has become commonplace, as shown in a recent study. Andreas Liesenfeld and Mark Dingemanse of Radboud University's Center for Language Studies surveyed 45 text and text-to-image models that claim to be open. The pair found that while a handful of lesser-known LLMs, such as AllenAI's OLMo and BigScience Workshop and HuggingFace's BloomZ, could be considered open, most are not. Would it surprise you to know that according to the study, the big-name ones from Google, Meta, and Microsoft aren't? I didn't think so.
But why do companies do this? Once upon a time, companies avoided open source like the plague. Steve Ballmer famously proclaimed in 2001 that "Linux is a cancer," because: "The way the license is written, if you use any open source software, you have to make the rest of your software open source." But that was a long time ago. Today, open source is seen as a good thing. Open washing enables companies to capitalize on the positive perception of open source and open practices without actually committing to them. This can help improve their public image and appeal to consumers who value transparency and openness.
[...] That's not to say all the big-name AI companies are lying about their open source street cred. For example, IBM's Granite 3.0 LLMs really are open source under the Apache 2 license.
Why is this important? Why do people like me insist that we properly use the term open source? It's not like, after all, the OSI is a government or regulatory organization. It's not. It's just a nonprofit that has created some very useful guidelines.
[...] If we need to check every license for every bit of code, "developers are going to go to legal reviews every time you want to use a new library. Companies are going to be scared to publish things on the internet if they're not clear about the liabilities they're encountering when that source code becomes public."
Lorenc continued: "You might think this is only a big company problem, but it's not. It's a shared problem. Everybody who uses open source is going to be affected by this. It could cause entire projects to stop working. Security bugs aren't going to get fixed. Maintenance is going to get a lot harder. We must act together to preserve and defend the definition of open source. Otherwise, the lawyers are going to have to come back. No one wants the lawyers to come back."
I must add that I know a lot of IP lawyers. They do not need or want these headaches. Real open source licenses make life easier for everyone: businesses, programmers, and lawyers. Introducing "open except for someone who might compete with us" or "open except for someone who might deploy the code on a cloud" is just asking for trouble.
In the end, open washing will dirty the legal, business, and development work for everyone. Including, ironically, the shortsighted companies now supporting this approach. After all, almost all their work, especially in AI, is ultimately based on open source.