posted by hubie on Friday November 01, @08:42PM   Printer-friendly

The advance was incremental at best. So why did so many think it was a breakthrough?

There's little doubt that some of the most important pillars of modern cryptography will tumble spectacularly once quantum computing, now in its infancy, matures sufficiently. Some experts say that could be in the next couple decades. Others say it could take longer. No one knows.

The uncertainty leaves a giant vacuum that can be filled with alarmist pronouncements that the world is close to seeing the downfall of cryptography as we know it. The false pronouncements can take on a life of their own as they're repeated by marketers looking to peddle post-quantum cryptography snake oil and journalists tricked into thinking the findings are real. And a new episode of exaggerated research has been playing out for the past few weeks.

The last time the PQC—short for post-quantum cryptography—hype train gained this much traction was in early 2023, when scientists presented findings that claimed, at long last, to put the quantum-enabled cracking of the widely used RSA encryption scheme within reach. The claims were repeated over and over, just as claims about research released in September have been for the past three weeks.

A few weeks after the 2023 paper came to light, a more mundane truth emerged that had escaped the notice of all those claiming the research represented the imminent demise of RSA—the research relied on Schnorr's algorithm (not to be confused with Shor's algorithm). The algorithm, based on a 2021 analysis by cryptographer Peter Schnorr, had been widely debunked two years earlier. Specifically, critics said, there was no evidence supporting the authors' claims of Schnorr's algorithm achieving polynomial time, as opposed to the glacial pace of subexponential time achieved with classical algorithms.
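To see why the difference between polynomial and subexponential time matters so much here, it helps to compare growth rates directly. Below is a rough Python sketch: the GNFS curve uses the standard heuristic cost estimate for the general number field sieve, while the cubic curve stands in for a hypothetical polynomial-time method. Both are illustrative estimates, not benchmarks.

    import math

    def gnfs_ops(bits):
        # Heuristic cost of the general number field sieve for a modulus n ~ 2^bits:
        # exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)), leading term only
        ln_n = bits * math.log(2)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))

    for bits in (512, 1024, 2048):
        # A cubic stand-in for a hypothetical polynomial-time factoring method
        print(f"{bits}-bit: GNFS ~{gnfs_ops(bits):.1e} ops, cubic ~{bits ** 3:.1e} ops")

For a 2048-bit modulus the sieve estimate lands around 10^35 operations, while a cubic method would need only about 10^10, which is why a credible polynomial-time factoring claim would have been such a bombshell.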

Once it became well-known that the validity of the 2023 paper rested solely on Schnorr's algorithm, that research was also debunked.

Three weeks ago, panic erupted again when the South China Morning Post reported that scientists in that country had discovered a "breakthrough" in quantum computing attacks that posed a "real and substantial threat" to "military-grade encryption." The news outlet quoted paper co-author Wang Chao of Shanghai University as saying, "This is the first time that a real quantum computer has posed a real and substantial threat to multiple full-scale SPN [substitution–permutation networks] structured algorithms in use today."

Among the many problems with the article was its failure to link to the paper—reportedly published in September in the Chinese-language academic publication Chinese Journal of Computers—at all. Citing Wang, the article said that the paper wasn't being published for the time being "due to the sensitivity of the topic." Since then, the South China Morning Post article has been quietly revised to remove the "military-grade encryption" reference.

With no original paper to reference, many news outlets searched the Chinese Journal of Computers for similar research and came up with this paper. It wasn't published in September, as the news article reported, but back in May; it was, however, written by the same researchers and referenced the "D-Wave Advantage"—a type of quantum computer sold by Canada-based D-Wave Quantum Systems—in the title.

Some of the follow-on articles bought the misinformation hook, line, and sinker, repeating incorrectly that the fall of RSA was upon us. People got that idea because the May paper claimed to have used a D-Wave system to factor a 50-bit RSA integer. Other publications correctly debunked the claims in the South China Morning Post but mistakenly cited the May paper and noted the inconsistencies between what it claimed and what the news outlet reported.

Over the weekend, someone unearthed the correct paper, which, as it turns out, had been available on the Chinese Journal of Computers website the whole time. Most of the paper is written in Chinese, but fortunately the abstract was written in English. It reports using a D-Wave-enabled quantum annealer to find "integral distinguishers up to 9-rounds" in the encryption algorithms known as PRESENT, GIFT-64, and RECTANGLE. All three are symmetric encryption algorithms built on a substitution-permutation network (SPN) structure.

"This marks the first practical attack on multiple full-scale SPN structure symmetric cipher algorithms using a real quantum computer," the paper states. "Additionally, this is the first instance where quantum computing attacks on multiple SPN structure symmetric cipher algorithms have achieved the performance of the traditional mathematical methods."

The main contribution in the September paper is the process the researchers used to find integral distinguishers in up to nine rounds of the three previously mentioned algorithms.

[...] The paper makes no reference to AES or RSA and never claims to break anything. Instead, it describes a way to use D-Wave-enabled quantum annealing to find the integral distinguisher. Classical attacks have been able to find the same integral distinguishers, in optimized form, for years. David Jao, a professor specializing in PQC at the University of Waterloo in Canada, likened the research to finding a new lock-picking technique. The end result is the same, but the method is new.
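For readers unfamiliar with the term, the sketch below shows what an integral-distinguisher test looks like in practice, in Python. The cipher here is a toy 16-bit SPN invented purely for illustration: only the 4-bit S-box is PRESENT's real one, while the state size, bit permutation, keys, and round counts are made up and are not the constructions analyzed in the paper. The test: encrypt a set of 16 plaintexts in which one nibble takes every possible value, XOR all the ciphertexts, and check whether the result is zero, a "balanced" property that a random permutation would satisfy only with probability about 2^-16.

    import random

    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's 4-bit S-box

    def sub_nibbles(state):
        # Apply the S-box to each of the four nibbles of the 16-bit state.
        return sum(SBOX[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

    def permute_bits(state):
        # Toy wire-crossing layer: bit i moves to (4*i) mod 15; bit 15 stays put.
        out = 0
        for i in range(16):
            j = 15 if i == 15 else (4 * i) % 15
            out |= ((state >> i) & 1) << j
        return out

    def encrypt(pt, keys):
        # One SPN round per round key: key XOR, substitution, permutation.
        state = pt
        for k in keys:
            state = permute_bits(sub_nibbles(state ^ k))
        return state

    random.seed(0)
    for rounds in range(1, 7):
        keys = [random.randrange(1 << 16) for _ in range(rounds)]
        # Saturate one nibble: 16 plaintexts whose low nibble takes all 16 values.
        xor_sum = 0
        for v in range(16):
            xor_sum ^= encrypt(0xABC0 | v, keys)
        status = "balanced (distinguisher)" if xor_sum == 0 else "property lost"
        print(f"{rounds} round(s): XOR of ciphertexts = {xor_sum:#06x} -> {status}")

In this toy cipher the first couple of rounds are balanced by construction; how many rounds of a full-scale design the property survives, whether found by classical search or by quantum annealing, is exactly the game the researchers are playing.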

[...] This isn't the first time the South China Morning Post has fueled undue panic about the imminent fall of widely used encryption algorithms. Last year's hype train, mentioned earlier in this article, was touched off by coverage by the same publication that claimed researchers found a factorization method that could break a 2,048-bit RSA key using a quantum system with just 372 qubits. People who follow PQC should be especially wary when seeking news there.

The coverage of the September paper is especially overblown because symmetric encryption, unlike RSA and other asymmetric siblings, is widely believed to be safe from quantum computing, as long as bit sizes are sufficient. PQC experts are confident that AES-256 will resist all known quantum attacks.

[...] As a reminder, current estimates are that quantum cracking of a single 2048-bit RSA key would require a computer with 20 million qubits running in superposition for about eight hours. For context, quantum computers maxed out at 433 qubits in 2022 and 1,000 qubits last year. (A qubit is a basic unit of quantum computing, analogous to the binary bit in classical computing. Comparisons between qubits in true quantum systems and quantum annealers aren't uniform.) So even when quantum computing matures sufficiently to break vulnerable algorithms, it could take decades or longer before the majority of keys are cracked.
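The gap is easy to visualize with the article's own numbers; a trivial Python sketch, using only the figures quoted above:

    required_qubits = 20_000_000   # cited estimate for cracking one RSA-2048 key
    available_qubits = 1_000       # largest machine cited, reached last year
    print(f"Shortfall: {required_qubits // available_qubits:,}x")   # 20,000x

And raw qubit counts are only part of the story, since, as the parenthetical above notes, comparisons across architectures aren't uniform.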

The upshot of this latest episode is that while quantum computing will almost undoubtedly topple many of the most widely used forms of encryption used today, that calamitous event won't happen anytime soon. It's important that industries and researchers move swiftly to devise quantum-resistant algorithms and implement them widely. At the same time, people should take steps not to get steamrolled by the PQC hype train.

More follow-up on this story, with a good explanation of what was actually achieved.


Original Submission

posted by hubie on Friday November 01, @04:00PM   Printer-friendly

AT&T obtained subsidies for duplicate users and non-users, will pay $2.3 million:

AT&T improperly obtained money from a government-run broadband discount program by submitting duplicate requests and by claiming subsidies for thousands of subscribers who weren't using AT&T's service. AT&T obtained funding based on false certifications it made under penalty of perjury.

AT&T on Friday agreed to pay $2.3 million in a consent decree with the Federal Communications Commission's Enforcement Bureau. That includes a civil penalty of $1,921,068 and a repayment of $378,922 to the US Treasury.

The settlement fully resolves the FCC investigation into AT&T's apparent violations, the consent decree said. "AT&T admits for the purpose of this Consent Decree and for Commission civil enforcement purposes" that the findings described by the FCC "contain a true and accurate description of the facts underlying the Investigation," the document said.

In addition to the civil penalty and repayment, AT&T agreed to a compliance plan designed to prevent further violations. AT&T last week reported quarterly revenue of $30.2 billion.

AT&T made the excessive reimbursement claims to the Emergency Broadband Benefit Program (EBB), which the US formed in response to the COVID-19 pandemic, and to the EBB's successor program, the Affordable Connectivity Program (ACP). The FCC said its rules "are vital to protecting these Programs and their resources from waste, fraud, and abuse."

We contacted AT&T today and asked for an explanation of what caused the violations. Instead, AT&T provided Ars with a statement that praised itself for participating in the federal discount programs.

"When the federal government acted during the COVID-19 pandemic to stand up the Emergency Broadband Benefit program, and then the Affordable Connectivity Program, we quickly implemented both programs to provide more low-cost Internet options for our customers. We take compliance with federal programs like these seriously and appreciate the collaboration with the FCC to reach a solution on this matter," AT&T said.


Original Submission

posted by hubie on Friday November 01, @11:14AM   Printer-friendly
from the so-many-0s dept.

https://novayagazeta.eu/articles/2024/10/29/russian-court-orders-google-to-pay-blocked-russian-channels-sum-with-36-zeros-en-news

A Russian court claims that Google currently owes 2,000,000,000,000,000,000,000,000,000,000,000,000 rubles -- 2 undecillion, if you will; a number with 36 zeros.

This is due to a fine of 100,000 rubles per day that has been growing exponentially since sometime in 2021, imposed because Google blocked various Russian channels on YouTube.

The slight problem is that the fine is now so large that it is many, many times the entire world's combined GDP. I don't think they should expect to get paid anytime soon.
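A back-of-the-envelope check in Python bears this out. The exchange rate (roughly 100 rubles per US dollar in 2024) and gross world product (roughly $100 trillion) are rough assumptions of mine, not figures from the article:

    fine_rub = 2 * 10**36      # 2 undecillion rubles
    rub_per_usd = 100          # assumed rough 2024 exchange rate
    world_gdp_usd = 100e12     # assumed gross world product, ~$100 trillion

    fine_usd = fine_rub / rub_per_usd
    print(f"Fine: ~{fine_usd:.0e} USD")                                 # ~2e+34
    print(f"Multiples of world GDP: ~{fine_usd / world_gdp_usd:.0e}")   # ~2e+20

Roughly twenty orders of magnitude more money than the planet produces in a year.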


Original Submission

posted by hubie on Friday November 01, @06:29AM   Printer-friendly
from the that's-"Fronk-in-steen" dept.

Arthur T Knackerbracket has processed the following story:

Scientists in China have pulled off a remarkable feat worthy of Victor Frankenstein: reviving pigs’ brains up to 50 minutes after a complete loss of blood circulation. The macabre accomplishment could someday lead to advances in keeping people’s brains intact and healthy for longer while resuscitating them.

[...] Past studies have suggested that liver function plays a key role in how well the rest of the body does during a cardiac arrest. People with pre-existing liver disease, for instance, seem to have a higher risk of dying from a cardiac arrest. So the researchers, based primarily at Sun Yat-Sen University, decided to test whether keeping the livers of Tibetan minipigs functionally alive could have a positive effect on their brains’ survivability after resuscitation.

All of the pigs had blood flow to their brains stopped, but some were hooked up to a life support system that kept their liver’s circulation going. The scientists then tried to revive the pigs’ brains after a certain period of time using the same life support system. Afterward, the pigs were euthanized and compared to a control group of pigs that had their blood flow left alone.

When the pigs had blood flow to both organs shut down, their brains were substantially more damaged following resuscitation, the researchers found. But the brains of pigs that had their livers supported tended to fare much better, with fewer signs of injury and a restoration of electrical activity that lasted up to six hours. The researchers were also able to restore brain activity in these pigs up to 50 minutes after blood flow to the brain was stopped.

[...] Of course, this doesn’t mean that scientists can now return anyone back from the dead perfectly intact with just a little boost to their liver. There are many damaging bodily changes that occur soon after a cardiac arrest, not just those in the brain and liver. And certainly more research will have to be done to confirm the team’s conclusions that the liver is critical to restored brain function. But if this work does continue to pay off, it could someday lead to practical interventions that improve the odds of successful resuscitation in people.

Journal Reference: DOI: https://doi.org/10.1038/s44321-024-00140-z


Original Submission

posted by hubie on Friday November 01, @01:40AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

If you believe Mark Zuckerberg, Meta's AI large language model (LLM) Llama 3 is open source.

It's not, despite what he says. The Open Source Initiative (OSI) spells it out in the Open Source Definition, and Llama 3's license – with clauses on litigation and branding – flunks it on several grounds.

Meta, unfortunately, is far from unique in wanting to claim that some of its software and models are open source. Indeed, the concept has its own name: open washing. 

This is a deceptive practice in which companies or organizations present their products, services, or processes as "open" when they are not truly open in the spirit of transparency, access to information, participation, and knowledge sharing. This term is modeled after "greenwashing" and was coined by Michelle Thorne, an internet and climate policy scholar, in 2009.

With the rise of AI, open washing has become commonplace, as shown in a recent study. Andreas Liesenfeld and Mark Dingemanse of Radboud University's Center for Language Studies surveyed 45 text and text-to-image models that claim to be open. The pair found that while a handful of lesser-known LLMs, such as AllenAI's OLMo and BigScience Workshop + HuggingFace's BloomZ, could be considered open, most are not. Would it surprise you to know that according to the study, the big-name ones from Google, Meta, and Microsoft aren't? I didn't think so.

But why do companies do this? Once upon a time, companies avoided open source like the plague. Steve Ballmer famously proclaimed in 2001 that "Linux is a cancer," because: "The way the license is written, if you use any open source software, you have to make the rest of your software open source." But that was a long time ago. Today, open source is seen as a good thing. Open washing enables companies to capitalize on the positive perception of open source and open practices without actually committing to them. This can help improve their public image and appeal to consumers who value transparency and openness.

[...] That's not to say all the big-name AI companies are lying about their open source street cred. For example, IBM's Granite 3.0 LLMs really are open source under the Apache 2 license.

Why is this important? Why do people like me insist that we properly use the term open source? It's not like, after all, the OSI is a government or regulatory organization. It's not. It's just a nonprofit that has created some very useful guidelines.

[...] If we need to check every license for every bit of code, "developers are going to go to legal reviews every time you want to use a new library. Companies are going to be scared to publish things on the internet if they're not clear about the liabilities they're encountering when that source code becomes public."

Lorenc continued: "You might think this is only a big company problem, but it's not. It's a shared problem. Everybody who uses open source is going to be affected by this. It could cause entire projects to stop working. Security bugs aren't going to get fixed. Maintenance is going to get a lot harder. We must act together to preserve and defend the definition of open source. Otherwise, the lawyers are going to have to come back. No one wants the lawyers to come back."

I must add that I know a lot of IP lawyers. They do not need or want these headaches. Real open source licenses make life easier for everyone: businesses, programmers, and lawyers. Introducing "open except for someone who might compete with us" or "open except for someone who might deploy the code on a cloud" is just asking for trouble.

In the end, open washing will dirty the legal, business, and development work for everyone. Including, ironically, the shortsighted companies now supporting this approach. After all, almost all their work, especially in AI, is ultimately based on open source.


Original Submission

posted by Fnord666 on Thursday October 31, @08:58PM   Printer-friendly
from the you-have-died-of-dysentery dept.

From the Hollywood Reporter: Apple is turning the classic computer game Oregon Trail into a big budget action-comedy movie.

Grab your wagons and oxen, and get ready to ford a river: A movie adaptation of the popular grade school computer game Oregon Trail is in development at Apple.

The studio landed the film pitch, still in early development, which has Will Speck and Josh Gordon attached to direct and produce. EGOT winners Benj Pasek and Justin Paul will provide original music and produce via their Ampersand production banner. Sources tell The Hollywood Reporter that the movie will feature a couple of original musical numbers in the vein of Barbie.

Sounds like a good day to die of dysentery.


Original Submission

posted by janrinok on Thursday October 31, @04:13PM   Printer-friendly

The other thing is that in order to learn, children need to have fun. But they have fun by really being pushed to explore and create and make new things that are personally meaningful. So you need open-ended environments that allow children to explore and express themselves.

(This Inventor Is Molding Tomorrow's Inventors, IEEE Spectrum)

IEEE Spectrum has a (short) sit-down with Marina Umaschi Bers, co-creator of the ScratchJr programming language and the KIBO robotics kits. Both tools are aimed at teaching kids to code and develop STEAM skills from a very young age. Other examples of such tools are the mTiny robot toy and the Micro:bit computer.

Being able to code is a new literacy, remarks Professor Bers, and like with reading and writing, we need to adapt our learning tools to children's developmental ages. One idea here is that of a "coding playground" where it's not about following step-by-step plans, but about inventing games, pretend play and creating anything children can imagine. She is currently working on a project to bring such a playground outside: integrating motors, sensors and other devices in physical playgrounds, "to bolster computational thinking through play".

Given how fast complicated toys are being thrown aside by young children, in contrast to a simple ball -- or a Meccano set, for the engineering types -- I have my doubts.

At least one of the tools mentioned above is aimed at toddlers. So put aside your model steam locomotive, oh fellow 'lentil, and advise me: from what age do you think we should try to steer kids into "coding" or "developing", and which tools should or could be used for this?

Feel free to wax nostalgic about toys of days past, of course, in this, one of your favorite playgrounds: the soylentnews comment editor.


Original Submission

posted by janrinok on Thursday October 31, @11:30AM   Printer-friendly
from the not-to-be dept.


A monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of Shakespeare purely by chance, according to the Infinite Monkey Theorem.

This widely known thought experiment is used to help us understand the principles of probability and randomness, and how chance can lead to unexpected outcomes. The idea has been referenced in pop culture from "The Simpsons" to "The Hitchhiker's Guide to the Galaxy" and on TikTok.

However, a new study reveals it would take an unbelievably huge amount of time—far longer than the lifespan of our universe—for a typing monkey to randomly produce Shakespeare. So, while the theorem is true, it is also somewhat misleading.
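The scale is easy to reproduce. A minimal Python sketch, assuming a 30-key typewriter and one keystroke per second; both are illustrative assumptions rather than the paper's exact parameters:

    KEYS = 30                      # assumed number of keys on the typewriter
    RATE = 1.0                     # assumed keystrokes per second
    AGE_OF_UNIVERSE_S = 4.4e17     # ~13.8 billion years, in seconds

    target = "to be or not to be that is the question"   # 39 characters
    p = (1 / KEYS) ** len(target)   # chance any given 39-key window matches
    expected_s = (1 / p) / RATE     # expected wait is roughly 1/p keystrokes
    print(f"Expected wait: ~{expected_s:.0e} seconds")
    print(f"Universe lifetimes: ~{expected_s / AGE_OF_UNIVERSE_S:.0e}")

Even this single line takes on the order of 10^40 lifetimes of the universe in expectation; the complete works are hopeless.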

More information: Stephen Woodcock et al, A numerical evaluation of the Finite Monkeys Theorem, Franklin Open (2024). DOI: 10.1016/j.fraope.2024.100171

[Source]: University of Technology, Sydney

[Journal Ref]: A numerical evaluation of the Finite Monkeys Theorem


Original Submission

posted by janrinok on Thursday October 31, @06:42AM   Printer-friendly
from the can-you-hear-me-now? dept.

Voyager 1 Ghosts NASA, Forcing Use of Backup Radio Dormant Since 1981:

Voyager 1 can't seem to catch a break. The interstellar traveler recently recovered from a thruster glitch that nearly ended its mission, and now NASA's aging probe has stopped sending data to ground control due to an unknown issue.

On Monday, NASA revealed that Voyager 1 recently experienced a brief pause in communication after turning off one of its radio transmitters. The space agency is now relying on a second radio transmitter that hasn't been used since 1981 to communicate with Voyager 1 until engineers can figure out the underlying issue behind the glitch.

The flight team behind the mission first realized something was amiss with Voyager 1's communication when the spacecraft failed to respond to a command. On October 16, the team used NASA's Deep Space Network (DSN)—a global array of giant radio antennas—to beam instructions to Voyager 1, directing it to turn on one of its heaters.

Voyager 1 should've sent back engineering data for the team to determine how the spacecraft responded to the command. This process normally takes a couple of days, as the command takes about 23 hours to travel more than 15 billion miles (24 billion kilometers) to the spacecraft and another 23 hours for the flight team to receive a signal back. Instead, the command seems to have triggered the spacecraft's fault protection system, which autonomously responds to onboard issues affecting the mission.
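The light-travel arithmetic checks out. A quick Python check using the article's rounded distance (the true distance is slightly greater, which accounts for the remaining fraction of an hour):

    c_km_s = 299_792.458           # speed of light, km/s
    distance_km = 24e9             # the article's rounded figure
    print(f"One-way light time: ~{distance_km / c_km_s / 3600:.1f} hours")   # ~22.2

So a single command-and-response cycle alone takes close to two days before the flight team learns anything.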

The spacecraft's fault protection system lowered the rate at which its radio transmitter was sending back data to use less power, according to NASA. However, while conserving the spacecraft's power, this mode also changes the X-band radio signal, a frequency range within the electromagnetic spectrum that the DSN's antennas listen for.

The flight team was able to locate the signal a day later but then, on October 19, communication with Voyager 1 stopped entirely. Voyager 1's fault protection system appeared to have been triggered twice more, and it turned off the X-band transmitter altogether. The spacecraft switched to a second radio transmitter called the S-band, which uses less power but transmits a significantly fainter signal. Voyager 1's S-band transmitter hadn't been used since 1981, and the flight team was not sure its signal could be detected due to the spacecraft being much farther away today than it was 43 years ago.

Still, the team of engineers didn't want to risk sending another signal to the X-band transmitter, so they decided to give the fainter S-band a shot. On October 22, the team sent a command to confirm whether the S-band transmitter was working and was finally able to reconnect with Voyager 1 two days later. NASA engineers are currently working to determine what may have triggered the spacecraft's fault protection system as they attempt to resolve the issue.

Voyager 1 launched in 1977, less than a month after its twin probe, Voyager 2, began its journey to space. The spacecraft took a faster route, exiting the asteroid belt earlier than its twin and making close encounters with Jupiter and Saturn. Along the way, it discovered two Jovian moons, Thebe and Metis, as well as five new moons and a new ring called the G-ring around Saturn. Voyager 1 ventured into interstellar space in August 2012, becoming the first spacecraft to cross the boundary of our solar system.

The spacecraft has been flying for 47 years, and all that time in deep space has taken a toll on the interstellar probe. NASA engineers have had to come up with creative ways to keep the iconic mission alive. The team recently switched to a different set of thrusters than the one the spacecraft had been relying on, which became clogged with silicon dioxide over the years, using a delicate procedure to preserve Voyager 1's power. Earlier this year, the team of engineers also fixed a communication glitch that had been causing Voyager 1 to transmit gibberish to ground control.

Voyager 1 is no spring chicken, and its upkeep has not been an easy task over the years, especially from billions of miles away. But all-in-all, humanity's long-standing interstellar probe is well worth the effort.


Original Submission

posted by janrinok on Thursday October 31, @01:59AM   Printer-friendly
from the transparently-terrible dept.

Arthur T Knackerbracket has processed the following story:

After countless years pondering the idea, the FCC in 2022 announced that it would start politely asking the nation’s lumbering telecom monopolies to affix a sort of “nutrition label” on to broadband connections. The labels will clearly disclose the speed and latency (ping) of your connection, any hidden fees users will encounter, and whether the connection comes with usage caps or “overage fees.”

Initially just a voluntary measure, bigger ISPs had to start using the labels back in April. Smaller ISPs had to start using them as of October 10. In most instances they’re supposed to look something like this [image].

As far as regulatory efforts go, it’s not the worst idea. Transparency is lacking in broadband land, and U.S. broadband and cable companies have a 30+ year history of ripping off consumers with an absolute cavalcade of weird restrictions, fees, surcharges, and connection limitations.

Here’s the thing though: transparently knowing you’re being ripped off doesn’t necessarily stop you from being ripped off. A huge number of Americans live under a broadband monopoly or duopoly, meaning they have no other choice in broadband access. As such, Comcast or AT&T or Verizon can rip you off, and you have absolutely no alternative options that allow you to vote with your wallet.

That wouldn’t be as much of a problem if U.S. federal regulators had any interest in reining in regional telecom monopoly power, but they don’t. In fact, members of both parties are historically incapable of even admitting monopoly harm exists. Democrats are notably better at at least trying to do something, even if that something often winds up being decorative regulatory theater.

The other problem: with the help of a corrupt Supreme Court, telecoms and their Republican and libertarian besties are currently engaged in an effort to dismantle what’s left of the FCC’s consumer protection authority under the pretense this unleashes “free market innovation.” It, of course, doesn’t; regional monopolies like Comcast just double down on all of their worst impulses, unchecked.

If successful, even fairly basic efforts like this one won’t be spared, as the FCC won’t have the authority to enforce much of anything.

It’s all very demonstrative of a U.S. telecom industry that’s been broken by monopoly power, a lack of competition, and regulatory capture. As a result, even the most basic attempts at consumer protection are constantly undermined by folks who’ve dressed up greed as some elaborate and intellectual ethos.


Original Submission

posted by janrinok on Wednesday October 30, @09:15PM   Printer-friendly
from the millions-billions-trillions-next-you-gonna-start-talking-real-money-here dept.

Our last meeting of the state visit, in the Great Hall of the People, was with Li Keqiang, the premier of the State Council and the titular head of China's government. If anyone in the American group had any doubts about China's view of its relationship with the United States, Li's monologue would have removed them. He began with the observation that China, having already developed its industrial and technological base, no longer needed the United States. He dismissed U.S. concerns over unfair trade and economic practices, indicating that the U.S. role in the future global economy would merely be to provide China with raw materials, agricultural products, and energy to fuel its production of the world's cutting-edge industrial and consumer products.

H.R. McMaster: How China Sees The World. The Atlantic Monthly, May 2020.

China has the world's largest manufacturing sector, accounting for about 31% of total global manufacturing output. The EU's manufacturing sector has a global production share of 20%, while the United States accounts for about 17%.

In the United States, around 12.3 million people work in manufacturing; for the EU this number is 29.7 million. China's manufacturing sector employs over 120 million people.

Some of the hallmarks of the Biden Administration are its Inflation Reduction Act (IRA), the Bipartisan Infrastructure Law (BIL), and the CHIPS and Science Act. Both IRA and CHIPS act combined aim to inject close to a trillion dollars into specific sections of the manufacturing sector, while the INVEST in America Act (BIL) adds another 1.2 trillion dollar investment into transportation and road projects and electric grid renewal.

On June 12 of this year, the Joint Economic Committee of the US Senate held a hearing, titled "Made in America: The Boom in U.S. Manufacturing Investment". For that hearing, four witnesses were called.

For two of the witnesses, the future for US manufacturing looks bright, with the help of the acts mentioned above. If we want to remain a rich country, we need to invest in advanced manufacturing, they claim.

Rich countries are countries which have accumulated superior knowledge for producing highly complex, leading-edge technologies. With that go successful enterprises and high-pay, high-quality jobs, and a more diversified economy. Diverting money into manufacturing does not need to harm other sectors of the economy: look at Silicon Valley. Now the global software powerhouse, it started with manufacturing transistors.

The Acts passed during this Administration do help: the private sector invested approximately $80bn into manufacturing construction in 2019; in 2024 that has increased to an annualized $220bn.

The other two witnesses -- both connected to the Cato Institute -- have a different outlook though. Policies where you target specific sectors of the economy rarely work, they claim. They are bound to stimulate waste and corruption, and direct funds away from companies who really could use them: and that's without even talking about fiscal deficits and such. Better turn those funds towards generalized tax reductions, which will ultimately stimulate more investment into the broader economy.

Also, one should take into account the reaction of the outside world here: the EU is already working on its own industrial policies, largely in response to the Biden Administration's Acts. We might very well end up in a zero-sum game, where the only beneficiaries are a range of companies which are artificially kept alive with grant money.

Now, dear reader, it may come as a surprise to you, but you have just been urgently asked -- by a prominent US Presidential Candidate -- to give your advice on a New Industrial Policy for America.

What will you say? How will you argue?


Original Submission

posted by hubie on Wednesday October 30, @04:28PM   Printer-friendly
from the sun-is-the-same-in-a-relative-way dept.

Arthur T Knackerbracket has processed the following story:

A group of researchers in the UK affiliated with the BSS (British Sleep Society) published a paper this week calling for the permanent abolition of Daylight Saving Time (DST) and adherence to Greenwich Mean Time (GMT), in large part because modern evidence suggests having that extra hour of sunlight in the evenings is worse for our health than we thought back in the 1970s when the concept was all the rage in Europe.

Not only does GMT more closely align with the natural day/light cycle in the UK, the boffins assert, but decades of research into sleep and circadian rhythms have been produced since DST was enacted that have yet to be considered.

The human circadian rhythm, the 24-hour cycle our bodies go through, drives a lot about our health beyond sleep. It regulates hormone release, gene expression, metabolism, mood (who isn't grumpier when waking up in January?), and the like. In short, it's important. Messing with that rhythm by forcing ourselves out of bed earlier for several months out of the year can have lasting effects, the researchers said.

According to their review of recent research, having light trigger our circadian rhythms in the mornings to wake us up is far more important than an extra hour of light in the evenings. In fact, contrary to the belief that an extra hour of light in the evenings is beneficial, it might actually cause health problems by, again, mucking about with the human body's understanding of what time it is and how we ought to feel about it.

"Disruption of the daily synchronization of our body clocks causes disturbances in our physiology and behavior … which leads to negative short and long-term physical and mental health outcomes," the authors said. 

That, and we've just plain fooled ourselves into thinking it benefits us in any real way.

[...] And for the love of sleep, the researchers beg, don't spring forward permanently.

"Mornings are the time when our body clocks have the greatest need for light to stay in sync," said Dr Megan Crawford, lead author and senior lecturer in psychology at University of Strathclyde. "At our latitudes there is simply no spare daylight to save during the winter months and given the choice between natural light in the morning and natural light in the afternoon, the scientific evidence favors light in the morning."


Original Submission

posted by hubie on Wednesday October 30, @11:43AM   Printer-friendly
from the one-step-forward-or-a-security-nightmare dept.

Arthur T Knackerbracket has processed the following story:

The US Consumer Financial Protection Bureau (CFPB) has finalized a rule that requires banks, credit card issuers, and most other financial firms to provide consumers with access to their personal financial data - and to help them transfer it, generally at no cost.

When the rule eventually takes effect – anywhere from 2026 to 2030 depending upon financial firm assets and revenue, with the largest institutions having the least time to comply – it aims to make financial services more competitive by allowing consumers to move to different vendors more easily.

Under this "open banking" rule, covered financial firms: "shall make available to a consumer, upon request, information in the control or possession of the covered person concerning the consumer financial product or service that the consumer obtained from such covered person, subject to certain exceptions."

This data has to be made available in an electronic form usable by consumers and authorized third parties.

[...] "Too many Americans are stuck in financial products with lousy rates and service," said CFPB director Rohit Chopra in a statement. "Today's action will give people more power to get better rates and service on bank accounts, credit cards, and more."

The rule also outlines required privacy protections for personal financial data, specifying that third parties can only use data for a requested product or service.

"They cannot secretly collect, use, or retain consumers’ data for their own unrelated business reasons – for example, by offering consumers a loan using consumer data that they also use for targeted advertising," the CFPB said.

The rule prohibits data providers from making consumer data available to third parties through screen scraping and it requires that businesses delete consumer data when a person revokes access.

[...] "Enabling secure transfer or sharing of financial data will incentivize financial companies to compete for customers by providing better interest rates and customer service.


Original Submission

posted by hubie on Wednesday October 30, @06:55AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Despite an official ban on Russian government workers using the iPhone, an unreliable report says that sales have risen dramatically.

It was in 2023 that Russia's Federal Security Service (FSB) tried banning government staff from using iPhones. Purportedly, it was because the FSB believed the US was using the iPhone for eavesdropping.

Now according to Reuters, local Russian sources are saying the ban rather failed. While the figures have yet to be confirmed by any other source, the Vedomosti business daily claims that purchases of iPhones from January 2024 to September were four times higher year over year.

[...] That rather dispels any idea that Russian officials are rebelling against the ban en masse. But it also points to how the original ban was seemingly far from a blanket one.

Equally, that destroys the idea that the FSB can be serious in its allegations of iPhone wiretapping. It's always been more likely that any ban is a retaliation for how Apple has ceased directly doing business in Russia since the start of the war with Ukraine.


Original Submission

posted by hubie on Wednesday October 30, @02:10AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Researchers at the University of Chicago and Argonne National Lab have developed a new type of optical memory that stores data by transferring light from rare-earth element atoms embedded in a solid material to nearby quantum defects. They published their study in Physical Review Research.

The problem that the researchers aim to solve is the diffraction limit of light in standard CDs and DVDs. Current optical storage has a hard cap on data density because each single bit can't be smaller than the wavelength of the reading/writing laser.
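That limit is straightforward to quantify. A short Python sketch using the textbook diffraction-limited spot size d ≈ λ / (2 × NA); the wavelengths and numerical apertures are the commonly cited values for each disc format, taken here as assumptions rather than from the study:

    # (wavelength in meters, numerical aperture) for each disc format
    formats = {"CD": (780e-9, 0.45), "DVD": (650e-9, 0.60), "Blu-ray": (405e-9, 0.85)}
    for name, (wl, na) in formats.items():
        spot_nm = wl / (2 * na) * 1e9    # diffraction-limited spot, in nanometers
        print(f"{name}: minimum spot ~{spot_nm:.0f} nm")

Shrinking bits any further means either a shorter wavelength or a trick that sidesteps diffraction entirely, which is what the rare-earth scheme aims for.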

The researchers propose bypassing this limit by stuffing a host material, such as a magnesium oxide (MgO) crystal, with rare-earth emitters. The trick, called wavelength multiplexing, involves having each emitter use a slightly different wavelength of light. They theorized that this would allow cramming far more data into the same storage footprint.

The researchers first had to tackle the physics and model all the requirements to build a proof of concept. They simulated a theoretical solid material filled with rare-earth atoms that absorb and re-emit light. The models then showed how the nearby quantum defects could capture and store the returned light.

One of the fundamental discoveries was that when a defect absorbs the narrow wavelength energy from those nearby atoms, it doesn't just get excited – its spin state flips. Once it flips, it is nearly impossible to revert, meaning those defects could legitimately store data for a long time.

While it's a promising first step, some crucial questions still need answers. For example, verifying how long those excited states persist is essential. Details were also light on capacity estimates – the scientists touted "ultra-high-density" but didn't provide any projections against current disc capacities. Yet, despite the remaining hurdles, the researchers are hyped, calling it a "huge first step."

Of course, turning all this into an actual commercial storage product will likely take years of additional research and development.


Original Submission