Arthur T Knackerbracket has processed the following story:
ICE buys location info from data brokers to evade warrant requirements. It slurps data from utility companies to locate immigrants who need electricity, heat, and internet connections but would rather not hand that same information directly to the US government and be hassled for trying to exist and earn a living.
The operative theory appears to be that immigrants here illegally aren’t protected by the US Constitution. But that’s simply not true. Rights are extended to people living within our borders, whether or not they’re US citizens. However, none of that is going to matter if Trump succeeds in deploying his mass deportation plans — ones that are long on rhetoric and short on actual planning at the moment.
Rest assured, the round-ups will outpace the planning following Trump’s re-ascension. ICE tends to be very proactive when it feels the person in the Oval Office has its back. “Going forward” means “starting now,” as this New Yorker article written by Ronan Farrow points out. Be sure you don’t overlook what’s being said in the last sentence of the article’s opening paragraph.
Nice. So, we’re doing business with a company that will only allow the US government to target US citizens. While it’s great that it’s preventing outside countries from doing this (which is a claim I’m not inclined to believe), it’s definitely shitty that it’s offering a bespoke version to the DHS and ICE for the express purpose of plug-and-play domestic surveillance.
Then there’s the first sentence of the paragraph, which indicates this was in motion two months ahead of the election. That suggests two equally disturbing possibilities: either the outgoing administration was fine with expanded domestic surveillance, or the DHS felt it should get the ball rolling because the victor of the 2024 election was likely to support it. Perhaps the purchasing department was just running a Trump re-election parlay. Or maybe the DHS felt pretty confident Kamala Harris wouldn’t object to mass surveillance, even while she argued against mass deportations. Not great!
The DHS may have handled the macro, but ICE jumped on the micro as soon as it became clear who was headed to the Oval Office in January.
More surveillance and a shit-ton more money for the private companies handling federal prisoners and federal detainees, some of which have already expressed their pleasure over this year’s presidential election during earnings calls.
As Farrow’s article points out, the federal government has “struggled” both in terms of oversight and accountability when it comes to expanded surveillance powers and surveillance tech rollouts. One could credibly argue you can’t call a terminal lack of interest in oversight and accountability a “struggle.” But no one can argue this turn of events — one that aligns government agencies’ thirst for expanded power with technical advances that make this sort of thing cheaper and easier than it’s ever been — is going to make America great again. It’s just going to make America something it’s really never been: you know, East Germany, the USSR, China, etc.
And we’re all going to pay the price, and not just in terms of the additional taxes that will be needed to gird the infrastructural loins of Trump’s mass deportation plans. If this moves forward, America will be the worst it’s ever been — a nation hollowed out deliberately by bigots who think the nation can only be great if nearly half of its population lives in fear of being forcibly ejected. And while that happens, tech companies that aid and abet this atrocity will make billions off the misery of millions.
https://newatlas.com/automotive/mercedes-reinvents-brakes-ev-in-drive/
In the simplest terms, nearly every modern car on the planet uses disk brakes: a rotor attached to a hub, plus a caliper with brake pads fixed to the control arm, at each wheel. The driver presses the brake pedal and hydraulic fluid is pushed down the brake lines into the caliper, forcing the pistons outward and pressing the brake pads against the rotor, slowing the rotation of the rotor connected to the hub and thus the wheel.
There are other systems, like drum brakes, air brakes, band brakes, the Flintstones method, et cetera, that have also been around since the dawn of the automotive industry. The concept almost always remains the same: using friction to slow down. And so it doesn't go unsaid, yes, there are compression brake systems as well, but that's entirely different.
Mercedes-Benz has put a new spin on an age-old concept with what it calls "in-drive brakes" for electric vehicles. The system being developed at the company's research and development department in Sindelfingen, Germany, integrates the brakes right into the drivetrain, in an arrangement that works very much like a transmission brake. It resembles clutch plates – but with a unique twist.
There are no calipers; instead, a circular brake pad connected directly to the output shaft of the electric motor is pressed against a stationary water-cooled ring, all of which is contained in an enclosed system.
According to Mercedes, the in-drive brake system shouldn't require servicing for the life of the vehicle, potentially saving the owner thousands of dollars in brake repairs and replacements. Even the brake dust is collected in a small inner compartment that won't require emptying.
Brake dust is a major contributor to pollution, particularly in urban areas with lots of stop-and-go traffic. And if you've ever driven down a long, steep grade like the Grapevine, just north of Los Angeles, California, you're no stranger to the smell of brake dust – and the discomfort in your nasal passages. EV motors inherently act as a brake when the accelerator is released, since they can regenerate electricity back into the batteries, slowing the vehicle in the process. An actual brake system is still needed, however.
Though the in-drive brake is still undergoing testing, Mercedes reckons that brake fade will be a non-issue as the system is water-cooled. Given the in-drive brake system relocates all the necessary "slow down" bits away from the wheels, unsprung weight (weight that isn't carried by the chassis, and instead spins or moves with the wheels, creating gyroscopic forces) is significantly reduced, improving both handling and ride quality. Wheels could also be made more aerodynamically efficient without the constraints of rotors and calipers.
ISPs tell FCC that mistreated users would switch to one of their many other options:
Lobby groups for Internet service providers claim that ISPs' customer service is so good already that the government shouldn't consider any new regulations to mandate improvements. They also claim ISPs face so much competition that market forces require providers to treat their customers well or lose them to competitors.
Cable lobby group NCTA-The Internet & Television Association told the Federal Communications Commission in a filing that "providing high-quality products and services and a positive customer experience is a competitive necessity in today's robust communications marketplace. To attract and retain customers, NCTA's cable operator members continuously strive to ensure that the customer support they provide is effective and user-friendly. Given these strong marketplace imperatives, new regulations that would micromanage providers' customer service operations are unnecessary."
Lobby groups filed comments in response to an FCC review of customer service that was announced last month, before the presidential election. While the FCC's current Democratic leadership is interested in regulating customer service practices, the Republicans who will soon take over opposed the inquiry.
USTelecom, which represents telcos such as AT&T and Verizon, said that "the competitive broadband marketplace leaves providers of broadband and other communications services no choice but to provide their customers with not only high-quality broadband, but also high-quality customer service."
"If a provider fails to efficiently resolve an issue, they risk losing not only that customer—and not just for the one service, but potentially for all of the bundled services offered to that customer—but also any prospective customers that come across a negative review online. Because of this, broadband providers know that their success is dependent upon providing and maintaining excellent customer service," USTelecom wrote.
While the FCC Notice of Inquiry said that providers should "offer live customer service representative support by phone within a reasonable timeframe," USTelecom's filing touted the customer service abilities of AI chatbots. "AI chat agents will only get better at addressing customers' needs more quickly over time—and if providers fail to provide the customer service and engagement options that their customers expect and fail to resolve their customers' concerns, they may soon find that the consumer is no longer a customer, having switched to another competitive offering," the lobby group said.
The lobby groups' description may surprise the many Internet users suffering from little competition and poor customer service, such as CenturyLink users who had to go without service for over a month because of the ISP's failure to fix outages. The FCC received very different takes on the state of ISP customer service from regulators in California and Oregon.
The Mt. Hood Cable Regulatory Commission in northwest Oregon, where Comcast is the dominant provider, told the FCC that local residents complain about automated customer service representatives; spending hours on hold while attempting to navigate automated voice systems; billing problems including "getting charged after cancelling service, unexpected price increases, and being charged for equipment that was returned"; and service not being restored quickly after outages.
The California Public Utilities Commission (CPUC) told the FCC that it performed a recent analysis finding "that only a fraction of California households enjoy access to a highly competitive market for [broadband Internet service], with only 26 percent of households having a choice between two or more broadband providers utilizing either cable modem or fiber optic technologies." The California agency said the result "suggests that competitive forces alone are insufficient to guarantee service quality for customers who depend upon these services."
CPUC said its current rulemaking efforts for California "will establish standards for service outages, repair response time, and access to live representatives." The agency told the FCC that if it adopts new customer service rules for the whole US, it should "permit state and local governments to set customer service standards that exceed the adopted standards."
The FCC also received a filing from several advocacy groups focused on accessibility for people with disabilities. The groups asked for rules "establishing baseline standards to ensure high-quality DVC [direct video calling for American Sign Language users] across providers, requiring accommodations for consumers returning rental equipment, and ensuring accessible cancellation processes." The groups said that "providers should be required to maintain dedicated, well-trained accessibility teams that are easily reachable via accessible communication channels, including ASL support."
"We strongly caution against relying solely on emerging AI technologies without mandating live customer service support," the groups said.
The FCC's Notice of Inquiry on customer service was approved 3–2 in a party-line vote on October 10. FCC Chairwoman Jessica Rosenworcel said that hundreds of thousands of customers file complaints each year "because they have run into issues cancelling their service, are saddled with unexpected charges, are upset by unexplained outages, and are frustrated with billing issues they have not been able to resolve on their own. Many describe being stuck in 'doom loops' that make it difficult to get a real person on the line to help with service that needs repair or to address charges they believe are a mistake."
If the FCC leadership wasn't changing hands, the Notice of Inquiry could be the first step toward a rulemaking. "We cannot ignore these complaints, especially not when we know that it is possible to do better... We want to help improve the customer experience, understand what tools we have to do so, and what gaps there may be in the law that prevent consumers from having the ability to resolve routine problems quickly, simply, and easily," Rosenworcel said.
But the proceeding won't go any further under incoming Chairman Brendan Carr, a Republican chosen by President-elect Donald Trump. Carr dissented from the Notice of Inquiry, saying that the potential actions explored by the FCC exceed its authority and that the topic should be handled instead by the Federal Trade Commission.
Carr said the FCC should work instead on "freeing up spectrum and eliminating regulatory barriers to deployment" and that the Notice of Inquiry is part of "the Biden-Harris Administration's efforts to deflect attention away from the necessary course correction."
Carr has made it clear that he is interested in regulating broadcast media and social networks more than the telecom companies the FCC traditionally focuses on. Carr wrote a chapter for the conservative Heritage Foundation's Project 2025 in which he criticized the FCC for "impos[ing] heavy-handed regulation rather than relying on competition and market forces to produce optimal outcomes."
With Carr at the helm, ISPs are likely to get what they're asking for: No new regulations and elimination of at least some current rules. "Rather than saddling communications providers with unnecessary, unlawful, and potentially harmful regulation, the Commission should encourage the pro-consumer benefits of competition by reducing the regulatory burdens and disparities that are currently unfairly skewing the marketplace," the NCTA told the FCC, arguing that cable companies face more onerous regulations than other communications providers.
How do they say that with a straight face?
OpenAI denies deleting evidence, asks why NYT didn’t back up data:
OpenAI keeps deleting data that could allegedly prove the AI company violated copyright laws by training ChatGPT on authors' works. Apparently largely unintentional, the sloppy practice is seemingly dragging out early court battles that could determine whether AI training is fair use.
Most recently, The New York Times accused OpenAI of unintentionally erasing programs and search results that the newspaper believed could be used as evidence of copyright abuse.
The NYT apparently spent more than 150 hours extracting training data, while following a model inspection protocol that OpenAI set up precisely to avoid conducting potentially damning searches of its own database. This process began in October, but by mid-November, the NYT discovered that some of the data gathered had been erased due to what OpenAI called a "glitch."
Looking to update the court about potential delays in discovery, the NYT asked OpenAI to collaborate on a joint filing admitting the deletion occurred. But OpenAI declined, instead filing a separate response calling the newspaper's accusation that evidence was deleted "exaggerated" and blaming the NYT for the technical problem that triggered the data deleting.
OpenAI denied deleting "any evidence," instead admitting only that file-system information was "inadvertently removed" after the NYT requested a change that resulted in "self-inflicted wounds." According to OpenAI, the tech problem emerged because NYT was hoping to speed up its searches and requested a change to the model inspection set-up that OpenAI warned "would yield no speed improvements and might even hinder performance."
The AI company accused the NYT of negligence during discovery, "repeatedly running flawed code" while conducting searches of URLs and phrases from various newspaper articles and failing to back up their data. Allegedly the change that NYT requested "resulted in removing the folder structure and some file names on one hard drive," which "was supposed to be used as a temporary cache for storing OpenAI data, but evidently was also used by Plaintiffs to save some of their search results (apparently without any backups)."
Once OpenAI figured out what happened, data was restored, OpenAI said. But the NYT alleged that the only data that OpenAI could recover did "not include the original folder structure and original file names" and therefore "is unreliable and cannot be used to determine where the News Plaintiffs' copied articles were used to build Defendants' models."
In response, OpenAI suggested that the NYT could simply take a few days and re-run the searches, insisting, "contrary to Plaintiffs' insinuations, there is no reason to think that the contents of any files were lost." But the NYT does not seem happy about having to retread any part of model inspection, continually frustrated by OpenAI's expectation that plaintiffs must come up with search terms when OpenAI understands its models best.
OpenAI claimed that it has consulted on search terms and been "forced to pour enormous resources" into supporting the NYT's model inspection efforts while continuing to avoid saying how much it's costing. Previously, the NYT accused OpenAI of seeking to profit off these searches, attempting to charge retail prices instead of being transparent about actual costs.
Now, OpenAI appears to be more willing to conduct searches on behalf of NYT that it previously sought to avoid. In its filing, OpenAI asked the court to order news plaintiffs to "collaborate with OpenAI to develop a plan for reasonable, targeted searches to be executed either by Plaintiffs or OpenAI."
How that might proceed will be discussed at a hearing on December 3. OpenAI said it was committed to preventing future technical issues and was "committed to resolving these issues efficiently and equitably."
This isn't the only time that OpenAI has been called out for deleting data in a copyright case.
In May, book authors, including Sarah Silverman and Paul Tremblay, told a US district court in California that OpenAI admitted to deleting the controversial AI training data sets at issue in that litigation. Additionally, OpenAI admitted that "witnesses knowledgeable about the creation of these datasets have apparently left the company," the authors' court filing said. Unlike the NYT, the book authors suggested that OpenAI's deletions appeared potentially suspicious.
"OpenAI's delay campaign continues," the authors' filing said, alleging that "evidence of what was contained in these datasets, how they were used, the circumstances of their deletion and the reasons for" the deletion "are all highly relevant."
The judge in that case, Robert Illman, wrote that OpenAI's dispute with authors has so far required too much judicial intervention, noting that both sides "are not exactly proceeding through the discovery process with the degree of collegiality and cooperation that might be optimal." Wired noted similarly the NYT case is "not exactly a lovefest."
As these cases proceed, plaintiffs in both cases are struggling to decide on search terms that will surface the evidence they seek. While the NYT case is bogged down by OpenAI seemingly refusing to conduct any searches yet on behalf of publishers, the book authors' case is instead being dragged out by the authors' failure to provide search terms. Only four of the 15 authors suing have sent search terms, as their deadline for discovery approaches on January 27, 2025.
NYT judge rejects key part of fair use defense
OpenAI's defense primarily hinges on courts agreeing that copying authors' works to train AI is a transformative fair use that benefits the public, but the judge in the NYT case, Ona Wang, rejected a key part of that fair use defense late last week.
To win their fair use argument, OpenAI was trying to modify a fair use factor regarding "the effect of the use upon the potential market for or value of the copyrighted work" by invoking a common argument that the factor should be modified to include the "public benefits the copying will likely produce."
Part of this defense tactic sought to prove that the NYT's journalism benefits from generative AI technologies like ChatGPT, with OpenAI hoping to topple NYT's claim that ChatGPT posed an existential threat to its business. To that end, OpenAI sought documents showing that the NYT uses AI tools, creates its own AI tools, and generally supports the use of AI in journalism outside the court battle.
On Friday, however, Wang denied OpenAI's motion to compel this kind of evidence. Wang deemed it irrelevant to the case despite OpenAI's claims that if AI tools "benefit" the NYT's journalism, that "benefit" would be relevant to OpenAI's fair use defense.
"But the Supreme Court specifically states that a discussion of 'public benefits' must relate to the benefits from the copying," Wang wrote in a footnote, not "whether the copyright holder has admitted that other uses of its copyrights may or may not constitute fair use, or whether the copyright holder has entered into business relationships with other entities in the defendant's industry."
This likely stunts OpenAI's fair use defense by cutting off an area of discovery that OpenAI previously fought hard to pursue. It essentially leaves OpenAI to argue that its copying of NYT content specifically serves a public good, not the act of AI training generally.
In February, Ars forecasted that the NYT might have the upper hand in this case because the NYT already showed that sometimes ChatGPT would reproduce word-for-word snippets of articles. That will likely make it harder to convince the court that training ChatGPT by copying NYT articles is a transformative fair use, as Google Books famously did when copying books to create a searchable database.
For OpenAI, the strategy seems to be to erect as strong a fair use case as possible to defend its most popular release. And if the court sides with OpenAI on that question, it won't really matter how much evidence the NYT surfaces during model inspection. But if the use is not seen as transformative and then the NYT can prove the copying harms its business—without benefiting the public—OpenAI could risk losing this important case when the verdict comes in 2025. And that could have implications for book authors' suit as well as other litigation, expected to drag into 2026.
It's not our fault.
https://spectrum.ieee.org/semiconductor-fabrication
In 1970, Bill Harding envisioned a fully automated wafer-fabrication line that would produce integrated circuits in less than one day. Not only was such a goal gutsy 54 years ago, it would be bold even in today's billion-dollar fabs, where the fabrication time of an advanced IC is measured in weeks, not days. Back then, ICs, such as random-access memory chips, were typically produced in a monthlong stop-and-go march through dozens of manual work stations.
At the time, Harding was the manager of IBM's Manufacturing Research group, in East Fishkill, N.Y. The project he would lead to make his vision a reality, all but unknown today, was called Project SWIFT. To achieve such an amazingly short turnaround time required a level of automation that could only be accomplished by a paradigm shift in the design of integrated-circuit manufacturing lines. Harding and his team accomplished it, achieving advances that would eventually be reflected throughout the global semiconductor industry. Many of SWIFT's groundbreaking innovations are now commonplace in today's highly automated chip fabrication plants, but SWIFT's incredibly short turnaround time has never been equaled.
SWIFT averaged 5 hours to complete each layer of its fabrication process, while the fastest modern fabs take 19 hours per processing layer, and the industry average is 36 hours. Although today's integrated circuits are built with many more layers, on larger wafers the size of small pizzas, and the processing is more complex, those factors do not altogether close the gap. Harding's automated manufacturing line was really, truly, swift.
I encountered Harding for the first time in 1962, and hoped it would be the last. IBM was gearing up to produce its first completely solid-state computer, the System/360. It was a somewhat rocky encounter. "What the hell good is that?" he bellowed at me as I demonstrated how tiny, unpackaged semiconductor dice could be automatically handled in bulk for testing and sorting.
Supreme Court wants US input on whether ISPs should be liable for users' piracy:
The Supreme Court signaled it may take up a case that could determine whether Internet service providers must terminate users who are accused of copyright infringement. In an order issued today, the court invited the Department of Justice's solicitor general to file a brief "expressing the views of the United States."
In Sony Music Entertainment v. Cox Communications, the major record labels argue that cable provider Cox should be held liable for failing to terminate users who were repeatedly flagged for infringement based on their IP addresses being connected to torrent downloads. There was a mixed ruling at the US Court of Appeals for the 4th Circuit as the appeals court affirmed a jury's finding that Cox was guilty of willful contributory infringement but reversed a verdict on vicarious infringement "because Cox did not profit from its subscribers' acts of infringement."
That ruling vacated a $1 billion damages award and ordered a new damages trial. Cox and Sony are both seeking a Supreme Court review. Cox wants to overturn the finding of willful contributory infringement, while Sony wants to reinstate the $1 billion verdict.
The Supreme Court asking for US input on Sony v. Cox could be a precursor to the high court taking up the case. For example, the court last year asked the solicitor general to weigh in on Texas and Florida laws that restricted how social media companies can moderate their platforms. The court subsequently took up the case and vacated lower-court rulings, making it clear that content moderation is protected by the First Amendment.
Cox has said that letting the piracy ruling stand "would force ISPs to terminate Internet service to households or businesses based on unproven allegations of infringing activity, and put them in a position of having to police their networks." Cox said that ISPs "have no way of verifying whether a bot-generated notice is accurate" and that even if the notices are accurate, terminating an account would punish every user in a household where only one person may have illegally downloaded copyrighted files.
Record labels urged the court to reinstate the vicarious infringement verdict. "As the District Court explained, the jury had ample evidence that Cox profited from its subscribers' infringement, including evidence 'that when deciding whether to terminate a subscriber for repeat infringement, Cox considered the subscriber's monthly payments,' and 'Cox repeatedly declined to terminate infringing subscribers' Internet service in order to continue collecting their monthly fees,'" the record labels' petition said.
Another potentially important copyright case involves the record labels and Grande, an ISP owned by Astound Broadband. The conservative-leaning US Court of Appeals for the 5th Circuit ruled last month that Grande violated the law by failing to terminate subscribers accused of being repeat infringers. The 5th Circuit also ordered a new trial on damages because it said a $46.8 million award was too high. Grande and the record labels are both seeking en banc rehearings of the 5th Circuit panel ruling.
But will they listen? And note the phrasing: whether Internet service providers must terminate users who are accused of copyright infringement.
X says The Onion can't have Alex Jones' Infowars accounts:
X claims in court filings that it still owns Infowars' X accounts and that they can't be sold without its permission.
Well, that's a bit of a spanner in the works. Previously: The Onion Buys InfoWars - No Seriously! - SoylentNews:
The satirical website The Onion purchased InfoWars on Thursday, a capstone on years of litigation and bankruptcy proceedings following InfoWars founder Alex Jones' defamation of families associated with the Sandy Hook Elementary School massacre.
Those families backed The Onion's bid to purchase InfoWars' intellectual property, including its website, customer lists and inventory, certain social media accounts and the production equipment used to put Jones on the air. The Connecticut families agreed to forgo a portion of their recovery to increase the overall value of The Onion's bid, enabling its success.
MORE: Alex Jones still must pay $1B judgment: Judge
The families said the purchase would put an end to Jones' misinformation campaign.
"We were told this outcome would be nearly impossible, but we are no strangers to impossible fights. The world needs to see that having a platform does not mean you are above accountability -- the dissolution of Alex Jones' assets and the death of Infowars is the justice we have long awaited and fought for," said Robbie Parker, whose daughter Emilie was killed in the Sandy Hook shooting.
In 2022, the families that brought the case against Jones in Connecticut secured a $1.4 billion verdict in their defamation lawsuit. A Texas bankruptcy court ruled on the liquidation of Jones' assets in June of this year, handing over control to an independent trustee tasked with selling them off to generate the greatest possible value for the families.
"From day one, these families have fought against all odds to bring true accountability to Alex Jones and his corrupt business. Our clients knew that true accountability meant an end to Infowars and an end to Jones' ability to spread lies, pain and fear at scale. After surviving unimaginable loss with courage and integrity, they rejected Jones' hollow offers for allegedly more money if they would only let him stay on the air because doing so would have put other families in harm's way," said Chris Mattei, attorney for the Connecticut plaintiffs and partner at Koskoff Koskoff & Bieder.
Jones had filed for bankruptcy last year in a bid to avoid paying the billion-dollar judgment, but a judge ruled he still had to settle with the Sandy Hook families.
Bankruptcy often staves off legal judgments but not if they are the result of willful and malicious injury. U.S. Bankruptcy Court Judge Christopher Lopez in Houston decided that standard was satisfied in Jones' case.
"[I]n Jones's case, the language of the jury instruction confirms that the damages awarded flow from the allegation of intent to harm the Plaintiffs – not allegations of recklessness," Lopez wrote in his ruling.
Jones had claimed on his InfoWars show that the shooting at Sandy Hook Elementary School -- which killed 26 people, including 20 elementary students -- was performed by actors following a script written by government officials to bolster the push for gun control.
New filing shows electricity demand would be flat without the industry:
Ever since data centers started spreading across the Virginia landscape like an invasive pest, one important question has remained unanswered: How much does the industry's insatiable demand for energy impact other utility customers? Under pressure from the SCC [Virginia State Corporation Commission], this month Dominion Energy Virginia finally provided the answer we feared: Ordinary Virginia customers are subsidizing Big Tech with both their money and their health.
Dominion previously hid data centers among the rest of its customer base, making it impossible to figure out if residents were paying more than their fair share of the costs of building new generation and transmission lines. Worse, if data centers are the reason for burning more fossil fuels, then they are also responsible for residents being subjected to pollution that is supposed to be eliminated under the 2020 Virginia Clean Economy Act (VCEA). The VCEA calls for most coal plants in the state to be closed by the end of this year – which is not happening – and sets rigorous conditions before utilities can build any new fossil fuel plants.
[...] Even before the 2024 IRP was filed, though, the SCC directed the utility to file a supplement. It was obvious the IRP would project higher costs and increased use of fossil fuels. How much of that, the SCC demanded to know, is attributable to data centers?
A lot, as it turns out. Though Dominion continues to obfuscate key facts, the document it filed on November 15 shows future data center growth will drive up utility spending by about 20%. Dominion did not take the analysis further to show the effect on residential rates.
The filing also shows that but for new data centers, peak demand would actually decrease slightly over the next few years, from 17,353 MW this year to 17,280 MW in 2027, before beginning a gentle rise to 17,818 MW in 2034 and 18,608 MW in 2039.
In other words, without data centers, electricity use in Dominion territory would scarcely budge over the next decade. Indeed, the slight decrease over the next three years is especially interesting because near-term numbers tend to be the most reliable, with projections getting more speculative the further out you look.
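For a rough sense of scale, here is a minimal back-of-the-envelope sketch (assuming the filing's "this year" refers to 2024, which is not stated explicitly above) that computes the growth rates implied by the no-data-center figures quoted above:

```python
# Back-of-the-envelope check of the no-data-center peak demand figures quoted
# above. Treating "this year" as 2024 is an assumption.
projections_mw = {2024: 17_353, 2027: 17_280, 2034: 17_818, 2039: 18_608}

base_year = 2024
base_mw = projections_mw[base_year]
for year, mw in projections_mw.items():
    if year == base_year:
        continue
    years = year - base_year
    cagr = (mw / base_mw) ** (1 / years) - 1  # compound annual growth rate
    print(f"{base_year} -> {year}: {mw - base_mw:+5d} MW, {cagr:+.2%} per year")
```

Even the 2039 endpoint works out to growth of well under one percent per year, which is what "scarcely budge" looks like in numbers.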
Surprised? You're not alone. We've heard for years that electric vehicles and building electrification will drive large increases in energy demand. When Dominion talks about the challenges of load growth, it cites these factors along with data centers, suggesting that ordinary people are part of the problem. We're not.
[...] In addition to showing what the energy mix might look like without data centers, the SCC directed Dominion to identify which of its approximately 200 planned transmission projects were needed solely because of data centers. The 4-page table in Dominion's supplemental filing reveals that about half of the projects are solely data center-driven, with two or three dozen more serving a mix of customers that includes data centers. I tried to add up the numbers but lost track at a billion dollars' worth of projects needed solely for data centers – and I was still on the second page.
[...] Still, most of the data center growth lies ahead of us, as do Dominion's plans for new fossil fuel and nuclear generation. With state leaders avidly chasing more data centers in the name of economic development, ordinary Virginians are left to watch the assault on their energy supply, their water, and their environment and wonder: Is anyone going to fix this?
Teen Mathematicians Tie Knots Through a Mind-Blowing Fractal:
In the fall of 2021, Malors Espinosa set out to devise a special type of math problem. As with any good research question, it would have to be thought-provoking, its solution nontrivial — something others would want to study. But an additional constraint stumped him. Malors, then a graduate student in mathematics at the University of Toronto, wanted high school students to be able to prove it.
For years, Malors had been running summer workshops for local high schoolers, teaching them about basic ideas in mathematical research and showing them how to write proofs. But a few of his students seemed ready to do more — to find out what it means to do math when there is no answer key. They just needed the right question to guide them.
[...]
Menger's statement didn't distinguish between homeomorphic curves. His proof only guaranteed, for instance, that the circle could be found in his sponge — not that all homeomorphic knots could be, their loops and tangles still intact. Malors wanted to prove that you could find every knot within the sponge.
It seemed like the right mashup to excite young mathematicians. They'd recently had fun learning about knots in his seminar. And who doesn't love a fractal? The question was whether the problem would be approachable. "I really hoped there was an answer," Malors said.
There was. After just a few months of weekly Zoom meetings with Malors, three of his high school students — Joshua Broden, Noah Nazareth and Niko Voth — were able to show that all knots can indeed be found inside the Menger sponge. Moreover, they found that the same can likely be said of another related fractal, too.
"It's a clever way of putting things together," said Radmila Sazdanovic, a topologist at North Carolina State University who was not involved in the work. In revisiting Menger's century-old theorem, she added, Malors — who usually does research in the disparate field of number theory — had apparently asked a question that no one thought to ask before. "This is a very, very original idea," she said.
[...]
Broden, Nazareth and Voth had taken several of Malors' summer workshops over the years. When he first taught them about knots in an earlier workshop, "it blew 14-year-old me's mind," said Voth.
But the Menger problem would be their first time moving beyond school workbooks with answer keys. "It was a little bit nerve-racking, because it was the first time I was doing something where truly nobody has the answer, not even Malors," said Nazareth. Maybe there was no answer at all.
Their goal was essentially to thread a microscopic sewing needle through a cloud of dust — the material that remained of the sponge after many removals. They would have to stick the pin in the right places, tie the knotted tangles with immaculate precision, and never leave the sponge. If their thread ended up floating in the empty holes of the sponge for any knot, it was game over.
Not an easy task. But there was a way to simplify it. Knots can be depicted on a flat piece of paper as special diagrams called arc presentations. To create one, you start with information about how the strands of your knot pass in front of or behind each other. Then you apply a set of rules to translate this information into a series of points on a grid. Every row and column of the grid will contain exactly two points.
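As a toy illustration of that grid structure (a sketch of the general bookkeeping only, not the students' actual construction), an arc presentation can be stored as a set of grid points and checked for the two-points-per-row-and-column property:

```python
from collections import Counter

def is_valid_arc_presentation(points):
    """Check that every occupied row and every occupied column holds exactly two points."""
    rows = Counter(r for r, _ in points)
    cols = Counter(c for _, c in points)
    return all(n == 2 for n in rows.values()) and all(n == 2 for n in cols.values())

# A 5x5 example point set satisfying the property; which knot it encodes
# depends on how the horizontal and vertical arcs cross, which isn't checked here.
example = [(0, 0), (0, 2), (1, 1), (1, 3), (2, 2), (2, 4),
           (3, 0), (3, 3), (4, 1), (4, 4)]
print(is_valid_arc_presentation(example))  # True
```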
NASA Found An Abandoned Military Base Hidden Under Greenland's Ice:
Camp Century was built as a testbed for launching nukes from the Arctic.
Arthur T Knackerbracket has processed the following story:
Deep within the ice sheet of Greenland lies a US military secret that hasn't been seen since the 1960s, but a NASA flyover earlier this year has provided an unprecedented look at the buried Cold War relic.
Camp Century, constructed in 1959 by the US Army Corps of Engineers, was built directly into the ice sheet of Greenland, giving it an interior reminiscent of Echo Base on the frozen world of Hoth in The Empire Strikes Back. At the heart of the facility was the PM-2A portable nuclear reactor, which provided power for the sprawling "city under the ice" that included housing for 200 soldiers, a theater, gym, post exchange, library, and even a chapel.
The camp was ostensibly built as a scientific outpost, and work at the facility did contribute to modern climate models thanks to ice core drilling – but its true purpose was Project Iceworm, the US Army's plan to deploy hundreds of cold-hardened Minuteman nuclear missiles capable of striking the Soviet Union across Greenland's frozen tundra.
That never came to fruition, leading to Camp Century's abandonment in 1967, after which the installation was buried under accumulating ice and snow. Camp Century is now believed to be at least 30 meters, or 100 feet, below the surface.
The only real look at Camp Century over the decades has been via ground-penetrating radar that provided, at best, a two-dimensional confirmation that some of the thousands of feet of tunnels, and whatever contents were left behind, are still down there.
That all changed in April, NASA reported this week, when the space agency's Earth Observatory unexpectedly picked up an anomaly while doing an ice sheet survey using an aircraft equipped with NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR).
Camp Century, shown as the green blob, in an image captured by Chad Greene during the mapping flight
UAVSAR, which in this case was being operated by scientists onboard a Gulfstream III aircraft, has the advantage of shooting radar signals directly downward and at an angle, meaning it's able to produce maps with more dimensionality than typical ground-penetrating radar that only aims straight down.
"We were looking for the bed of the ice and out pops Camp Century," said NASA JPL cryospheric scientist and project co-lead Alex Gardner. "We didn't know what it was at first."
JPL glaciologist and remote sensing specialist Chad Greene noted that the serendipitous image captured by UAVSAR shows individual structures of Camp Century that, when compared to existing layout plans, gives an unprecedented view into the state of the facility after nearly 60 years under the ice.
"Our goal was to calibrate, validate, and understand the capabilities and limitations of UAVSAR for mapping the ice sheet's internal layers and the ice-bed interface," Greene noted.
That said, while UAVSAR has granted new insights into the state of Camp Century, the images aren't perfect. NASA noted that, because of the angular nature of part of the capture, a band on the image makes it look like Century is below the ice bed, which is in reality miles below the ice sheet – far deeper than the ruins of the abandoned facility. The error is due to the angled radar picking up the ice bed further in the distance, NASA explained.
Because of the imperfections in the UAVSAR image of Camp Century, the image is "a novel curiosity" rather than a useful bit of scientific data, NASA said. Understanding more about Camp Century's status, its depth under the ice, and the status of the frozen water above it is critical, however, because it contains a lot of nuclear, biological, and chemical waste that the Army wasn't too worried about in the pre-climate change era.
If current climate change trends continue apace, NASA researchers determined in 2011, all the harmful stuff stored under the ice at Camp Century could leach into the surrounding ice well before surface melting begins to show changes.
NASA's predicted surface ice loss around Camp Century by 2090
And there's good reason to worry. There's lots of waste down there. By NASA's estimate, there are around 53,000 gallons of diesel fuel, 6.3 million gallons of waste water, including sewage from the camp's years in service, as well as an unknown quantity of radioactive waste and PCBs. The Atomic Heritage Foundation estimates the PM-2A reactor may have created more than 47,000 gallons of low-level radioactive waste over its lifetime, and that or more is likely buried beneath the ice too.
Previous 2D radar images of Century show the presence of buried waste material, so scientists know it's there, but without better imaging they can't know if anything has shifted or could begin to leak.
A 2011 radar capture of Camp Century showing what NASA scientists believe to be the buried waste at the abandoned facility
NASA estimates that, by 2090, climate change could cause Greenland's ice sheet to destabilize above Camp Century, but that doesn't account for leaching into the ice before surface changes begin.
Camp Century was closed in 1967 after Project Iceworm, which sought to build thousands of miles of tunnels to deploy 600 missiles, failed due to a determination that Greenland's ice sheet was too unstable to support long-term subterranean facilities. All that remains to note the presence of the facility today is a project led by the governments of Greenland and Denmark to keep watch on the site from a small outpost above the camp, which is located 150 miles inland from the US Space Force's Pituffik Space Base, formerly known as Thule Air Base, where construction of Camp Century was managed.
"Without detailed knowledge of ice thickness, it is impossible to know how the ice sheets will respond to rapidly warming oceans and atmosphere, greatly limiting our ability to project rates of sea level rise," Gardner said.
Environmental damage from Camp Century may be inevitable as the climate continues to change, likely unabated, unless the world's governments take action. NASA noted that the flight that captured the new images of Camp Century would "enable the next generation of mapping campaigns in Greenland, Antarctica and beyond," though whether additional passes over Camp Century are planned to better map the facility is unknown.
"When we don't own what we buy, everything becomes disposable..." :
Makers of smart devices that fail to disclose how long they will support their products with software updates may be breaking the Magnuson-Moss Warranty Act, the Federal Trade Commission (FTC) warned this week.
The FTC released its statement after examining 184 smart products across 64 product categories, including soundbars, video doorbells, breast pumps, smartphones, home appliances, and garage door opener controllers. Among the devices researched, the majority—or 163 to be precise—"did not disclose the connected device support duration or end date" on their product webpage, per the FTC's report [PDF]. By contrast, 11.4 percent of devices examined shared a software support duration or end date on their product page.
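A quick arithmetic check of the figures quoted above (184 products, 163 without disclosure) confirms the two percentages are consistent:

```python
# Sanity-check the FTC figures quoted above.
total = 184
no_disclosure = 163
disclosed = total - no_disclosure
print(f"{no_disclosure / total:.1%} did not disclose; "
      f"{disclosed / total:.1%} ({disclosed} products) did")
# 88.6% did not disclose; 11.4% (21 products) did
```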
In addition to manufacturers often neglecting to commit to software support for a specified amount of time, it seems that even when they share this information, it's elusive.
For example, the FTC reported that some manufacturers made software support dates available but not on the related product's webpage. Instead, this information is sometimes buried in specs, support, FAQ pages, or footnotes.
The FTC report added:
... some used ambiguous language that only imply the level of support provided, including phrases like, "lifetime technical support," "as long as your device is fully operational," and "continuous software updates," for example. Notably, staff also had difficulty finding on the product webpages the device's release date ...
At times, the FTC found glaring inconsistencies. For example, one device's product page said that the device featured "lifetime" support, "but the search result pointing to the manufacturer's support page indicated that, while other updates may still be active, the security updates for the device had stopped in 2021," per the FTC.
Those relying on Google's AI Overviews may also be misled. In one case, AI Overviews pointed to a smart gadget getting "software support and updates for 3–6 months." But through the link that AI Overviews provided, the FTC found that the three to six months figure that Google scraped actually referred to the device's battery life. The next day, AI Overviews said that it couldn't determine the duration of software support or updates for the gadget, the FTC noted.
In its report, the FTC encouraged law enforcement and policymakers to investigate whether vendors properly disclose software support commitments. The government agency warned that not informing shoppers about how long products with warranties will be supported may go against the Magnuson-Moss Warranty Act:
This law requires that written warranties on consumer products costing more than $15 be made available to prospective buyers prior to sale and that the warranties disclose a number of things, including, "a clear description and identification of products, or parts, or characteristics, or components or properties covered by and where necessary for clarification, excluded from the warranty."
The FTC also noted that vendors could be in violation of the FTC Act if omissions or misrepresentations around software support are likely to mislead shoppers.
The FTC's research follows a September letter to the agency from 17 groups, including iFixit, Public Interest Research Group, Consumer Reports, and the Electronic Frontier Foundation, imploring that the FTC provide "clear guidance" on "making functions of a device reliant on embedded software that ties the device back to a manufacturer's servers," aka software tethering.
Speaking to Ars Technica in September, Lucas Gutterman, the Designed to Last campaign director with the US PIRG Education Fund and one of the letter's signatories, expressed optimism that the FTC would get involved, as it did when it acted against Harley-Davidson in 2022 for using warranty policies to illegally limit customers' right to repair, or when it investigated the 2016 shutdown of Nest Labs' Revolv Smart Home Hub.
In response to the FTC's report this week, Gutterman pointed to initiatives like repair scores as potential remedies.
"When we don't own what we buy, everything becomes disposable, and we get stuck in a loop where products keep dying and we keep buying," he said.
As more devices join the Internet of Things, the risk of consumers falling victim to flashy marketing that promises convenient features that could be ripped away through lack of software support becomes more concerning. Whether it's dealing with bricked devices or the sudden removal of valued features, owners of smart devices—from smart bassinets and Pelotons to printers, indoor gardening systems, and toothbrushes—have all faced the harsh realities of what happens when a vendor loses interest or the ability to support products likely sold at premiums. Some are tired of waiting for vendors to commit to clear, reliable software support and are hoping that the government creates a mandatory path for disclosure.
https://physics.aps.org/articles/v17/168
Observations confirm a theoretical model explaining how—in Earth's magnetosphere—large-scale magnetic waves heat up the magnetosphere's plasma by transferring their energy to smaller-scale acoustic waves.
Ocean currents spin off huge gyres, whose kinetic energy is transferred to ever-smaller turbulent structures until viscosity has erased velocity gradients and water molecules jiggle with thermal randomness. A similar cascade plays out in space when the solar wind slams into the magnetopause, the outer boundary of Earth's magnetic field. The encounter launches large-scale magnetic, or Alfvén, waves whose energy ends up heating the plasma inside the magnetosphere. Here, however, the plasma is too thin for viscosity to mediate the cascade. Since 1971 researchers have progressively developed their understanding of how Alfvén waves in space plasmas generate heat. These studies later culminated in a specific hypothesis: Alfvén waves accelerate ion beams, which create small-scale acoustic waves, which generate heat. Now Xin An of UCLA and his collaborators have found direct evidence of that proposed mechanism [1]. What's more, the mechanism is likely at work in the solar wind and other space plasmas.
Laboratory-scale experiments struggle to capture the dynamics of rotating plasmas, and real-world observations are even more scarce. The observations that An and his collaborators analyzed were made in 2015 by the four-spacecraft Magnetospheric Multiscale (MMS) mission. Launched that year, the MMS was designed to study magnetic reconnection, a process in which the topology of magnetic-field lines is violently transformed. The field rearrangements wrought by reconnection can be large, on the scale of the huge loops that sprout from the Sun's photosphere. But the events that initiate reconnection take place in a much smaller region where neighboring field lines meet, the X-line. The four spacecraft of MMS can fly in a configuration in which all of them witness the large-scale topological transformation while one of them could happen to fly through the X-line—a place where no spacecraft had deliberately been sent before.
On September 8, 2015, the orbits of the MMS spacecraft took them through the magnetopause on the dusk side of Earth. They were far enough apart that together they could detect the passage of a large-scale Alfvén wave, while each of them could individually detect the motion of ions in the surrounding plasma. An and his collaborators later realized that these observations could be used to test the theory that ion beams and the acoustic waves that they generate mediate the conversion of Alfvén-wave energy to heat.
Data from the various instruments aboard the MMS spacecraft show signatures from all three factors that drive the energy cascade: Alfvén waves and ion beams, both of which have length scales of about 2000 km, and acoustic waves, which have length scales of 50–1500 m. Crucially, the instruments also recorded connections between the processes. The Alfvén waves' magnetic-pressure variations were in sync with fluctuations in ion density and the local electric field, while the ion beams' speeds matched those of either the local Alfvén waves or the acoustic waves.
Reference:
Xin An, Anton Artemyev, Vassilis Angelopoulos, Terry Z. Liu, Ivan Vasko, and David Malaspina, "Cross-Scale Energy Transfer from Fluid-Scale Alfvén Waves to Kinetic-Scale Ion Acoustic Waves in the Earth's Magnetopause Boundary Layer," Phys. Rev. Lett. 133, 225201. DOI: https://doi.org/10.1103/PhysRevLett.133.225201
"This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia. There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content unfiltered and infinite.
(Michelle Rowland, Minister for Communications, Australian Parliament, Nov 21)
Australia's House of Representatives has passed a bill that would ban access to social media platforms TikTok, Facebook, Snapchat, Reddit, X and Instagram for youngsters under 16. The bill passed by 102 votes to 13.
Once the bill gets through the Senate -- expected this week -- the platforms would have a year to work out how to implement the age restriction, without using government-issued identity documents (passports, driving licenses), and without digital identification through a government system.
The leaders of all eight Australian states and mainland territories have unanimously backed the plan, although Tasmania, the smallest state, would have preferred the threshold be set at 14.
There are some counter-noises though (no, not you, Elon). More than 140 academics signed an open letter to Prime Minister Anthony Albanese condemning the 16-year age limit as "too blunt an instrument to address risks effectively."
The writers of that open letter fear that the responsibility of giving access to social media will fall on the parents, and "not all parents will be able to manage the responsibility of protection in the digital world".
Further, " Some social media 'type' services appear too integral to childhood to be banned, for example short form video streamers. But these too have safety risks like risks of dangerous algorithms promoting risky content. A ban does not function to improve the products children will be allowed to use."
The open letter pleads instead for systemic regulation, which "has the capacity to drive up safety and privacy standards on platforms for all children and eschews the issues described above. Digital platforms are just like other products, and can have safety standards imposed."
Australia's ban on social media will be a world-first, with fines of up to 50 million Australian Dollars for each failure to prevent youngsters under 16 from having a social media account.
Under the laws, which won't come into force for another 12 months, social media companies could be fined up to $50 million for failing to take "reasonable steps" to keep under 16s off their platforms. There are no penalties for young people or parents who flout the rules. Social media companies also won't be able to force users to provide government identification, including the Digital ID, to assess their age.
From "ban children under the age of 16 from accessing social media" we also get the following:
Social media, or an "age-restricted social media platform", has been defined in the legislation as including services where:
- the "sole purpose, or a significant purpose" is to enable "online social interaction" between people
- people can "link to, or interact with" others on the service
- people can "post material", or
- it falls under other conditions as set out in the legislation.
Tracking Indoor Location, Movement and Desk Occupancy in the Workplace: A case study on technologies for behavioral monitoring and profiling using motion sensors and wireless networking infrastructure inside offices and other facilities
As offices, buildings and other corporate facilities become networked environments, there is a growing desire among employers to exploit data gathered from their existing digital infrastructure or additional sensors for various purposes. Whether intentionally or as a byproduct, this includes personal data about employees, their movements and behaviors.
Technology vendors are promoting solutions that repurpose an organization's wireless networking infrastructure as a means to monitor and analyze the indoor movements of employees and others within buildings. While GPS technology is too imprecise to track indoor location, Wi-Fi access points that provide internet connectivity for laptops, smartphones, tablets and other networked devices can be used to track the location of these devices. Bluetooth, another wireless technology, can also be used to monitor indoor location. This can involve Wi-Fi access points that track Bluetooth-enabled devices, so-called "beacons" that are installed throughout buildings and Bluetooth-enabled badges carried by employees. In addition, employers can utilize badging systems, security cameras and video conferencing technology installed in meeting rooms for behavioral monitoring, or even environmental sensors that record room temperature, humidity and light intensity. Several technology vendors provide systems that use motion sensors installed under desks or in the ceilings of rooms to track room and desk attendance.
[Source]: Cracked Labs
[Case Study]: https://crackedlabs.org/dl/CrackedLabs_Christl_IndoorTracking.pdf [PDF]
[Also Covered By]: The Register
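As a rough illustration of the signal-strength tracking the case study describes, here is a minimal sketch of one common approach, assuming a log-distance path-loss model and access points at known coordinates. The constants, coordinates, and helper names are all hypothetical and are not taken from the Cracked Labs report; real systems use far more sophisticated solvers and fingerprinting databases.

```python
# Illustrative sketch only: one way Wi-Fi/Bluetooth infrastructure can be turned
# into an indoor positioning system. All constants (tx_power, path-loss exponent,
# AP coordinates) are hypothetical.
import math
from itertools import combinations

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Estimate distance (metres) from a received signal strength reading
    using the log-distance path-loss model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(aps):
    """Rough position estimate from three or more access points.
    aps: list of ((x, y), rssi_dbm) tuples with known AP coordinates."""
    # Convert each RSSI reading into an estimated radius around its AP.
    circles = [((x, y), rssi_to_distance(rssi)) for (x, y), rssi in aps]
    # Average a weighted point along each AP-to-AP line -- a crude stand-in
    # for a proper least-squares solver, but enough to show the idea.
    points = []
    for ((x1, y1), r1), ((x2, y2), r2) in combinations(circles, 2):
        d = math.hypot(x2 - x1, y2 - y1) or 1e-9
        t = (d + r1 - r2) / (2 * d)
        points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# Hypothetical readings from a badge or phone as seen by three office APs.
readings = [((0.0, 0.0), -55), ((10.0, 0.0), -60), ((0.0, 10.0), -63)]
print(trilaterate(readings))   # approximate (x, y) of the tracked device
```

The point of the sketch is that no extra hardware is needed: the same access points that provide connectivity already see per-device signal strength, which is exactly why the report flags this as a byproduct of ordinary office infrastructure.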
Intel has a serious problem with Arrow Lake and memory compatibility:
We've had Intel's Arrow Lake chips on the test benches lately, and the performance we were expecting wasn't quite there. Even on some of the best LGA1851 motherboards, we've noticed various performance quirks that need more than just a quick fix. One of those odd behaviors is DDR5 RAM sticks that refuse to boot at XMP settings, regardless of what speed they are rated for.
This is separate from the latency issue caused by the slower bus speed and the shift of the memory controller to the SoC tile instead of the compute tile, and Arrow Lake seems pickier with memory than any Intel platform I can remember. Intel has always been the processor to get for fast, low-latency RAM, and that long streak has now ended.
RAM compatibility for Arrow Lake is abysmal; Intel used to be the gold standard for fast RAM support, but no longer.
I've been around computing a long time, long enough that double data rate (DDR) RAM wasn't even a thing when I started. Every new DDR revision has come with teething issues, as have most major CPU architecture changes. DDR5 still isn't a mature technology, but it's getting close as faster speeds combined with low timings become more common. Arrow Lake is the first major architecture change from Intel since 2021's Alder Lake, which supported both DDR4 and DDR5.
The combination of a new architecture and still-maturing DDR5 seems to have caused more issues, and this is one of the worst launches I've had hands-on experience with for memory compatibility. Arrow Lake feels pickier than even first-gen Ryzen, back when Samsung B-die was the king of DDR4 RAM and enthusiasts spent hours poring over spec sheets and forums to find the kits built on those vaunted DRAM modules.
But Ryzen at least had a common fix: every Samsung B-die memory stick worked fine once the DDR and SoC voltages were increased slightly. When the Core Ultra 9 285K was on the test bench, I tried nearly a dozen different DDR5 kits, and not one would boot with XMP enabled. Those kits ranged between 5,600MT/s and 8,800MT/s and between 16GB and 32GB per DIMM. Some were early DDR5 kits with XMP support, some were recent, and two were of the new CUDIMM variety that has an onboard clock driver to enable faster RAM speeds on Arrow Lake specifically. Some even had trouble booting at JEDEC speeds, which I've never experienced on any platform.
The only kit that did boot at higher speeds was a set of 8,800MT/s CUDIMMs from Kingston. But it wasn't any of the BIOS settings learned from my years of RAM overclocking on multiple platforms that got it there. I had to boot into Windows and use Gigabyte's AI Snatch program, which tested the RAM and used algorithms to decide what speed and timings the kit should be running. After a reboot into the BIOS to enable those AI-generated settings and to make sure the DDR voltage was set to 1.45V, it booted into Windows at 8,933MT/s.
There was one last issue, however: the RAM would only run in Gear 4. Most low-latency DDR4 memory runs in Gear 1; Gear 2 is a way to get higher speeds at a slight latency penalty, as it runs the memory at a 2:1 ratio relative to the memory controller. Most DDR5 uses Gear 2 to begin with, and getting higher speeds on Arrow Lake means dropping to a 4:1 ratio, aka Gear 4. That's a huge latency hit, on top of the considerable latency Arrow Lake has by design. And remember, this is one kit out of nearly a dozen that could run at or above its rated speed.
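A quick back-of-the-envelope sketch shows why Gear 4 hurts, based on my reading of the gearing described above: DDR transfers twice per memory clock, and the gear number divides the memory controller clock relative to the memory clock. The formula is an assumption for illustration, not something stated in the article.

```python
# Back-of-the-envelope sketch of the gearing described above (my reading of it,
# not an official Intel formula).
def controller_clock_mhz(transfer_rate_mts, gear):
    memory_clock = transfer_rate_mts / 2   # DDR: two transfers per memory clock
    return memory_clock / gear             # Gear N = N:1 memory:controller ratio

for gear in (1, 2, 4):
    print(f"8,933 MT/s in Gear {gear}: controller at "
          f"{controller_clock_mhz(8933, gear):.0f} MHz")
# Gear 1: ~4467 MHz, Gear 2: ~2233 MHz, Gear 4: ~1117 MHz -- in Gear 4 the
# controller runs at a quarter of the memory clock, which is where the extra
# latency comes from.
```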
XMP speeds will get fixed for the most part, as we've seen with AMD's Ryzen, which handles RAM compatibility much better now than at its initial release. Arrow Lake is Intel's Ryzen moment, with the potential to lead to much better CPUs in the future. It just has to get there, and the ring bus that shuttles data between the CPU cores and L3 cache and the hop to the memory controller are two things that need improving in the next silicon release. Intel can mitigate some aspects of the memory hit, but it can't do much about the inherent latency of the trip between the SoC tile and the compute tile.
It will likely take a combination of silicon fixes plus Windows and driver improvements, as AMD did with Ryzen. If the software running on the machine is aware of the latency, it can be rewritten to account for it somewhat and make the system snappier as a result.
[...] Intel might not be able to fix everything [...] the hardware limitations of Arrow Lake's design mean a true fix is unlikely.
The good news for consumers (and for Intel) is that the company has identified a combination of tuning and optimization issues that it can fix, and an update should be coming soon. That should lift gaming and productivity performance and improve the overall experience of using Arrow Lake chips. We're looking forward to retesting at that point to see whether we have to revise our review scores, but keep realistic expectations about the performance bump, because there's one thing no amount of optimization can wipe out.
That's the inherent latency in Arrow Lake's memory pipeline, because it's baked into the hardware itself. Moving the IMC away from the compute tile seems to have compounded the other optimization issues. Remember, when Ryzen first launched, the IMC was on the CCX and AMD still had memory latency issues, partly because of inter-CCX traffic. Later versions of Ryzen moved the IMC onto the I/O chiplet, but AMD was able to reduce the memory latency penalty thanks to how it designed the Infinity Fabric interconnect.
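To make the "baked into the hardware" point concrete, here is a purely illustrative latency budget. None of these figures are measurements or come from Intel or AMD; they are made-up numbers chosen only to show that a fixed die-to-die hop keeps (or even grows) its share of the total after everything else is tuned.

```python
# Purely illustrative numbers (not measured): a toy memory latency budget showing
# why a fixed die-to-die hop remains in the total no matter how well the rest of
# the pipeline is tuned.
baseline = {
    "core + ring bus":        18,   # ns, hypothetical
    "die-to-die hop to SoC":  15,   # ns, hypothetical -- fixed by the packaging
    "memory controller":      12,   # ns, hypothetical
    "DRAM (CAS etc.)":        45,   # ns, hypothetical
}

# Suppose firmware/driver tuning improves the controller and DRAM timings.
tuned = dict(baseline, **{"memory controller": 8, "DRAM (CAS etc.)": 38})

for name, budget in (("baseline", baseline), ("after tuning", tuned)):
    total = sum(budget.values())
    hop = budget["die-to-die hop to SoC"]
    print(f"{name}: {total} ns total, die-to-die hop {hop} ns ({hop / total:.0%})")
```

The tuned total drops, but the hop's contribution does not, which is the article's point about why software and firmware updates can only go so far.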
It already seems that Intel is returning to an integrated IMC: speculation and leaks around Panther Lake suggest the IMC will be placed on the compute tile again. That might be expected anyway, as Panther Lake is a mobile chip and wouldn't have the space on its packaging substrate for an SoC tile. Nova Lake, the successor to Arrow Lake, will reportedly move the IMC off the compute tile again, with more optimizations to reduce the latency hit. Or, at least, that's what the plan seems to be from the speculation.
[...] So, where does this leave Intel? The troubled chipmaker was already struggling with designs, having canceled Meteor Lake's desktop chips so the team could focus on Arrow Lake. One can only imagine how much worse things could have been if that hadn't happened and the engineering team had to work on two CPU lines at once. Arrow Lake is also Intel's first new architectural change since 2021, so the company is already running behind its usual tick-tock cadence of a process change followed by refinement. Even with plenty of engineering talent having gone to Apple to work on Apple Silicon, Intel still has deep talent on deck, so it's more a question of when, rather than if, it finds its groove again.