

Idiosyncratic use of punctuation - which of these annoys you the most?

  • Declarations and assignments that end with }; (C, C++, Javascript, etc.)
  • (Parenthesis (pile-ups (at (the (end (of (Lisp (code))))))))
  • Syntactically-significant whitespace (Python, Ruby, Haskell...)
  • Perl sigils: @array, $array[index], %hash, $hash{key}
  • Unnecessary sigils, like $variable in PHP
  • macro!() in Rust
  • Do you have any idea how much I spent on this Space Cadet keyboard, you insensitive clod?!
  • Something even worse...


posted by janrinok on Friday September 20, @08:34PM
from the fake-department-of-fake-liars dept.

https://arstechnica.com/information-technology/2024/09/due-to-ai-fakes-the-deep-doubt-era-is-here/

Given the flood of photorealistic AI-generated images washing over social media networks like X [arstechnica.com] and Facebook [404media.co] these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back [nytimes.com] decades—and analog media long before [wikipedia.org] that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

[...] Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend [bu.edu] years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

Doubt has been a political weapon since ancient times [populismstudies.org]. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

[...] In April, a panel of federal judges [arstechnica.com] highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials.

[...] Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential "cultural singularity [fastcompany.com]," a threshold where truth and fiction in media become indistinguishable.

[...] "Deep doubt" is a new term, but it's not a new idea. The erosion of trust in online information from synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of [theguardian.com] an upcoming "information apocalypse" due to deepfakes and questioned, "When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?"

[...] Throughout recorded history, historians and journalists have had to evaluate the reliability of sources [wm.edu] based on provenance, context, and the messenger's motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine if it's reliable, historians would evaluate the chain of custody, as well as check if other sources report the same information. They might also check the historical context to see if there is a contemporary historical record of that parchment existing. That requirement has not magically changed in the age of generative AI.

[...] You'll notice that our suggested counters to deep doubt above do not include watermarks, metadata, or AI detectors as ideal solutions. That's because trust does not inherently derive from the authority of a software tool. And while AI and deepfakes have dramatically accelerated the issue, bringing us to this new "deep doubt" era, the necessity of finding reliable sources of information about events you didn't witness firsthand is as old as history itself.

[...] It's likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine if a convincingly created media artifact was human or machine-generated solely by looking at one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled [thedailybeast.com] as AI-generated, creating ongoing pain for students in particular.

Throughout history, any form of recorded media, including ancient clay tablets, has been susceptible to forgeries [researchgate.net]. And since the invention of photography, we have never been able to fully trust a camera's output: the camera can lie [nytimes.com].

[...] Credible and reliable sourcing is our most critical tool in determining the value of information, and that's as true today as it was in 3000 BCE [wikipedia.org], when humans first began to create written records.


Original Submission

posted by hubie on Friday September 20, @03:49PM

https://cosmographia.substack.com/p/the-black-death-is-far-older-than

In 1338, among a scattering of obscure villages just to the west of Lake Issyk-Kul, Kyrgyzstan, people began dropping dead in droves. Among the many headstones found in the cemeteries of Kara-Djigach and Burana, one can read epitaphs such as "This is the grave of Kutluk. He died of the plague with his wife." Recently, ancient DNA exhumed from these sites has confirmed the presence of the plague bacterium Yersinia pestis, cause of the condition that became known as the Black Death. The strain detected in those remote graveyards of Central Asia has been identified as the most recent common ancestor of the plague that went on to kill as much as 60% of the Eurasian population in the great pandemic of the 14th century.

[...] In 2018, a team of researchers found ancient traces of the plague bacterium in 4,900-year-old remains in Sweden. A few years later, traces of the bacterium were found in a 5,000-year-old skull in Latvia. It was tentatively suggested that these finds correlate with the Neolithic Decline, and might explain the large die-off within these farming societies. However, the cases were isolated, with some of the infected buried alongside the uninfected, suggesting there wasn't an epidemic comparable to the Black Death outbreaks that would come in later millennia.

[...] Whether the Neolithic Decline was mostly, or in part, caused by the plague is still up for debate, but one thing is clear: humanity has been battling Yersinia pestis for a long, long time.


Original Submission

posted by hubie on Friday September 20, @11:03AM
from the one-step-for-AI-one-giant-leap-for-the-hype-train dept.

https://arstechnica.com/information-technology/2024/09/openais-new-reasoning-ai-models-are-here-o1-preview-and-o1-mini/

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and API users.
[...]
In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"
[...]
AI benchmarks are notoriously unreliable and easy to game; however, independent verification and experimentation from users will show the full extent of o1's advancements over time. On top of that, MIT research showed earlier this year that some of the benchmark claims OpenAI touted with GPT-4 last year were erroneous or exaggerated.

One of the examples of o1's abilities that OpenAI shared is perhaps the least consequential and impressive, but it's the most talked about due to a recurring meme where people ask LLMs to count the number of Rs in the word "strawberry." Due to tokenization, where the LLM processes words in data chunks called tokens, most LLMs are typically blind to character-by-character differences in words.
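For the curious, here's a minimal sketch of why letter counting is awkward for LLMs, using OpenAI's open-source tiktoken tokenizer (this assumes the package is installed; the exact token split varies by model):

    # Illustrative only: show how a tokenizer slices "strawberry" into
    # multi-character tokens, so a model operating on token IDs never "sees"
    # the individual letters it would need to count.
    # Requires the open-source `tiktoken` package (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
    word = "strawberry"

    token_ids = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

    print("token ids:", token_ids)
    print("pieces:   ", pieces)                 # multi-character chunks, model-dependent
    print("actual r count:", word.count("r"))   # trivially 3 at the character level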
[...]
It's no secret that some people in tech have issues with anthropomorphizing AI models and using terms like "thinking" or "reasoning" to describe the synthesizing and processing operations that these neural network systems perform.

Just after the OpenAI o1 announcement, Hugging Face CEO Clement Delangue wrote, "Once again, an AI system is not 'thinking', it's 'processing', 'running predictions',... just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it's more clever than it is."


Original Submission

posted by martyb on Friday September 20, @06:16AM
from the is-there-such-a-thing-as-a-good-patent-troll dept.

Tell Congress: We Can't Afford More Bad Patents:

A key Senate Committee is about to vote on two bills that would bring back some of the worst patents and empower patent trolls.

The Patent Eligibility Restoration Act (PERA), S. 2140, would throw out crucial rules that ban patents on many abstract ideas. Courts would be ordered to approve patents on things like ordering food on a mobile phone or doing basic financial functions online. If PERA passes, the floodgates will open for these vague software patents, which will be used to sue small companies and individuals. This bill even allows for a type of patent on human genes that the Supreme Court rightly disallowed in 2013.

A second bill, the PREVAIL Act, S. 2220, would sharply limit the public's right to challenge patents that never should have been granted in the first place.

Patent trolls—companies that have no product or service of their own, but simply make patent infringement demands on others—are a big problem. They've cost our economy billions of dollars. For a small company, a patent troll demand letter can be ruinous.

We took a big step towards fighting off patent trolls in 2014, when a landmark Supreme Court ruling, the Alice Corp. v. CLS Bank case, established that you can't get a patent by adding "on a computer" to an abstract idea. In 2012, Congress also expanded the ways that a patent can be challenged at the patent office.

These two bills, PERA and PREVAIL, would roll back both of those critical protections against patent trolls. We know that the bill sponsors, Sens. Thom Tillis (R-NC) and Chris Coons (D-DE) are pushing hard for these bills to move forward. We need your help to tell Congress that it's the wrong move.


Original Submission

posted by janrinok on Friday September 20, @01:31AM
from the I-spy-with-my-AI-eye dept.

https://www.businessinsider.com/larry-ellison-ai-surveillance-keep-citizens-on-their-best-behavior-2024-9 [paywalled]
https://arstechnica.com/information-technology/2024/09/omnipresent-ai-cameras-will-ensure-good-behavior-says-larry-ellison/

Larry Ellison (of Oracle) predicts a future of AI-enabled mass surveillance in which everyone lives in a panopticon, constantly watched and recorded by AI that reports all transgressions.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison's vision bears more than a passing resemblance to the cautionary world portrayed in George Orwell's prescient novel 1984. In Orwell's fiction, the totalitarian government of Oceania uses ubiquitous "telescreens" to monitor citizens constantly, creating a society where privacy no longer exists and independent thought becomes nearly impossible.

(Here's looking at you, kin?)


Original Submission

posted by janrinok on Thursday September 19, @08:45PM
from the Don't-ignore-the-observations dept.

https://www.earth.com/news/new-observations-disprove-big-bang-theory-universe-began-tired-light-theory/

We have been getting stories for a while about how JWST observations don't line up with the current Big Bang timelines. I'm certain there will be "Big Bang Band-Aid" theories at least until the current crop of astrophysicists who built their entire careers on the semi-biblical "In the Beginning..." theory of where it all started have, themselves, died off. Meanwhile, there is never a shortage of contrarian theories out there, and one of them is starting to get some support from the JWST observations of the "deep past" - which, maybe, isn't so deep after all.

Current theories for the redshift observed in more distant galaxies rely on the postulate: "photons travel at the speed of light and arrive unchanged at their destination, exactly when they left their source, from their perspective."

There are other theories. One, in particular, explains the observed redshifts with the idea that photons "get tired" over their journeys of billions of light-years, losing a little frequency / gaining a little wavelength along the way. JWST observations that are seeing mature galaxies back at, and before, the previously presumed start of "it all" may align better with the less-well-developed tired-photon theory than they do with the Big Bang. Not only does the "tired light" theory directly explain redshift, but the observations of wavelength shift with respect to galactic rotation also seem to be lining up better with "tired light" than with "Big Bang"...

Around the same time, Fritz Zwicky, a well-known astronomer, came up with a different idea. He proposed that the redshift we see in distant galaxies — basically a shift in the light spectrum towards red — might not be because those galaxies are speeding away. Instead, he thought that the light photons from these galaxies could be losing energy, or "tiring out," as they travel through space. This energy loss could make it look like the farther galaxies are moving away from us faster than they actually are.
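The article doesn't spell out the math, but the classic "tired light" idea sketched above can be written down in a few lines; the exponential loss law and the constant below are illustrative assumptions, not figures from the story:

    # Toy "tired light" redshift model -- an illustration of the idea described
    # above, not the article's own calculation. Assume a photon loses energy at
    # a constant fractional rate per unit distance, dE/dx = -(H0/c) * E, so
    # E(d) = E0 * exp(-H0 * d / c). The observed redshift is then
    #   z = E_emitted / E_received - 1 = exp(H0 * d / c) - 1,
    # which reduces to the familiar linear Hubble law z ~ H0 * d / c for nearby
    # galaxies while departing from it at large distances.
    import math

    C_KM_S = 299_792.458   # speed of light, km/s
    H0 = 70.0              # assumed Hubble-like constant, km/s/Mpc (illustrative)

    def tired_light_redshift(distance_mpc: float) -> float:
        """Redshift predicted by the exponential energy-loss toy model."""
        return math.exp(H0 * distance_mpc / C_KM_S) - 1.0

    for d in (10, 100, 1_000, 4_000):  # distances in megaparsecs
        print(f"{d:>5} Mpc -> z = {tired_light_redshift(d):.3f}")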

"[...] But the confidence of some astronomers in the Big Bang theory started to weaken when the powerful James Webb Space Telescope (JWST) saw first light."

What if the Universe isn't expanding at all, but instead is quite a bit bigger than we have been guessing it is?


Original Submission

posted by hubie on Thursday September 19, @04:01PM
from the burn-baby-burn dept.

A Tesla Semi's fiery crash on California's Interstate 80 turned into a high-stakes firefight, as emergency responders struggled to douse flames ignited by the vehicle's lithium-ion battery pack:

The National Transportation Safety Board (NTSB) reported that CAL FIRE had to use a jaw-dropping 50,000 gallons of water, alongside fire-retardant airdrops, to put out the blaze. The crash and subsequent fire shut down eastbound lanes of I-80 for a staggering 15 hours, as reported by Breitbart.

The Tesla electric big rig, driven by a Tesla employee, veered off the road on August 19, smashing into a traffic post and a tree before careening down a slope and igniting a post-crash inferno. Fortunately, no one was injured. However, the NTSB's report sheds light on the difficulty of extinguishing fires in electric vehicles. Tesla's infamous "thermal runaway" effect—the tendency of lithium-ion batteries to reignite hours after being "put out"—was a constant concern, but the semi's battery system stayed under control this time.

[...] The blaze and the hazardous materials response that followed created chaos along I-80, a key artery linking Northern California with Nevada. Traffic was rerouted, and the full shutdown stretched late into the evening, causing significant delays.

Previously:


Original Submission

posted by janrinok on Thursday September 19, @11:18AM
from the what's-my-DNA-worth-anyway? dept.

Genetic information and ancestry reports of U.S. citizens were among the information stolen in the cyber attack:

23andMe proposes to compensate millions of customers affected by a data breach on the company's platform, offering $30 million as part of the settlement, along with providing users access to a security monitoring system.

The genetic testing service will pay the amount to approximately 6.4 million American users, according to a proposed class action settlement filed in the U.S. District Court for the Northern District of California on Sept. 12. Personal information was exposed last year after a hacker breached the website's security and posted critical user data for sale on the dark web.

[...] According to the settlement proposal, users will be sent a link where they can delete all information related to 23andMe.

[...] In an emailed statement to The Epoch Times, 23andMe Communications Director Andy Kill said that out of the $30 million aggregate amount, "roughly $25 million of the settlement and related legal expenses are expected to be covered by cyber insurance coverage."

Also at USA Today, Fox Business and The Verge.

Previously:


Original Submission

posted by janrinok on Thursday September 19, @06:33AM

Pagers kill a dozen, injure thousands... Huh? Pagers?

If you know what a pager is, you're OLD. Or are a Hezbollah terrorist. According to the Washington Post (paywalled), Wall Street Journal, CNN, and just about every outlet, about a dozen people were killed and thousands reportedly injured.

See, kid, back in the stone age we didn't have supercomputers in our pockets acting as telephones; we only had telephones. They were a permanent part of a room. If you weren't home, nobody could call you. But if you were a physician, people needed to be able to call you. So they had "pagers," also called "beepers," that alerted you to call the office.

They're not supposed to blow up. This is James Bond stuff. Since the Israelis can listen in on every cell phone call in the area, Hezbollah needed a secure way to communicate, so they used pagers. But who loaded them with explosives? How? Pagers weren't big, so the explosive must be high-tech.

What was 007's tech guy's name?

exploding pagers: actual cyber war?

I remember vague stories heard in the 90s about "viruses" that would take over your computer, then spin your hard drive so fast that it broke.
Then there was the history of stuxnet and the Iran uranium centrifuges.
Just now I saw this story about pagers (of Hezbollah members) exploding: https://www.bbc.com/news/articles/cd7xnelvpepo
I suspect a virus that does something to batteries, rather than traditional explosives.

if my suspicion is true... are we looking at a future where high-density batteries are too dangerous for regular people?


Original Submission #1 | Original Submission #2

posted by hubie on Thursday September 19, @01:43AM
from the just-like-frozen-concentrated-orange-juice dept.

The availability of the large datasets used to train LLMs enabled their rapid development. Intense competition among organizations has made open-sourcing LLMs an attractive strategy that has leveled the competitive field:

Large Language Models (LLMs) have not only fascinated technologists and researchers but have also captivated the general public. Leading the charge, OpenAI ChatGPT has inspired the release of numerous open-source models. In this post, I explore the dynamics that are driving the commoditization of LLMs.

Low switching costs are a key factor supporting the commoditization of Large Language Models (LLMs). The simplicity of transitioning from one LLM to another is largely due to the use of a common language (English) for queries. This uniformity allows for minimal cost when switching, akin to navigating between different e-commerce websites. While LLM providers might use various APIs, these differences are not substantial enough to significantly raise switching costs.

In contrast, transitioning between different database systems involves considerable expense and complexity. It requires migrating data, updating configurations, managing traffic shifts, adapting to different query languages or dialects, and addressing performance issues. Adding long-term memory [4] to LLMs could increase their value to businesses at the cost of making it more expensive to switch providers. However, for uses that require only the basic functions of LLMs and do not need memory, the costs associated with switching remain minimal.
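As a rough sketch of that low switching cost (the class and function names here are hypothetical, not from the post): the application keeps sending the same English prompt, and only a thin adapter behind a common interface changes:

    # Hypothetical sketch: swapping LLM providers behind a shared interface.
    # The prompt is plain English, so nothing application-level has to change.
    from typing import Protocol

    class ChatModel(Protocol):
        def complete(self, prompt: str) -> str: ...

    class ProviderA:
        def complete(self, prompt: str) -> str:
            # In practice this would call provider A's HTTP API.
            return f"[provider-a answer to: {prompt!r}]"

    class ProviderB:
        def complete(self, prompt: str) -> str:
            # In practice this would call provider B's HTTP API.
            return f"[provider-b answer to: {prompt!r}]"

    def summarize(model: ChatModel, text: str) -> str:
        # The application-level prompt is identical regardless of the backend.
        return model.complete(f"Summarize in one sentence: {text}")

    print(summarize(ProviderA(), "LLMs are becoming commodities."))
    print(summarize(ProviderB(), "LLMs are becoming commodities."))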

[...] Open source models like Llama and Mistral allow multiple infrastructure providers to enter the market, enhancing competition and lowering the cost of AI services. These models also benefit from community-driven improvements, which in turn benefits the organizations that originally developed them.

Furthermore, open source LLMs serve as a foundation for future research, making experimentation more affordable and reducing the potential for differentiation among competing products. This mirrors the impact of Linux in the server industry, where its rise enabled a variety of providers to offer standardized server solutions at reduced costs, thereby commoditizing server technology.

Previously:


Original Submission

posted by janrinok on Wednesday September 18, @08:58PM

https://dm319.github.io/pages/2024_09_09_hp12_comma.html

The HP-12c is probably the most iconic financial calculator. Not being in finance myself, and in fact being terribly bad at that kind of thing, I never quite got the purpose of these special-purpose devices. My ignorance came to a halt due to an unfortunate combination of my fixed-rate mortgage period ending and Liz Truss happening, and I was driven to a sudden keen interest in the 'time value of money' (TVM) calculation.

...

Earlier this year I came across a fairly benign-looking reddit post describing some difficulty changing the decimal point to a decimal comma on a new Brazilian-bought HP-12c. Most of the replies were along the lines of 'you're holding it wrong', but something caught my attention. They weren't the only one: not only had someone else had the same experience, but I was also pointed to numerous Amazon reviews describing similar woes.

To find out for myself, I VPN'd myself over to the Brazilian Amazon and started reading reviews (with the assistance of my phone and Google Translate). What I saw was quite consistent - people couldn't change the point to the comma, and the calculator also failed on something called the internal rate of return (IRR) calculation.

I was curious: were these a different version of the HP-12c? Was it a fake? It is generally accepted that the HP-12c (and to some degree the related HP-12c Platinum) will return exactly the same results no matter what. Why would it be otherwise? It was at this point I needed help, and a very kind Brazilian redditor did the work needed to run the aforementioned TVM tests as a forensic tool. What they found was a set of results entirely different not just from the regular HP-12c, but from any other financial calculator we had tested.
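For readers who, like the author, are new to TVM: the relation such calculators solve can be sketched in a few lines. The identity below is the standard end-of-period-payment form; the loan figures are made-up illustrations, not the forensic test cases from the article:

    # Minimal sketch of the time-value-of-money (TVM) identity that financial
    # calculators like the HP-12c solve:
    #   PV*(1+i)^n + PMT*((1+i)^n - 1)/i + FV = 0   (end-of-period payments)
    # Cash you receive is positive, cash you pay out is negative.

    def future_value(pv: float, pmt: float, rate: float, n: int) -> float:
        """Solve the TVM identity for FV given the other four quantities."""
        if rate == 0.0:
            return -(pv + pmt * n)
        growth = (1.0 + rate) ** n
        return -(pv * growth + pmt * (growth - 1.0) / rate)

    # Illustrative loan: borrow 100,000 at 0.5% per month for 360 months,
    # paying 599.55 per month. An FV near zero means the payments roughly
    # amortize the loan; how precisely a calculator solves this relation
    # (especially at tiny interest rates) is what makes TVM problems useful
    # as a forensic fingerprint.
    print(future_value(pv=100_000.0, pmt=-599.55, rate=0.005, n=360))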


Original Submission

posted by mrpg on Wednesday September 18, @04:11PM
from the you-look-very-good-for-your-age dept.

One of the most recent Ig Nobel winners that caught my eye was: Saul Justin Newman, for detective work in discovering that many of the people famous for having the longest lives lived in places that had lousy birth-and-death recordkeeping. He found that almost all data on the reported oldest people in the world are staggeringly wrong, as high as 82% incorrect, and he says, "If equivalent rates of fake data were discovered in any other field... a major scandal would ensue. In demography, however, such revelations seem to barely merit a mention."

The Conversation also picked up on this and interviewed him about it:

I started getting interested in this topic when I debunked a couple of papers in Nature and Science about extreme ageing in the 2010s. In general, the claims about how long people are living mostly don't stack up. I've tracked down 80% of the people aged over 110 in the world (the other 20% are from countries you can't meaningfully analyse). Of those, almost none have a birth certificate. In the US there are over 500 of these people; seven have a birth certificate. Even worse, only about 10% have a death certificate.

The epitome of this is blue zones, which are regions where people supposedly reach age 100 at a remarkable rate. For almost 20 years, they have been marketed to the public. They're the subject of tons of scientific work, a popular Netflix documentary, tons of cookbooks about things like the Mediterranean diet, and so on.

Okinawa in Japan is one of these zones. There was a Japanese government review in 2010, which found that 82% of the people aged over 100 in Japan turned out to be dead. The secret to living to 110 was, don't register your death.

[...] Regions where people most often reach 100-110 years old are the ones where there's the most pressure to commit pension fraud, and they also have the worst records. For example, the best place to reach 105 in England is Tower Hamlets. It has more 105-year-olds than all of the rich places in England put together. It's closely followed by downtown Manchester, Liverpool and Hull. Yet these places have the lowest frequency of 90-year-olds and are rated by the UK as the worst places to be an old person.

[...] Longevity is very likely tied to wealth. Rich people do lots of exercise, have low stress and eat well. I just put out a preprint analysing the last 72 years of UN data on mortality. The places consistently reaching 100 at the highest rates according to the UN are Thailand, Malawi, Western Sahara (which doesn't have a government) and Puerto Rico, where birth certificates were cancelled completely as a legal document in 2010 because they were so full of pension fraud. This data is just rotten from the inside out.

Do you think the Ig Nobel will get your science taken more seriously?

I hope so. But even if not, at least the general public will laugh and think about it, even if the scientific community is still a bit prickly and defensive. If they don't acknowledge their errors in my lifetime, I guess I'll just get someone to pretend I'm still alive until that changes.


Original Submission

posted by janrinok on Wednesday September 18, @11:34AM
from the Back-to-face-to-face-meetings dept.

The next encrypted phone service has fallen, after EncroChat, Sky ECC and Anom. This time it's probably "Ghost".

A press conference will be held on Wednesday 18 September 2024 to announce a major action against an encrypted communication platform used for criminal activities, such as large-scale drugs trafficking, homicides and money laundering.

This operation is the latest sophisticated effort to date to disrupt the activities of high-risk criminal organisations operating from all four corners of the world.
Details
Speakers:

        Europol
        French National Gendarmerie (Gendarmerie Nationale)
        United States' Federal Bureau of Investigation (FBI)
        Australian Federal Police (AFP)
        Irish An Garda Síochána
        Royal Canadian Mounted Police (RCMP)

Countries and organisations involved:

Australia, Canada, France, Iceland, Ireland, Italy, the Netherlands, Sweden, United States, Europol, Eurojust

https://www.europol.europa.eu/media-press/newsroom/news/invitation-%E2%80%93-press-conference-livestreamed


Original Submission

posted by hubie on Wednesday September 18, @06:49AM
from the hybrid-anyone? dept.

Prices of emissions-free trucks need to fall by as much as half to make them an affordable alternative to diesel models, according to a study by consultancy firm McKinsey published on Wednesday; such a drop is a necessary step toward achieving European Union climate targets:

Less than 2% of the EU's heavy freight vehicles are now electric or hydrogen-powered. To meet the bloc's carbon emission reduction targets, that share should rise to 40% of new sales by 2030, according to the study, which was released before the IAA Transportation 2024 truck show in Hanover.

Currently production costs for electric trucks are 2.5-3 times higher than for diesel ones, the study said, and with logistics firms unwilling to accept higher costs for emissions-free freight, that goal is still distant.

To overcome that, prices for new electric trucks should be no more than 30% higher than for diesel models, McKinsey said, which would require a technological leap in batteries.

For successful implementation of the EU's CO2 strategy, a 25% cut in charging costs is also needed, the study showed, with 900,000 private charging points to be installed in Europe by 2035, which would require a $20 billion investment.


Original Submission

posted by hubie on Wednesday September 18, @02:03AM
from the duck-and-cover dept.

Arthur T Knackerbracket has processed the following story:

Cybercriminals closed some schools in America and Britain this week, preventing kindergarteners in Washington state from attending their first-ever school day and shutting down all internet-based systems for Biggin Hill-area students in England for the next three weeks.

On Sunday, Highline Public Schools, a Seattle-area school district that serves more than 17,000 students from pre-K through high school, alerted its parents and students that all schools, along with activities, athletics and meetings planned for Monday, had been canceled.

"We have detected unauthorized activity on our technology systems and have taken immediate action to isolate critical systems," according to a notice posted on the district's website. 

Upon finding the digital intruders on the network, the district called in third-party infosec experts, along with US federal and state law enforcement, to help restore the systems, we're told.

[...] No criminal group has claimed responsibility for the Highline breach, though the school closures follow a ransomware infection that snarled traffic at the Seattle-Tacoma International Airport in late August.

[...] Meanwhile, in the UK, Charles Darwin School sent home a letter with all of its students on September 6, telling parents and caregivers that the "IT issues" it had been experiencing were "worse than hoped." In fact, they were due to a ransomware attack.

Charles Darwin has 1,320 secondary and sixth-form students in Bromley, England.

The Biggin Hill school would be closed between September 9 and September 11 as IT admins wiped all of the staff devices and teachers reorganized all of their lessons, according to headteacher Aston Smith. 

Internet, email, and other school systems will be knocked out for an estimated three weeks, he added. 

[...] Black Suit, believed to be an offshoot of the now-defunct Conti ransomware gang, has claimed to be behind the Charles Darwin School attack. In a post on the criminals' dark-web blog, they say they stole 200 GB of data, including user, business, employee, student and financial information.

[...] "Unfortunately, cyber-attacks like this are happening more frequently despite having the latest security measures in place," he said. "Our understanding of our situation is that it is similar to what was experienced by the NHS, Transport for London, National Rail, other schools and public sector departments."

[...] "There is no honor amongst the ransomware gangs attacking schools in Washington state and the UK," Semperis principal technologist Sean Deuby told The Register, adding that schools are more vulnerable targets because of their smaller IT budgets and fewer defensive resources. "Attacking just before the first day of school for young kindergartners demonstrates their amorality."

While the Seattle-area district hasn't called the incident ransomware, "reading between the lines on these attacks leads me to believe that the schools were hit by ransomware," Deuby opined.

[...] "Most schools today use Office 365 but still depend upon their on-premises identity system, Active Directory, for its users," Deuby said, adding that this makes exploiting Microsoft AD vulnerabilities more enticing to criminals. 

While there's "no silver bullet" to solve schools' security challenges, he suggests working with their IT providers to identify critical services "such as AD that are single points of failure." 

"If critical services go down, school stops, and the school buses don't roll," Deuby noted. "Have a plan for what to do. This doesn't have to be perfect but think now about what to do if email goes away or a teacher portal is locked."


Original Submission