Arthur T Knackerbracket has processed the following story:
Zooming through the outer reaches of the solar system, a NASA spacecraft just clocked a distance 60 times farther from the sun than Earth.
The extraordinary benchmark announced this week means the New Horizons probe has doubled its 2015 distance, when it was snapping pictures of Pluto and its moons.
Perhaps more surprising than this intangible deep-space milestone is the one this intrepid spacecraft hasn't reached yet: the outer edge of the solar system's Kuiper Belt, a disk beyond Neptune of countless comets and thousands of tiny ice worlds. The far-flung region is littered with leftover rubble from the time when primitive planets were forming.
Scientists had expected the spacecraft to arrive at the proverbial edge about 1 billion miles ago.
Arthur T Knackerbracket has processed the following story:
The “no-fly” list has many problems. Pretty much any fed can “nominate” someone for the list. Pretty much everyone on the list has almost zero chance of getting off it other than by filing a lawsuit. And even though the government has been forced by court decisions to offer a venue for challenges, the federal government is still under no obligation to tell people why they’ve been placed on the list, much less promise to never put them back on it again.
When people have been removed (almost exclusively following lawsuits), they’re simply told they’ve been removed. The only way to find out if they’ve been reinstated is to buy a ticket to ride only to have it denied after they’ve already spent their money and arrived at the airport.
Then there’s the cross-pollination of federal law enforcement databases, which turns people on the “no fly” list into suspected terrorists, even if there’s nothing in the database that supports this implication or any cop’s corresponding inference.
As unjust as this all is, at least there are some limits. Well, maybe one. And maybe one that only applies to this specific incident. But, there’s at least one limit and it’s spelled out by this decision [PDF] handed down by the Eleventh Circuit Appeals Court. And that limitation is this: you can’t stop someone from driving just because they’re not allowed to board a plane. (h/t FourthAmendment.com)
Here’s how this all went down in Georgia, leading to this federal lawsuit:
First, they ignored direct instructions telling the officers not to detain the driver. Then they kept him detained for 91 minutes which, if nothing else, definitely violates the Supreme Court’s Rodriguez decision — the one that says officers cannot prolong traffic stops without the reasonable suspicion to do so.
The State Police officers didn’t have any of that. All they had was a “no fly” hit that came coupled with instructions stating that his mere presence on this list did not justify further detention. And none of that justified the warrantless search of his truck.
And, according to the allegations in the lawsuit, the only reason Meshal was on the FBI’s “no fly” list was because he had refused to become an FBI snitch.
Not exactly an improbable allegation! The FBI has been known to do this. A lot. Even if it feels it can't justify a "no fly" list nomination, agents feel more than comfortable threatening people with deportation or further disruption of their travel plans. That a state officer would feel comfortable detaining someone in contravention of direct instructions makes it clear that anyone the government merely wants to pretend is a terrorist can expect further violations of their rights.
At the district court level, all involved officers (Janufka, Oglesby, and Wright) were denied qualified immunity for this prolonged, suspicionless detention of Meshal, as well as for the completely unjustified search of his vehicle. They appealed. And the 11th Circuit says, too bad. Maybe don’t violate rights if you don’t like being sued.
The court first cites the Rodriguez decision in response to the officers’ arguments that the stop was not “unreasonably” prolonged. It also addresses their claim that detaining Meshal was necessary, even though the original stop was (allegedly) for him following another driver too closely.
As for the claim that it was the FBI’s fault the detention took 90 minutes due to officers waiting for a return call from the agency (after ignoring the agency’s direct instructions not to detain the driver), the court is even less sympathetic. An extended stop can’t be justified just because officers chose to involve an outside agency.
Driving the point home, the Appeals Court says all of this is stuff officers should know — so clearly established they can’t plausibly claim they weren’t “on notice” that detaining someone on a no fly list (much less searching his truck) for driving isn’t acceptable under the US Constitution.
The lawsuit will continue. And rights that were always present have been reaffirmed, something that’s going to help plenty of people who have been placed on the FBI’s “no fly” list (as this lawsuit alleges) for purely vindictive reasons. I would expect the state of Georgia to settle soon, rather than just wait around for more precedent curbing officer misconduct to be solidified.
The Harvard Business Review ran a piece back in July 2024 on the future of computer security,
https://hbr.org/2024/07/when-cyberattacks-are-inevitable-focus-on-cyber-resilience
Well written (imo) in straightforward language, the gist is:
What is cyber resiliency? And why is it different than cyber protection?
A prevention mindset means doing all you can to keep the bad guys out. A resilience mindset adds a layer: while you do all you can to prevent an attack, you also work with the expectation that they still might break through your defenses and invest heavily in preparing to respond and recover when the worst happens. Resilient organizations specifically devote significant resources to drawing up plans for what they will do if an attack happens, designing processes to execute them when the time comes, and practicing how to put these plans into action. Prevention is critical — but it's not enough.
[...]
Yet in my work as a researcher in conversation with chief information security officers and other cyber experts, I have noticed that many leaders focus most, if not all, of their security resources on prevention and leave recovery to business continuity plans that aren't usually designed with cyber incidents in mind. Instead, leaders need to embrace a mindset of cyber-resilience.
The HBR readership is (I believe) tilted toward C-class executives, so this may well filter down into IT departments. Anyone here seen any signs of a push toward "resilience" recently?
Paywalled? Try https://archive.is/CSFA3
Arthur T Knackerbracket has processed the following story:
Tropical storms and hurricanes like Helene could indirectly cause up to 11,000 deaths in the 15 years that follow the initial destruction. Hurricane Helene may have already hammered the Southeast, but its lethal aftermath could last a decade or more.
Tropical cyclones, which include hurricanes like Helene and other whirling storms, boost local death rates for up to 15 years after whipping along U.S. coastlines, scientists report October 2 in Nature. Each storm may indirectly cause between 7,000 and 11,000 deaths, estimate University of California, Berkeley environmental economist Rachel Young and Stanford University economist Solomon Hsiang.
That’s a Mount Everest of an estimate compared to the official number of deaths — 24 — that the National Oceanic and Atmospheric Administration attributes to the average storm in the team’s analysis. The results suggest that “hurricanes and tropical storms are a much greater public health concern than anyone previously thought,” Young says.
Using a statistical model, she and Hsiang analyzed the impact of all 501 tropical cyclones that hit the contiguous United States from 1930 to 2015. They measured changes in mortality for up to 20 years after each of these storms. Their analysis suggests that an individual hurricane may indirectly lead to thousands of lives lost. And taken together, the storms could have spurred as many as 5 percent of all deaths over that time period. Infants were particularly vulnerable, as were Black populations, the team found.
Young and Hsiang don't know all the ways hurricanes may contribute to mortality, but they have some ideas. It's possible the stress of surviving such a storm, or the pollution left in the wake of destruction, harms people's health (SN: 10/1/24). Or maybe local governments have less money to spend on health care after rebuilding ravaged infrastructure. It could be some combination of these and other factors, Young says. She's interested in digging into what's going on.
In the meantime, Young thinks her team's work highlights the need for new disaster response policies — ones that account for hurricanes' long-term impact. "We really pull together after these disasters to help people immediately in the aftermath," she says. But "we need to be thinking about these folks long after those initial responses are over."
California enacts car data privacy law to curb domestic violence:
California Governor Gavin Newsom has signed a bill that requires automakers selling internet-connected cars to do more to protect domestic abuse survivors, a move that may expand such safeguards nationwide.
As automakers add ever more sophisticated technology to their cars, instances of stalking and harassment using features such as location tracking and remote controls have begun to emerge.
The bill passed the California state legislature with overwhelming support, and Newsom signed it on Friday along with several other measures intended to protect domestic violence survivors. The law could lead to the new standards being implemented beyond California, as automakers tend to avoid producing different cars for different states.
Legislative analysts cited reporting from Reuters and the New York Times about carmakers which did not help women who alleged they were being targeted by their partners. One woman unsuccessfully sued Tesla, alleging the company failed to act after she repeatedly complained that her husband was stalking and harassing her with the automaker's technology despite a restraining order.
Among its provisions, the California bill requires automakers to set up a clear process for drivers to submit a copy of a restraining order or other documentation and request termination of another driver's remote access within two business days. It also mandates that carmakers enable drivers to easily turn off location access from inside the vehicle.
No carmaker officially opposed the law. The Alliance for Automotive Innovation, which counts several car manufacturers as members, said it supports the goal of protecting victims of domestic abuse. The Alliance raised some concerns about technical feasibility during the legislative process, and a spokesman said in an email on Monday it has discussed ways to potentially address those issues next year.
Thousands of Linux systems infected by stealthy malware since 2021:
Thousands of machines running Linux have been infected by a malware strain that's notable for its stealth, the number of misconfigurations it can exploit, and the breadth of malicious activities it can perform, researchers reported Thursday.
The malware has been circulating since at least 2021. It gets installed by exploiting more than 20,000 common misconfigurations, a capability that may make millions of machines connected to the Internet potential targets, researchers from Aqua Security said. It can also exploit CVE-2023-33426, a vulnerability with a severity rating of 10 out of 10 that was patched last year in Apache RocketMQ, a messaging and streaming platform that's found on many Linux machines.
The researchers are calling the malware Perfctl, the name of a malicious component that surreptitiously mines cryptocurrency. The unknown developers of the malware gave the process a name that combines the perf Linux monitoring tool and ctl, an abbreviation commonly used with command line tools. A signature characteristic of Perfctl is its use of process and file names that are identical or similar to those commonly found in Linux environments. The naming convention is one of the many ways the malware attempts to escape the notice of infected users.
Perfctl further cloaks itself using a host of other tricks. One is that it installs many of its components as rootkits, a special class of malware that hides its presence from the operating system and administrative tools. Other stealth mechanisms include:
- Stopping activities that are easy to detect when a new user logs in
- Using a Unix socket over TOR for external communications
- Deleting its installation binary after execution and running as a background service thereafter (a trick that leaves a detectable trace; see the sketch after this list)
- Hooking pcap_loop, a packet-capture function from the libpcap library, to prevent admin tools from recording the malicious traffic
- Suppressing mesg errors to avoid any visible warnings during execution.
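That binary-deletion trick leaves a telltale artifact on Linux: the process keeps running, but the kernel marks its /proc/<pid>/exe link as "(deleted)". Here is a minimal detection sketch that assumes nothing about Perfctl beyond that generic behavior; it is illustrative only, not taken from Aqua Security's report:

```python
# Minimal sketch: list processes whose on-disk executable has been deleted,
# a side effect of the "delete the binary, keep running" trick described above.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        exe = os.readlink(f"/proc/{pid}/exe")
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
    except OSError:
        continue  # kernel threads, vanished PIDs, or insufficient permissions
    if exe.endswith(" (deleted)"):
        print(f"PID {pid} ({comm}) is running from a deleted file: {exe}")
```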
The malware is designed to ensure persistence, meaning the ability to remain on the infected machine after reboots or attempts to delete core components. Two such techniques are (1) modifying the ~/.profile script, which sets up the environment during user login so the malware loads ahead of legitimate workloads expected to run on the server and (2) copying itself from memory to multiple disk locations. The hooking of pcap_loop can also provide persistence by allowing malicious activities to continue even after primary payloads are detected and removed.
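Because the ~/.profile technique simply adds attacker commands to a file every login shell reads, it can be audited with equally simple tooling. A hedged sketch follows; the pattern list is hypothetical, chosen for illustration, and is not Aqua Security's set of indicators:

```python
# Minimal sketch: scan ~/.profile for lines a login script normally wouldn't
# contain. The SUSPICIOUS patterns are hypothetical, not an official IOC set.
import re
from pathlib import Path

SUSPICIOUS = [
    r"/tmp/\S+",                # login scripts rarely execute binaries from /tmp
    r"\bnohup\b",               # detached background jobs launched at login
    r"\bsetsid\b",
    r"\bperfctl\b|\bperfcc\b",  # process names reported for this campaign
]

profile = Path.home() / ".profile"
if profile.exists():
    for lineno, line in enumerate(profile.read_text().splitlines(), 1):
        for pat in SUSPICIOUS:
            if re.search(pat, line):
                print(f"{profile}:{lineno}: matches {pat!r}: {line.strip()}")
```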
Besides using the machine's resources to mine cryptocurrency, Perfctl also turns the machine into a profit-making proxy that paying customers use to relay their Internet traffic. Aqua Security researchers have also observed the malware serving as a backdoor to install other families of malware.
Assaf Morag, Aqua Security's threat intelligence director, wrote in an email:
Perfctl malware stands out as a significant threat due to its design, which enables it to evade detection while maintaining persistence on infected systems. This combination poses a challenge for defenders and indeed the malware has been linked to a growing number of reports and discussions across various forums, highlighting the distress and frustration of users who find themselves infected.
Perfctl uses a rootkit and changes some of the system utilities to hide the activity of the cryptominer and proxy-jacking software. It blends seamlessly into its environment with seemingly legitimate names. Additionally, Perfctl's architecture enables it to perform a range of malicious activities, from data exfiltration to the deployment of additional payloads. Its versatility means that it can be leveraged for various malicious purposes, making it particularly dangerous for organizations and individuals alike.
While Perfctl and some of the malware it installs are detected by some antivirus software, Aqua Security researchers were unable to find any research reports on the malware. They were, however, able to find a wealth of threads on developer-related sites that discussed infections consistent with it.
This Reddit comment posted to the CentOS subreddit is typical. An admin noticed that two servers were infected with a cryptocurrency hijacker with the names perfcc and perfctl. The admin wanted help investigating the cause.
"I only became aware of the malware because my monitoring setup alerted me to 100% CPU utilization," the admin wrote in the April 2023 post. "However, the process would stop immediately when I logged in via SSH or console. As soon as I logged out, the malware would resume running within a few seconds or minutes." The admin continued:
I have attempted to remove the malware by following the steps outlined in other forums, but to no avail. The malware always manages to restart once I log out. I have also searched the entire system for the string "perfcc" and found the files listed below. However, removing them did not resolve the issue. as it keep respawn on each time rebooted.
Other discussions include: Reddit, Stack Overflow (Spanish), forobeta (Spanish), brainycp (Russian), natnetwork (Indonesian), Proxmox (German), Camel2243 (Chinese), svrforum (Korean), exabytes, virtualmin, serverfault, and many others.
After exploiting a vulnerability or misconfiguration, the exploit code downloads the main payload from a server, which, in most cases, has been hacked by the attacker and converted into a channel for distributing the malware anonymously. An attack that targeted the researchers' honeypot named the payload httpd. Once executed, the file copies itself from memory to a new location in the /tmp directory, runs the copy, and then terminates the original process and deletes the downloaded binary.
Once moved to the /tmp directory, the file executes under a different name, which mimics the name of a known Linux process. The file hosted on the honeypot was named sh. From there, the file establishes a local command-and-control process and attempts to gain root system rights by exploiting CVE-2021-4043, a privilege-escalation vulnerability that was patched in 2021 in Gpac, a widely used open source multimedia framework.
The malware goes on to copy itself from memory to a handful of other disk locations, once again using names that appear as routine system files. The malware then drops a rootkit, a host of popular Linux utilities that have been modified to serve as rootkits, and the miner. In some cases, the malware also installs software for "proxy-jacking," the term for surreptitiously routing traffic through the infected machine so the true origin of the data isn't revealed.
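One generic way to surface the name-mimicry described above, under the assumption (per the report) that the payload runs out of a directory like /tmp under a borrowed name such as sh, is to compare each process's advertised name against where its executable actually lives. The expected-path table below is a deliberately tiny, hypothetical example:

```python
# Minimal sketch: flag processes whose name mimics a known binary but whose
# executable path doesn't match where that binary actually lives.
# The EXPECTED table is hypothetical and deliberately small.
import os

EXPECTED = {
    "sh":   ("/bin/sh", "/usr/bin/sh", "/bin/dash", "/usr/bin/dash"),
    "bash": ("/bin/bash", "/usr/bin/bash"),
}

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        exe = os.readlink(f"/proc/{pid}/exe")
    except OSError:
        continue  # process exited, or we lack permission to inspect it
    if comm in EXPECTED and not exe.startswith(EXPECTED[comm]):
        print(f"PID {pid}: claims to be {comm!r} but runs from {exe}")
```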
The researchers continued:
As part of its command-and-control operation, the malware opens a Unix socket, creates two directories under the /tmp directory, and stores data there that influences its operation. This data includes host events, locations of the copies of itself, process names, communication logs, tokens, and additional log information. Additionally, the malware uses environment variables to store data that further affects its execution and behavior.
All the binaries are packed, stripped, and encrypted, indicating significant efforts to bypass defense mechanisms and hinder reverse engineering attempts. The malware also uses advanced evasion techniques, such as suspending its activity when it detects a new user in the btmp or utmp files and terminating any competing malware to maintain control over the infected system.
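That last trick also explains the Reddit admin's observation that the miner stopped the moment he logged in. A speculative sketch of how little code such login-watching evasion takes (this illustrates the described technique and is not Perfctl's actual code):

```python
# Speculative sketch of the utmp-watching evasion described above, NOT
# Perfctl's code. `who` reads the utmp login records, so polling it reveals
# whether any interactive session is active.
import subprocess
import time

def active_sessions() -> int:
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if line.strip())

while True:
    if active_sessions() == 0:
        pass  # payload work (mining, proxying) would run here
    time.sleep(5)  # lie dormant whenever someone is logged in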
[Editor's Comment: There is much more information in the full article but, yeah, we know, nobody reads TFA]
Arthur T Knackerbracket has processed the following story:
At this point, it seems many people are starting to give up on the impact Generative AI was supposed to have. After all, haven't we already fallen into the dreaded "trough of disillusionment"? The tenor of media coverage, social media posts, and more has clearly shifted from giddy excitement to weary skepticism, driven by a fear of overhyping the technology.
But out in the real world, it's clear that GenAI initiatives across businesses of all sizes and industries are actually just starting to shift into higher gear.
A new study by TECHnalysis Research emphasizes this point. "The Intelligent Path Forward: GenAI in the Enterprise" is based on a web survey of over 1,000 US-based IT decision makers across ten industries and both medium businesses (100-999 employees) and large enterprises (1,000+ employees).
The study results show that not only are investments in GenAI initiatives continuing, but companies are also finding new sources of funding to support these efforts. Over half of the survey respondents said they're using funds outside traditional IT to help pay for these efforts – 31% from special budgets dedicated to GenAI, and another 22% from other corporate budgets or initiatives, such as business units.
The results are shown in Figure 1 above. It's an amazing statistic that highlights how companies continue to be enthusiastic about their GenAI-related efforts (as well as how disconnected many short-term-thinking pundits are from what's really happening!).
In addition to their continued enthusiasm, companies are evolving their thinking about how and where GenAI deployments are taking place. Due to the need to use vast amounts of critical, often sensitive, data to train and fine-tune the models driving their GenAI applications, many organizations are interested in conducting more of this work within their own data centers or colocation facilities.
While cloud-based GenAI efforts continue to dominate and will likely remain the majority for some time, a remarkable 80% of respondents expressed some degree of interest in local deployments. This shift from established industry practices represents a significant opportunity for GenAI industry participants to develop new products, services, and solutions to meet these needs.
As Figure 2 highlights below, there are still several challenges that need to be addressed before these local efforts become a reality – not the least of which is a dramatic need for more education and training – but the potential pivot opens up some very interesting new paths for industry evolution.
In addition to new funding sources and deployment strategies, the study uncovered overlooked challenges that aren't being widely addressed. Many early GenAI efforts failed due to issues with data quality and the processes used for model training and tuning – topics directly related to provenance (determining the source and characteristics of the model and data) and governance (procedures to ensure output quality and mitigate potential risks).
Among medium-sized businesses, a staggering 84% reported having no provenance policies, and 64% lacked governance policies. Large enterprises fared better, but even there, about a quarter had no provenance policies, and a fifth had no governance procedures. (See "Two Words That Are Critical to GenAI's Future" for more on provenance and governance.)
While some of the study's findings were surprising, others confirmed what many in the industry have been thinking. The number one benefit that organizations hope to get from their GenAI initiatives, for example, is increased efficiency and productivity followed by improved quality and accuracy of output.
When asked to rank the importance of the potential outcomes of their GenAI efforts, however, respondents gave more pragmatic, measurable answers. Most importantly, companies said they wanted to create new products or services with the help of GenAI, and reducing costs was their second most important outcome. Not only does this reflect the fact that benefits and importance don't always align when it comes to these initiatives, but it also indicates that organizations recognize what GenAI can offer now, yet still have more aspirational goals for the future.
In terms of the GenAI-powered applications that organizations are using, as Figure 3 illustrates, the first and third most popular choices are text-related, with Text-Based Document Creation at number one and Text-Based Summarization at number three.
Collaboration-based applications came in second, reflecting the popularity of GenAI features such as meeting summarizations, automatic note-taking, language translation, and other capabilities integrated into messaging platforms. Despite their heavy usage, satisfaction with collaboration tools was found to be lower than with other product categories.
The study also dives into more detailed aspects of the companies' GenAI efforts, including which foundation models and platforms they're using, why they chose them, techniques for data preparation and model fine-tuning, the deployment of GenAI applications at the edge, on PCs, and on smartphones, as well as the types of partners companies are working with and the specific services they need.
Finally, returning to the original theme, GenAI-powered summarization and sentiment analysis of the survey respondents' comments highlight that we're still in the early stages of the GenAI revolution. While companies expressed legitimate concerns about GenAI and its impact, they also made it clear that they understand the long-term potential of the technology and are eager to integrate it into their organizations.
There has been a lot of research on the types of people who believe conspiracy theories, and their reasons for doing so. But there's a wrinkle: My colleagues and I have found that there are a number of people sharing conspiracies online who don't believe their own content.
They are opportunists. These people share conspiracy theories to promote conflict, cause chaos, recruit and radicalize potential followers, make money, harass, or even just to get attention.
[...]
Coaxing conspiracists–the extremists
In our chapter of a new book on extremism and conspiracies, my colleagues and I discuss evidence that certain extremist groups intentionally use conspiracy theories to entice adherents. They are looking for a so-called "gateway conspiracy" that will lure someone into talking to them and leave that person vulnerable to radicalization.
[...]
Combative conspiracists–the disinformants
Governments love conspiracy theories. The classic example of this is the 1903 document known as the "Protocols of the Elders of Zion," in which Russia constructed an enduring myth about Jewish plans for world domination. More recently, China used artificial intelligence to construct a fake conspiracy theory about the August 2023 Maui wildfire.
Often the behavior of the conspiracists gives them away. Russia eventually confessed to lying about AIDS in the 1980s.
[...]
Forgeries aren't created by accident. Those who created them knew they were lying.
[...]
Chaos conspiracists–the trolls
In general, research has found that individuals with what scholars call a high "need for chaos" are more likely to indiscriminately share conspiracies, regardless of belief. These are the everyday trolls who share false content for a variety of reasons, none of which are benevolent. Dark personalities and dark motives are prevalent.
[...]
Commercial conspiracists–the profiteers
Often when I encounter a conspiracy theory I ask: "What does the sharer have to gain? Are they telling me this because they have an evidence-backed concern, or are they trying to sell me something?"
When researchers tracked down the 12 people primarily responsible for the vast majority of anti-vaccine conspiracies online, most of them had a financial investment in perpetuating these misleading narratives.
[...]
Common conspiracists–the attention-getters
You don't have to be a profiteer to like some attention. Plenty of regular people share content whose veracity they doubt, or know to be false.
[...]
Many share without even reading past a headline. Still others, approximately 7 percent to 20 percent of social media users, share despite knowing the content is false. Why? Some claim to be sharing to inform people "just in case" it is true. But this sort of "sound the alarm" reason actually isn't that common.
Often, folks are just looking for attention or other personal benefit.
[...]
The dangers of spreading lies
Over time, the opportunists may end up convincing themselves. After all, they will eventually have to come to terms with why they are engaging in unethical and deceptive, if not destructive, behavior.
[...]
So be aware that the next time you share an unfounded conspiracy theory, online or offline, you could be helping an opportunist. They don't buy it, so neither should you. Be aware before you share. Don't be what these opportunists derogatorily refer to as "a useful idiot."
Best Quality References Much Good:
https://en.wikipedia.org/wiki/Poop_emoji
https://en.wikipedia.org/wiki/Shitposting
https://en.wikipedia.org/wiki/Troll_(slang)
Original Article:
Some online conspiracy-spreaders don't even believe the lies they're spewing
Two Harvard students recently revealed that it's possible to combine Meta smart glasses with face image search technology to "reveal anyone's personal details," including their name, address, and phone number, "just from looking at them."
In a Google document, AnhPhu Nguyen and Caine Ardayfio explained how they linked a pair of Meta Ray Bans 2 to an invasive face search engine called PimEyes to help identify strangers by cross-searching their information on various people-search databases. They then used a large language model (LLM) to rapidly combine all that data, making it possible to dox someone in a glance or surface information to scam someone in seconds—or other nefarious uses, such as "some dude could just find some girl's home address on the train and just follow them home," Nguyen told 404 Media.
This is all possible thanks to recent progress with LLMs, the students said.
[...] To prevent anyone from being doxxed, the co-creators are not releasing the code, Nguyen said on social media site X. They did, however, outline how their disturbing tech works, and how shocked the random strangers used as test subjects were to discover how easily identifiable they are just from the smart glasses accessing information posted publicly online.
[...] But while privacy is clearly important to the students and their demo video strove to remove identifying information, at least one test subject was "easily" identified anyway, 404 Media reported. The outlet was unable to reach that test subject for comment.
So far, neither Facebook nor Google has chosen to release similar technologies that they developed linking smart glasses to face search engines, The New York Times reported.
[...] In the European Union, where collecting facial recognition data generally requires someone's direct consent under the General Data Protection Regulation, smart glasses like I-XRAY may not be as big of a concern for people who prefer to be anonymous in public spaces. But in the US, I-XRAY could be providing bad actors with their next scam.
"If people do run with this idea, I think that's really bad," Ardayfio told 404 Media. "I would hope that awareness that we've spread on how to protect your data would outweigh any of the negative impacts this could have."
Related Stories on SoylentNews:
Illinois Just Made It Possible To Sue People For Doxxing Attacks - 20230815
Google Glass (Slight Return) - 20220727
Meeting Owl Videoconference Device Used by Govs is a Security Disaster - 20220605
PiGlass V2 Embraces The New Raspberry Pi Zero 2 - 20211203
Apple Glasses Leaks and Rumors: Here's Everything We Expect to See - 20200528
Google Announces $999 Glass Enterprise Edition 2 - 20190520
China Can Apparently Now Identify Citizens Based on the Way they Walk - 20181108
Google Glass Trial Helps Autistic Children Decode Facial Expressions - 20180803
Google Glass is Officially Back With a Clearer Vision - 20170719
It's Still a Bad Idea to Text While Driving Even With a Head-up Display - 20170414
Electronic Snooping 'Small Price to Pay' Against Terror: Expert - 20160325
Google Glass Assists Cardiologists in Coronary Artery Blockage Surgery - 20151122
Google Glass Ceases Consumer Sales - 20150116
71% Of 16-To-24-Year-Olds Want 'Wearable Tech.' - 20140923
Non-Identifying Facial Recognition - 20140829
Hacker in India Makes Google Glass Replica for $75, Opens the Design - 20140827
Google Glass Snoopers can Steal Your Passcode with a Glance - 20140624
Theater Chain Bans Google Glass Over Piracy Fears - 20140613
Google Glass is a Failure - 20140528
Google Glass - $80 Build Price "Absolutely Wrong" - 20140503
Lobbying Against Having Google Glass Banned While Driving - 20140301
https://blog.cloudflare.com/patent-troll-sable-pays-up/
Back in February, we celebrated our victory at trial in the U.S. District Court for the Western District of Texas against patent trolls Sable IP and Sable Networks. This was the culmination of nearly three years of litigation against Sable, but it wasn't the end of the story.
Today we're pleased to announce that the litigation against Sable has finally concluded on terms that we believe send a strong message to patent trolls everywhere — if you bring meritless patent claims against Cloudflare, we will fight back and we will win.
[...] While Sable's technical expert tried his hardest to convince the jury that various software and hardware components of Cloudflare's servers constitute "line cards," his explanations defied credibility. The simple fact is that Cloudflare's servers do not have line cards.
[...] Ultimately, the jury understood, returning a verdict that Cloudflare does not infringe claim 25 of the '919 patent.
In the end, Sable agreed to pay Cloudflare $225,000, grant Cloudflare a royalty-free license to its entire patent portfolio, and to dedicate its patents to the public, ensuring that Sable can never again assert them against another company.
Let's repeat that first part, just to make sure everyone understands:
Sable, the patent troll that sued Cloudflare back in March 2021 asserting around 100 claims across four patents, in the end wound up paying Cloudflare. While this $225,000 can't fully compensate us for the time, energy and frustration of having to deal with this litigation for nearly three years, it does help to even the score a bit. And we hope that it sends an important message to patent trolls everywhere to beware before taking on Cloudflare.
Officials in Ireland have fined Meta $101 million for storing hundreds of millions of user passwords in plaintext and making them broadly available to company employees.
Meta disclosed the lapse in early 2019. The company said that apps for connecting to various Meta-owned social networks had logged user passwords in plaintext and stored them in a database that had been searched by roughly 2,000 company engineers, who collectively queried the stash more than 9 million times.
[...]
When Meta disclosed the lapse in 2019, it was clear the company had failed to adequately protect hundreds of millions of passwords. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," Graham Doyle, deputy commissioner at Ireland's Data Protection Commission, said. "It must be borne in mind, that the passwords, the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."
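The "widely accepted" practice Doyle refers to is storing only a salted, deliberately slow hash of each password, so that even employees with database access cannot recover the original. A minimal illustration using Python's standard library follows; this is generic best practice, not a description of Meta's systems:

```python
# Generic illustration of salted password hashing; not Meta's implementation.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)              # unique random salt per user
    digest = hashlib.scrypt(password.encode(),  # memory-hard, slow by design
                            salt=salt, n=2**14, r=8, p=1)
    return salt, digest                         # store these; discard the password

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```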
[...]
To date, the EU has fined Meta more than $2.23 billion (2 billion euros) for violations of the General Data Protection Regulation (GDPR), which went into effect in 2018. That amount includes last year's record $1.34 billion (1.2 billion euro) fine, which Meta is appealing.
A tale involving deepfakes, politics, and a magician:
Louisiana Democratic political consultant Steven Kramer was indicted in May over the robocalls. The 39-second message, which told people to 'save their votes' for the November presidential election, was created using a text-to-speech tool called ElevenLabs. The calls were spoofed so they appeared to originate from the former chairwoman of the New Hampshire Democratic Party, writes The New York Times.
Kramer had worked for Biden's primary rival, Rep. Dean Phillips, who condemned the calls. Kramer claimed that he paid $500 to have the calls sent to voters as a way of raising awareness about the dangers artificial intelligence can pose to election campaigns, which sounds like a questionable justification.
"For me to do that and get $5 million worth of exposure, not for me," Kramer told CBS New York. "I kept myself anonymous so the regulations could just play themselves out or begin to play themselves out. I don't need to be famous. That's not my intention. My intention was to make a difference."
Making a strange story even weirder, Kramer hired an actual New Orleans magician named Paul Carpenter to make the robocalls. Carpenter said creating the recording only took about 20 minutes and cost $1, and that Kramer paid him $150 via Venmo. He believed what he was doing had been authorized by President Biden's campaign. Carpenter's account has since been shut down by ElevenLabs.
The FCC writes that Kramer violated the Truth in Caller ID Act, which makes spoofed calls illegal when made with the intent to defraud, cause harm, or wrongfully obtain anything of value. The FCC this year voted to have the law apply to deepfakes.
Arthur T Knackerbracket has processed the following story:
California governor Gavin Newsom vetoed the state's controversial AI safety bill, known as Senate Bill 1047, yesterday (29 September).
Newsom said that he did not think this legislation would be the best approach to protect the public from threats posed by AI.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.
“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
Mired in controversy since its introduction earlier this year, the bill was opposed by many, including politician Nancy Pelosi, who called it "well-intentioned but ill-informed," and Silicon Valley heavyweights: OpenAI argued for a federal bill rather than a state one; accelerator Y Combinator signed a letter along with around 140 start-ups stating that the bill could "threaten the vibrancy of California's technology economy"; and AI start-up Anthropic made suggestions that led to amendments in the bill.
Introduced earlier this year by state senator Scott Wiener, the bill’s aim was to ensure the safe development of AI systems by putting more responsibilities on developers.
[...] Instead of SB 1047, governor Newsom announced that he has enlisted a team of experts who will "help California develop workable guardrails for deploying GenAI".
The team includes the 'godmother of AI' Dr Fei-Fei Li; Tino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science and Society at UC Berkeley.
Arthur T Knackerbracket has processed the following story:
Efficiency and scalability are key benefits of enterprise cloud computing, but they come at a cost. Security threats specific to cloud environments are the leading cause of concern among top executives and they're also the ones organizations are least prepared to address.
That's according to PwC's latest cybersecurity report, released today, which showed that cloud threats are the security concern cited by the largest share of business leaders (42 percent).
Rounding out the top five threats, according to PwC's 4,020 respondents, are hack and leak operations (38 percent), third-party breaches (35 percent), attacks on connected products (33 percent), and ransomware (27 percent).
If you've just read that and questioned why ransomware is so low on the list, you might be a CISO. The level of concern about ransomware jumped to 42 percent when analyzing responses from CISOs alone.
[...] All the threats that feature in execs' top five deemed "most concerning" are perhaps unsurprisingly also the same as the threats organizations feel least prepared to address, although not quite in the same order.
[...] Of course, it wouldn't be a cybersecurity report in 2024 unless AI got its moment in the spotlight.
Despite generative AI being used for good in many cases, and a majority of organizations (78 percent) increasing their investment in the tech over the past year, it is also the primary contributor to the widening attack surface organizations face.
More than two-thirds of respondents (67 percent) said genAI increased their susceptibility to attacks "slightly" or "significantly" – the most significant factor of any in the past year, although cloud was only narrowly behind at 66 percent.
As a force for good, however, generative AI is being deployed widely across global organizations, supporting key cybersecurity functions such as threat detection and response, and threat intelligence.
"Cybersecurity is predominantly a data science problem," said Mike Elmore, global CISO at GSK. "It's becoming imperative for cyber defenders to leverage the power of generative AI and machine learning to get closer to the data to drive timely and actionable insights that matter the most."
Shockingly, PwC also found that business leaders who have regulatory and legal requirements to improve security do just that.
Indeed, 96 percent said regulations prompted their organization to improve its security, while 78 percent said the same regs have challenged, improved, or increased their security posture.
[...] "Organizations that embrace regulatory requirements tend to benefit from stronger security frameworks and a more robust posture against emerging threats," read PwC's report. "Compliance shouldn't be viewed as a box-ticking exercise but as an opportunity to build long-term resilience and trust with stakeholders."
These new regulations have also ushered in new investment in cybersecurity. Roughly a third of organizations (32 percent) said cyber investment increased to a "large extent" in the past 12 months. Another 37 percent said investment increased to a "moderate extent," while 14 percent said the increase was "significant."
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot.
[...]
ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
[...]
To craft a bot that could beat reCAPTCHA v2, the researchers used a fine-tuned version of the open source YOLO ("You Only Look Once") object-recognition model, which long-time readers may remember has also been used in video game cheat bots.
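For a rough sense of why this is now easy: stock object-detection models already recognize most of the reCAPTCHA v2 object classes out of the box. Here is a minimal sketch using the open source ultralytics YOLO package and a generic pretrained checkpoint; the researchers fine-tuned their own model and drove a full browser-automation pipeline, and the tile file names below are hypothetical:

```python
# Minimal sketch of the general approach: a COCO-trained YOLO model already
# knows classes like "bicycle" and "traffic light". Not the researchers' code.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained checkpoint, auto-downloaded

def tile_matches(tile_path: str, target: str, min_conf: float = 0.3) -> bool:
    """Return True if the challenge object is detected in one grid tile."""
    result = model(tile_path, verbose=False)[0]
    return any(model.names[int(box.cls)] == target and float(box.conf) >= min_conf
               for box in result.boxes)

# Decide which tiles of a hypothetical 3x3 challenge to click:
clicks = [p for p in (f"tile_{i}.png" for i in range(9))
          if tile_matches(p, "bicycle")]
print(clicks)
```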