Privacy is a prerequisite for free thought, dissent, experimentation, and innovation, which are in turn prerequisites for democracy. At NBTV, Naomi Brockwell has posted four reasons why limits on privacy are absolutely not a price worth paying for mainstream adoption.
Today I participated in a Privacy Salon in Denver where we debated a proposition that cuts to the core of the modern privacy movement:
"Limits on privacy are a price worth paying for mainstream adoption of cryptographic privacy."
I was on the "no" side alongside Matt Green, with Evin McMullen and Wei Dai arguing "yes."
It was a lively, thoughtful exchange that forced us to confront a deeper question: is weakening privacy simply the cost of scale?
Below is my opening statement from the debate.
The false argument about having nothing to hide does not hold water. As Ed Snowden observed years ago, "arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."
Previously:
(2026) Ring Cancels Flock Deal After Dystopian Super Bowl Ad Prompts Mass Outrage
(2026) Discord Will Require a Face Scan or ID for Full Access Next Month
(2026) "ICE Out of Our Faces Act" Would Ban ICE and CBP Use of Facial Recognition
(2025) Big Tech Wants Direct Access to Our Brains
(2025) Discord Customer Service Data Breached; Government-ID Images, and User Details Stolen
(2025) A Surveillance Vendor Was Caught Exploiting a New SS7 Attack to Track People's Phone Locations
... and many more
Related Stories
A surveillance vendor was caught exploiting a new SS7 attack to track people's phone locations:
Security researchers say they have caught a surveillance company in the Middle East exploiting a new attack capable of tricking phone operators into disclosing a cell subscriber's location.
The attack relies on bypassing security protections that carriers have put in place to prevent intruders from accessing SS7, or Signaling System 7, a private set of protocols used by global phone carriers to route subscribers' calls and text messages around the world.
SS7 also allows the carriers to request information about which cell tower a subscriber's phone is connected to, typically used for accurately billing customers when they call or text someone from overseas, for example.
Researchers at Enea, a cybersecurity company that provides protections for phone carriers, said this week that they have observed the unnamed surveillance vendor exploiting the new bypass attack as far back as late 2024 to obtain the locations of people's phones without their knowledge.
Enea VP of Technology Cathal Mc Daid, who co-authored the blog post, told TechCrunch that the company observed the surveillance vendor target "just a few subscribers" and that the attack did not work against all phone carriers.
Mc Daid said that the bypass attack allows the surveillance vendor to locate an individual to the nearest cell tower, which in urban or densely populated areas could be narrowed to a few hundred meters.
[...] Surveillance vendors, which can include spyware makers and providers of bulk internet traffic, are private companies that typically work exclusively for government customers to conduct intelligence-gathering operations against individuals. Governments often claim to use spyware and other exploitative technologies against serious criminals, but the tools have also been used to target members of civil society, including journalists and activists.
In the past, surveillance vendors have gained access to SS7 by way of a local phone operator, a misused leased "global title," or through a government connection.
But due to the nature of these attacks happening at the cell network level, there is little that phone subscribers can do to defend against exploitation. Rather, defending against these attacks rests largely on the telecom companies.
Discord has revealed that one of its customer service providers suffered a data breach. The attackers gained access to government-ID images and user details.
Discord doesn't actually say when the breach took place; it only says it "recently discovered an incident". The theft of government-ID images matters for the timeline: the U.K.'s Online Safety Act came into effect on July 25, 2025, so the breach happened sometime between then and October 3rd, when news about the incident was revealed. It's also worth noting that the victim of the hack was a third-party customer service provider that has not been named.
As for the attack, the incident involved an unauthorized party compromising one of the messaging service's customer service providers, which in turn allowed the hackers access to limited customer data pertaining to those who had contacted the Customer Support and/or Trust & Safety teams. Discord says it revoked the breached service provider's access to its ticketing system. It is investigating the matter with the help of a computer forensics firm and is working with law enforcement. Users who were impacted by the incident are being notified via an email sent from [email protected]
Here's what Discord says the hackers managed to access: Name, Discord username, email and other contact details that were provided to customer support, billing information such as payment type, the last four digits of credit cards, and purchase history of the accounts, IP addresses, messages with customer service agents, and limited corporate data (training materials, internal presentations).
There was something else.
"The unauthorized party also gained access to a small number of government-ID images (e.g., driver's license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive."
The story continues:
https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html
https://archive.ph/mgZRE
As neural implant technology and A.I. advance at breakneck speeds, do we need a new set of rights to protect our most intimate data — our minds?
On a recent afternoon in the minimalist headquarters of the M.I.T. Media Lab, the research scientist Nataliya Kosmyna handed me a pair of thick gray eyeglasses to try on. They looked almost ordinary aside from the three silver strips on their interior, each one outfitted with an array of electrical sensors. She placed a small robotic soccer ball on the table before us and suggested that I do some "basic mental calculus." I started running through multiples of 17 in my head. After a few seconds, the soccer ball lit up and spun around. I seemed to have made it move with the sheer force of my mind, though I had not willed it in any sense. My brain activity was connected to a foreign object.
"Focus, focus," Kosmyna said. The ball swirled around again. "Nice," she said. "You will get better."
Kosmyna, who is also a visiting research scientist at Google, designed the glasses herself. They are, in fact, a simple brain-computer interface, or B.C.I., a conduit between mind and machine. As my mind went from 17 to 34 to 51, electroencephalography (EEG) and electrooculography (EOG) sensors picked up heightened electrical activity in my eyes and brain. The ball had been programmed to light up and rotate whenever my level of neural "effort" reached a certain threshold. When my attention waned, the soccer ball stood still.
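The closed loop Kosmyna describes (sensors stream a signal, a program scores "effort", the ball reacts when the score crosses a threshold) can be sketched roughly as follows. This is a hypothetical illustration, not the MIT code: the window size, the threshold, and the variance-based `estimate_effort` proxy are all assumptions; real systems score effort from band-power features of the EEG.

```python
from collections import deque

WINDOW = 64             # EEG samples per sliding window (assumed value)
EFFORT_THRESHOLD = 0.5  # "effort" level that triggers the ball (assumed value)

def estimate_effort(samples):
    """Crude effort proxy: signal variance over the window. Real systems
    use band-power features (e.g. beta-band power from an FFT)."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def control_loop(eeg_stream, actuate):
    """Fire the actuator whenever windowed effort crosses the threshold."""
    window = deque(maxlen=WINDOW)
    for sample in eeg_stream:
        window.append(sample)
        if len(window) == WINDOW and estimate_effort(window) >= EFFORT_THRESHOLD:
            actuate()       # e.g. light up and spin the robotic ball
            window.clear()  # re-arm: wait for a fresh window of samples
```

The re-arming step (clearing the window after each trigger) is one simple way to get the behavior in the anecdote, where the ball spins in discrete bursts rather than continuously while attention is high.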
For now, the glasses are solely for research purposes. At M.I.T., Kosmyna has used them to help patients with A.L.S. (Amyotrophic Lateral Sclerosis) communicate with caregivers — but she said she receives multiple purchase requests a week. So far she has declined them. She's too aware that they could easily be misused.
Neural data can offer unparalleled insight into the workings of the human mind. B.C.I.s are already frighteningly powerful: Using artificial intelligence, scientists have used B.C.I.s to decode "imagined speech," constructing words and sentences from neural data; to recreate mental images (a process known as brain-to-image decoding); and to trace emotions and energy levels. B.C.I.s have allowed people with locked-in syndrome, who cannot move or speak, to communicate with their families and caregivers and even play video games. Scientists have experimented with using neural data from fMRI imaging and EEG signals to detect sexual orientation, political ideology and deception, to name just a few examples.
Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to "write" the brain as well, potentially altering human understanding and behavior. Optogenetic implants are already able to partially restore vision to patients with genetic eye disorders; lab experiments have shown that the same technique can be used to implant false memories in mammal brains, as well as to silence existing recollections and to recover lost ones.
Neuralink, Elon Musk's neural technology company, has so far implanted 12 people with its rechargeable devices. "You are your brain, and your experiences are these neurons firing," Musk said at a Neuralink presentation in June. "We don't know what consciousness is, but with Neuralink and the progress that the company is making, we'll begin to understand a lot more."
Musk's company aims to eventually connect the neural networks inside our brains to artificially intelligent ones on the outside, creating a two-way path between mind and machine. Neuroethicists have criticized the company for ethical violations in animal experiments, for a lack of transparency and for moving too quickly to introduce the technology to human subjects, allegations the company dismisses. "In some sense, we're really extending the fundamental substrate of the brain," a Neuralink engineer said in the presentation. "For the first time we are able to do this in a mass market product."
The neurotechnology industry already generates billions of dollars of revenue annually. It is expected to double or triple in size over the next decade. Today, B.C.I.s range from neural implants to wearable devices like headbands, caps and glasses that are freely available for purchase online, where they are marketed as tools for meditation, focus and stress relief. Sam Altman founded his own B.C.I. start-up, Merge Labs, this year, as part of his effort to bring about the day when humans will "merge" with machines. Jeff Bezos and Bill Gates are investors in Synchron, a Neuralink competitor.
Beginning in March, all accounts will have a 'teen-appropriate experience by default'
Discord announced on Monday that it's rolling out age verification on its platform globally starting next month, when it will automatically set all users' accounts to a "teen-appropriate" experience unless they demonstrate that they're adults.
"For most adults, age verification won't be required, as Discord's age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process," Savannah Badalich, Discord's global head of product policy, tells The Verge.
Users who aren't verified as adults will not be able to access age-restricted servers and channels, won't be able to speak in Discord's livestream-like "stage" channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
Direct messages and servers that are not age-restricted will continue to function normally, but users won't be able to send messages or view content in an age-restricted server until they complete the age check process, even if it's a server they were part of before age verification rolled out. Badalich says those servers will be "obfuscated" with a black screen until the user verifies they're an adult. Users also won't be able to join any new age-restricted servers without verifying their age.
[...] If Discord's age inference model can't determine a user's age, a government ID might still be required for age verification in its global rollout. According to Discord, to remove the new "teen-by-default" changes and limitations, "users can choose to use facial age estimation or submit a form of identification to [Discord's] vendor partners, with more options coming in the future."
The first option uses AI to analyze a user's video selfie, which Discord says never leaves the user's device. If the age group estimate (teen or adult) from the selfie is incorrect, users can appeal it or verify with a photo of an identity document instead. That document will be verified by a third party vendor, but Discord says the images of those documents "are deleted quickly — in most cases, immediately after age confirmation."
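The tiered check described above (passive inference first, then an on-device selfie estimate, then an ID document as a last resort) can be sketched as the following decision flow. This is a hypothetical illustration, not Discord's implementation: every function name is an invented placeholder, and the tenure heuristic merely stands in for the account-signal model the article mentions.

```python
def infer_age_group(account):
    """Placeholder for the passive inference model, which reportedly uses
    tenure, device, and activity signals; here tenure alone stands in."""
    return "adult" if account.get("tenure_years", 0) >= 5 else "unknown"

def estimate_age_from_selfie(selfie):
    """Placeholder for the on-device facial age estimation step."""
    return selfie.get("estimated_group", "unknown")

def verify_id_document(doc):
    """Placeholder for the third-party vendor's ID document check."""
    return doc.get("valid", False)

def resolve_age_group(account, selfie=None, id_document=None):
    """Return 'adult' or 'teen', trying the cheapest signal first."""
    # 1. Passive inference from account signals; most adults stop here.
    if infer_age_group(account) == "adult":
        return "adult"
    # 2. Optional on-device facial age estimation from a video selfie.
    if selfie is not None and estimate_age_from_selfie(selfie) == "adult":
        return "adult"
    # 3. Last resort: ID checked by a vendor, image deleted after confirmation.
    if id_document is not None and verify_id_document(id_document):
        return "adult"
    return "teen"  # teen-appropriate experience by default
```

The ordering matters for privacy: each fallback step demands more sensitive data than the last, so most users never reach the ID stage.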
Badalich also says after the October data breach, Discord "immediately stopped doing any sort of age verification flows with that vendor" and is now using a different third-party vendor. She adds, "We're not doing biometric scanning [or] facial recognition. We're doing facial estimation. The ID is immediately deleted. We do not keep any information around like your name, the city that you live in, if you used a birth certificate or something else, any of that information."
[...] Even so, there's still a risk that some users will leave Discord as a result of the age verification rollout. "We do expect that there will be some sort of hit there, and we are incorporating that into what our planning looks like," Badalich says. "We'll find other ways to bring users back."
Senator: ICE and CBP "have built an arsenal of surveillance technologies":
A few Senate Democrats introduced a bill called the ''ICE Out of Our Faces Act," which would ban Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) from using facial recognition technology.
The bill [PDF] would make it "unlawful for any covered immigration officer to acquire, possess, access, or use in the United States—(1) any biometric surveillance system; or (2) information derived from a biometric surveillance system operated by another entity." All data collected from such systems in the past would have to be deleted. The proposed ban extends beyond facial recognition to cover other biometric surveillance technologies, such as voice recognition.
The proposed ban would prohibit the federal government from using data from biometric surveillance systems in court cases or investigations. Individuals would have a right to sue the federal government for financial damages after violations, and state attorneys general would be able to bring suits on behalf of residents.
The bill was submitted yesterday by Sen. Edward J. Markey (D-Mass.), who held a press conference [video not reviewed -Ed] about the proposal with Sen. Jeff Merkley (D-Ore.), and US Rep. Pramila Jayapal (D-Wash.). The Senate bill is also cosponsored by Sens. Ron Wyden (D-Ore.), Angela Alsobrooks (D-Md.), and Bernie Sanders (I-Vt.).
"This is a dangerous moment for America," Markey said at the press conference, saying that ICE and CBP "have built an arsenal of surveillance technologies that are designed to track and to monitor and to target individual people, both citizens and non-citizens alike. Facial recognition technology sits at the center of a digital dragnet that has been created in our nation."
"This is definitely not about dogs," senator says, urging a pause on Ring face scans:
Amazon and Flock Safety have ended a partnership that would've given law enforcement access to a vast web of Ring cameras.
The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.
The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new "Search Party" feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.
At that point, the ad takes a "creepy" turn, Sen. Ed Markey (D-Mass.) told Amazon CEO Andy Jassy in a letter urging changes to enhance privacy at the company.
Illustrating how a single Ring post could use AI to instantly activate searchlights across an entire neighborhood, the ad shocked critics like Markey, who warned that the same technology could easily be used to "surveil and identify humans."
Markey suggested that in blasting out this one frame of the ad to Super Bowl viewers, Amazon "inadvertently revealed the serious privacy and civil liberties risks attendant to these types of Artificial Intelligence-enabled image recognition technologies."
In his letter, Markey also shared new insights from his prior correspondence with Amazon that he said exposed a wide range of privacy concerns. Ring cameras can "collect biometric information on anyone in their video range," he said, "without the individual's consent and often without their knowledge." Among privacy risks, Markey warned that Ring owners can retain swaths of biometric data, including face scans, indefinitely. And anyone wanting face scans removed from Ring cameras has no easy solution and is forced to go door to door to request deletions, Markey said.
On social media, other critics decried Amazon's ad as "awfully dystopian," declaring it was "disgusting to use dogs to normalize taking away our freedom to walk around in public spaces." Some feared the technology would be more likely to benefit police and Immigration and Customs Enforcement (ICE) officers than families looking for lost dogs.
Amazon's partnership with Flock, announced last October as coming soon, only inflamed those fears. So did the company's recent rollout of a feature using facial recognition technology called "Familiar Faces"—which Markey considers so invasive, he has demanded that the feature be paused.
"What this ad doesn't show: Ring also rolled out facial recognition for humans," Markey posted on X. "I wrote to them months ago about this. Their answer? They won't ask for your consent. This definitely isn't about dogs—it's about mass surveillance."
[...] But while Ring may have hurt its brand, WebProNews, which reports on business strategy in the tech industry, suggested that "the fallout may prove more consequential for Flock Safety than for Ring." For Flock, the Ring partnership represented a meaningful expansion of its business and "data collection capabilities," WebProNews reported. And because this all happened around one of the most-watched TV events of the year, other tech companies may be more hesitant to partner with Flock after Amazon dropped the integration and privacy advocates witnessed the seeming power of their collective outrage.
[...] Ring's statements so far do not "acknowledge the real issue," Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed "Americans want more control of their privacy right now" and "are savvy enough to see through sappy dog pics."
"Stop trying to build a surveillance dystopia consumers didn't ask for" and "focus on shipping good, private products," Scott-Railton said.
He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.
(Score: 5, Touché) by Subsentient on Monday February 23, @10:22AM (3 children)
But since when have we been the ones able to choose what decisions they make?
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 5, Insightful) by RS3 on Monday February 23, @02:44PM (2 children)
Sometime around 1776 I believe.
"Patriot Act" did much to propel the huge momentum toward the mass surveillance we now have.
We (USA) and much of the world have "representative" governments. Some of the problem lies with us, the population. We don't make enough noise about these things.
There's much "groupthink" / mass psychology going on. Too many of the public are apathetic, at best, "head in the sand" at worst regarding privacy. My friends / acquaintances (mostly) quietly think I'm a paranoid nut because I refuse to use a cell phone to do banking / buying / pretty much anything. They don't read tech news, are fully ignorant of the scale of the mass surveillance / data hoovering going on, and simply pass it off that people like me are a bit loony.
We in the tech world know what's going on, and I think we need to do a better job of enlightening the public. That Super Bowl ad seems to have shed some light, but my worry is it'll pass like the latest fashion / fad.
A doctor acquaintance (and his wife) got hit hard with "identity theft" last summer. They lost $, and had to close all credit and debit cards, all bank accounts, email addresses, etc. Cost them big $ and huge time. I don't know the details of how it happened, but they're doing very little online, and no cellphone financial stuff at all now.
There is no technology that is 100% safe. Anyone, including government agencies, who has our personal information is vulnerable at some point to our information being copied out ("stolen") by criminals.
Encryption you say? Remember 20 years ago when SSL was all the rage? "Best practices" and all that? Well now you're dumb if you think you're safe over SSL, right? Point is, tech considered "safe" now won't be in the near future, especially with everything being put in the "cloud".
(Score: 4, Insightful) by aafcac on Monday February 23, @06:44PM
Being able to choose depends on there being options that don't have that issue, or other bigger issues. Choice effectively started to go away in the '80s with all the deregulation, and got worse as antitrust focus narrowed to the impact mergers and corporate behavior have on prices, rather than also on the impact they have on choices. That makes it incredibly hard to boycott effectively: oftentimes you don't even know who you're doing business with without doing an internet search, and it may very well come down to all the options being terrible.
(Score: 4, Insightful) by Thexalon on Monday February 23, @08:26PM
The Patriot Act surely did not help, but the actual turning point was Google building its profitability on targeted ads rather than the untargeted ads of all the dotcom-bubble failures. Once mining consumer data acquired a commercial value, every company with the ability to hoover up consumer data did so, with every intention of storing it forever.
And even with data privacy laws like the GDPR, the goal of the industry has always been to get consumers to mindlessly click to allow that data collection, and/or install an app to enable that data collection. Because that data has commercial value, even if it's tiny as an individual.
And sure, governments want that data too. But they would have had to spend a lot more to get it had these companies not been gathering and storing it already.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 3, Insightful) by VLM on Monday February 23, @06:16PM (1 child)
No idea what that is.
If their strategy is to market it solely using broad, sweeping, vaguely liked generalizations, then at the micro scale the individual things they're supporting must be truly awful, since they assume no one would support what they actually want. My guess would be online cheese pizza trading and online grooming. Those never sell well to the general public, so I see why they would cover it all up, be super vague, and instead discuss whether freedom is good. Trying to turn the entire thing into a debate about whether democracy is good or bad, instead of discussing the actual issues, indicates they can't have people looking at what they are actually supporting, so it's likely to be some awful stuff.
I like (sarcastically) the false dilemma pretense of the entire debate, some quality sophistry there.
If you don't like their product don't use it? Or if the carefully unnamed "privacy limit" is generally seen as not important, that might be because it is, indeed, not important?
We're right, for reasons we won't tell anyone, but everyone else is wrong because they don't agree with us, because we're more gooder. Yeah whatever ... next...
Yeah yeah, I love their vague generalizations; just like any "good" person I like innovation and freedom, it's all so doubleplus goodspeak. But I'm sure whatever they're selling is a crock of shit if they have to hide it so intensely. "Look, if you don't hate Mom, America, and Apple Pie you must support us, including all our specific secret goals" yeah uh huh sure I bet.
(Score: 4, Touché) by Reziac on Tuesday February 24, @03:01AM
Modern privacy movement: That would be the EU righteously demanding that everyone else stop hoovering up personal data, because "that's OUR job!"
/sarc, maybe
And there is no Alkibiades to come back and save us from ourselves.