

posted by hubie on Wednesday December 06, @04:17PM   Printer-friendly
from the I-spy-with-my-little-internet-eye dept.

Spying has always been limited by the need for human labor. A.I. is going to change that:

Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we're doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there's no reasonable way for us to opt out of it.

Spying is another matter. It has long been possible to tap someone's phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people's phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

A.I. is about to change that.

[...] We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?

Related:


Original Submission

Related Stories

EU-US Data Privacy Framework to Face Serious Legal Challenges, Experts Say 8 comments

EU-US Data Privacy Framework to face serious legal challenges, experts say:

Nine months after US President Joe Biden signed an executive order that updated rules for the transfer of data between the US and the EU, the European Commission this week ratified the EU-US Data Privacy Framework. Industry experts, however, say it will be challenged at the European Court of Justice (CJEU), and stands a good chance of being struck down.

The move comes three years after the CJEU shut down the previous EU-US data sharing agreement, known as Privacy Shield, on grounds that the US doesn't provide adequate protection for personal data, particularly in relation to state surveillance. In 2015, a previous attempt to forge a data sharing pact, dubbed Safe Harbor, was also struck down by the CJEU.

The President of the European Commission, Ursula von der Leyen, said the new framework should provide "legal certainty" to transatlantic businesses, calling the commitments "unprecedented."

[...] However, industry experts expect the accord to face a plethora of legal challenges from privacy advocates before ultimately being struck down like its predecessors.

"We have various options for a challenge already in the drawer, although we are sick and tired of this legal ping-pong," said Max Schrems, an Austrian lawyer and privacy activist who founded NOYB (None of Your Business) – European Center for Digital Rights. In 2016 and 2020, Schrems initiated legal proceedings against Safe Harbor and Privacy Shield, respectively, which led to the CJEU invalidating both agreements.

"We currently expect this to be back at the Court of Justice by the beginning of next year," Schrems said in a statement published on NOYB's website.


Original Submission

Debunking the Myth of “Anonymous” Data 15 comments

From The Electronic Frontier Foundation: Debunking the Myth of "Anonymous" Data

Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them.

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person's name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy.

...

However, in practice, any attempt at de-identification requires removal not only of your identifiable information, but also of information that can identify you when considered in combination with other information known about you. Here's an example:

  • First, think about the number of people that share your specific ZIP or postal code.
  • Next, think about how many of those people also share your birthday.
  • Now, think about how many people share your exact birthday, ZIP code, and gender.

According to one landmark study, these three characteristics are enough to uniquely identify 87% of the U.S. population. A different study showed that 63% of the U.S. population can be uniquely identified from these three facts.
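As a rough sanity check on those figures, a back-of-envelope Poisson model lands in the same range. This sketch assumes, unrealistically, that people are spread uniformly across all (ZIP, birthdate, gender) combinations; the ZIP-code count, lifespan, and population numbers below are approximations for illustration, not values from either study:

```python
import math

# Rough, hedged back-of-envelope estimate. Real populations are far
# from uniform across these cells, so treat this only as an
# order-of-magnitude sketch.
ZIP_CODES = 42_000              # approximate number of US ZIP codes
BIRTHDATES = int(365.25 * 79)   # distinct birthdates over a ~79-year lifespan
GENDERS = 2
POPULATION = 330_000_000        # approximate US population

combinations = ZIP_CODES * BIRTHDATES * GENDERS

# If n people fall uniformly into k cells, the chance that a given
# person is alone in their cell is roughly exp(-n/k) (Poisson
# approximation to the occupancy problem).
p_unique = math.exp(-POPULATION / combinations)
print(f"{combinations:,} combinations; ~{p_unique:.0%} of people unique")
```

Even this crude model predicts that a large majority of people occupy a cell all by themselves, which is why "anonymized" records carrying these three fields are so easy to re-identify.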

We cannot trust corporations to self-regulate. The financial benefit and business usefulness of our personal data often outweigh our privacy and anonymity. By re-linking the real identity of the person involved (a direct identifier) with that person's preferences (indirect identifiers), corporations are able to continue profiting from our most sensitive information. For instance, a website that asks supposedly "anonymous" users for seemingly trivial information about themselves may be able to use that information to make a unique profile for an individual.


Original Submission

A Controversial US Surveillance Program is up for Renewal. Critics are Speaking Out. 18 comments

[Editor's note. I found it unnerving to see tracking links in this article. I encourage readers to "right-click" and "view source" on each link before actually clicking on each link in the article. (I despise trackers!) You have been warned. --Martyb]

Here's what you need to know.

For the past week my social feeds have been filled with a pretty important tech policy debate that I want to key you in on: the renewal of a controversial program of American surveillance.

The program, outlined in Section 702 of the Foreign Intelligence Surveillance Act (FISA), was created in 2008. It was designed to expand the power of US agencies to collect electronic “foreign intelligence information,” whether about spies, terrorists, or cybercriminals abroad, and to do so without a warrant.

Tech companies, in other words, are compelled to hand over communications records like phone calls, texts, and emails to US intelligence agencies including the FBI, CIA, and NSA. A lot of data about Americans who communicate with people internationally gets swept up in these searches. Critics say that is unconstitutional.

Despite a history of abuses by intelligence agencies, Section 702 was successfully renewed in both 2012 and 2017. The program, which has to be periodically renewed by Congress, is set to expire again at the end of December. But a broad group that transcends parties is calling for reforming the program, out of concern about the vast surveillance it enables. Here is what you need to know.

Of particular concern is that while the program intends to target people who aren’t Americans, a lot of data from US citizens gets swept up if they communicate with anyone abroad—and, again, this is without a warrant. The 2022 annual report on the program revealed that intelligence agencies ran searches on an estimated 3.4 million “US persons” during the previous year; that’s an unusually high number for the program, though the FBI attributed it to an uptick in investigations of Russia-based cybercrime that targeted US infrastructure. Critics have raised alarms about the ways the FBI has used the program to surveil Americans including Black Lives Matter activists and a member of Congress.

In a letter to Senate Majority Leader Chuck Schumer this week, over 25 civil society organizations, including the American Civil Liberties Union (ACLU), the Center for Democracy & Technology, and the Freedom of the Press Foundation, said they “strongly oppose even a short-term reauthorization of Section 702.”

Tor University Challenge: First Semester Report Card 4 comments

Back in August the Tor Project and the EFF launched an advocacy campaign for getting more Tor relays running at universities. Now it is December and they have published an update on how the Tor University Challenge has gone so far.

In August of 2023 EFF announced the Tor University Challenge, a campaign to get more universities around the world to operate Tor relays. The primary goal of this campaign is to strengthen the Tor network by creating more high bandwidth and reliable Tor nodes. We hope this will also make the Tor network more resilient to censorship since any country or smaller network cutting off access to Tor means it would be also cutting itself off from a large swath of universities, academic knowledge, and collaborations.

So far they have established contact with more pre-existing relays at universities, increased the number of relays in general running at universities, and cultivated better contact with the national-level university Internet connectivity organizations (NRENs, National Research and Education Networks). Some of the institutions have established public relays, and others even added new exit relays.

Previously:
(2023) The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying
(2023) Mullvad VPN And The Tor Project Collaborate On A Web Browser
(2022) Tor Project Releases Latest Version of its Eponymous Browser
(2022) Tor Project Upgrades Network Speed Performance with New System
(2022) Tor Project Battles Russian Censorship Through the Courts
... and more.


Original Submission

Exploring the Emergence of Technoauthoritarianism 7 comments

The theoretical promise of AI is as hopeful as the promise of social media once was, and as dazzling as its most partisan architects project. AI really could cure numerous diseases. It really could transform scholarship and unearth lost knowledge. Except that Silicon Valley, under the sway of its worst technocratic impulses, is following the playbook established in the mass scaling and monopolization of the social web:

Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)

And yet, to a remarkable degree, Facebook’s way of doing business remains the norm for the tech industry as a whole, even as other social platforms (TikTok) and technological developments (artificial intelligence) eclipse Facebook in cultural relevance.

The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement.

[...] The Shakespearean drama that unfolded late last year at OpenAI underscores the extent to which the worst of Facebook’s “move fast and break things” mentality has been internalized and celebrated in Silicon Valley. OpenAI was founded, in 2015, as a nonprofit dedicated to bringing artificial general intelligence into the world in a way that would serve the public good. Underlying its formation was the belief that the technology was too powerful and too dangerous to be developed with commercial motives alone.

Related:


Original Submission

An Online Dump of Chinese Hacking Documents Offers a Rare Window Into Pervasive State Surveillance 3 comments

Chinese police are investigating an unauthorized and highly unusual online dump of documents from a private security contractor linked to the nation's top policing agency and other parts of its government — a trove that catalogs apparent hacking activity and tools to spy on both Chinese and foreigners:

Among the apparent targets of tools provided by the impacted company, I-Soon: ethnicities and dissidents in parts of China that have seen significant anti-government protests, such as Hong Kong or the heavily Muslim region of Xinjiang in China's far west.

The dump of scores of documents late last week and subsequent investigation were confirmed by two employees of I-Soon, known as Anxun in Mandarin, which has ties to the powerful Ministry of Public Security. The dump, which analysts consider highly significant even if it does not reveal any especially novel or potent tools, includes hundreds of pages of contracts, marketing presentations, product manuals, and client and employee lists.

[...] The hacking tools are used by Chinese state agents to unmask users of social media platforms outside China such as X, formerly known as Twitter, break into email and hide the online activity of overseas agents. Also described are devices disguised as power strips and batteries that can be used to compromise Wi-Fi networks.

[...] "We see a lot of targeting of organizations that are related to ethnic minorities — Tibetans, Uyghurs. A lot of the targeting of foreign entities can be seen through the lens of domestic security priorities for the government," said Dakota Cary, a China analyst with the cybersecurity firm SentinelOne.

Also at WaPo, NYT, and The Guardian.

Originally spotted on Schneier on Security

Related: The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying


Original Submission

“Disabling Cyberattacks” Are Hitting Critical US Water Systems, White House Warns 36 comments

https://arstechnica.com/security/2024/03/critical-us-water-systems-face-disabling-cyberattacks-white-house-warns/

The Biden administration on Tuesday warned the nation's governors that drinking water and wastewater utilities in their states are facing "disabling cyberattacks" by hostile foreign nations that are targeting mission-critical plant operations.

"Disabling cyberattacks are striking water and wastewater systems throughout the United States," Jake Sullivan, assistant to the president for National Security Affairs, and Michael S. Regan, administrator of the Environmental Protection Agency, wrote in a letter. "These attacks have the potential to disrupt the critical lifeline of clean and safe drinking water, as well as impose significant costs on affected communities."

[...] The letter extended an invitation for secretaries of each state's governor to attend a meeting to discuss better securing the water sector's critical infrastructure. It also announced that the EPA is forming a Water Sector Cybersecurity Task Force to identify vulnerabilities in water systems. The virtual meeting will take place on Thursday.

"EPA and NSC take these threats very seriously and will continue to partner with state environmental, health, and homeland security leaders to address the pervasive and challenging risk of cyberattacks on water systems," Regan said in a separate statement.

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by VLM on Wednesday December 06, @05:01PM (3 children)

    by VLM (445) on Wednesday December 06, @05:01PM (#1335387)

    A.I. is

The "AI is infallible magic that always works perfectly" trope is already getting tired.

    • (Score: 2) by Tork on Wednesday December 06, @06:54PM (1 child)

      by Tork (3914) Subscriber Badge on Wednesday December 06, @06:54PM (#1335403)

      AI is infallible magic that always works perfectly is already getting tired.

      You're right, AI won't be used to summarize massive caches of data for surveillance. We can all move on to the next thread.

      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 3, Touché) by JoeMerchant on Wednesday December 06, @10:16PM

        by JoeMerchant (3937) on Wednesday December 06, @10:16PM (#1335440)

        >AI won't be used to summarize massive caches of data for surveillance.

        These aren't the droids you are looking for.

        >We can all move on to the next thread.

        We can all move on to the next thread.

        --
        🌻🌻 [google.com]
    • (Score: 2) by takyon on Friday December 08, @10:36AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday December 08, @10:36AM (#1335678) Journal

      Facial/image recognition was already going strong long before the generative/LLM hype cycle, and it's the minimum needed to create a surveillance state unlike anything the world has ever seen. This is a "great" area for automation.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 4, Interesting) by VLM on Wednesday December 06, @05:13PM (1 child)

    by VLM (445) on Wednesday December 06, @05:13PM (#1335389)

    Spying is another matter

Another interesting topic is the power law of addictive behaviors, or whatever it's called exactly.

The amount of social media use in my high school graduating class is a typical example. 1/3 of the class does not use Facebook, or at least is unfindable. 1/3 of the class has a Facebook account, many have several, all of them unused (I am in this category; I only log in a couple of times per year). 1/3 of the class uses FB actively, and the use rates follow the usual addiction profile of highly disproportionate use.

The problem with using AI to track chronically-online people is that they're going to be extremely paranoid about AI tracking them and their chronically-online friends. But the population of chronically-online people is very small, and according to Dead Internet Theory most of the 'people' online are advertising-fraud bots anyway, so a company trying to make a business model out of automating the "huge workload" of watching the chronically-online will find there are not many humans to be monitored. It'll mostly be bots trying to KGB other companies'/TLAs' bots, and any human target is probably off-grid enough that there won't be enough data to make automated spying on them worthwhile.

    • (Score: 2) by HiThere on Thursday December 07, @07:28PM

      by HiThere (866) Subscriber Badge on Thursday December 07, @07:28PM (#1335575) Journal

      You've got a very restricted idea about how AI will be used. Consider the possibilities inherent in on-line romantic partners. Friends for the isolated. Etc. It's not really up to the job, yet, but it's not far. (Actually, for many people it already suffices. Which is scary in and of itself.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 4, Insightful) by Rich on Wednesday December 06, @05:29PM

    by Rich (945) on Wednesday December 06, @05:29PM (#1335395) Journal

    someone still has to sort through all the conversations.

This assumption is profoundly wrong. Already in the last millennium, the NSA had a (public) patent filed on using ordinary statistical histogram profiling of character sequences to determine the topic of a conversation. Applied to phonemes, this will automatically flag any conversation related to a certain topic. No people involved. (The biggest difference AI makes here is that it could generate such a conversation, too...) The fact that some third-world hush job uses primitive pattern matching is just a distraction.
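The character-sequence histogram idea is simple enough to sketch in a few lines: build per-topic frequency profiles of character trigrams, then score incoming text against each profile with cosine similarity. The topics, sample phrases, and choice of trigrams here are invented for illustration, not taken from the NSA patent:

```python
from collections import Counter
from math import sqrt

def trigrams(text):
    """Histogram of overlapping 3-character sequences in the text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    """Cosine similarity between two Counter histograms."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy topic profiles; a real system would train these on large corpora.
profiles = {
    "weather": trigrams("rain sunshine forecast cloudy storm temperature"),
    "finance": trigrams("bank transfer account payment loan interest rate"),
}

def classify(text):
    """Return the topic whose histogram best matches the text."""
    return max(profiles, key=lambda t: cosine(trigrams(text), profiles[t]))

print(classify("the storm brought heavy rain and cloudy skies"))
```

Crude as it is, nothing in this loop requires a human to read the conversation, which is the commenter's point: the filtering scales with CPUs, not analysts.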

It is probabilistic prediction from combined surveillance data that is at the center of interest here, and here, too, there are already examples of women finding out that they are pregnant via a mysterious barrage of baby-care product advertisements. We're way past the point where anything could be saved. Again, machine learning is not any major game changer here.

    The only way to escape is a low profile. If you make major demands regarding how your private data is kept, you already stand out as a troublemaker. If you plan an armed uprising, share cute hamster videos and buy Christmas decorations with your credit card.

  • (Score: 5, Insightful) by JoeMerchant on Wednesday December 06, @05:43PM

    by JoeMerchant (3937) on Wednesday December 06, @05:43PM (#1335396)

    OCR has been around, 20+ years.

    Databases have been around forever, give the DMVs credit for having functional "online" license plate databases for 50+ years.

    Network connected cameras... there was a fuzzy start, but by 15+ years ago they were pretty common.

    One developer with access to the network camera feeds, license plate databases and OCR software could easily have put together a "who passed this camera" application in 1999.
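The "who passed this camera" application amounts to little more than a join between OCR'd plate reads and a registration table. A minimal sketch, with plates, owners, and camera IDs fabricated for illustration:

```python
import datetime as dt

# Fabricated registration table, standing in for a DMV database.
registrations = {
    "ABC1234": "A. Nonymous, 123 Main St",
    "XYZ9876": "B. Bystander, 456 Oak Ave",
}

# Fabricated OCR output: (plate_text, camera_id, timestamp).
observations = [
    ("ABC1234", "garage-entrance", dt.datetime(1999, 6, 1, 8, 3)),
    ("XYZ9876", "garage-entrance", dt.datetime(1999, 6, 1, 8, 7)),
    ("ABC1234", "garage-exit", dt.datetime(1999, 6, 1, 17, 2)),
]

def who_passed(camera_id):
    """Resolve each observation at a camera to a registered owner."""
    return [(ts, registrations.get(plate, "unknown"))
            for plate, cam, ts in observations if cam == camera_id]

for ts, owner in who_passed("garage-entrance"):
    print(ts, owner)
```

The hard part in 1999 was the OCR and the camera feed, not this query; once the plate text exists, the time-location pattern falls out of an ordinary database lookup.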

Granted, all of those technologies have been advancing rapidly since 1999 - it's much easier now. In 1999 the camera resolution wasn't great, and neither was the network bandwidth or processing power. A dedicated top-of-the-line desktop computer would probably have struggled to recognize plates from a 15fps video stream, and the camera would have had to be focused pretty tight on the plate area (such as entering / exiting a parking garage). These days, a Raspberry Pi with a 5MP camera can do the plate recognition in real time from a moving squad car, integrate the observation data with GPS, and store thousands of observations per hour. Whenever you've got a 5G connection you can hot-spot into the DMV database to get names and addresses, but the real gold is in time-location patterns of observation; tying those to names is just the final step.

Back around the turn of the millennium, "Dragon NaturallySpeaking" was the consumer-facing state of the art in voice recognition and transcription software. Voice recognition has been continually improving (and diversifying to different development teams) since then. Again, back in that turn-of-the-millennium timeframe, ECHELON was all the buzz about Langley and the Five Eyes, or Nine Eyes, or whatever you want to call them, trawling all the radio waves and most of the phone trunk lines, searching for keywords transcribed by automated voice recognition. Does anyone think that process has slowed, or gotten harder to do, in the last 20-25 years? I mean, it was nice for Snowden to provide a little insight into what was actually going on - but like the Panama Papers, I believe the outrage has largely been followed by the sound of crickets, and surprised Pikachu faces are mostly what you get when you ask what reforms have taken place since then.

    I'll say again, I appreciate Google for getting "in my face" with their trawling of my e-mail contents, pinging me with reminders I didn't ask for when it's time to leave to catch a flight, etc.

    In all areas of AI application, it seems to be just a formalization of a semi-optimized search. Try a lot of things, find those that match what you're looking for better than others, try again in that direction, try not to get caught in "local minima" of the value function(s), then when you have a result present it as if it were magic. It's not.

    Owner/users of data can pull some serious Sherlock Holmes on you through that data. This article is from recently, but the event in question (Target figuring out a young woman was pregnant before her father did) is from 10-15 years ago: https://www.linkedin.com/pulse/targets-ai-powered-insight-how-target-figured-out-teenage/ [linkedin.com]

    Technology continues to march on, NVIDIA is churning out massively parallel processors that facilitate these types of data synth-analyses, and whether you call it AI or just plain creepy over-invasion of your formerly private life, it has been creeping up for decades and can be assumed to accelerate at a similar pace to all technological development: alarmingly exponentially quickly.

    --
    🌻🌻 [google.com]
  • (Score: 5, Insightful) by Rosco P. Coltrane on Wednesday December 06, @06:17PM (9 children)

    by Rosco P. Coltrane (4757) on Wednesday December 06, @06:17PM (#1335399)

    Whenever you read about "client-side scanning" (this garbage [theverge.com] for instance), and computers or cellphones that come with a dedicated AI chip [wikipedia.org], you can be certain it has nothing to do with any noble purpose or any application that's useful to you or to society: it's about planting a semi-clever, always-on spy on your device to violate the living heck out of your privacy and report to the mothership.

    The cleverer the AI spy, the better it will spy on you on the mothership's behalf. Hence the need for a powerful, dedicated AI chip. Don't think for a minute its processing power will be available to you to generate images, write essays or translate stuff for you on the go. This is processing power planted in your device for future use by the mothership. And you're paying for it baby.

    As for the "client-side scanning" excuse (in Apple's case, a classic think-of-the-children con), it's a legal trojan to justify uploading the code for the AI spy onto your device.

    Big Data needs both the AI chips and the client-side scanning excuses, and they've been slowly but surely deploying the former and pushing for the latter. Once they're in a position to combine the two, the obscene privacy violation hellfest will commence. And that day isn't far.

    • (Score: 4, Insightful) by Freeman on Wednesday December 06, @07:55PM (8 children)

      by Freeman (732) on Wednesday December 06, @07:55PM (#1335419) Journal

      Once this has been implemented, the likes of Nazi Germany's "cleansing" will be much easier. All it will take is the right set of the wrong people in charge to slide into a horrific dystopian nightmare.

      https://en.wikipedia.org/wiki/First_they_came_... [wikipedia.org]

      First they came for the Communists [Free Speech Absolutists]
      And I did not speak out
      Because I was not a Communist [Free Speech Absolutist]

      Then they came for the Socialists [Second Amendment Absolutists]
      And I did not speak out
      Because I was not a Socialist [Second Amendment Absolutist]

      Then they came for the trade unionists [$random_religion practitioners]
      And I did not speak out
      Because I was not a trade unionist [$random_religion practitioner]

      Then they came for the Jews [$personae_non_gratae]
      And I did not speak out
      Because I was not a Jew [$persona_non_grata]

      Then they came for me
      And there was no one left
      To speak out for me

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 4, Touché) by DannyB on Wednesday December 06, @09:25PM (4 children)

        by DannyB (5839) Subscriber Badge on Wednesday December 06, @09:25PM (#1335435) Journal

        First they came for the ... [Free Speech Absolutists]

        I don't believe these mythical Free Speech Absolutists exist.

        It is a pretense of being accepting of unpopular speech. The baggage that comes with it is a desire to censor speech critical of the unpopular speech. When I say "unpopular speech", what I really mean is stuff so evil that only 8chan might have tolerated.

        First they came for the Perl programmers.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 3, Touché) by khallow on Thursday December 07, @01:25AM (3 children)

          by khallow (3766) Subscriber Badge on Thursday December 07, @01:25AM (#1335459) Journal

          I don't believe these mythical Free Speech Absolutists exist.

          It is a pretense of being accepting of unpopular speech. The baggage that comes with it is a desire to censor speech critical of the unpopular speech. When I say "unpopular speech", what I really mean is stuff so evil that only 8chan might have tolerated.

That's a bizarre claim to make. I'm pretty sure I can find people who would fit your definition of a free speech absolutist: people who would support the expression of stuff that mean and, simultaneously, the criticism of said speech.

          • (Score: 4, Touché) by The Vocal Minority on Friday December 08, @06:39AM

            by The Vocal Minority (2765) on Friday December 08, @06:39AM (#1335669) Journal

            It's not that bizarre a claim if you understand it as an excuse to censor because "[the people who I want to censor] would just censor me if they were able to". The thief thinks that all are of his persuasion.

          • (Score: 2, Disagree) by DannyB on Monday December 11, @07:59PM (1 child)

            by DannyB (5839) Subscriber Badge on Monday December 11, @07:59PM (#1336158) Journal

            There might be "free speech absolutists" who don't have thin skin. There might be unicorns.

            My experience is that free speech absolutists suddenly see censorship as acceptable when someone says something they don't like, even if it is true.

            --
            To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
            • (Score: 2, Touché) by khallow on Tuesday December 12, @04:03AM

              by khallow (3766) Subscriber Badge on Tuesday December 12, @04:03AM (#1336214) Journal

              There might be "free speech absolutists" who don't have thin skin.

              You're looking at one.

      • (Score: 4, Interesting) by JoeMerchant on Wednesday December 06, @11:00PM (2 children)

        by JoeMerchant (3937) on Wednesday December 06, @11:00PM (#1335446)

        I'm sorry, when have Second Amendment Absolutists ever been under any kind of threat whatsoever? They're put in check from ever expanding "their right to bear arms" towards ICBMs and tactical nukes (sure, the strategic ones are only for political purposes, but you never know when I might need to stop an invasion in my back 40, better for me to have some hot and ready than to let those insurgents get a foothold...) but I don't think they've ever been put in check like Jews in Germany, Protestants in Spain, Japanese in California...

My stepfather recently passed, around age 80. He had a gun collection that he invested about a year's salary in - still worth roughly that today. He shot a few ducks, went to the range once every few years, and carried concealed from age 30 up through about 65, when it got to be a little heavy for him, so he switched to a "tactical pocket knife." In all those decades, the only thing his handguns ever shot was a bookcase in the office, accidentally, while he was cleaning one of them. He said they made him feel more secure "when walking on the dangerous streets of downtown Tampa to and from his work building from the parking garage." He also apparently felt insecure while driving; he spoke often of his plans to shoot an assailant through the car door. Personally, I think the whole "being ready" thing made him feel significantly more afraid and insecure than he would have felt had he attempted to deal with life without a personal arsenal - but that was his choice, and his right under the 2nd Amendment, and I respect that.

As for spying coming for us? Wake up, Comrade... we (in the U.S., Western Europe, etc.) have been under the microscope since before you were born - well, maybe not you - you're boring - but if you ever became interesting you certainly would have had your landline tapped (in a way that didn't make funny clicking and breathing noises), mail read (in ways you would never detect), movements tracked, associates noted and investigated, etc. It gets easier today; today the corporations can do it too, not just the government - but... is there really a difference?

        --
        🌻🌻 [google.com]
        • (Score: 3, Touché) by fliptop on Thursday December 07, @01:28AM (1 child)

          by fliptop (1666) on Thursday December 07, @01:28AM (#1335460) Journal

          but I don't think they've ever been put in check like Jews in Germany

          Which happened after Hitler disarmed them.

          Protestants in Spain

          Which happened in what? The 1500's? The more recent harassment occurred during a dictatorship.

          Japanese in California

          Which happened during a horrible World War in which Japan was aligned w/ the likes of Hitler and Mussolini and, although it now seems atrocious, at the time seemed like the safest course of action for preventing attacks on the homeland from within. I'm surprised you didn't include Ammon Bundy in your examples.

          --
          Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
          • (Score: 2) by JoeMerchant on Thursday December 07, @03:01AM

            by JoeMerchant (3937) on Thursday December 07, @03:01AM (#1335467)

            >The more recent harassment occurred during a dictatorship.

            Hmmm... those are bad, aren't they? I'd like to continue to avoid them by respecting the outcome of our elections, rather than just arming that segment of the population that feels the need to easily kill other humans at the pull of a trigger.

            --
            🌻🌻 [google.com]
  • (Score: 2) by RamiK on Wednesday December 06, @06:56PM (10 children)

    by RamiK (1813) on Wednesday December 06, @06:56PM (#1335405)

    The real-world problem with the surveillance programs is that the triple-letter agencies were going through political activists', elected officials', and the odd ex-wife's records without any legal cause (not just without probable cause; there weren't even filed complaints and/or open cases in the vast majority of instances), solely to dig out random skeletons for this or that personal gain, and the FISA courts were rubber-stamping the few cases that were actually brought in front of them. So, if an AI is doing the filtering, there will be no need to give individual agents direct access to the records. Instead, the system would only notify law enforcement when there's evidence of actual crimes.

    Now, of course, overseeing who gets what access to the databases behind the AI will need some attention. But it's going to be limited to a handful of contractors that you can file charges against when they violate the law, as opposed to the current hundreds of thousands of violations the government simply can't afford to litigate.

    --
    compiling...
    • (Score: 3, Insightful) by JoeMerchant on Wednesday December 06, @07:10PM (9 children)

      by JoeMerchant (3937) on Wednesday December 06, @07:10PM (#1335408)

      >Instead, the system would only notify law enforcement when there's evidence of actual crimes.

      Or innocent activities misidentified as crimes. The real problem is when the humans start rubber stamping the system outputs without ever having the stones to step up and call it wrong (which it frequently is, and will be more frequently as it is more broadly applied in the future).

      https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/ [politico.eu]

      --
      🌻🌻 [google.com]
      • (Score: 2) by RamiK on Wednesday December 06, @08:06PM (8 children)

        by RamiK (1813) on Wednesday December 06, @08:06PM (#1335420)

        Or innocent activities misidentified as crimes. The real problem is when the humans start rubber stamping the system outputs without ever having the stones to step up and call it wrong (which it frequently is, and will be more frequently as it is more broadly applied in the future).

        That's not how it would (should) work: the AI will output suspected recordings, and a human would then need to sign off before they're handed to investigators, who then start a full investigation (warrants and all), which then goes to court. That's to say, the first person in the process is incentivized to drop the case rather than approve it blindly, as they're in a regulatory position where they need to prove they're doing their job. In fact, it will probably be necessary to pass the recordings to multiple verifiers and average them since, unlike police investigators, they won't be awarded for closing cases.
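        That multi-verifier averaging step can be sketched in a few lines. This is a minimal illustration, not anything from the article: the function name, score scale, and threshold are all hypothetical.

```python
from statistics import mean

def should_escalate(verifier_scores, threshold=0.8):
    """Average independent verifier scores; escalate only on consensus.

    verifier_scores: floats in [0, 1], one per human verifier who
    reviewed the AI-flagged recording separately.
    threshold: minimum average needed to hand the case to investigators.
    """
    if not verifier_scores:
        return False  # no human review yet, never auto-escalate
    return mean(verifier_scores) >= threshold

# Three verifiers review the same AI-flagged recording independently.
print(should_escalate([0.9, 0.7, 0.95]))  # well above 0.8 -> True
print(should_escalate([0.9, 0.2, 0.4]))   # well below 0.8 -> False
```

        The point of averaging separate reviews is that no single verifier's rubber stamp (or grudge) decides the outcome on its own.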

        --
        compiling...
        • (Score: 5, Touché) by mhajicek on Wednesday December 06, @08:12PM (1 child)

          by mhajicek (51) on Wednesday December 06, @08:12PM (#1335423)

          Would and should are seldom the same. We've already seen it with facial recognition.

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
          • (Score: 2) by Freeman on Thursday December 07, @03:08PM

            by Freeman (732) on Thursday December 07, @03:08PM (#1335534) Journal

            DNA samples from the likes of 23andMe as well. Though, they were just hacked, so pretty much any DNA submitted to date may already be compromised.

            --
            Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by JoeMerchant on Wednesday December 06, @08:21PM (5 children)

          by JoeMerchant (3937) on Wednesday December 06, @08:21PM (#1335425)

          The linked article is about the Dutch welfare system which output an algorithmic guidance that was supposed to be reviewed and signed off by human case workers, but, due to the Milgram effect (tendency of most humans to defer to authority, any authority, rather than making decisions themselves) the case workers virtually never disagreed with the algorithm.

          >That's to say, the first person in the process is incentivized to drop the case rather than approve it blindly

          Sorry, I missed the incentive mechanism. Are we incentivizing these people to act as if they're not working at all, outputting the same results as if they had just spent the day at the beach and then blindly rejected all the cases in their queue? Any enforcement agency I've ever dealt with is expected to take a certain number of actions per month just to show that they're actively doing their job, instead of "working from home" and never finding anything out of line anywhere.

          >they won't be awarded for closing cases.

          See, I would expect them to be awarded for successful prosecution ratios. If there are a number of people performing the screening job, there will be a statistical rate of successful / unsuccessful prosecutions, and I would expect the award to be based on something like the number of successful prosecutions referred multiplied by their success rate, or even their success rate squared, so that those who serve as a more effective filter get bigger bonuses.

          Note: successful prosecution is not necessarily correlated with guilt or innocence, just the outcome of the court cases - which can be quite arbitrary in reality.
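          To make that concrete, here's a toy version of the success-rate-squared bonus. Everything here - the function name, the rate constant, the numbers - is made up for illustration:

```python
def screener_bonus(successes, referrals, rate_per_success=100.0, sharpen=True):
    """Bonus = successful prosecutions * success rate (squared if sharpen)."""
    if referrals == 0:
        return 0.0
    rate = successes / referrals
    weight = rate ** 2 if sharpen else rate
    return successes * weight * rate_per_success

# Two screeners with the same 8 wins, but different referral volumes:
sharp = screener_bonus(8, 10)   # 8 wins out of 10 referred
loose = screener_bonus(8, 20)   # 8 wins out of 20 referred
print(sharp > loose)  # True: the tighter filter earns the bigger bonus
```

          Squaring the rate punishes shotgun referrals hard: halving your success rate cuts your per-win payout to a quarter.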

          --
          🌻🌻 [google.com]
          • (Score: 2) by RamiK on Wednesday December 06, @10:12PM (4 children)

            by RamiK (1813) on Wednesday December 06, @10:12PM (#1335438)

            Sorry, I missed the incentive mechanism...I would expect them to be awarded for successful prosecution ratios.

            There's multiple incentives to both over and under regulate:
            1. (Over) A police officer handing out speeding tickets knows that if they end a shift without so and so figures for so and so days, they'll get into trouble. Security analysts aren't any different.
            2. (Over) Successful prosecutions are rewarded one way or another.
            3. (Under) Investigators complaining they're being repeatedly sent on wild goose chases is annoying. Let alone prosecutors and judges...

            Now, if you're a libertarian you'll weigh 1, 2 & 3 as over-regulation, like how Milton Friedman argued against the FDA. And if you're a progressive you'll say institutional oversight will keep things in check. In practice, this, specifically, is already a solved problem: signals intelligence is done by having multiple analysts work the same cases separately, with an officer above them signing off on the actionable reports. Now, it can fail miserably at the follow-up if politics gets in the way (ask Israel...), but, since you can build a performance record for each analyst and officer over a fairly short period of time, managing things at that level to prevent false flags or missed flags tends to be a solid, near-empirical practice that leaves it up to policy makers to decide on the margin of error they're willing to tolerate.

            Anyhow, it's a classic political science and police work topic, and the process is controlled at the industrial level nowadays (sampling... stats... there are methodologies and specialists for this stuff), so it's a matter of letting the professionals do their jobs and keep the politicians away.

            --
            compiling...
            • (Score: 2) by JoeMerchant on Wednesday December 06, @10:20PM (3 children)

              by JoeMerchant (3937) on Wednesday December 06, @10:20PM (#1335442)

              >it's a matter of letting the professionals do their jobs and keep the politicians away.

              I would agree, except, Transparency is Always the Answer. In the world of state secrets, obviously secrets and sources aren't going to be aired and shared publicly - but political oversight is the next best thing we have...

              --
              🌻🌻 [google.com]
              • (Score: 2) by RamiK on Wednesday December 06, @11:43PM (2 children)

                by RamiK (1813) on Wednesday December 06, @11:43PM (#1335449)

                Well, with cases going through warrant requests to court and on public record, an increase in false arrests won't escape scrutiny (executive or public).

                Mind you, the context here (which is poorly highlighted in the article) is the 702 reauthorization bill where, presently, the FBI has conducted 200k warrant-less searches of Americans in a single year: https://rollcall.com/2023/12/04/house-judiciary-panel-to-consider-section-702-reauthorization-bill/ [rollcall.com]

                So, an AI alone would be a HUGE improvement over the current state of things, but I'm also talking about AI + warrants + public record (where possible; FISA is there for a reason), so we're talking about the feature release after the next patch release...

                --
                compiling...
                • (Score: 2) by JoeMerchant on Thursday December 07, @02:50AM (1 child)

                  by JoeMerchant (3937) on Thursday December 07, @02:50AM (#1335465)

                  >200k warrant-less searches of Americans

                  Which sounds horrible, and is horrible, but keep in mind, that's less than 1/1700... so they must have all been really unusually bad people, right? /s

                  >an AI alone would be a HUGE improvement over the current state of things

                  That, of course, all depends on how the AI is implemented. Starting with whether or not it will be trawling (fishing-expedition style) through more images than the existing humans do... I think it's a safe assumption that it will, so that's already starting off on the wrong foot. You might argue that they will drastically reduce the humans in the loop, by a factor of 10 or more, but, then, remember: AI has already been shown to be prejudiced...

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by RamiK on Thursday December 07, @12:57PM

                    by RamiK (1813) on Thursday December 07, @12:57PM (#1335500)

                    AI has already been shown to be prejudiced...

                    The thing about national security threats being ignored / overestimated is that it tends to lead to rather EXPLOSIVE outcomes looping back to correct the system soon after, whether it's human or machine sigint being prejudiced.
                    When humans are involved, though, it's harder to pinpoint who changed what and when, and you typically end up casting responsibility onto some scapegoat rather than the actual responsible party. With machines, however, you create a commit log, and accountability becomes a matter of ledgers and written orders...

                    --
                    compiling...
  • (Score: 4, Insightful) by JoeMerchant on Wednesday December 06, @08:26PM

    by JoeMerchant (3937) on Wednesday December 06, @08:26PM (#1335426)

    >We could prohibit mass spying. We could pass strong data-privacy rules.

    We have all sorts of prohibitions and strong rules on the books already being ignored, do we really want more?

    Deeds, not words. Monetary penalties levied against the violators, payable to the damaged, adjudicated by independent panels elected by the affected population.

    Transparency is always the answer: open data stores. If a large corporation has "rights" to my data, the data they have the rights to should also be public information - otherwise they can legally forestall prosecution for violation of any rules / prohibitions indefinitely.

    --
    🌻🌻 [google.com]