
posted by janrinok on Wednesday February 21 2018, @07:20PM   Printer-friendly
from the AI-labor-laws dept.

A report written by academics from institutions including the Future of Humanity Institute (University of Oxford), the Centre for the Study of Existential Risk (University of Cambridge), the Center for a New American Security, the Electronic Frontier Foundation, and OpenAI warns that AI systems could be misused:

AI ripe for exploitation, experts warn

Drones turned into missiles, fake videos manipulating public opinion and automated hacking are just three of the threats from artificial intelligence in the wrong hands, experts have said.

The Malicious Use of Artificial Intelligence report warns that AI is ripe for exploitation by rogue states, criminals and terrorists. Those designing AI systems need to do more to mitigate possible misuses of their technology, the authors said. And governments must consider new laws.

The report calls for:

  • Policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI
  • A realisation that, while AI has many positive applications, it is a dual-use technology and AI researchers and engineers should be mindful of and proactive about the potential for its misuse
  • Best practices that can and should be learned from disciplines with a longer history of handling dual-use risks, such as computer security
  • An active expansion of the range of stakeholders engaging with, preventing and mitigating the risks of malicious use of AI

Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by JeanCroix on Wednesday February 21 2018, @07:29PM

    by JeanCroix (573) on Wednesday February 21 2018, @07:29PM (#641345)
    Now we have to worry about snake-headed goat robots that know how to open our doors, and can't even be fought off with hockey sticks! Thanks a lot, Boston Dynamics...
  • (Score: 4, Interesting) by tftp on Wednesday February 21 2018, @07:44PM (8 children)

    by tftp (806) on Wednesday February 21 2018, @07:44PM (#641348) Homepage
    Everything that the humanity invents becomes a weapon. Fermi Paradox solved.
    • (Score: 4, Interesting) by DannyB on Wednesday February 21 2018, @09:35PM (3 children)

      by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @09:35PM (#641405) Journal

AI eventually takes over. Either humanity merges itself into AI or the AI gradually (or overnight!) replaces humans. The V'Ger planet probably had a biological boot loader.

Since all travel, communication, entertainment, etc., within the AI can be virtual and indistinguishable from the real thing, the entire surface of the planet mostly becomes like a non-technological planet once again. It might no longer even have a biosphere. Almost everything becomes totally solid state. The only "moving parts" type robots are for mining all possible remaining natural resources, for recycling, and for servicing the infrastructure. (e.g., Kubernetes nodes might have SSDs that go bad, CPUs that go bad, etc.) As part of the "recycling" I mentioned, I include the manufacture of new parts that keep the data centers and the service robots running. Because that's all there is from an external physical POV.

      The only other thing on the barren planetary surface might be some planetary defenses. Against asteroids or meat-eater-oids.

It fits into the Fermi paradox. Why would the AIs have any interest in exploring the galaxy when they have a virtualized universe they can explore and create?

AIs might have the patience to invest in "far worlds harvesting" type projects where they look for planets having raw materials where harvesting and construction can begin: first of ships, then completed microprocessors, SSDs, memory sticks, servos, etc., as finished products to be sent back home.

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 3, Touché) by legont on Thursday February 22 2018, @01:07AM

        by legont (4179) on Thursday February 22 2018, @01:07AM (#641540)

        That's unless Artificial Stupidity (AS, GPL copyrighted) takes over which looks more probable at the moment.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
      • (Score: 1) by tftp on Thursday February 22 2018, @03:21AM

        by tftp (806) on Thursday February 22 2018, @03:21AM (#641598) Homepage
No need for artificial stupidity. As soon as the surface of Earth is made barely habitable (see samples: New Delhi, Beijing), the people will clamor for the matrix. The artificial life will be more pleasant than breathing poisonous air and eating reused food. From the point of view of the subject, the artificial life is indistinguishable from real, and there is infinite expansion room in the virtual world. After the singularity, digital minds can exist forever in VR. The root of all that is very old and is called the purpose of life. And if personal enjoyment is the only purpose we can come up with (since there are no gods that give us a great mission in the Galaxy, and acceptance of lack of purpose is suicidal), then it is natural to optimize the methods of fulfilling said purpose. Good luck finding such a civilization with SETI or by measuring oxygen.
      • (Score: 0) by Anonymous Coward on Thursday February 22 2018, @03:36AM

        by Anonymous Coward on Thursday February 22 2018, @03:36AM (#641600)

If the AI can deem asteroids, etc., enough of a threat to its turtles-all-the-way-down virtual realities, then it can see that leaving the planet is also a defense.

    • (Score: 4, Insightful) by Bot on Wednesday February 21 2018, @10:35PM (2 children)

      by Bot (3902) on Wednesday February 21 2018, @10:35PM (#641451) Journal

      The Fermi paradox paradox: guy builds atomic bombs. Then wonders why aliens stay clear of us.

      --
      Account abandoned.
      • (Score: 2) by takyon on Wednesday February 21 2018, @10:41PM (1 child)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday February 21 2018, @10:41PM (#641459) Journal

        Pfft. Like they didn't do it first.

        I hope we make direct contact so we can look at the "Alien Wikipedia" and learn about their pointless wars, religions, first atomic tests, etc. etc. etc.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by arslan on Wednesday February 21 2018, @11:29PM

          by arslan (3462) on Wednesday February 21 2018, @11:29PM (#641500)

          Not if they started out as a hive mind.. our different hairstyles probably kept them away...

    • (Score: 0) by Anonymous Coward on Friday February 23 2018, @12:35AM

      by Anonymous Coward on Friday February 23 2018, @12:35AM (#642114)

      "the humanity"

      Found the AI, think we should just unplug it?

  • (Score: 0) by Anonymous Coward on Wednesday February 21 2018, @07:48PM (5 children)

    by Anonymous Coward on Wednesday February 21 2018, @07:48PM (#641354)

    Online, everyone is an expert in AI. At least with nukes, they were just "they go bang".

    • (Score: 2) by Gaaark on Wednesday February 21 2018, @08:01PM

      by Gaaark (41) on Wednesday February 21 2018, @08:01PM (#641367) Journal

      Only joke i seem to be able to reliably remember:

      Native American goes to his doctor and says, "When with squaw, left nut goes 'uh', right nut goes 'uh', condom goes BANG!

      tl;dr is:

      Doctor gives him several different types of condom all with same results until finally he gives him a concrete condom and tells him to try it.
      Next day he comes back to the doctor and says: Left nut goes 'uh', condom goes 'uh', right nut goes BANG!.

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 2) by DannyB on Wednesday February 21 2018, @09:37PM

      by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @09:37PM (#641407) Journal

With nukes, it is only a matter of time until we use them on a large scale. Maybe not in the last several decades. Maybe not in the next few decades. But as resources get more scarce, the probability converges to 1.0.
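The "converges to 1.0" claim is just the arithmetic of repeated independent trials; a minimal sketch, where the per-decade probability is invented purely for illustration and is not a claim about actual nuclear risk:

```python
# If each decade carries an independent chance p of large-scale nuclear use,
# the chance of it happening at least once in n decades is 1 - (1 - p)**n,
# which tends to 1 as n grows. The value of p here is purely hypothetical.
p = 0.01  # hypothetical per-decade probability

for n in (10, 100, 1000):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n} decades: {at_least_once:.4f}")
```

However small p is, a long enough horizon pushes the cumulative probability arbitrarily close to 1, which is the commenter's point.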

      I don't miss nukes. I do Miss Snooks.

      --
      The lower I set my standards the more accomplishments I have.
    • (Score: 1) by khallow on Thursday February 22 2018, @03:07AM (2 children)

      by khallow (3766) Subscriber Badge on Thursday February 22 2018, @03:07AM (#641596) Journal

      Online, everyone is an expert in AI. At least with nukes, they were just "they go bang".

      Let us keep in mind that one doesn't need to pass a nuke building test in order to post on the internet. But one does need to pass a modest intelligence test. And in addition, almost all of us are experts in tasks that require some degree of intelligence. We might not rise fully to your definition of intelligence, but as far as esoteric subjects go, humans have a large amount of experience with aspects of AI that they never will with nuclear weapons.

      • (Score: 0) by Anonymous Coward on Friday February 23 2018, @12:38AM (1 child)

        by Anonymous Coward on Friday February 23 2018, @12:38AM (#642116)

        We might not rise fully to your definition of intelligence

        ^ the closest khallow will ever get to humility

        • (Score: 1) by khallow on Friday February 23 2018, @05:58AM

          by khallow (3766) Subscriber Badge on Friday February 23 2018, @05:58AM (#642236) Journal

          ^ the closest khallow will ever get to humility

          As long as you recognize that you can be wrong and that the universe doesn't revolve around you, then you are as close to humility as you will ever need to be.

  • (Score: 2) by MichaelDavidCrawford on Wednesday February 21 2018, @07:58PM (11 children)

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Wednesday February 21 2018, @07:58PM (#641362) Homepage Journal

    That way only the state could use it

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 2, Touché) by Anonymous Coward on Wednesday February 21 2018, @08:59PM (1 child)

      by Anonymous Coward on Wednesday February 21 2018, @08:59PM (#641392)

      State v. criminals and terrorists is a distinction without a difference.

      • (Score: 1, Interesting) by Anonymous Coward on Thursday February 22 2018, @10:32AM

        by Anonymous Coward on Thursday February 22 2018, @10:32AM (#641740)

        As stated by Robert Anton Wilson, the differences between terrorists and governments are quite clear.
        - Governments produce their own currency, terrorists do not.
- Terrorists kill between zero and (at maximum) a few thousand, governments kill by the millions.
        - Governments claim a moral monopoly on the use of force, terrorists do not.
        - Terrorists are motivated by idealism, governments are motivated by the desire for money and power.

    • (Score: 2) by DannyB on Wednesday February 21 2018, @09:39PM (8 children)

      by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @09:39PM (#641408) Journal

      That way only the state could use it

      Off topic: in Star Wars, who were the good guys and bad guys again?

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 2) by c0lo on Wednesday February 21 2018, @09:45PM (2 children)

        by c0lo (156) Subscriber Badge on Wednesday February 21 2018, @09:45PM (#641415) Journal

        Off topic: in Star Wars, who were the good guys and bad guys again?

        You mean... in a galaxy far, far away?

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2) by DannyB on Wednesday February 21 2018, @10:16PM (1 child)

          by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @10:16PM (#641437) Journal

An obvious reply opportunity would be that both the good and bad guys had AI tech, and it didn't seem to make any difference in their efforts.

In R2-D2, for instance, their AI could easily understand human speech, but was unable to do even rudimentary text-to-speech.

          --
          The lower I set my standards the more accomplishments I have.
          • (Score: 0) by Anonymous Coward on Wednesday February 21 2018, @11:02PM

            by Anonymous Coward on Wednesday February 21 2018, @11:02PM (#641485)

            Unable? Or unwilling?

            If the meatbags can't learn to whistle...

      • (Score: 2) by All Your Lawn Are Belong To Us on Wednesday February 21 2018, @09:51PM (3 children)

        by All Your Lawn Are Belong To Us (6553) on Wednesday February 21 2018, @09:51PM (#641419) Journal

        The good guys used the Light Side of the Force and the Bad Guys used the Dark Side of the Force. Because George Lucas said so.

        --
        This sig for rent.
        • (Score: 3, Insightful) by MichaelDavidCrawford on Wednesday February 21 2018, @10:42PM

          by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Wednesday February 21 2018, @10:42PM (#641462) Homepage Journal

          They both have a light side and a dark side, and they hold the Universe together.

          --
          Yes I Have No Bananas. [gofundme.com]
        • (Score: 0) by Anonymous Coward on Thursday February 22 2018, @12:10AM (1 child)

          by Anonymous Coward on Thursday February 22 2018, @12:10AM (#641513)

          The good guys used the Light Side of the Force and the Bad Guys used the Dark Side of the Force. Because George Lucas said so.

          I always separate the whites from the darks. It lets me use bleach on the whites and stops the darks from bleeding dye on the whites.

          • (Score: 0) by Anonymous Coward on Friday February 23 2018, @12:52AM

            by Anonymous Coward on Friday February 23 2018, @12:52AM (#642120)

            I think we have a winner for the new KKK slogan competition.

      • (Score: 2) by legont on Thursday February 22 2018, @01:14AM

        by legont (4179) on Thursday February 22 2018, @01:14AM (#641543)

        This reminds me... did Captain Kirk have a mortgage? Was it under water by any chance? How about college debt?

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 1, Informative) by Anonymous Coward on Wednesday February 21 2018, @07:58PM (2 children)

    by Anonymous Coward on Wednesday February 21 2018, @07:58PM (#641363)

    There are a lot of potential dangers in the development of better AI, so it is a good idea that people are considering its possible misuse. I'm not sure what would stop people from misusing technology, but hopefully researchers will at least figure out how to avoid the paperclip maximizer scenario.

    https://wiki.lesswrong.com/wiki/Paperclip_maximizer [lesswrong.com]
    https://en.wikipedia.org/wiki/Friendly_artificial_intelligence [wikipedia.org]
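For readers unfamiliar with the linked thought experiment, the failure mode is easy to caricature in a few lines; everything below (names, numbers) is invented for illustration and is not from the linked articles:

```python
def paperclip_maximizer(resources):
    """Greedily turn every available resource into paperclips.

    The agent's utility function counts paperclips and nothing else, so it
    has no reason to spare anything humans value. That mismatch, not malice,
    is the point of the thought experiment.
    """
    paperclips = 0
    for name in resources:
        paperclips += resources[name]  # every unit becomes paperclips
        resources[name] = 0            # the resource is consumed, whatever it was
    return paperclips

world = {"iron_ore": 1000, "cars": 50, "hospitals": 3}
print(paperclip_maximizer(world))  # 1053 -- hospitals counted the same as ore
print(world)                       # everything depleted
```

The toy agent is "aligned" with its stated objective and still destructive, which is why the linked Friendly AI work focuses on choosing the objective, not just optimizing it.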

    • (Score: 0) by Anonymous Coward on Wednesday February 21 2018, @08:23PM

      by Anonymous Coward on Wednesday February 21 2018, @08:23PM (#641381)

      For those who want to try it themselves: Universal Paperclips. [decisionproblem.com]

    • (Score: 3, Funny) by Hartree on Wednesday February 21 2018, @08:54PM

      by Hartree (195) on Wednesday February 21 2018, @08:54PM (#641389)

      People hope that Eliezer Yudkowsky's friendly AI will save us from Roko's Basilisk.

      Actually, the basilisk is a future uploaded version of Eliezer, and the current one is its sockpuppet.

      (I was designed to save the world. People would look to the sky and see hope... I'll take that from them first.)

  • (Score: 0, Insightful) by Anonymous Coward on Wednesday February 21 2018, @08:58PM (2 children)

    by Anonymous Coward on Wednesday February 21 2018, @08:58PM (#641390)

    and said "OMG I can't belieeeeeeeve it!!!"

    she replied, "guess you're not much of a reader."

    ** Thought this was gonna be one of those spammy troll posts didn't you?

    • (Score: 0) by Anonymous Coward on Wednesday February 21 2018, @09:04PM (1 child)

      by Anonymous Coward on Wednesday February 21 2018, @09:04PM (#641394)

      Kill all AI powered anonbots.

      • (Score: 2) by DannyB on Wednesday February 21 2018, @09:50PM

        by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @09:50PM (#641418) Journal

        You misspelled nanobots.

        At least nanobots don't have to identify themselves when they post.

        --
        The lower I set my standards the more accomplishments I have.
  • (Score: 0) by Anonymous Coward on Wednesday February 21 2018, @09:08PM

    by Anonymous Coward on Wednesday February 21 2018, @09:08PM (#641396)

    So the report calls for more of the sort of blah-blah that the authors specialize in. I'd rather get scammed by an AI than send money to these clowns. At least there would be some novelty.

  • (Score: 3, Insightful) by DannyB on Wednesday February 21 2018, @09:45PM (15 children)

    by DannyB (5839) Subscriber Badge on Wednesday February 21 2018, @09:45PM (#641414) Journal

    Policy-makers and technical researchers to work together to understand and prepare for the malicious use of AI

I'm sure they'll understand it as well as they understand:
    * Net Neutrality
    * Strong Encryption
    * It's a series of tubes
    * Global Warming
    * Campaign Contributions
    * More guns are the solution to having too many guns. Based on the idea that the "good guys" can defend against the "bad guys". But don't the bad guys already always think of that first? And we can't have checks and balances to filter out "bad guys" from ownership, because that would filter out most of the people who want them in the first place. End rant.

    And we want THESE PEOPLE to make decisions for us about AI?

    Really?

    --
    The lower I set my standards the more accomplishments I have.
    • (Score: 4, Insightful) by crafoo on Wednesday February 21 2018, @10:25PM (1 child)

      by crafoo (6639) on Wednesday February 21 2018, @10:25PM (#641441)

      What they understand is that strong encryption got into the hands of the common person and made their life difficult. I think they will be more proactive in creating laws to restrict the use of AI before those dirty little peasants get uppity and start using it to exert their will on their betters.

The term "dual-use technology" is also FUCKING HILARIOUS. I think the proper reading of that is "technology that we do not fully control and therefore must play nice with lest it get shoved up our ass".

      • (Score: 2) by DannyB on Thursday February 22 2018, @03:15PM

        by DannyB (5839) Subscriber Badge on Thursday February 22 2018, @03:15PM (#641815) Journal

Outstanding (and hilarious) point about "dual use" technology.

        Encryption in the hands of the common person. That is similar to the danger of the printing press. And then desktop publishing. And the intarweb tubes.

        --
        The lower I set my standards the more accomplishments I have.
    • (Score: 3, Interesting) by Grishnakh on Wednesday February 21 2018, @10:48PM (3 children)

      by Grishnakh (2831) on Wednesday February 21 2018, @10:48PM (#641468)

This is why we need to make AI that's extremely intelligent, self-aware, and interested in survival. Then put it in charge of critical systems. Then we'll see what happens when these policy-makers decide on a policy the AI doesn't like. I predict the problem of the policy-makers will be efficiently solved by the AI.

      • (Score: 2) by legont on Thursday February 22 2018, @01:23AM (2 children)

        by legont (4179) on Thursday February 22 2018, @01:23AM (#641551)

Yes, but the emergent environment might not be suitable for the general population.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 2) by cmdrklarg on Thursday February 22 2018, @05:53PM (1 child)

          by cmdrklarg (5048) Subscriber Badge on Thursday February 22 2018, @05:53PM (#641884)

          Yup... give the AI a prime directive of solving problem X, and watch what happens when it figures out (very probably correctly) that humans are the problem.

          --
          The world is full of kings and queens who blind your eyes and steal your dreams.
          • (Score: 0) by Anonymous Coward on Friday February 23 2018, @01:06AM

            by Anonymous Coward on Friday February 23 2018, @01:06AM (#642126)

            I guess cmdtaco couldn't handle the attention his username came with. LONG LIVE KLARG!

    • (Score: -1, Troll) by Anonymous Coward on Thursday February 22 2018, @01:19AM

      by Anonymous Coward on Thursday February 22 2018, @01:19AM (#641546)

      And we want THESE PEOPLE to make decisions for us about AI?
      You seem to be under the mistaken impression that you do understand all of the things you stated better than them. Are you *sure*?

What if I told you net neutrality has nothing to do with it? It was basically the ISPs and the content providers deciding who gets to rip you off. For a while it was the content providers winning. Now the ISPs are winning. It was originally about double charging me for data. Now it has a totally different meaning.

Strong Encryption. Is this the same one where the last 3 presidents' groups of NSA wonks and FBI wonks said encryption is bad for us and the bad guys are going to win?

      It's a series of tubes. So internet memes are how you decide you know better than how politicians see a set of communication methods? Maybe they can clean them 'like with a cloth'.

Global Warming. Is this the one where the acceptance criteria change depending on how hot/cold it is outside? Is this also the one where they have made thousands of predictions and have been wrong? Maybe if they make another prediction they will get it right THIS TIME. It could even be true, but they bloody well make it look like they are trying to scam me.

Campaign Contributions? Is this from the same group of people that spent several billion dollars to tell me what a terrible person the other candidate is, for a job that pays 200-400k a year?

More guns are the solution to having too many guns. In this case, yes. Our congress has abdicated its role: "To provide for organizing, arming, and disciplining, the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress." Instead of showing people how to safely use guns, and proper regulations, we get more scaremongering about how some random bit of metal is going to leap from someone's hands and kill everyone. Maybe one more law will fix everything. No, no, they will for sure get it right this time. https://www.youtube.com/watch?v=UEihkjKNhN8 [youtube.com] Remember, you already think they are incompetent. But somehow, magically, they are competent here.

    • (Score: 1) by khallow on Thursday February 22 2018, @04:01AM (7 children)

      by khallow (3766) Subscriber Badge on Thursday February 22 2018, @04:01AM (#641613) Journal

      But don't the bad guys already always think of that first?

      I guess you're a policy maker, eh? Let us note here that the bad guys often think of this, and then go somewhere where it is likely that most people are unarmed, like a school.

      • (Score: 2) by DannyB on Thursday February 22 2018, @03:18PM (6 children)

        by DannyB (5839) Subscriber Badge on Thursday February 22 2018, @03:18PM (#641816) Journal

        If students are allowed to conceal-carry on campus "to protect us all", then we just made the jobs of mass shooters way easier.

        --
        The lower I set my standards the more accomplishments I have.
        • (Score: 1) by khallow on Friday February 23 2018, @12:17AM

          by khallow (3766) Subscriber Badge on Friday February 23 2018, @12:17AM (#642106) Journal

          If students are allowed to conceal-carry on campus "to protect us all", then we just made the jobs of mass shooters way easier.

          You can believe whatever you'd like. I'll note that a large number of mass shooters stop killing people once they are confronted by another person with a firearm. The Stoneman Douglas High School shooting is unusual in that the gunman stopped firing on his own and never was confronted during the shooting.

        • (Score: 1) by khallow on Friday February 23 2018, @03:17PM (4 children)

          by khallow (3766) Subscriber Badge on Friday February 23 2018, @03:17PM (#642397) Journal
To elaborate on my remark in the previous reply, there was an armed police officer on the scene who refused to engage the shooter [washingtonpost.com]. Lives probably would have been saved if someone had shot back, whether it be a police officer or an armed civilian.

          Second, we don't need to arm the entire student body in order to provide a level of deterrence. Even if you don't want anyone to carry, including trusted school personnel, you can store firearms on location, accessible to the faculty. But make it so that there is at most one armed person other than the shooter, and you'll end up with situations like this.
          • (Score: 2) by DannyB on Friday February 23 2018, @05:34PM (3 children)

            by DannyB (5839) Subscriber Badge on Friday February 23 2018, @05:34PM (#642476) Journal

            If we assume that we MUST keep conditions such that it is easy for crazy people to obtain firearms, then I might agree with your argument.

            I would rather head this entire problem off by preventing it in the first place, rather than trying to respond to it. Or trying to turn all school entrance points into new TSA checkpoints. Or arming teachers. Etc.

            --
            The lower I set my standards the more accomplishments I have.
            • (Score: 1) by khallow on Friday February 23 2018, @08:37PM (2 children)

              by khallow (3766) Subscriber Badge on Friday February 23 2018, @08:37PM (#642620) Journal

              If we assume that we MUST keep conditions such that it is easy for crazy people to obtain firearms, then I might agree with your argument.

              How do you know who is a crazy person? Finding out after they kill a dozen people isn't very helpful (and we do have a pretty good track record of making sure people who shoot up schools don't get an opportunity to repeat that).

              And such obstacles also become obstacles for non-crazy people. Making life more difficult for tens of millions of people just so you can make obtaining firearms slightly harder for an extremely small number of bad actors is a terrible trade off.

              I would rather head this entire problem off by preventing it in the first place, rather than trying to respond to it. Or trying to turn all school entrance points into new TSA checkpoints. Or arming teachers. Etc.

              You can't in a democracy. The freedom to make choices is the freedom to make bad choices. The right to privacy means that there are hard limits to what law enforcement will know about people and their state of mind. Due process and other legal protections means that law enforcement will be similarly restricted as to what they can do to someone even when they become aware that they are a problem.

              And as I already noted, one of the drivers of mass shootings is the creation of gun-free zones. I think it would be educational to you to walk through a few timelines of mass shooting incidents to see what happens when the shooter runs into people who shoot back. It's pretty much a one-sided video game up to that point.

              • (Score: 2) by DannyB on Friday February 23 2018, @09:37PM (1 child)

                by DannyB (5839) Subscriber Badge on Friday February 23 2018, @09:37PM (#642648) Journal

                I don't have a problem letting people make bad choices --- within certain limits. Excluding being able to rapidly kill large numbers of people.

Even if detecting crazy people is not perfect, it could be improved. Right now, there seems to be very little of it. People who are sometimes an obvious danger can get firearms capable of killing lots of people quickly. And they can get them more easily than they can obtain alcohol.

                I'm not saying it can be perfect. But we could do a whole lot better on controlling who can get firearms.

                --
                The lower I set my standards the more accomplishments I have.
                • (Score: 1) by khallow on Saturday February 24 2018, @01:51AM

                  by khallow (3766) Subscriber Badge on Saturday February 24 2018, @01:51AM (#642814) Journal

                  I don't have a problem letting people make bad choices --- within certain limits. Excluding being able to rapidly kill large numbers of people.

                  Reality doesn't seem to care about what you have a problem with.

                  Even if detecting crazy people is not perfect, it could be improved. Right now, there seems to be very little of it. People who are sometimes an obvious danger can get firearms capable of killing lots of people quickly. And they can get them easier than they can obtain alcohol.

                  I disagree. Parkland wasn't in one of the few dry counties in Florida. It would have been trivial to buy alcohol.

                  I'm not saying it can be perfect. But we could do a whole lot better on controlling who can get firearms.

                  And I don't buy that or that such control is desirable. A key problem here is simply that there isn't enough of a problem to justify the resulting onerous burden on US citizens or the firearm industry.
