
posted by janrinok on Tuesday August 02 2016, @07:53PM

A new social network, Candid, will use machine learning to try to moderate posts:

We use a deep learning NLP (Natural Language Processing) algorithm, which basically looks at what you're saying and decides ... whether it's positive or negative. So it kind of classifies things as having a negative sentiment or a positive sentiment. It then gives it a score of how kind of strong your statement is — let's say you said something about someone or you threatened someone, it classifies that as saying, "Hey this is a very strong statement," because these kinds of categories are not good in terms of social discourse. And when we do that, we basically say if this thing has a score which is more than a particular level, a cut-off, then we basically take out the whole post. So whether it's self harm or like bullying or harassment, we look for certain phrases and the context of those phrases.

On the line between moderation and censorship

I mean, here is the thing between what is "loud free speech," quote-unquote, right? At some level you should be able to say what you want to say, but on another level, you also want to facilitate, you know, what I would say constructive social discussion. ... There is a kind of a trade-off or a fine line that you need to walk, because if you let everything in, you know the fear is that social discussion stops and it just becomes a name-calling game. And that's what happens if you just leave — like certain discussions, just let them be, don't pull things down — you will see they quickly devolve into people calling each other names and not having any kind of constructive conversations.
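
In outline, the process described in the first quote reduces to a score-and-cutoff rule. Here is a minimal sketch in Python; the classifier interface, the score scale, and the threshold value are all assumptions, since Candid has not published its implementation:

    CUTOFF = 0.8  # assumed value on an assumed 0-to-1 scale; the interview gives no number

    def moderate(post_text, classify):
        # classify() is a hypothetical stand-in for the deep-learning NLP
        # model; assume it returns (label, strength), with label either
        # "positive" or "negative" and strength in [0, 1].
        label, strength = classify(post_text)
        # "if this thing has a score which is more than a particular
        # level, a cut-off, then we basically take out the whole post"
        if label == "negative" and strength > CUTOFF:
            return None        # the whole post is removed
        return post_text       # otherwise it is published unchanged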

They've succeeded in getting some free press, if nothing else.


Original Submission

 
  • (Score: 4, Insightful) by anotherblackhat on Tuesday August 02 2016, @08:11PM

    by anotherblackhat (4722) on Tuesday August 02 2016, @08:11PM (#383299)

    If "troll" was a view point, or a method of speaking, you might have a chance.

    But "trolling" is comments from directed intelligence trying to derail a conversation.

    AI just isn't up to the task of detecting it.
    Even natural intelligence isn't always that good at detecting it.

  • (Score: 2) by Scruffy Beard 2 on Tuesday August 02 2016, @08:18PM

    by Scruffy Beard 2 (6030) on Tuesday August 02 2016, @08:18PM (#383303)

    The system analysing the data may have access to signals that humans can't easily see, such as sign-up date, IP address, and historic post frequency.

    By the time a human notices those things, you have at least 5 socks in the thread derailing the conversation.

    (No, I did not read TFA)
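
    Something like what the parent describes could be a simple heuristic over account metadata. A sketch; every field name and threshold below is an assumption for illustration, not any real system:

        from datetime import timedelta

        def looks_like_sock(account, thread_accounts, now):
            # Hypothetical account fields: created (datetime), ip (str),
            # post_times (list of datetime) -- all invented here.
            new_account = (now - account.created) < timedelta(days=2)
            # several participants in one thread sharing an IP is suspicious
            shared_ip = sum(1 for a in thread_accounts if a.ip == account.ip) > 1
            # a sudden burst of posts from an otherwise quiet account
            recent = [t for t in account.post_times if now - t < timedelta(hours=1)]
            bursty = len(recent) > 5 and len(account.post_times) < 20
            return (new_account and shared_ip) or bursty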

  • (Score: 1) by WillR on Tuesday August 02 2016, @08:23PM

    by WillR (2012) on Tuesday August 02 2016, @08:23PM (#383309)
    Reading intent is hard, but AI is getting really good at recognizing people in photos. A 90% accurate "if a photo contains a political figure, and it has text overlaid on it, then shitcan it" filter is certainly feasible, and it would make Facebook a lot more usable during election years.
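
    The rule WillR sketches is just a conjunction of two classifier outputs. In Python, with both predicates as hypothetical stand-ins for real image-recognition and OCR services:

        def shitcan(photo, contains_political_figure, has_overlaid_text):
            # Both predicates are hypothetical classifiers; each carries its
            # own error rate, which is where a ~90% overall accuracy
            # estimate would come from.
            return contains_political_figure(photo) and has_overlaid_text(photo)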
    • (Score: 2, Insightful) by Anonymous Coward on Wednesday August 03 2016, @12:08AM

      by Anonymous Coward on Wednesday August 03 2016, @12:08AM (#383409)

      Facebook is never usable, because it is an unethical, abusive business [stallman.org]. The 'users' of Facebook are actually the ones being used.

  • (Score: 3, Insightful) by julian on Tuesday August 02 2016, @08:32PM

    by julian (6003) Subscriber Badge on Tuesday August 02 2016, @08:32PM (#383316)

    "Machines will never be able to do x" has never been a safe bet, and "machines will never be able to do x better than a human" isn't much better especially on long timelines.

    When it comes to purely intellectual work like this, to believe humans will always be better than AI means you believe at least one of two fantastical claims:

    1. that human technological progress will cease; not just slow down, but stop entirely
    2. that whatever goes on between our ears is somehow "magical" and cannot, fundamentally, be described by natural laws

    The first one is only barely salvageable because you could argue humans will go extinct before birthing truly robust AI. The second one is nonsense to any good methodological naturalist--which I hope we all are here.

    • (Score: 3, Insightful) by vux984 on Tuesday August 02 2016, @09:36PM

      by vux984 (5045) on Tuesday August 02 2016, @09:36PM (#383356)

      To believe humans will always be better than AI means you believe at least one of two fantastical claims:

      That is a false dilemma.

      There are other options. One could believe, for example, that the sort of AI that could be 'better than a human at a certain X' would require an AI so robust that it would be self-aware, a person in its own right; although artificial, it would no longer be a 'machine', and we could not ethically enslave it to censor social media for us.

      1. that human technological progress will cease; not just slow down, but stop entirely

      That's kind of inevitable no matter how you think about it, even without invoking the extinction of mankind. The universe, if it is finite, must, by definition, be constrained in how much information it contains. And thermodynamics (entropy) isn't doing us any favors either. But sure, that's kicking the can pretty far down the road either way.

      2. that whatever goes on between our ears is somehow "magical" and cannot, fundamentally, be described by natural laws

      We don't know that much about the brain, maybe it uses quantum effects in some way. And if so, perhaps there is a limit that can't be crossed with a purely procedural machine simulation. That's not to invoke magic, because perhaps we can build a suitable 'quantum computer'. Or whatever. Or perhaps the theory is wrong. I'm not advocating it, per se. The point is there is room for stuff between 'something magical going on between our ears' and 'what a computer can do'.

      • (Score: 4, Insightful) by Thexalon on Wednesday August 03 2016, @02:10AM

        by Thexalon (636) on Wednesday August 03 2016, @02:10AM (#383457)

        One issue here is the difference between science and scientism.

        Science is the process of figuring out how the universe works by repeatedly observing it, theorizing descriptions and predictions based on those observations, and then testing those theories by experiment to see whether they're right. This is undoubtedly a useful activity that can lead us to a better understanding of the universe and allow us to create useful technology that takes advantage of that understanding.

        Scientism is a completely different beast. It is the belief that the process of science can, given sufficient time and resources, provide a complete understanding of the universe. This belief is, in fact, disproven in several fields: computer science has demonstrably unsolvable problems, physics has questions about the quantum states of electrons that have recently been proven unanswerable due to mathematical paradox, and even mathematics demonstrably contains true statements that are unprovable from any given set of axioms. And to be absolutely clear, we aren't talking about problems we haven't yet figured out how to solve; we're talking about problems where any possible solution disproves its own existence.

        Scientism is very popular among those who have rejected religious nonsense and now wish to believe that they are completely rational atheists who will, through science, finally disprove the religious nutjobs. It turns out that this belief is as irrational as belief in "woo" (as James Randi would call it). Which makes sense, because if psychology has taught us nothing else, it's that we're all thoroughly irrational even when we believe ourselves to be acting rationally.

        So what this all boils down to is that the chain of logic from (A) "science works" to (B) "science can tell us exactly how people think" to (C) "if we know how people think, computers can be programmed to mimic that" is far from proven. Science might indeed tell us how people think, and there are a lot of biologists and psychologists trying to do just that. And computer scientists might well be able to mimic aspects of human thought in software, depending on what that thinking actually is. But neither of those steps is guaranteed.
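
        For what it's worth, the canonical "demonstrably unsolvable" problem in computer science is the halting problem. A sketch of the diagonal argument in Python; halts() here is the hypothetical oracle being disproven, not a real function:

            def halts(program, data):
                # Hypothetical perfect oracle, assumed only for the sake of
                # contradiction; the argument below shows none can exist.
                raise NotImplementedError

            def paradox(program):
                if halts(program, program):
                    while True:      # oracle says "halts"? then loop forever
                        pass
                # oracle says "loops forever"? then halt immediately

            # Feeding paradox to itself: halts(paradox, paradox) would have
            # to be True exactly when it is False, so no such oracle exists.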

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by julian on Friday August 05 2016, @04:37PM

        by julian (6003) Subscriber Badge on Friday August 05 2016, @04:37PM (#384524)

        There are other options. One could believe, for example, that the sort of AI that could be 'better than a human at a certain X' would require an AI so robust that it would be self-aware, a person in its own right; although artificial, it would no longer be a 'machine', and we could not ethically enslave it to censor social media for us.

        That's an interesting ethical consideration that will probably have to be tackled one way or the other, but it's a separate issue.

        maybe it uses quantum effects in some way

        Quantum effects would still be described by natural laws. Even if, at bottom, there is some randomness fed into the system, you can account for that randomness in a rigorous way that can be described statistically.

        But the brain probably doesn't use quantum effects [uwaterloo.ca] in a way meaningful for describing consciousness. (warning, PDF)

        • (Score: 2) by vux984 on Friday August 05 2016, @08:15PM

          by vux984 (5045) on Friday August 05 2016, @08:15PM (#384610)

          but it's a separate issue.

          Not completely separate. I guess it depends a bit on where you come down on the semantics of 'computer'. But if you think of a computer as simply a machine that executes an algorithm, then, if something needs to achieve consciousness to do X better than a human, that something may no longer 'be' merely a computer.

          Quantum effects would still be described by natural laws. Even if, at bottom, there is some randomness fed into the system, you can account for that randomness in a rigorous way that can be described statistically.

          I cannot factor large numbers any faster by using a conventional computer to simulate a quantum one. Likewise, simulating a quantum-effect-based consciousness may not be practical with a conventional computer, even if it's theoretically possible with an infinitely fast computer with an infinite amount of RAM. :)

          But the brain probably doesn't use quantum effects in a way meaningful for describing consciousness.

          Perhaps not. I was just using it to illustrate there is room to rationally believe that you can't simulate the brain with a conventional computer without invoking a "magical brain".

          We don't know much about consciousness. It may not simply be the product of computation; it may arise from the structure of the brain itself in ways that cannot be 'simulated'. (In the same way, simulating an internal combustion engine in a computer yields no useful mechanical power, no kinetic energy we can use to power a car or turn a crank. It just yields a calculation of how much power would be generated, without actually generating it.) Perhaps likewise, it's possible that even perfectly simulating a brain in a conventional computer only yields information about energy potentials, neuron states, and so on, without generating actual self-aware consciousness in the process. Perhaps we need to 'build an artificial brain' that is structurally capable of hosting consciousness, and such an artifact may be fundamentally different from a conventional computer in important ways, where simply throwing more cycles, more RAM, and 'better programs' just can't get you there.

  • (Score: 2) by takyon on Tuesday August 02 2016, @08:34PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 02 2016, @08:34PM (#383319) Journal

    if (user == "Anonymous Coward") { ban(); }

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Wednesday August 03 2016, @08:37AM

      by Anonymous Coward on Wednesday August 03 2016, @08:37AM (#383539)

      How to build a thriving community 101 right? =P

  • (Score: 5, Insightful) by frojack on Tuesday August 02 2016, @10:05PM

    by frojack (1554) on Tuesday August 02 2016, @10:05PM (#383365) Journal

    They aren't looking to control trolls.

    They are looking to control viewpoints and to limit access to members of the echo chamber.
    The quoted last paragraph clearly says as much.

    In fact, the last paragraph is a perfectly nuanced example of saying STFU and GTFO while making it sound like they are doing everybody a favor.

    You have to wonder if that paragraph could get past their AI filter. I suspect not.

    --
    No, you are mistaken. I've always had this sig.
  • (Score: 5, Insightful) by aristarchus on Tuesday August 02 2016, @10:25PM

    by aristarchus (2645) on Tuesday August 02 2016, @10:25PM (#383377) Journal

    But "trolling" is comments from directed intelligence trying to derail a conversation.

    I prefer to think of trolling as comments intended to correct a conversation, to ridicule dogmatic attempts not to think, and to provoke deeper insight.

    AI just isn't up to the task of detecting it.
    Even natural intelligence isn't always that good at detecting it.

    If it can be detected, it is not trolling. Amateurs!! And trolling AIs is impossible, since they do not think, at least not in a way that can be provoked.

    • (Score: 0) by Anonymous Coward on Wednesday August 03 2016, @01:17AM

      by Anonymous Coward on Wednesday August 03 2016, @01:17AM (#383442)

      Tay bot was never trolled.

  • (Score: 2) by FatPhil on Tuesday August 02 2016, @10:56PM

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday August 02 2016, @10:56PM (#383389) Homepage
    Agreed. It works *because* natural intelligence can't detect it. It *is* the thing that people don't detect.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves