
SoylentNews is people

posted by janrinok on Tuesday August 02 2016, @07:53PM

A new social network, Candid, will use machine learning to try to moderate posts:

We use a deep learning NLP (Natural Language Processing) algorithm, which basically looks at what you're saying and decides ... whether it's positive or negative. So it kind of classifies things as having a negative sentiment or a positive sentiment. It then gives it a score of how kind of strong your statement is — let's say you said something about someone or you threatened someone, it classifies that as saying, "Hey this is a very strong statement," because these kinds of categories are not good in terms of social discourse. And when we do that, we basically say if this thing has a score which is more than a particular level, a cut-off, then we basically take out the whole post. So whether it's self harm or like bullying or harassment, we look for certain phrases and the context of those phrases.
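The scheme described in the quote — score each post for sentiment/severity, then remove the whole post if the score exceeds a cutoff — can be sketched roughly as below. This is a toy illustration only: Candid's real system is a deep-learning NLP classifier, and every name, phrase, and threshold here is hypothetical.

```python
# Toy sketch of threshold-based moderation as described above.
# A keyword heuristic stands in for the (non-public) deep-learning model.
SEVERITY_CUTOFF = 0.8  # hypothetical cutoff

FLAGGED_PHRASES = {    # hypothetical phrase -> severity table
    "threat": 0.9,
    "kill yourself": 1.0,
    "idiot": 0.5,
}

def severity_score(post: str) -> float:
    """Return the strongest severity found among flagged phrases (0 = benign)."""
    text = post.lower()
    return max((s for p, s in FLAGGED_PHRASES.items() if p in text), default=0.0)

def moderate(post: str) -> bool:
    """True if the whole post should be taken down."""
    return severity_score(post) > SEVERITY_CUTOFF

print(moderate("What a lovely day"))     # → False
print(moderate("I will make a threat"))  # → True
```

Note the design point from the quote: the cutoff applies to the post as a whole, so one strong phrase removes the entire post rather than being redacted in place.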

On the line between moderation and censorship

I mean, here is the thing between what is "loud free speech," quote-unquote, right? At some level you should be able to say what you want to say, but on another level, you also want to facilitate, you know, what I would say constructive social discussion. ... There is a kind of a trade-off or a fine line that you need to walk, because if you let everything in, you know the fear is that social discussion stops and it just becomes a name-calling game. And that's what happens if you just leave — like certain discussions, just let them be, don't pull things down — you will see they quickly devolve into people calling each other names and not having any kind of constructive conversations.

They've succeeded in getting some free press, if nothing else.


Original Submission

 
  • (Score: 3, Insightful) by vux984 on Tuesday August 02 2016, @09:36PM

    by vux984 (5045) on Tuesday August 02 2016, @09:36PM (#383356)

    To believe humans will always be better than AI means you believe at least one of two fantastical claims:

    That is a false dilemma.

    There are other options. One could believe, for example, that the sort of AI that could be 'better than a human at certain X' would require such robust AI that it would be self aware, a person in its own right, and although artificial it would no longer be a 'machine'; that we could not ethically enslave it to censoring social media for us.

    1. that human technological progress will cease; not just slow down, but stop entirely

    That's kind of inevitable no matter how you think about it, even without invoking the extinction of mankind. The universe, if it is finite, must by definition be constrained in how much information it can contain. And thermodynamics (entropy) isn't doing us any favors either. But sure, that's kicking the can a very long way down the road either way.

    2. that whatever goes on between our ears is somehow "magical" and cannot, fundamentally, be described by natural laws

    We don't know that much about the brain; maybe it uses quantum effects in some way. If so, perhaps there is a limit that can't be crossed by a purely procedural machine simulation. That's not to invoke magic, because perhaps we could build a suitable 'quantum computer'. Or perhaps the theory is wrong; I'm not advocating it, per se. The point is that there is room for positions between 'something magical going on between our ears' and 'whatever a computer can do'.

  • (Score: 4, Insightful) by Thexalon on Wednesday August 03 2016, @02:10AM

    by Thexalon (636) on Wednesday August 03 2016, @02:10AM (#383457)

    One issue here is the difference between science and scientism.

    Science is the process of figuring out how the universe works by repeated application of observing the universe, theorizing descriptions and predictions based on those observations, and then testing those theories by experiment to see whether they're right. This is undoubtedly a useful activity that can lead us to a better understanding of the universe and allow us to create useful technology that takes advantage of that understanding.

    Scientism is a completely different beast. It is the belief that the process of science can, given sufficient time and resources, provide a complete understanding of the universe. This belief is, in fact, disproven in several fields: computer science has demonstrably unsolvable problems, physics has questions about the quantum states of electrons that have recently been proven unanswerable due to mathematical paradox, and mathematics demonstrably contains true statements that cannot be proven from any given consistent set of axioms. And just to be absolutely clear, we aren't talking about problems we haven't yet figured out how to solve; we're talking about problems where any possible solution would disprove its own existence. Scientism is very popular among those who have rejected religious nonsense and now wish to believe that they are completely rational atheists who will, through science, finally disprove the religious nutjobs. However, it turns out that this belief is as irrational as the belief in "woo" (as James Randi would call it). Which makes sense, because if psychology has taught us nothing else, it's that we're all thoroughly irrational even when we believe ourselves to be acting rationally.
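    The "demonstrably unsolvable problems" in computer science include the halting problem. Turing's diagonal argument can be sketched in a few lines of Python; the two lambdas below are deliberately naive stand-in "deciders" (no real one can exist), so this is an illustration of the argument, not a working decider.

    ```python
    # Sketch of the diagonal argument: no function halts(f) can correctly
    # decide, for every function f, whether f halts.
    def refute(halts):
        """Given any candidate halting decider, build a program it must get wrong."""
        def paradox():
            # Do the opposite of whatever the decider predicts about paradox itself.
            if halts(paradox):
                while True:  # decider said "halts", so loop forever
                    pass
            # decider said "loops forever", so halt immediately

        prediction = halts(paradox)      # what the decider claims
        actually_halts = not prediction  # what paradox does, by construction
        return prediction, actually_halts

    # Any fixed decider is wrong on its own paradox program; try the two constant ones.
    for decider in (lambda f: True, lambda f: False):
        predicted, actual = refute(decider)
        assert predicted != actual  # the decider is always wrong here
    ```

    Whatever answer the decider gives, `paradox` is built to do the opposite, so no decider can be right about every program.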

    So what this all boils down to is that the chain of logic from (A) "science works" to (B) "science can tell us exactly how people think" to (C) "if we know how people think, computers can be programmed to mimic that" is far from proven. Science might indeed tell us how people think, and there are a lot of biologists and psychologists trying to find out. And computer scientists might well be able to mimic aspects of human thought in software, depending on what that thinking actually is. But neither of those steps is guaranteed.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 2) by julian on Friday August 05 2016, @04:37PM

    by julian (6003) Subscriber Badge on Friday August 05 2016, @04:37PM (#384524)

    There are other options. One could believe, for example, that the sort of AI that could be 'better than a human at certain X' would require such robust AI that it would be self aware, a person in its own right, and although artificial it would no longer be a 'machine'; that we could not ethically enslave it to censoring social media for us.

    That's an interesting ethical consideration that will probably have to be tackled one way or the other but it's a separate issue.

    maybe it uses quantum effects in some way

    Quantum effects would still be described by natural laws. Even if, at bottom, there is some randomness fed into the system, you can account for that randomness in a rigorous way that can be described statistically.

    But the brain probably doesn't use quantum effects [uwaterloo.ca] in a way meaningful for describing consciousness. (warning, PDF)

    • (Score: 2) by vux984 on Friday August 05 2016, @08:15PM

      by vux984 (5045) on Friday August 05 2016, @08:15PM (#384610)

      but it's a separate issue.

      Not completely separate. I guess it depends a bit on where you come down on the semantics of 'computer'. But if you think of a computer as simply a machine that executes an algorithm, then if something needs to achieve consciousness to do X better than a human, that something may no longer 'be' merely a computer.

      Quantum effects would still be described by natural laws. Even if, at bottom, there is some randomness fed into the system, you can account for that randomness in a rigorous way that can be described statistically.

      I cannot factor large numbers any faster by using a conventional computer to simulate a quantum one. Likewise, simulating a quantum-effect-based consciousness may not be practical with a conventional computer, even if it's theoretically possible with an infinitely fast computer with an infinite amount of RAM. :)
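      The cost asymmetry behind that point is concrete: a classical state-vector simulation of n qubits must track 2**n complex amplitudes, so memory (and time) grows exponentially. A rough back-of-envelope sketch, assuming 16 bytes per amplitude (double-precision complex):

      ```python
      # Classical simulation of an n-qubit state vector needs 2**n complex amplitudes.
      def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
          """Memory needed to store the full state vector (complex128 amplitudes)."""
          return (2 ** n_qubits) * bytes_per_amplitude

      for n in (10, 30, 50):
          print(f"{n} qubits -> {statevector_bytes(n) / 2**30:,.6g} GiB")
      # 10 qubits fit in 16 KiB, 30 qubits already need 16 GiB,
      # and 50 qubits need 16 PiB -- hence no classical speedup from simulation.
      ```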

      But the brain probably doesn't use quantum effects in a way meaningful for describing consciousness.

      Perhaps not. I was just using it to illustrate there is room to rationally believe that you can't simulate the brain with a conventional computer without invoking a "magical brain".

      We don't know much about consciousness. It may not simply be the product of computation; it may arise from the structure of the brain itself in ways that cannot be 'simulated'. In the same way, simulating an internal combustion engine in a computer yields no useful mechanical power, no kinetic energy we can use to power a car or turn a crank; it just yields a calculation of how much power would be generated, without actually generating it. Perhaps likewise, it's possible that even perfectly simulating a brain in a conventional computer only yields information about energy potentials, neuron states, etc., without generating actual consciousness that is self-aware in the process. Perhaps we need to 'build an artificial brain' that is structurally capable of hosting consciousness, and such an artifact may be fundamentally different from a conventional computer in important ways, where simply throwing more cycles, more RAM, and 'better programs' just can't get you there.