
posted by janrinok on Tuesday August 02 2016, @07:53PM

A new social network, Candid, will use machine learning to try to moderate posts:

We use a deep learning NLP (Natural Language Processing) algorithm, which basically looks at what you're saying and decides ... whether it's positive or negative. So it kind of classifies things as having a negative sentiment or a positive sentiment. It then gives it a score of how kind of strong your statement is — let's say you said something about someone or you threatened someone, it classifies that as saying, "Hey this is a very strong statement," because these kinds of categories are not good in terms of social discourse. And when we do that, we basically say if this thing has a score which is more than a particular level, a cut-off, then we basically take out the whole post. So whether it's self harm or like bullying or harassment, we look for certain phrases and the context of those phrases.

On the line between moderation and censorship

I mean, here is the thing between what is "loud free speech," quote-unquote, right? At some level you should be able to say what you want to say, but on another level, you also want to facilitate, you know, what I would say constructive social discussion. ... There is a kind of a trade-off or a fine line that you need to walk, because if you let everything in, you know the fear is that social discussion stops and it just becomes a name-calling game. And that's what happens if you just leave — like certain discussions, just let them be, don't pull things down — you will see they quickly devolve into people calling each other names and not having any kind of constructive conversations.

They've succeeded in getting some free press, if nothing else.
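For readers curious what the quoted pipeline amounts to in practice, here is a minimal sketch of the classify-then-threshold idea. Candid has not published its model, so everything below is an assumption: an off-the-shelf sentiment classifier from the Hugging Face transformers library stands in for their deep-learning NLP model, and the cutoff value and phrase list are invented for illustration.

    # Minimal sketch of the "score it, then pull the whole post above a cutoff"
    # moderation described in the quote. Candid's real model is proprietary;
    # a generic pretrained sentiment classifier stands in, and CUTOFF and
    # FLAGGED_PHRASES are made-up values for illustration only.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a default English model

    CUTOFF = 0.98           # hypothetical "strength" threshold
    FLAGGED_PHRASES = [     # hypothetical phrase list; the real one is unknown
        "kill yourself",
        "nobody would miss you",
    ]

    def should_remove(post: str) -> bool:
        """Return True if the post is strongly negative (above the cutoff)
        or contains a flagged phrase, a crude stand-in for 'phrases in context'."""
        result = classifier(post)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
        strongly_negative = result["label"] == "NEGATIVE" and result["score"] >= CUTOFF
        has_flagged_phrase = any(p in post.lower() for p in FLAGGED_PHRASES)
        return strongly_negative or has_flagged_phrase

    if __name__ == "__main__":
        for post in ["Great point, thanks for sharing.",
                     "You are worthless, kill yourself."]:
            print(post, "->", "remove" if should_remove(post) else "keep")

Note the design the quote describes: a single score against a single cutoff decides whether the entire post comes down, regardless of whether the concern is self-harm, bullying, or harassment; a production system would presumably use separate classifiers and thresholds per category.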


Original Submission

  • (Score: 3, Insightful) by The Mighty Buzzard on Wednesday August 03 2016, @01:09AM

    by The Mighty Buzzard (18) <themightybuzzard@proton.me> on Wednesday August 03 2016, @01:09AM (#383436)

    That only works if the people in charge of adjudicating the rules are impartial. This is almost never the case.

    --
    My rights don't end where your fear begins.
  • (Score: 2) by vux984 on Wednesday August 03 2016, @01:18AM

    by vux984 (5045) on Wednesday August 03 2016, @01:18AM (#383443)

    This is almost never the case.

    In my experience, for actual live conversations/debates/etc., they generally are (perhaps not impartial, but sufficiently dedicated to behaving impartially that it doesn't matter). And in informal settings where societal norms are all that are in effect to regulate discussion, again, it seems to work pretty well.

    Web forum moderators, etc., yeah, not so much. And this is why I don't think Candid really has much hope... it appears like it's going to be managed by the same people who would moderate poorly. Automating bad moderation is still bad moderation.