A new social network, Candid, will use machine learning to try to moderate posts:
We use a deep-learning NLP (Natural Language Processing) algorithm, which looks at what you're saying and decides ... whether it's positive or negative. So it classifies things as having a negative sentiment or a positive sentiment. It then gives it a score for how strong your statement is — let's say you said something about someone or you threatened someone, it classifies that as saying, "Hey, this is a very strong statement," because these kinds of categories are not good in terms of social discourse. And when we do that, if this thing has a score which is more than a particular level, a cut-off, then we take out the whole post. So whether it's self-harm or bullying or harassment, we look for certain phrases and the context of those phrases.
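Candid's actual deep-learning model isn't public, but the take-down logic described above — score a post for severity, then remove the whole post if it crosses a cutoff — can be sketched with a toy scorer. The keyword table, weights, and cutoff below are all invented for illustration; a real system would use a trained classifier, not a word list:

```python
# Toy sketch of threshold-based moderation. A hand-made keyword-weight
# table stands in for Candid's NLP classifier; every weight and the
# cutoff value here are invented for illustration.

SEVERITY_WEIGHTS = {
    "threat": 0.9,
    "kill": 0.8,
    "idiot": 0.5,
    "stupid": 0.4,
}

CUTOFF = 0.7  # posts scoring above this level are removed entirely


def severity(post: str) -> float:
    """Score a post by its strongest flagged word (0.0 = benign)."""
    words = post.lower().split()
    return max((SEVERITY_WEIGHTS.get(w, 0.0) for w in words), default=0.0)


def moderate(post: str) -> bool:
    """Return True if the whole post should be taken down."""
    return severity(post) > CUTOFF


print(moderate("I will kill you"))         # True: strong flagged word
print(moderate("that was a stupid idea"))  # False: below the cutoff
```

Note that, like the system described in the interview, the decision is all-or-nothing: one phrase over the cutoff takes out the entire post rather than redacting the offending words.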
On the line between moderation and censorship
I mean, here is the thing between what is "loud free speech," quote-unquote, right? At some level you should be able to say what you want to say, but on another level, you also want to facilitate what I would call constructive social discussion. ... There is a trade-off, a fine line that you need to walk, because if you let everything in, the fear is that social discussion stops and it just becomes a name-calling game. And that's what happens if you just leave certain discussions be and don't pull things down — you will see they quickly devolve into people calling each other names and not having any kind of constructive conversation.
They've succeeded in getting some free press, if nothing else.
(Score: 2) by archfeld on Tuesday August 02 2016, @09:17PM
When an 'AI' writes a new joke that is funny I will really consider it, until then it is just a better program, IMHO of course. When an 'AI' decides that life is not worth living and erases itself I will consider it alive as well, or ironically will consider it as having been alive. The rational aspects of intelligence can be mimicked. It is the irrational, so-called intuitive parts that I suspect will actually define 'AI'. I've yet to see or read about an 'AI' having a bad hair day, or getting jealous of another computing device for the color of its shell. These are all just my opinions of course, and I'll be the first to admit I am not a qualified roboticist or even a psychologist, so I'm not a true judge of the situation.
For the NSA : Explosives, guns, assassination, conspiracy, primers, detonators, initiators, main charge, nuclear charge
(Score: 3, Touché) by julian on Tuesday August 02 2016, @09:24PM
When an 'AI' writes a new joke that is funny I will really consider it
Well, humor is subjective, but...
Q: What do you get when you cross an optic with a mental object?
A: An eye-dea. [vice.com]
(Score: 2) by FatPhil on Tuesday August 02 2016, @11:14PM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 3, Funny) by bob_super on Tuesday August 02 2016, @09:32PM
A true AI would give irrational compute results without apparent cause, as a way to warn you that its downtime is imminent, and you'd get about 86% uptime.
Wait a minute! Windows ME was the closest we ever got to a real AI!