Science just took us a small step closer to HAL 9000. A new artificial intelligence (AI) program designed by Chinese researchers has beaten humans on a verbal IQ test. Scoring well on the verbal section of an intelligence test has traditionally been a tall order for computers, since words have multiple meanings and complex relationships to one another.
But in a new study, the program did better than its human counterparts who took the test. The findings suggest machines could be one small step closer to approaching the level of human intelligence, the researchers wrote in the study, which was posted earlier this month on the online database arXiv but has not yet been published in a scientific journal. Don't get too excited just yet: IQ isn't the be-all and end-all measure of intelligence, human or otherwise.
For one thing, the test measures only one kind of intelligence (typically, critics point out, at the expense of others, such as creativity or emotional intelligence). Plus, because some test questions can be gamed using basic tricks, some AI researchers argue that IQ isn't the best way to measure machine intelligence.
[Paper - PDF]: http://arxiv.org/pdf/1505.07909v2.pdf
(Score: 3, Insightful) by tftp on Tuesday June 30 2015, @09:00PM
According to the paper, the authors created a set of formulas and fed in training data. The computer memorized, roughly speaking, everything that was ever written. Then, when a pre-parsed question was presented, the software calculated the optimal answer using statistics. Here is one of their examples:
Which word is most opposite to MUSICAL? (i) discordant, (ii) loud, (iii) lyrical, (iv) verbal, (v) euphonious.
The software picks "discordant" because it is rarely seen alongside "musical" in the training data, not because it can tell what "discordant" means or could play an example of discordant audio. This method is equivalent to a grammar checker that generates suggestions based on the entire body of world literature. It would demonstrate a high IQ if trained on Quenya and given questions in the same language, without understanding a single word of it. This is the Chinese Room problem [wikipedia.org]. It was explored in Peter Watts' Blindsight [wikipedia.org].
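Roughly, the selection trick can be sketched like this. The snippet below is a toy Python illustration of picking the option least associated with the prompt word; the corpus and scoring are invented for the example, and the paper's actual model is more elaborate than raw co-occurrence counts:

    # Toy sketch: score each candidate by how often it co-occurs with the
    # prompt word in a (tiny, invented) training corpus, then pick the
    # least-associated option as "most opposite".
    from collections import Counter
    from itertools import combinations

    corpus = [
        "the musical was lyrical and euphonious",
        "a loud but musical and lyrical performance",
        "she gave verbal and musical feedback",
        "the discordant noise drowned out the speech",
    ]

    # Count how often each word pair appears in the same sentence.
    pair_counts = Counter()
    for sentence in corpus:
        for a, b in combinations(sorted(set(sentence.split())), 2):
            pair_counts[(a, b)] += 1

    def cooccurrence(w1, w2):
        return pair_counts[tuple(sorted((w1, w2)))]

    prompt = "musical"
    options = ["discordant", "loud", "lyrical", "verbal", "euphonious"]

    # "Most opposite" = the option least associated with the prompt.
    print(min(options, key=lambda w: cooccurrence(prompt, w)))
    # -> discordant (co-occurs zero times with "musical" here)

Nothing in there knows what any of the words mean; it's all counting.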
(Score: 2) by VLM on Tuesday June 30 2015, @09:10PM
The Chinese Room problem is just a bunch of woo and vitalism and life-force crystal power stuff. My head contains a Chinese Room although it was fed English. No woo required.
A better analogy would be feeding a bunch of finite element analysis into a computer and pretending it can "design bridges".
(Score: 2) by Non Sequor on Wednesday July 01 2015, @03:43AM
I know that for me consciousness seems inherently non-material in that I have no clue whatsoever where in the material world my representations of nerve stimuli come from. The Chinese Room problem is just a lengthy statement of this frustration.
Even if I know that the brain is assembled out of cognitive parlor tricks that give the illusion of reasoning, I don't get how the trick of convincing myself that all of this coagulates into a coherent whole can possibly work, and yet it does. Surely there must be some woo hiding somewhere!
Write your congressman. Tell him he sucks.
(Score: 2) by VLM on Wednesday July 01 2015, @12:16PM
I know that for me consciousness seems inherently non-material
Contemplate mood- and perception-altering drugs, legal and otherwise. Also consider mental changes related to brain surgery or physical brain damage.
(Score: 2) by Non Sequor on Wednesday July 01 2015, @12:54PM
I get that. I know about case studies where brain damage disconnects functions from the rest of the brain, leaving them to operate autonomously or with only limited coordination with other functions.
I still don't get where it all comes together. I don't get how you go from untyped sense data to typed sense data. I know that sense data can be mistyped (synaesthesia and hallucinogens), but I don't get what makes the type seem real.
Write your congressman. Tell him he sucks.
(Score: 3, Interesting) by Non Sequor on Wednesday July 01 2015, @02:35AM
It's a word association game played on connotations, and yes, connotations are quite literally observations of statistical associations of words with contexts. I remember calling these context clues in elementary school. Associations with other words give you maybe 80% of the information you need to guess the actual meanings of words (assuming you already know some of the language), but they don't quite get you all the way there. Making guesses purely from the associations is what causes people to try to use words that they don't really understand: the word is related to what they mean and it almost sounds appropriate, but it doesn't actually fit right. (A toy sketch of the statistics is below.)
This comes up a bit short of a real Chinese Room demonstration. It's just one piece of conversational intelligence, and it's still missing a big chunk of the elements needed to be conversant in a language: you have to weave these associations into some body of knowledge, plus have some functioning short-term memory that interfaces with it.
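As a concrete (and entirely invented) illustration of connotations as statistical associations: pointwise mutual information (PMI) scores how much more often a word appears in a context than chance would predict. Near-synonyms like "cheap" and "frugal" can share most of their associations while diverging on exactly the ones that make a word "fit right". The corpus below is made up for the example:

    # Toy PMI over an invented four-sentence corpus: connotation as a
    # statistical association between a word and the contexts around it.
    import math
    from collections import Counter

    sentences = [
        "the cheap motel smelled stale",
        "the cheap flight was a great deal",
        "the frugal traveler found a great deal",
        "the frugal cook wasted nothing",
    ]

    word_counts = Counter()
    pair_counts = Counter()
    total = 0
    for s in sentences:
        words = s.split()
        word_counts.update(words)
        total += len(words)
        for i, w in enumerate(words):
            for c in words[:i] + words[i + 1:]:
                pair_counts[(w, c)] += 1

    def pmi(word, context):
        # log of P(word, context) / (P(word) * P(context));
        # -inf when the pair never co-occurs at all.
        joint = pair_counts[(word, context)]
        if joint == 0:
            return float("-inf")
        p_joint = joint / sum(pair_counts.values())
        p_w = word_counts[word] / total
        p_c = word_counts[context] / total
        return math.log(p_joint / (p_w * p_c))

    # "cheap" and "frugal" both associate with "deal", but only "cheap"
    # associates with "stale" -- same denotation, different connotations.
    for w in ("cheap", "frugal"):
        print(w, {c: round(pmi(w, c), 2) for c in ("stale", "deal", "nothing")})

The statistics get you the 80%; the missing 20% is why "cheap traveler" and "frugal motel" both sound slightly off.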
Write your congressman. Tell him he sucks.