Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on edge.org's Annual Question 2015: What do you think about machines that think? See all 186 responses here.
Also, what do you think?
My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Firstly, most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for public use. If you think the big corporations wouldn't allow it, why do they allow public libraries?
(Score: 2) by mtrycz on Friday January 23 2015, @11:55AM
Hey, thanks for the article, I liked that blog.
Unfortunately there is a big flaw in his reasoning: he's showing only one side of the coin.
I've been following AI through the years (less so lately), so I have some insights into it, though I must admit they have by now crystallized and aren't up to date. Here's a short list:
1. It doesn't define what "intelligence" means; nobody actually agrees on a definition, so everybody makes up their own.
1b. It doesn't define what it means for a *machine* to be intelligent.
2. We don't actually know how the brain works. We have some (useful) approximations.
2b. Even more so, we don't know how the body functions. We have some useful approximations.
2c. The author confuses "brain" with "mind"; we know even less about the mind, though we do have some useful approximations.
2d. Scientists are pretentious pricks; they always think they've got a grasp on it. Moreover, the sciences are strongly siloed, so computer scientists don't take advantage of discoveries from other fields (I mean, just take a look at recent discoveries in cognitive science).
2e. Having an understanding of the inner workings of the brain (or mind) doesn't give an exhaustive explanation of "intelligence", or of how to reproduce it.
3. Nobody is even concentrating on the fact that the big, huge difference between organic creatures and machines is their *perception* of reality (e.g. the five senses), and how that interacts with the mind and the inner world.
4. The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (AI scientists included) can even define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking, drawing your mom with crayons, or irony; this is the level leap.)
4b. Sure, it's a matter of time until neurobiologists crack the structure and inner functioning of the brain. But we don't even know *if* we can reproduce it.
4c. A reproduction of the brain can't function without the human sensory and motor systems. The mind isn't made of the brain alone.
Once there *is* a leap (which might or might not happen), the author is right that further improvements will be fast, though they certainly can't exceed physical limits.
My guess (as of 2015) is that the level leap is not possible, and certainly NOT with these pretentious pricks around.
Maybe in the future, when we *do* have an understanding of the inner workings of the brain, there could be a *possibility* of making the leap, at which stage it *could* become something to consider.
BONUS POINT: Check out Roko's Basilisk for a rational rollercoaster.
In capitalist America, ads view YOU!
(Score: 0) by Anonymous Coward on Friday January 23 2015, @01:37PM
Ramez Naam (guest blogging on Charlie Stross's blog) has some good thoughts on the topic in The Singularity Is Further Than It Appears [antipope.org] and the following few blog posts. That post makes a lot of the same points you do.
Mainly because no one (AI scientists included) can even define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for.
I wouldn't be too harsh on AI scientists: work on Artificial General Intelligence (AGI/"strong" AI) is essentially taboo among AI researchers. There's no serious research on it.
(Score: 2) by mtrycz on Friday January 23 2015, @10:25PM
Hey thanks, it looks interesting.
About the strong AI issue, you're telling me that the people worshipping The Singularity aren't actually into AI? I hadn't checked that out yet.
In capitalist America, ads view YOU!
(Score: 0) by Anonymous Coward on Saturday January 24 2015, @01:29AM
(I'm the GP.)
Oh, obviously there's plenty of people around worshipping The Singularity, but they seem to be almost entirely disjoint from the group of people doing research in academia and industry who call their research "AI". (Note: I'm a CS graduate student at a top US university; many of my colleagues would call themselves AI researchers and my research (program synthesis) is arguably AI but isn't called such for historical reasons.) Modern AI research is primarily in "machine learning" which is about automatically or semi-automatically finding patterns in large datasets that are too complicated for a human to write down (e.g. handwriting recognition is about identifying the pattern of why all of the As are considered similar, etc.). It's probably best thought of as a programming technique where you don't really have any idea how to write a program for what you want to do but you have a lot of examples of what it should do. Any mention of trying to deal with semantics or intelligence is considered to be a failed dead end and techniques that just look for patterns without a concept of "understanding" them are greatly preferred.
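To make that "you have examples instead of a program" idea concrete, here's a minimal sketch in Python, using scikit-learn's bundled handwritten-digit dataset and a plain logistic-regression model (both are just illustrative choices on my part, not anything canonical about how the field does it):

```python
# A minimal sketch of "programming with examples": instead of writing rules
# for why all of the 7s look alike, we hand the learner labeled examples
# and let it find the pattern itself.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels

# Hold out a quarter of the examples to check that the learned pattern
# generalizes to digits the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No handwritten rules anywhere: the model infers a decision rule
# from the labeled examples alone.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", model.score(X_test, y_test))
```

There's no "understanding" anywhere in there, which is exactly the point: it's pattern-finding over examples, not semantics.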
Not that belief in the Singularity is entirely unheard of in academia (I heard it from a prospective student once, so an undergrad), but it is laughed at as absurd.
(Score: 2) by mtrycz on Saturday January 24 2015, @09:55AM
Hey great!
Yeah, I'm somewhat proficient in AI techniques (optimization, machine learning, and some natural language processing); I just thought/assumed that the Singularity worshippers were people who actually have an understanding of the topic and are actually into the research. I mean, when I hear Hawking or Musk rambling, I'd assume they know what they're talking about.
Thanks for clarifying that, I feel much better now. Someone should point that out to the waitbutwhy guy, too.
In capitalist America, ads view YOU!
(Score: 2) by maxwell demon on Saturday January 24 2015, @10:48AM
If you hear Hawking ramble about physics, you can assume he knows what he is talking about. But AI is certainly not a physics subject, so there's no reason to assume he knows more about it than you or me. Similarly, I'd trust Musk to know something about business. But I see no reason to assume he has deeper knowledge of AI.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 1) by khallow on Friday January 23 2015, @08:24PM
The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (AI scientists included) can even define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking, drawing your mom with crayons, or irony; this is the level leap.)
Intelligence is not a semantics problem. We became intelligent long before someone came up with a word for it (intelligence being a precondition for language in the first place).
(Score: 2) by HiThere on Friday January 23 2015, @08:56PM
Actually, there's some doubt that language came second. Language may be a precondition for general intelligence. (I feel this is related to using another level of abstraction [pointers] to handle flexible memory allocation in a statically typed computer language.) But good arguments can be made in either direction, and I really suspect co-evolution.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 1) by khallow on Saturday January 24 2015, @06:39PM
Actually, there's some doubt that language came second.
So what? There's doubt that the Moon isn't made of green cheese.
Language, like intelligence, is not a bit flag to set. Rudimentary languages, like the various calls of a wolf or raven, don't require as much intelligence to understand as complex languages like English do (complete with multiple sensory aspects, such as written and symbolic forms, braille, etc.). So yes, it is possible that once language has been established in a life form subject to evolution, it creates a selection pressure for more intelligence.
But a language has to be at a pretty advanced state, and thus require some significant intelligence, in order to have a term for intelligence.
(Score: 2) by HiThere on Saturday January 24 2015, @10:35PM
OK. By the time human languages had a term for "intelligent", people were intelligent. But when I think of language, I think of the thing enabled by the modified FOXP2 gene, which, when mutated (as in that family in England), means you can't speak in sentences.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.