Book Announcement:
A Thousand Brains: A New Theory of Intelligence (Basic Books), a book released this week by Numenta co-founder Jeff Hawkins, introduces a theory that will revolutionize our understanding of the brain and AI.
A Thousand Brains is divided into three parts. In Part 1, Hawkins describes the new theory and the neuroscience behind it. In Part 2, he explains how this theory will lead to truly intelligent machines. Finally, in Part 3, Hawkins describes how a deep understanding of intelligence and AI will affect the future of humanity.
Core to the theory is the surprising notion that the brain does not contain one model of the world; it contains thousands of complementary models for everything we know. The models vote together to produce our singular perception.
Richard Dawkins, who wrote the foreword, describes this idea as follows: "Hawkins is, I think, the first to give eloquent space to the idea that there is not one such model but thousands, one in each of the many neatly stacked columns that constitute the brain's cortex. Not the least fascinating of his ideas...is that cortical columns, in their world-modeling activities, work semi-autonomously. What we perceive is a kind of democratic consensus from among them. Democracy in the brain? Consensus, and even dispute? What an amazing idea."
Also at: Business Wire
Do you think this theory is as revolutionary as the author thinks it is?
(Score: -1, Informative) by Anonymous Coward on Friday March 05 2021, @04:14AM (5 children)
Support Myanmar's Democracy.
The military junta is the same that executed Rohingya ethnic cleansing.
(Score: -1, Flamebait) by Anonymous Coward on Friday March 05 2021, @04:27AM (2 children)
Mod it straight. Off-topic it may be, but how is it "flamebait"?
Are you a fascist anti-democratic CCP-loving bootlicker?
(Score: 2, Funny) by Anonymous Coward on Friday March 05 2021, @08:07AM (1 child)
Now see THAT one should be modded flamebait.
(Score: 2) by The Vocal Minority on Sunday March 07 2021, @04:19AM
If you insist...
(Score: 0) by Anonymous Coward on Friday March 05 2021, @08:26PM
Everyone on the planet lost their democracy, even if some governments are not shooting their subjects yet.
(Score: 2, Touché) by Anonymous Coward on Friday March 05 2021, @09:20PM
AI gets a thousand brains but you don't get any.
(Score: 5, Informative) by Fnord666 on Friday March 05 2021, @04:15AM (1 child)
For those who don't recognize the name, Jeff Hawkins [wikipedia.org] was the founder of both Palm and Handspring, the companies that created the Palm Pilot organizers and the Treo smartphones. After that he decided to work on neuroscience and founded (with others) Numenta. This work is probably an extension of the theories he proposed in On Intelligence [wikipedia.org], published in 2004.
(Score: 0) by Anonymous Coward on Friday March 05 2021, @04:27AM
Something else he's not palming off on me. There's nothing new here. We learn things to a level where they become intuitive precisely so we don't have to waste energy thinking them through.
(Score: 0) by Anonymous Coward on Friday March 05 2021, @04:47AM (3 children)
If you can't even define what they are, how are you supposed to argue about them?
(Score: 2, Insightful) by shrewdsheep on Friday March 05 2021, @08:12AM (1 child)
You can define everything, easily. Just describe the thing you want to define unambiguously, then you bestow the moniker. In scientific discourse, intelligence is usually defined in terms of a test. You want to discuss a different type of intelligence? Go define it and be everybody's guest.
(Score: 2) by bart9h on Saturday March 06 2021, @12:40PM
good luck with that
(Score: 1, Funny) by Anonymous Coward on Friday March 05 2021, @11:51AM
>> "conciousness"
If you can't even spell what they are, how are you supposed to define what they are?
(Score: 0) by Anonymous Coward on Friday March 05 2021, @05:46AM (2 children)
It sounds like a futurist.
I never understood the point of futurism. Are these people who like hard sci-fi world-building but just can't figure out which characters and plots are good fits?
(Score: 0) by Anonymous Coward on Friday March 05 2021, @12:05PM
It's a word con artists use to get paid to regurgitate 70+ year old ideas.
(Score: 0) by Anonymous Coward on Friday March 05 2021, @12:43PM
(Score: 4, Interesting) by pTamok on Friday March 05 2021, @08:30AM (1 child)
It looks to me at first sight to be an extension of Marvin Minsky's ideas in Society of Mind [wikipedia.org], where Minsky proposes a model of cognition in humans where 'intelligence emerges from the interplay of the many unintelligent but semi-autonomous agents that comprise the brain' [wikipedia.org].
Note that Minsky is coloured by an association with Jeffrey Epstein [wikipedia.org].
(Score: 0) by Anonymous Coward on Friday March 05 2021, @08:34PM
I can believe these explanations, as in certain mental states (going to sleep / waking up / deep relaxation) I sometimes have a round-table discussion going on in my head where various intelligences/brains are discussing and deciding on matters.
(Score: 0) by Anonymous Coward on Saturday March 06 2021, @03:31AM
Modelling AI on human intelligence will mean limiting AI to the stupidity level of humans. What a stupid idea!
(Score: 0) by Anonymous Coward on Sunday March 07 2021, @04:52AM
>> Do you think this theory is as revolutionary as the author thinks it is?
Nope. It's been clear for a while what a large blob of neural networks is going to have to organise into: a big bunch of convolution recognition filters, each processing whatever signal it happens to be connected to and outputting its own signal.
Input signals are 'recognized' and the 'known' pattern is then deleted by subtraction from the datastream; the remainder is dealt with in a similar way until it's down to 'background noise', which is only about -60 dB.
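A toy sketch of that recognise-then-subtract loop, in Python. Everything here (the template list, the least-squares amplitude, the -60 dB stopping floor) is my own illustration of the idea, not anything from the book:

```python
import numpy as np

def recognise_and_subtract(signal, templates, floor_db=-60.0):
    """Repeatedly pick the best-matching template, subtract it from the
    signal, and stop once the residual drops to a 'background noise'
    floor relative to the original signal power. Purely illustrative."""
    residual = signal.astype(float).copy()
    ref_power = float(np.sum(residual ** 2))
    recognised = []
    while np.sum(residual ** 2) > ref_power * 10 ** (floor_db / 10):
        # Score each template against what's left of the signal.
        scores = [abs(np.dot(residual, t)) / (np.linalg.norm(t) + 1e-12)
                  for t in templates]
        best = int(np.argmax(scores))
        t = templates[best]
        # Least-squares amplitude for that template, then 'delete' it.
        amp = np.dot(residual, t) / np.dot(t, t)
        if abs(amp) < 1e-6:   # nothing left that the templates explain
            break
        residual -= amp * t
        recognised.append((best, float(amp)))
    return recognised, residual
```

With two orthogonal templates and a signal built from both, the loop peels them off one at a time and the residual goes to zero.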
Of course, current 'neural network' models also fail to deal properly with overstrength signals (which cause localised damage from too much activation, leading to burn-out and a subsequent failure-to-process / numbness) as well as over-activity (which causes localised 'soft' damage through overconsumption of energy resources; this also looks like numbness, but might be recovered from with rest). So it's an S-response, but with high signals permanently disconnecting, and too much oscillation 'wearing out' that part of the network. Both of these behaviours, you'll note, circuitry is immune to. (Indeed, an easy way to make an oscillator is just to connect a NOT gate back to itself, and this doesn't break anything. Combining many signals together, as in a big wired-OR or wired-AND gate, also causes no damage, because the circuitry is *designed* to be safe to itself. Evolved tissue has no such design limits.)
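For what it's worth, that 'S-response with damage' idea is easy to caricature in code. All the thresholds below are invented for illustration; this is a toy, not a model of real neurons:

```python
import numpy as np

class FragileUnit:
    """A sigmoid unit that permanently disconnects if driven past a
    hard limit ('burn-out'), and goes temporarily 'numb' when its local
    energy budget is exhausted. Thresholds are made up."""

    def __init__(self, burn_limit=5.0, fatigue_budget=20.0, recovery=0.5):
        self.alive = True            # permanent burn-out flag
        self.energy = fatigue_budget
        self.burn_limit = burn_limit
        self.recovery = recovery

    def fire(self, x):
        if not self.alive:
            return 0.0               # burnt out: permanent numbness
        if abs(x) > self.burn_limit:
            self.alive = False       # overstrength input destroys the unit
            return 0.0
        if self.energy <= 0:
            self.energy += self.recovery   # 'soft' damage: rest recovers
            return 0.0
        self.energy -= abs(x)        # activity consumes energy resources
        return 1.0 / (1.0 + np.exp(-x))   # ordinary S-response
```

An ordinary digital gate has neither failure mode, which is the contrast the comment is drawing.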
'Pain' is just a new datastream signal that hasn't been completely analysed to null-ness, owing to it being suddenly different from what was expected. It focuses the 'mind' on itself at a very low level, causing an immediate state of 'suffering' as the expected signal fails to appear - all the associated filters are then generating their own random output - essentially an error state. The total result needs to be 'learnt' back to a balanced state again, which happens automatically as the system adjusts back to 'normalcy'. (Perhaps as parts no longer 'useful' burn out).
This is why constant pain is adjusted to most readily - the state of suffering lasts only until the brain as a whole becomes good at predicting it. It also explains why learning hurts so much, and why learning sometimes involves the experience of 'blowing your mind' or otherwise stretching your sanity. You don't just 'feel' that way.
This possibly explains also why older people tend to get 'set in their ways': They have grown more networks adept at 'filtering out' data streams that might otherwise require more painful learning/suffering state to adapt to. Reference also the 'backfire effect' when confronting them on the actual obvious ways in which they are in fact incorrect - and why it never seems to help.
The actuality of what an intelligent mind is, is a lot less impressive or mystical than you'd hope it to be: consciousness then amounts to not much more than a small 'garbage collecting' function, distributed through and dancing over the whole network. The only really impressive part is the sheer complexity enabled by its size.
This is perhaps why attempts to simulate consciousness are essentially always doomed to be less efficient than just duplicating it in different hardware. For it to work, it has to have all the same flaws: fixing just one thing, for instance the unreliability of our memories, would destroy the very capabilities one would want that system to exhibit.
If you banish uncertainty from the signal, you're essentially converting the circuitry from analogue to digital. Eliminate self-interference and damage, and you also must introduce a system clock to maintain the conditions of certainty you've already introduced (so as to banish meta-stability). What you've got is now a synchronous system, an FPGA at best: essentially something that can be good at one fixed function - if designed carefully to do so - but which is unable to self-repair and learn, learning being a product of the self-repair / self-organisation capability that the wet-ware we have is most notable for.
Simulate, and you've got what machine learning is. Potentially useful, but still not really able to organise itself. Just throw on enough redundancy, and give it enough evolutionary pressure, and you get... whatever the hell it is you are measuring for.
A big enough machine learning system probably will, at some point, end up with a 'mind'. But it won't be nearly as intimidating as you would expect, from AI fiction.
Most likely, its frequency of consciousness will be limited by the technology to something like the oscillation possible given the round-trip delay to get messages to/from all the distributed nodes that its size/complexity requirement forces. In other words, it would perceive time as passing 'very fast'. We'd be the ones who seem to react instantly from its point of view, not the other way around. The idea that some tiny CPU core is going to be able to handle the sheer quantity of processing required to be conscious at a rate high enough to eclipse what our evolved brains can do is wishful thinking at best.
Millions of years of evolutionary pressure, where survival often depends on thinking and reacting fast enough to live, is why things like flies can react as quickly as they do. Birds too: their 'clock rate' is naturally higher.
Humans are even capable of temporarily exceeding their own natural 'speed limit': in conditions of true survival pressure, 'slow motion perception' is absolutely a thing - but as it comes at a self-destructive cost, it's not really something you can 'train' for. Either burning out one's brain makes sense at that moment, or it doesn't. Either way, you have to perceive an immediate and deadly threat in order to enter that state - and have had time for that realisation too. Local energy overconsumption, and likely even a degree of permanent brain damage, are the cost: but if you are otherwise certain that your death is imminent anyway, it's as automatic as processing a punch to the arm.
If we could 'think faster' without consequence, we already would, for the competitive advantage alone. The difference between a 'small brain/mind' and a big one is the degree of detail/complexity of modelling that can be accommodated, along with the balance between the immediate response/reflex *required* and the advantage that 'seeing more clearly' brings.
Birds can't afford to react slowly - they're limited to smaller brains because a larger one would be too slow. This doesn't mean that they aren't already big enough to qualify, in total, as 'people'; it rather means that in addition to a language barrier, there's a temporal barrier too: the mismatch in perceived time further confounds communication attempts. This also implies that the potential information transfer rate density in birdcalls may be higher than one would expect: likely closer to higher-baud-rate acoustic modems than to a human person talking. Sounds we perceive as 'notes' might in fact be closer to consciously phase- or frequency-modulated signals - perhaps whole sentences.
Anyway, I digress.
Intelligence always was about perception, and perception is always about recognition. And at its most basic level, recognition is about pattern-matching. On a one-dimensional signal, this is what a 'deconvolution filter' does. Perhaps a close analogue to the 'stream of consciousness' is actually something more like a distributed block-chain process just trying to record, in some lossy but robust way, what happened in what order, so that similar circumstances - and their consequences - can later be recognised when they re-occur.
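At that most basic level, 1-D recognition really is just sliding a pattern along a stream and looking for a correlation peak. A minimal sketch (the function name and normalisation choice are mine):

```python
import numpy as np

def match_pattern(stream, pattern):
    """Slide `pattern` along `stream` and report where the normalised
    (mean-removed) correlation peaks. Brute force, for illustration."""
    p = pattern - pattern.mean()
    best_pos, best_score = -1, -np.inf
    for i in range(len(stream) - len(pattern) + 1):
        window = stream[i:i + len(pattern)]
        w = window - window.mean()
        denom = np.linalg.norm(w) * np.linalg.norm(p) + 1e-12
        score = float(np.dot(w, p)) / denom
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos, best_score
```

An exact copy of the pattern embedded in the stream scores essentially 1.0 at its true offset.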
Will is just the error-minimising process, running on some random, self-generated heuristic, deciding on a moment-by-moment basis what, of that data, ought to be regarded as having mattered. Naturally it is distributed, and tiny compared to all the effort recognition takes. Something like a huge 'map-reduce, multicast report', like a self-exciting social network.
If that is so, then we already have AI - quite by accident, and certainly not very smart. What it's currently 'conscious' of is just what's trending.
Are you afraid of it?
Perhaps you should be. But it's no 'skynet'.