

On Intelligence [Kindle Edition]

Jeff Hawkins, Sandra Blakeslee
4.5 out of 5 stars (2 customer reviews)

Print list price: EUR 13.65
Kindle price: EUR 8.75 (incl. VAT) & free wireless delivery via Amazon Whispernet
You save: EUR 4.90 (36%)

Format          Amazon price   New from   Used from
Kindle Edition  EUR 8.75
Hardcover       --
Paperback       EUR 10.29


Product Description

Jeff Hawkins, the high-tech success story behind PalmPilots and the Redwood Neuroscience Institute, does a lot of thinking about thinking. In On Intelligence Hawkins juxtaposes his two loves--computers and brains--to examine the real future of artificial intelligence. In doing so, he unites two fields of study that have been moving uneasily toward one another for at least two decades. Most people think that computers are getting smarter, and that maybe someday, they'll be as smart as we humans are. But Hawkins explains why the way we build computers today won't take us down that path. He shows, using nicely accessible examples, that our brains are memory-driven systems that use our five senses and our perception of time, space, and consciousness in a way that's totally unlike the relatively simple structures of even the most complex computer chip. Readers who gobbled up Ray Kurzweil's The Age of Spiritual Machines and Steven Johnson's Mind Wide Open will find more intriguing food for thought here. Hawkins does a good job of outlining current brain research for a general audience, and his enthusiasm for brains is surprisingly contagious. --Therese Littleton

From Publishers Weekly

Hawkins designed the technical innovations that make handheld computers like the Palm Pilot ubiquitous. But he also has a lifelong passion for the mysteries of the brain, and he's convinced that artificial intelligence theorists are misguided in focusing on the limits of computational power rather than on the nature of human thought. He "pops the hood" of the neocortex and carefully articulates a theory of consciousness and intelligence that offers radical options for future researchers. "[T]he ability to make predictions about the future... is the crux of intelligence," he argues. The predictions are based on accumulated memories, and Hawkins suggests that humanoid robotics, the attempt to build robots with humanlike bodies, will create machines that are more expensive and impractical than machines reproducing genuinely human-level processes such as complex-pattern analysis, which can be applied to speech recognition, weather analysis and smart cars. Hawkins presents his ideas, with help from New York Times science writer Blakeslee, in chatty, easy-to-grasp language that still respects the brain's technical complexity. He fully anticipates—even welcomes—the controversy he may provoke within the scientific community and admits that he might be wrong, even as he offers a checklist of potential discoveries that could prove him right. His engaging speculations are sure to win fans of authors like Steven Johnson and Daniel Dennett.
Copyright © Reed Business Information, a division of Reed Elsevier Inc. All rights reserved.


Customer Reviews

4.5 out of 5 stars
Most helpful customer reviews
1 of 1 people found the following review helpful
5.0 out of 5 stars  Opens new perspectives  1 October 2010
Format: Paperback | Verified Purchase
As a programmer interested in artificial intelligence, I found this book fascinating. Not only does the author detail the limits of current approaches in the field of neural networks, but above all he explains his view of how the cortex works: how our brain recognizes complex wholes (a face, a melody) from the simple inputs arriving through the nerves, with clear diagrams and detailed explanations. Perhaps a little short on possible software implementations of his ideas, but that was not the purpose of the book.
Format: Paperback | Verified Purchase
This is an extremely important book in the field of Artificial Intelligence. The author rejects classical Artificial Intelligence because it identifies intelligence with the behaviors produced by that intelligence: the machine simulates intelligent behavior but is not intelligent. Three goals are essential if we want to move toward intelligent machines. We have to take into account and integrate time. We have to include feedback as architecturally essential. We have to take into account the physical architecture of the brain as a repetitive hierarchy. Strangely enough, the main mistake is already present in this first programmatic intention: Jeff Hawkins does not include the productions of that intelligent brain. I mean language, all ideological representations or models of the world from religion to philosophy and science, not to speak of arts and culture. And strangely enough, this mistake is locked up in an irreversible declaration:

“A human is much more than an intelligent machine . . . The mind is the creation of the cells of the brain . . . Mind and brain are one and the same.” (41-43)*

We cannot but agree with the first sentence, but the mind is not “created” by anything. It is produced, constructed by the brain from the sensory impulses it receives from the various senses and the way it processes them in its repetitive and parallel hierarchical architecture. But the mind is a level of human intelligence in its own right. Unfortunately, Hawkins does not see it. I have already said what he excludes from this human intelligence, but we must add that this human intelligence lives in a situation that enabled it to develop and invent its first tools when Homo sapiens started its journey on earth some 300,000 years ago.
Most helpful customer reviews on (beta): 4.4 out of 5 stars  195 reviews
181 of 203 people found the following review helpful
5.0 out of 5 stars  Simply Indispensable  8 October 2004
By Bruce Gregory - Published on
Format: Hardcover | Verified Purchase
It is not very often that you encounter a book that alters, not simply what you think, but how you look at the world. On Intelligence is such a book. Jeff Hawkins develops a perspective on intelligence that makes sense of much of what I have discovered about learning over the past twenty years. His focus is on a unified model of how the cortex works, but in truth you do not need a deep interest in neurobiology to see the power of the model. The book is very clear and readable, something I have learned to associate with Sandra Blakeslee's deft touch (see, for example, Phantoms In the Brain, by Ramachandran and Blakeslee). The heavy lifting occurs in the lengthy sixth chapter, "How the Cortex Works." You might want to skim this chapter or even omit it entirely on your first reading. It is well written, but requires a very thoughtful reading. The model Hawkins develops in this chapter underpins his view of intelligence, but it is not necessary to grasp the details to appreciate the power of the vision. If you have the slightest interest in the role of the brain in making us who we are, you owe it to yourself to read this book. I couldn't recommend it more highly.
73 of 80 people found the following review helpful
5.0 out of 5 stars  A Great Intro to Even Greater Insights  18 February 2005
By Jane E. Carroll - Published on
The accolades previous reviewers have lavished upon this book are all fully deserved. It is not, however, "the first time all these bits and pieces have been put into a coherent framework". The work of Stephen Grossberg explored all of these themes in the 1970s. Unfortunately Grossberg expressed his key insights in systems of differential difference equations that few could understand and fewer still could build upon or contribute to.

To his credit, Hawkins does cite Grossberg approvingly at several junctures in his argument, but he fails to take into account several of Grossberg's greatest insights into neocortical processing: his theory of how serial processing can be accomplished in a parallel anatomy, and his theory of "rebounds". The latter is especially important since it explains how new memories are prevented from overwriting old ones. For example, when I learn a second language, it doesn't overwrite my first.

These criticisms, however, are in no way meant to detract in the slightest from Hawkins' superb book. It is an eminently readable account of neocortical computing, and correct in all its broad brush strokes. If you are as beguiled by "On Intelligence" as the other reviewers in this thread, my purpose is only to alert you to the even deeper wonders that are to be found in Grossberg's work. As I have said, his work is difficult, but his 1980 and 1982 Psychological Review articles will provide good entry-points. Those of you with an interest in brain and language will find an even better second course in neocortical computing in Loritz' "How the Brain Evolved Language" (Oxford University Press, 1999).
50 of 55 people found the following review helpful
5.0 out of 5 stars  Central Dogma for the Brain  29 September 2004
By Donald B. Siano - Published on
Jeff Hawkins is the man who was the architect of the PalmPilot and the Treo, and who invented Graffiti, an alphabet for inputting data to a computer with a stylus. But this book is about his other love, the deciphering of the code that makes the human brain work. There is nothing like a big, important puzzle to get the blood working, and mine was powerfully pulled along. With the human genome project's sequencing of human DNA nearly completed, understanding the brain has got to be the most important scientific undertaking one can think of. Hawkins easily persuades us that there is a burning need for a "top down" model of the brain that can play a role analogous to the Central Dogma of molecular biology, which guides and organizes research, prioritizing the myriad of possible tasks into something like the logistics of a conquering army's march through an alien land.

He also persuaded me that he has some important insights into that model that I found tantalizing, new and exciting. His central model concerns the role of the cortex in producing intelligence. He makes the case for a central dogma he calls "the memory-prediction framework." This idea says that the cortex is a machine for making predictions about temporal sensory patterns based on memories of past patterns. The prediction algorithm carried out in the cortex is the same for all of the senses—vision, touch, hearing, etc.—which accounts for, among other things, the basic physiological uniformity of the cortex, and the plasticity of the brain in adapting to such problems as blindness or deafness.
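The flavor of the memory-prediction idea can be conveyed with a toy sketch. This is an illustration only, not Hawkins's actual algorithm or architecture; the class name, the context window, and the training sentence are all invented for the example. The point is that prediction here is a memory lookup, not a computation:

```python
# Toy illustration of sequence memory driving prediction: remember which
# element followed each short context, then "predict" by retrieval.
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self, order=2):
        self.order = order                  # length of the context window
        self.memory = defaultdict(Counter)  # context -> counts of next element

    def train(self, sequence):
        for i in range(len(sequence) - self.order):
            context = tuple(sequence[i:i + self.order])
            self.memory[context][sequence[i + self.order]] += 1

    def predict(self, context):
        counts = self.memory.get(tuple(context))
        if not counts:
            return None                     # novel context: no prediction
        return counts.most_common(1)[0][0]  # retrieve, don't compute

mem = SequenceMemory(order=2)
mem.train("the cat sat on the mat".split())
print(mem.predict(["the", "cat"]))  # prints: sat
```

A surprise, in this picture, is simply a context whose stored continuation fails to match the next input.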

He argues that the "clock" of the brain operates at a tick rate on the order of 5 milliseconds, and that most functions of the brain (e.g. recognizing that a picture of a cat shows a cat) are carried out in fewer than 100 ticks. From the time light enters the eye to the moment of recognition takes less than a second. A computer would take billions of instruction steps, and even the fastest parallel computer available would not do it in fewer than millions of steps. So the brain doesn't really "compute" the answer; it retrieves it from memory, which requires far fewer steps than the computation. Sounds good to me.

His explication of the memory-prediction framework is clear and accessible even to the uninitiated like me, though I found some of it in the middle pretty heavy going. But this is something like reading Watson and Crick's paper on the structure of DNA. The part about turning the diffraction diagram and other insights into a workable model was a little above my head, but I could still see the importance of the answer, and how it addressed the problem of replication and how it gave clues as to how to "read the genes." I can only grasp part of what Hawkins has done, and I can see that there is still a long way to go. But I can still jump up and down about it!
114 of 132 people found the following review helpful
2.0 out of 5 stars  Interesting but Vague and Inaccurate  16 December 2005
By Derek W. Hoiem - Published on
The early parts of the book (up to around p 60) were a great read and convinced me to buy the book. But when Hawkins finally laid out his "big ideas", I was deeply disappointed. Hawkins spends considerable space claiming that AI researchers hack up algorithms based on the "how do I do it" approach. He suggests that "real" intelligence requires memory-based hierarchical models.

What is especially frustrating to this AI (specifically vision) researcher is that Hawkins does not seem to be aware of any AI research from the last 15 years, during which data-driven learning approaches have become standard. I was merely suspicious of his ignorance until I checked his bibliography, in which the most recent technical AI citation dates from before 1990.

Furthermore, Hawkins's theories on the brain are largely unsubstantiated. He states that his ideas were largely sparked by one dated paper that other researchers have largely ignored - probably for good reason. For instance, he claims that, since different parts of the brain have a similar physical structure, they must function similarly. That is overly simplistic.

Nevertheless, I did find parts of the book to be entertaining and appreciated his view on the brain's role as a predictor. Although I do not think that I completely wasted my time in reading this book, my time could have been better spent reading something else. Therefore, I recommend this book to non-scientists who want to read about the brain but aren't particularly concerned about the accuracy/usefulness of what they read. Just be a very critical reader and be careful not to be smacked in the course of all the hand-waving!
53 of 61 people found the following review helpful
5.0 out of 5 stars  Loved the book, holes in the theory  19 October 2004
By Gary R. Bradski - Published on
This is one of the few books to posit a theory of human and general intelligence, and the only one of the few that is clear and well written. I think it is seminal and will re-ignite interest and activity in building intelligent machines. A pleasant, interesting read even for those not working in the field.

In one sense, this is a side issue, since one can probably build intelligent cars, vacuum cleaners and search agents without consciousness, but in another sense it's a crucial aspect of our experience. Hawkins claims that consciousness is just what it "feels" like to have a cortex. I differ. My guess builds precisely on Hawkins's suggestion that the cortex is a generative (my word; his is associative completion) hierarchy. That is, we synthesize/simulate the external world inside our head. But we are social creatures and place a lot of value, evolutionary and otherwise, on being able to imagine/simulate the mental state of other people ("my boss will be angry if I do that", "she likes me", ...). Yet, as a matter of simple functioning, we must also simulate ourselves in the world to know how to act. In my mental world, I simulate myself when I consider whether I can squeeze through a gate or lift a weight. When our simulation of mental state became grafted to our simulation of self, I think consciousness came about as an epiphenomenon - consciousness is our simulation of ourselves, of our own internal state.

Some holes which might exist either in my brain or in Jeff Hawkins's theory:

P-173: Attention gets pretty short shrift in the presented theory, reduced to an alternative, hierarchy-bypassing pathway in the thalamus that gets turned on by higher regions if unexpected events occur below, or if the higher region is directed externally - the last is somewhat circular reasoning: attention is turned on if attention is directed. John Reynolds at Salk has been studying visual attention in monkeys and is finding evidence that boosting or diminution of contrast is what visual attention is doing, so that visual items win or lose the inhibitory competition between features, and that this is perhaps what lets some items rise to conscious notice.

Attention is fairly sequential and substantially bottlenecked for what it can process (see "change blindness" illusions [...] ). In many of these illusions, you don't notice when huge portions of the visual scene change, items appear or disappear etc.

Hebbian learning is great, except that it also unlearns equally well. I quote Grossberg's term for the problem above. Memory needs some kind of gating mechanism or it will rapidly turn into mud. Either memory is unidirectional (connections start out high and only shrink, or start out low and only grow), and/or there is a gating mechanism that isn't well explained here. What stabilizes learning? P-136: a purple "bucket" becomes "indigo" (or, a page earlier, orange is placed in "red"). First of all, this can shoot down a whole painstakingly learned hierarchy above it - in general, a bad move. Ever done visual tracking algorithms? If you allow your template to adapt a little bit while tracking, say, a face, pretty soon a little bit of background "wall" starts entering the template, and before long "wall" becomes your (very stable) "face" template. The same thing will happen here - color buckets will randomly turn into each other and drift around - chaos. Just like our legal system, most new rulings should have very local effects, and only very rarely should something ripple changes through the larger system. If that happens too often, the whole structure collapses.
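The reviewer's tracking analogy is easy to demonstrate numerically. This is an illustrative sketch with made-up numbers, not any real tracker: a stored "face" template that adapts a little toward every observation absorbs the background, because nothing gates the update.

```python
# Ungated template adaptation drifting toward the background.
face = [1.0, 1.0, 1.0]   # true target appearance (stored template at start)
wall = [0.0, 0.0, 0.0]   # background that leaks into every observation
rate = 0.05              # small per-frame adaptation rate

template = face[:]
for frame in range(200):
    # each observed crop is mostly background leaking in around the face
    observed = [0.2 * f + 0.8 * w for f, w in zip(face, wall)]
    # Hebbian-style ungated update: nudge the template toward the observation
    template = [(1 - rate) * t + rate * o for t, o in zip(template, observed)]

# after enough frames the "face" template has converged to the leaked
# observation (0.2 per component), far from the true face values of 1.0
print([round(t, 2) for t in template])
```

Without a gate (e.g. only updating when the match is confident, or freezing weights after a critical period), the stable attractor is whatever the input stream averages to, not the original memory.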

Finally, this ignores all the critical-period findings in learning. Some things are laid down early and in clear order; they don't seem to change, and if not learned early they just cannot be learned. The famous study of this is the "kitten carousel," in which kittens are raised in the dark and only get to walk in the light in short intervals: one cat can move freely but is mechanically yoked to another cat that sees the exact same things yet is stuck in a carousel (a little box), so that its leg movements don't control its movement. If this goes on too long, the poor kitten in the carousel never learns to see (depth) at all, even when released into the light. If released early enough, it will learn to see normally.

Long winded, but: Seems to me that some basic categories and features must be developed early and not allowed to change in order to have any chance of building a larger structure over them.

Where did timing go? I see sequences, but not timing - you can't control your muscles with sequence alone; you need actual timing. In fact, time itself is yet another unstated sense. There are clearly integration rates that are learned and used in recognition, planning decisions, etc.

I still don't get exactly how invariance is found by this architecture beyond things that can be predicted which is somewhat of a tautology - yes, invariant features make your life easy, but beyond dumb luck, how do you find them? When you identify a dirt road by parallel tracks in the soil, what inside you is discovering the cross ratio projective invariance? How did we learn brightness normalization? Color constancy? Some of this stuff involves tricks in active diffusion of color information from edges and clever local integration. Is this learned or built in? Insects must have to deal with this and must be born with it. How do they do it?

P-158: Thinking of doing becomes doing. Yes, but how do you stop this from happening? Indeed, how do you keep one invariant representation of, say, the Gettysburg Address from being spoken, written by all limbs, and done in interpretive dance the moment you think of it?

Minor nits:
P-71: While I believe that the fundamental unit of processing is a kind of sequential associative memory, the fact that you think or recall serially doesn't prove this - perhaps you can recall everything in your house at once, but internal or external output happens one thing at a time, and nearby things just have a skosh more support. Detailed motor execution is more compelling.

I could have done with a final two-page, side-by-side summary diagram of the cortical sheet, with thalamus, hippocampus, and at least two layers of hierarchy, all the basic communication channels and their directions shown - even better with text references to where these things were described.
