34 of 35 people found this review helpful
- Published on Amazon.com
This is another nice book from David Harel, the author of the delightful
'Algorithmics: The Spirit of Computing', which introduces the
general reader to the limits of computation (and hence the limits of
what computers can do).
Harel, who's a renowned figure in the field of Theoretical Computer Science,
has the ability to write and explain in a way that makes things seem
wonderfully clear, and indeed it is only such authors who can write good
books for the general reader.
This small (240-page) book is quite ambitious in its coverage of topics:
starting off with the notion of an algorithm, it goes on to discuss
efficiency and correctness, Turing machines, finite-state machines,
decidability, computability, complexity, NP-completeness, recursion,
parallel algorithms, and probabilistic algorithms, and even touches upon
quantum computing and artificial intelligence!
All this is done with almost no mathematics, at least hardly any beyond
high-school level. The reader is gently introduced to some of the most
celebrated problems of Computer Science, and he/she can get a feel for
the nature of this exciting and interesting field.
Throughout the book, the author keeps underscoring the fact that no matter
how far technology progresses, there will always be problems that we can't
solve cheaply, can't solve at all, or can't ever know whether they can be
solved (!); i.e., he stresses that there are problems that are 'beyond
computers', which cannot be tamed by more processing power or any other
technological advancement.
This book covers pretty much the same range of topics as Harel's earlier
book, 'Algorithmics: The Spirit of Computing', but in only half
the number of pages, and with a heavy emphasis on the 'limitations' of
computers, which actually are limitations of our knowledge rather than
of the machines themselves.
How does it compare with the earlier book? Well, it's more up to date,
since it was published in 2000, whereas the other one came out in 1992;
so here you find buzzwords like 'Java', 'Dotcom', and 'Quantum Computing'
that you wouldn't find in the earlier book. On the whole, though,
I prefer the earlier one, since it had a little more detail, made you
think a little more, and even had exercises for those who were interested
in probing further.
So, all in all, if you want a light, breezy introduction to the basic ideas
of Theoretical Computer Science that doesn't demand too much concentration,
this is a good choice; but if you're willing to put in some time and effort,
and you enjoy puzzles and logical thinking, then you'll find Harel's other
book, 'Algorithmics: The Spirit of Computing', much more rewarding.
- Published on Amazon.com
The January 1990 AT&T telephone network crash and the June 1996 in-flight explosion of an Ariane 5 rocket were both caused by software failures. These two incidents, cited by Harel, are examples of incorrect computer programming that should have been avoidable. With our industrial economy relying to an ever-greater extent on computers for essential functions, the importance of software reliability stands in stark relief.
Harel's third example, that of a 107-year-old woman who was mailed registration paperwork for first grade, highlights that even our system of social organization has become dependent on competently run computer networks. This may not be as dramatic as network or rocket crashes, but multiplied by our burgeoning population, it illustrates the fiscal nibbling that computer errors exact on our public budgets.
Thus Harel, having established the stakes (not at the outset, unfortunately, but near the end of Chapter 1), takes up the technical issues having to do with the correctness of computation. The book begins with a discussion of the algorithm: the program, inputs, instances, programming languages, and termination. In the next chapters he goes on to problems that, even in theory, defy solution by any means. He describes the Church-Turing Thesis, having to do with "effective computability", the Halting Problem, and Rice's Theorem: "No algorithm can decide any nontrivial property of computations."
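To give a flavor of the diagonalization argument behind the Halting Problem, here is a short sketch of my own (not from the book): assume a halting decider exists, then build a program that defeats it.

```python
# A sketch (my own, not Harel's) of why no halting decider can exist.
# Suppose halts(f, x) could always answer "does f(x) terminate?".

def halts(f, x):
    """Hypothetical halting decider -- assumed to exist, for contradiction."""
    raise NotImplementedError("no such algorithm exists")

def paradox(f):
    # Do the opposite of whatever halts predicts about f run on itself.
    if halts(f, f):
        while True:   # loop forever if halts said "terminates"
            pass
    return "done"     # terminate at once if halts said "loops forever"

# Asking halts(paradox, paradox) forces a contradiction either way:
# whatever answer it gives, paradox does the opposite. So the assumed
# decider cannot exist.
```

The body of `halts` is deliberately unimplementable; the whole point of the construction is that no correct implementation is possible.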
Even among the problems that are solvable in theory, some just take too much time or too many machine resources to be economically worthwhile. These are the subject of Ch. 3. Chapter 4 has to do with NP-complete problems: decidable, but not known to be tractable. In other words, you know that you can know, but you don't know!
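To see why intractability bites in practice, here is a brute-force solver for subset sum, a classic NP-complete problem (my illustration, not Harel's): it is perfectly correct, but the number of subsets it must try doubles with every extra item.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force check: does some subset of nums add up to target?
    Tries all 2^n subsets -- fine for tiny n, hopeless at scale."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)   # first subset found, smallest first
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> [4, 5]
```

Six numbers mean only 64 subsets; sixty numbers would mean about 10^18, which is the whole story of "decidable but not tractable" in one line.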
Ch. 5 takes up algorithmic parallelism (mainly), which offers hope, and also touches on randomization, quantum computing, and molecular computing. Ch. 6 takes up cryptography, leading up to the RSA algorithm and zero-knowledge proofs.
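For the curious, RSA fits in a few lines once the number theory is granted. The toy primes below are my own choice for illustration; real keys use primes hundreds of digits long, and that size gap is exactly where the security lives.

```python
# Toy RSA (illustration only -- real keys use enormous primes).
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)     # encrypt with the public pair (e, n)
plain = pow(cipher, d, n)   # decrypt with the private pair (d, n)
assert plain == msg
```

Anyone can compute `cipher`; recovering `d` without knowing `p` and `q` requires factoring `n`, which for toy numbers is instant and for real key sizes is, as far as anyone knows, intractable.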
The last chapter takes up the notion of "artificial intelligence", the Turing test, Eliza, searching strategies, etc.
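As a reminder of how shallow Eliza's trick was, a responder in its spirit can be sketched in a few lines (my sketch, not Weizenbaum's original program): pure pattern matching, with no understanding anywhere.

```python
import re

# A tiny Eliza-style responder: canned templates keyed on patterns.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "Tell me more about feeling {0}."),
    (r".*", "Please go on."),          # catch-all when nothing matches
]

def respond(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())

print(respond("I am worried about computers"))
# -> Why do you say you are worried about computers?
```

The illusion of conversation comes entirely from echoing the user's own words back; nothing resembling a knowledge base is involved.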
It also touches on issues not unlike those demonstrated by the recent IBM Watson project: "The difficulty is rooted in the observation that human knowledge does not consist merely of a large collection of facts. It is not only the sheer number and volume of facts that is overwhelming...but, to a much larger extent, their interrelationships and dynamics...a human's knowledge base is incredibly complex, and works in ways that are still far beyond our comprehension." Fact is, even now, after Watson, we STILL don't understand how a human knowledge base works, because Watson is not a human and does not employ human search strategies. Despite the media hype that IBM has been trying to work up on the Sunday morning news shows, Watson is still just a souped-up search engine with an English-language front end. Interesting and potentially useful, but no breakthrough.
Seems funny, or perhaps not, that this topic is taken up in the same chapter discussing the Turing test. Watson may produce results competitive with those of humans, but it works in a completely different way -- machine learning. Which means, basically, that it is still a rules-based system, but one that makes up new rules and modifies existing ones as it operates. Human cognitive machinery is not rules-based. Turing says you can ignore the underlying mechanism; the only way you have to compare a human and a machine is by the results alone. It is the computer equivalent of the behaviorist perspective in psychology: all that matters is what you can see in front of you. Again, nothing new here; this has been apparent since the days of Eliza.
The book is rather theory-oriented but still educational. When Harel cited those three real-world instances, I thought the text would be more practically oriented; on this score I was disappointed. But it's still a worthwhile read.