  • All prices include VAT.
Only 2 copies left in stock (more copies are on the way).
Shipped and sold by Amazon.
Gift wrapping available.
+ EUR 2,99 (shipping)
Used: Good | Details
Sold by Deal FR
Condition: Used: Good
Comment: This book has been read but is still in good condition. 100% guaranteed.

Learning Bayesian Networks (English) Paperback – 27 March 2003


Formats and editions
Paperback: Amazon price EUR 163,73 (new from EUR 152,80, used from EUR 73,33)




Product description

From the Back Cover

Learning Bayesian Networks offers the first accessible and unified text on the study and application of Bayesian networks. This book serves as a key textbook or reference for anyone with an interest in probabilistic modeling in the fields of computer science, computer engineering, and electrical engineering. This text is also a valuable supplemental resource for courses on expert systems, machine learning, and artificial intelligence.

Appropriate for classroom teaching or self-instruction, the text is organized to provide fundamental concepts in an accessible, practical format. Beginning with a basic theoretical introduction, the author then provides a comprehensive discussion of inference, methods of learning, and applications based on Bayesian networks and beyond.

Learning Bayesian Networks:
  • Includes hundreds of examples and problems
  • Makes learning easy by introducing complex concepts through simple examples
  • Clarifies with separate discussions on statistical development of Bayesian networks and application to causality

About the Author

Richard E. Neapolitan has been a researcher in Bayesian networks and the area of uncertainty in artificial intelligence since the mid-1980s. In 1990, he wrote the seminal text, Probabilistic Reasoning in Expert Systems, which helped to unify the field of Bayesian networks. Dr. Neapolitan has published numerous articles spanning the fields of computer science, mathematics, philosophy of science, and psychology. Dr. Neapolitan is currently professor and chair of Computer Science at Northeastern Illinois University.





Customer reviews

There are no customer reviews yet on Amazon.fr.

Most helpful customer reviews on Amazon.com (beta)

Amazon.com: 3 reviews
65 of 73 people found this review helpful
An excellent overview, 17 May 2004
By Dr. Lee D. Carlson - Published on Amazon.com
Format: Paperback | Verified Purchase
In just a decade, Bayesian networks have gone from being a mere academic curiosity to a highly useful field with myriad applications. Indeed, the applications of Bayesian networks are wide-ranging, spanning disparate fields such as network engineering, bioinformatics, medical diagnostics, and intelligent troubleshooting. This book gives a fine overview of the subject, and after reading it one will have an in-depth understanding of both the underlying foundations and the algorithms involved in using Bayesian networks. The reader will have to look elsewhere for applications of Bayesian networks, since they are only discussed briefly in the book. Due to space constraints, only the first four chapters will be reviewed here.
The author defines a Bayesian network as a graphical structure for representing the probabilistic relationships among a large number of variables and for performing probabilistic inference with these variables. Before the advent of Bayesian networks, probabilistic inference depended on the use of Bayes' theorem, which required that the problems examined be relatively simple, due to the exponential space and time complexity that can arise in applying the theorem.
After a short review of probability theory in chapter 1, a discussion of the "philosophical" foundations of probability, and a discussion of the difficulties inherent in representing large instances and in performing inference over a large number of variables, the author introduces Bayesian networks as directed acyclic graphs (DAGs) satisfying the Markov condition. A brief discussion of NasoNet, a large-scale Bayesian network used in the diagnosis and prognosis of nasopharyngeal cancer, is given. The author then shows in detail how to create Bayesian networks using causal edges, introducing in the process the notions of manipulating variables and of causation between two variables. An interesting example of manipulation in the context of pharmaceuticals is given, along with an example of bad manipulation.
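
To make the factorization implied by the Markov condition concrete, here is a minimal Python sketch. The three-node network, its variable names, and all of its probabilities are invented for illustration and are not taken from the book: each node stores a conditional distribution given its parents, and the joint distribution is simply the product of those local tables.

```python
from itertools import product

# A toy three-node network: Smoking -> Bronchitis -> Fatigue.
# All variables are binary; every number here is invented for illustration.
p_smoking = {True: 0.2, False: 0.8}                   # P(S)
p_bronchitis = {True: {True: 0.25, False: 0.75},      # P(B | S=True)
                False: {True: 0.05, False: 0.95}}     # P(B | S=False)
p_fatigue = {True: {True: 0.60, False: 0.40},         # P(F | B=True)
             False: {True: 0.10, False: 0.90}}        # P(F | B=False)

def joint(s, b, f):
    """Markov condition: the joint factors as P(S) * P(B | S) * P(F | B)."""
    return p_smoking[s] * p_bronchitis[s][b] * p_fatigue[b][f]

# Brute-force marginal P(F=True): sum the joint over the other variables.
p_fatigue_true = sum(joint(s, b, True) for s, b in product([True, False], repeat=2))
print(f"P(Fatigue=True) = {p_fatigue_true:.4f}")      # 0.1450 for these numbers
```

Brute-force summation of this kind is exactly what becomes intractable as the number of variables grows, which is what the inference algorithms discussed in chapter 3 are designed to avoid.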
Chapter 2 addresses the nature of dependencies in DAGs via the concept of 'faithfulness' and entailed conditional independencies. Very important in this chapter is the notion of 'd-separation', which identifies all and only those conditional independencies entailed by the Markov condition for a DAG G. An explicit algorithm is given for finding d-separations. D-separation is used to define a notion of Markov equivalence between DAGs containing the same set of nodes. Also discussed is the minimality condition, wherein a DAG will no longer satisfy the Markov condition with respect to a probability distribution if an edge is removed from it. The author shows that every probability distribution satisfies the minimality condition with some DAG. The notion of a 'Markov blanket' is introduced, which measures the extent to which the instantiation of a set of nodes close to a particular node can shield that node from the effect of all other nodes. A Markov boundary of a random variable is then defined as a Markov blanket such that none of its proper subsets is a Markov blanket of the random variable. The utility of these concepts lies in the fact that, if the DAG satisfies the faithfulness condition, the set consisting of the parents of a variable X, the children of X, and the parents of the children of X is the unique Markov boundary of X.
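
As a rough illustration of the Markov boundary characterization mentioned at the end of this paragraph (parents, children, and the children's other parents), here is a small Python sketch; the DAG is invented and stored simply as a map from each node to its list of parents.

```python
# A DAG stored as {node: list of parents}; the graph itself is invented.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def children(node):
    return [n for n, ps in parents.items() if node in ps]

def markov_boundary(node):
    """Parents of the node, its children, and its children's other parents.
    Under the faithfulness condition this set is the node's unique Markov boundary."""
    boundary = set(parents[node]) | set(children(node))
    for child in children(node):
        boundary |= set(parents[child])
    boundary.discard(node)
    return boundary

print(markov_boundary("B"))   # {'A', 'C', 'D'} for this toy DAG
```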
Inference in Bayesian networks is the topic of chapter 3, with Pearl's message-passing algorithm starting off the discussion for the case of discrete random variables. This algorithm, which applies to Bayesian networks whose DAGs are trees, is based on a theorem whose statement takes well over a page and whose proof covers five pages. The author gives detailed examples, though, and these are very helpful in understanding the algorithm. The Pearl algorithm is then generalized to singly and multiply connected networks. After a discussion of the computational complexity of the algorithm, the author overviews the 'noisy OR-gate model', whose complexity is manageable because each variable in the model has only two values. The author then moves on to inference using an approach called 'symbolic probabilistic inference', which approximates finding the optimal way to compute marginal distributions of interest from the joint probability distribution. This algorithm involves a number of multiplications in order to compute the marginal probability. To minimize the computational effort, it would be advantageous to minimize the number of these multiplications, and so the author discusses the 'optimal factoring problem', which, once solved for a given factoring instance, yields a factorization that requires a minimal number of multiplications. What follows is a very interesting discussion of the relationship of human reasoning to Bayesian networks. This is done via the introduction of the 'causal network model', and the author then, quite unexpectedly, overviews research on testing human subjects to assess the accuracy of the model. These studies include ones involving inference based on 'discounting', which measures the degree to which an individual becomes less confident in a cause when told that a different cause of the effect was present. Another study involves larger networks in the context of traffic congestion. This is followed by a discussion of a study of causal reasoning in the context of debugging programs.
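
The noisy OR-gate model mentioned above reduces to a short calculation: each present cause would produce the effect with some probability on its own, the causes act independently, and the effect fails to appear only if every present cause fails. A minimal sketch, with invented cause names and probabilities:

```python
# Noisy OR-gate: each present cause i would produce the effect with probability
# p[i] on its own, independently of the others, so the effect is absent only
# if every present cause independently fails to produce it.
def noisy_or(cause_probs, present_causes):
    fail_all = 1.0
    for cause in present_causes:
        fail_all *= 1.0 - cause_probs[cause]
    return 1.0 - fail_all

p = {"flu": 0.6, "cold": 0.3, "allergy": 0.2}        # invented numbers
p_effect = noisy_or(p, {"flu", "cold"})
print(f"P(effect | flu and cold present) = {p_effect:.2f}")   # 1 - 0.4 * 0.7 = 0.72
```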
Inference algorithms for the case of continuous variables are studied in chapter 4. After a review of the normal probability distribution, the author discusses an inference algorithm for Gaussian Bayesian networks. An algorithm is given for doing inference with continuous variables in singly connected Bayesian networks, which allows the expected value and variance of each node to be determined conditional on specified values of the nodes in some subset. This is followed by several detailed and helpful examples of inference with continuous variables. As expected, issues of computational complexity arise, and so the author discusses approximate inference via stochastic simulation, which involves a classical sampling method called 'logic sampling'. This is followed by a discussion of likelihood weighting, which cures some of the problems involved with logic sampling. Abductive inference, so important in contemporary applications, is then discussed in detail.
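
To give a feel for logic sampling as described here, the following rough sketch reuses the invented Smoking/Bronchitis/Fatigue network from the earlier snippet: each sample is drawn by working through the DAG in topological order, samples that contradict the evidence are discarded, and the query probability is estimated from the samples that remain. Likelihood weighting avoids the wasteful discarding by clamping the evidence variables and weighting each sample instead.

```python
import random

# The same invented Smoking -> Bronchitis -> Fatigue network as in the earlier sketch.
p_smoking = 0.2
p_bronchitis = {True: 0.25, False: 0.05}   # P(B=True | S)
p_fatigue = {True: 0.60, False: 0.10}      # P(F=True | B)

def estimate_smoking_given_fatigue(n_samples=200_000, seed=0):
    """Logic (rejection) sampling estimate of P(Smoking=True | Fatigue=True)."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n_samples):
        s = rng.random() < p_smoking          # sample S from its prior
        b = rng.random() < p_bronchitis[s]    # sample B given S
        f = rng.random() < p_fatigue[b]       # sample F given B
        if not f:                             # discard samples that contradict
            continue                          # the evidence F = True
        kept += 1
        hits += s
    return hits / kept

print(f"P(Smoking=True | Fatigue=True) is roughly {estimate_smoking_given_fatigue():.3f}")
```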
46 of 55 people found this review helpful
Enjoying this book enormously, 3 January 2004
By William S. Harlan - Published on Amazon.com
Format: Paperback
Rarely do I find myself reading a technical book so carefully as this one. I always enjoy books on Bayesian inference, but this is the first that shows me how to write useful algorithms. I appreciate the level of mathematical rigor, too, for such a new subject. Bayesian networks are what neural networks should be, without the ad-hoc theory and trial-and-error algorithms.
10 of 11 people found this review helpful
Advanced and pretty mathematical textbook on Bayesian networks, 3 February 2009
By ws__ - Published on Amazon.com
Format: Paperback
Neapolitan makes an attempt to give an instructive overview of Bayesian networks. Be prepared for advanced graduate-level reading and for encountering some beauty.

An absolute prerequisite is knowledge of college-level math; in fact, mathematics is the bones of this treatise. You should also have a good understanding of algorithms, the flesh of this book. It is very helpful to have some previous knowledge of Bayesian statistics and even of Bayesian networks. The corresponding section in the excellent Artificial Intelligence: A Modern Approach (2nd Edition) (Prentice Hall Series in Artificial Intelligence) is probably not enough.

If you are a mathematician, you might sometimes be bewildered by the somewhat loose notation, by the deep motivation rooted in the algorithmic application of inferring probabilities from evolving knowledge of actual data, and by the occasionally strange usage of theorems. An example is Theorem 3.1 (in preparation for Pearl's message-passing algorithm), which is more a summary of its "proof" than anything else.

A rare treasure found in "Learning Bayesian Networks" is the delicate treatment of the philosophical issues.


