Capacity Planning for Web Performance: Metrics, Models, and Methods (English) Paperback – June 12, 1998
Product description
Back cover
"This excellent book presents a new way to model, analyze, and plan for these new performance problems associated with the Web's bursty and highly-skewed load characteristics. A valuable resource for students and for Web administrators." -- Jim Gray, Senior Researcher, Microsoft Research
"Many have said that the Web is too amorphous and chaotic to permit meaningful performance forecasts. Menasce and Almeida demolish this myth. Throughput, response time, and congestion can be measured and predicted, all using familiar tools from queueing networks that you can run on your own computer. There is no other book like this. It is a first." -- Peter J. Denning, Professor of Computer Science, George Mason University and former President of the ACM
"This book takes the mystery out of analyzing Web performance. The authors have skillfully culled through more than 25 years of research, selected the results most critical to Web performance, and developed important new material that deals directly with the special properties of applications that run on the Web. With everything together in a single volume, Menasce and Almeida have created a superb starting point for anyone wishing to explore the world of Web performance." -- Jeffrey P. Buzen, Chief Scientist and Co-Founder, BGS Systems
"This is a welcome approach to the performance analysis of today's web-based Internet. It is a useful and practical treatment that is eminently accessible to the non-mathematical professional. An impressive feature is that the authors deal directly with the fractal nature of web-based traffic; no simple and practical treatment has been offered before, and theirs is a timely contribution." -- Leonard Kleinrock, Professor of Computer Science, UCLA
As more and more businesses rely on distributed client/server and Web-based applications, performance considerations become extremely important. Capacity Planning for Web Performance uses quantitative methods to analyze these systems. It leads the capacity planner, in a step-by-step fashion, through the process of determining the most cost-effective system configurations and networking architectures. The quantitative methods lead to the development of performance-predictive models for capacity planning. Instead of relying on intuition, ad hoc procedures, and rules of thumb, Capacity Planning for Web Performance provides a uniform and sound way for dealing with performance problems. A large number of numeric and practical examples help the reader understand the quantitative approach adopted here.
Includes a CD-ROM containing several Microsoft Excel® workbooks supported by Visual Basic® modules, samples of HTTP logs, and programs to process them. The Excel workbooks allow readers to immediately put the methods and models discussed here into practice.
Includes the following tools for analyzing client/server systems, intranets, and Internet Web sites:
- Performance-oriented analysis of network protocols
- Modeling of delays
- Workload characterization and forecasting
- Use of industry-standard benchmarks
- Queuing network-based models
Most helpful customer reviews on Amazon.com
After a brief discussion in Chapter 1 of the issues concerning capacity planning and the performance of Web servers, intranets, and ISPs, the authors move on to defining and characterizing client/server systems in the next chapter. Following a short overview of the history of the Internet, they discuss LANs and WANs and give a quick treatment of protocols. The TCP protocol is considered in somewhat more detail because of its importance in network performance.
The quantitative analysis of performance in client/server environments begins in Chapter 3, where the authors use communication-processing delay diagrams to illustrate how requests spend time at each resource. This is done for both 2-tier and 3-tier C/S architectures, and the authors detail how disk subsystems contribute to the service time at a disk. An elementary iteration technique is used to compute disk utilization, and a very interesting and detailed discussion of the RAID-5 disk array is given. Some elementary queuing theory is discussed under the assumption of flow equilibrium, and a simplified summary of the utilization, forced flow, service demand, and Little's laws is given without resorting to complicated mathematics.
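The operational laws summarized in that chapter can be applied with nothing more than arithmetic. The following is a minimal sketch for a hypothetical single disk; every number below is invented for illustration and does not come from the book.

```python
# A sketch of the operational laws, applied to hypothetical single-disk
# measurements (all numbers are invented for illustration).

T  = 3600.0    # length of the observation interval, in seconds
C0 = 7200.0    # transactions completed by the system during T
Ck = 14400.0   # I/O operations completed by the disk during T
Bk = 2160.0    # time the disk was busy during T, in seconds

X0 = C0 / T    # system throughput (transactions per second)
Uk = Bk / T    # utilization law: U_k = B_k / T
Sk = Bk / Ck   # mean service time per disk operation
Vk = Ck / C0   # forced flow law: V_k = C_k / C_0 (disk visits per transaction)
Dk = Vk * Sk   # service demand law: D_k = V_k * S_k  (equivalently U_k / X_0)

R = 0.9        # assumed mean response time, in seconds
N = X0 * R     # Little's law: N = X * R (mean number of requests in the system)

print(f"U_disk = {Uk:.0%}, D_disk = {Dk * 1000:.0f} ms/transaction, N = {N:.2f}")
```

With these figures the disk is 60% utilized, each transaction demands 300 ms of disk service, and on average 1.8 requests are in the system.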
Performance issues in Intranets and Web servers are the topic of the next chapter, and most importantly, the authors outline the differences between HTTP 1.0 and HTTP 1.1. The role of the proxy server and its contribution to performance is also discussed, along with Web cluster architectures. The authors first mention the role of burstiness in this chapter, but do not give an in-depth mathematical discussion.
In chapter 5, the authors give a step-by-step methodology for capacity planning for C/S systems. Workload characterization, data collection issues, model validation, and forecasting are all discussed quantitatively with more details in later chapters.
How to characterize the workload quantitatively is the subject of the next chapter, in terms of business, functional, and resource-oriented methodologies. The authors briefly discuss workload models from a non-mathematical point of view, with parametrized models given the emphasis. The calculation of the parameters receives a more detailed and mathematical treatment, with distance measures and clustering algorithms outlined. Self-similarity in network traffic is first mentioned here, but not discussed from a rigorous mathematical perspective; the authors do, however, give a rudimentary method for calculating burstiness.
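A rudimentary burstiness measurement in that spirit can be sketched as follows: split the observation interval into epochs, count arrivals per epoch, and compare bursty epochs against the overall average. This is a simple illustration under my own assumptions, not necessarily the book's exact parameters, and the epoch counts are invented.

```python
# Epoch-based burstiness sketch (hypothetical arrival counts per epoch).
arrivals_per_epoch = [3, 1, 0, 12, 2, 0, 15, 1, 0, 2]

avg_rate = sum(arrivals_per_epoch) / len(arrivals_per_epoch)

# Fraction of epochs whose arrival rate exceeds the overall average rate:
above = [a for a in arrivals_per_epoch if a > avg_rate]
frac_bursty = len(above) / len(arrivals_per_epoch)

# How much faster arrivals come during those bursty epochs than on average:
burst_ratio = (sum(above) / len(above)) / avg_rate

print(f"average rate: {avg_rate:.1f}/epoch, "
      f"bursty epochs: {frac_bursty:.0%}, burst ratio: {burst_ratio:.2f}x")
```

In this toy trace only 20% of the epochs are above average, yet during them traffic arrives 3.75 times faster than the mean, which is the kind of skew the chapter is concerned with.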
Benchmarking is discussed in Chapter 7, with the authors detailing the most common approaches to this activity and mentioning the most-cited benchmark sources, including SPEC, TPC, AIM, and NNBB. They divide benchmarks into two categories, component-level and system-level, and discuss CPU performance benchmarking, file server performance, and transaction processing systems as examples of the two categories. Web server benchmarking is also discussed in the context of the two most popular benchmarks: Webstone and SPECweb. Webstone uses Little's Law to derive a metric called Little's Load Factor, which gives the average number of connections open at the Web server at a particular time during a network test. This discussion is very helpful for network modelers who need an introduction to the current benchmarks used in network testing and planning.
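Since Little's Load Factor is just Little's law applied to a benchmark run, it is easy to compute from test output. The figures below are hypothetical, chosen only to show the arithmetic.

```python
# Little's Load Factor (LLF): Little's law applied to a Web server benchmark
# run. All figures are hypothetical.

completed_requests = 90000   # connections served during the benchmark run
test_duration_s = 600.0      # length of the run, in seconds
mean_conn_time_s = 0.25      # average time a connection stays open, in seconds

throughput = completed_requests / test_duration_s   # X = C / T
llf = throughput * mean_conn_time_s                 # N = X * R (Little's law)

print(f"throughput: {throughput:.0f} conn/s, LLF: {llf:.1f} open connections")
```

Here a server completing 150 connections per second, each open for a quarter of a second, holds 37.5 connections open on average.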
The authors fortunately get even more mathematical in the next two chapters on system-level and component-level performance models. Various queuing models are analyzed assuming operational equilibrium, which the authors assume for all models in the book: the number of requests in the system at the start of the observation interval equals the number at the end. State transition diagrams are introduced, but the mathematical formalism used is not drawn from stochastic processes; it is more phenomenological. The authors employ mean value analysis to solve closed queuing networks, with Excel spreadsheets nicely illustrating the results.
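The exact mean value analysis algorithm for a single-class closed network is short enough to sketch directly; the book works it through Excel workbooks, but the recursion is the same. The two service demands and the think time below are hypothetical.

```python
# Exact Mean Value Analysis (MVA) for a closed, single-class queueing network
# of delay-free queueing devices. Service demands and think time are
# hypothetical illustration values.

def mva(service_demands, n_customers, think_time=0.0):
    """Return (throughput, per-device residence times) for n_customers."""
    K = len(service_demands)
    queue = [0.0] * K            # mean queue length at each device, Q_k(0) = 0
    X, resid = 0.0, [0.0] * K
    for n in range(1, n_customers + 1):
        # Residence time at device k with n customers: R_k(n) = D_k * (1 + Q_k(n-1))
        resid = [service_demands[k] * (1.0 + queue[k]) for k in range(K)]
        # Response-time law: X(n) = n / (Z + sum of residence times)
        X = n / (think_time + sum(resid))
        # Little's law per device: Q_k(n) = X(n) * R_k(n)
        queue = [X * resid[k] for k in range(K)]
    return X, resid

# Hypothetical 2-device model: 0.04 s of CPU and 0.06 s of disk per request,
# 10 customers, 1 s of think time.
X, resid = mva([0.04, 0.06], n_customers=10, think_time=1.0)
print(f"throughput = {X:.2f} req/s, response time = {sum(resid) * 1000:.0f} ms")
```

Throughput from this recursion is bounded by the bottleneck device, 1/max(D_k), which is a quick sanity check on any run.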
The last chapter of the book discusses how to obtain network performance data experimentally. This can be a difficult task, but the authors do a good job of discussing the possible strategies one can use to collect this data, and give a brief overview of the network monitors commercially available for this purpose. The difficult job of parameter estimation from measurement data is also discussed in some detail. The authors refer to their other book, however, for a more thorough treatment of validation and calibration techniques.
The authors have written a fine book here, one that will serve well both the person first beginning in network modeling and the network designer who needs to understand performance issues. After reading this book, and with some more mathematical preparation, readers can move on to more sophisticated treatments of the mathematical and simulation modeling of networks.