‘How to Lie with Statistics’ was written by Darrell Huff. In the book, the author outlines the various errors that arise from statistical interpretation and shows how those errors often lead statisticians and readers to incorrect conclusions. Huff develops these points under different topics within the book. The current paper therefore provides a summary of the book based on the topics covered in each chapter.
The Sample with Built-In Bias
In this chapter, Huff focuses on justifying his claim that sampling is the origin of most statistical problems. He states that every statistic is based on a sample, since the whole population cannot be subjected to a statistical test, and every sample drawn from a population contains some element of bias (Huff). To support this claim, he gives the example of the reported $25,111 average annual earnings of Yale graduates, a figure that followed all the statistical standards yet concealed the whole truth, whether intentionally or unintentionally. Huff's assertion in this chapter is that built-in bias arises from respondents failing to give honest replies, from market researchers selecting samples that produce more favorable numbers, and from personal biases rooted in the researcher's own perceptions. The author gives an example of a survey that asked respondents which magazine they read more, Harper or True Love Story. The survey feedback indicated that most respondents preferred Harper; however, the publishers' circulation figures showed that True Love Story circulated far more widely than Harper, which refuted the sampling results (Huff). The main reason for the discrepancy, according to Huff, was that most respondents had not told the truth when responding to the survey questions. He further gives the example of a cancer patient to show that a wrong sample, or a flawed sample-selection process, often leads the surveyor toward wrong decisions and in the wrong direction. Finally, Huff stresses the importance of selecting the right interviewer and of controlling other aspects of the study environment so that data collection is flawless and smooth.
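Huff's point about built-in bias can be made concrete with a small simulation. The sketch below uses entirely hypothetical income figures (the numbers are assumptions for illustration, not data from the book): when only the easiest-to-reach, most successful members of a population respond, the resulting average overstates the true population average, much like the Yale earnings figure.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 annual incomes (illustrative numbers,
# not from the book): most earn modest amounts, a minority earn far more.
population = [random.gauss(8_000, 2_000) for _ in range(9_000)] + \
             [random.gauss(40_000, 10_000) for _ in range(1_000)]

true_mean = sum(population) / len(population)

# A biased sample: suppose only the most visible, highest-earning people
# respond to the survey (here, those earning above 15,000).
respondents = [x for x in population if x > 15_000]
biased_mean = sum(respondents) / len(respondents)

print(f"true population mean: {true_mean:,.0f}")
print(f"biased sample mean:   {biased_mean:,.0f}")  # overstates the average
```

The sample passes every arithmetic check, yet the selection rule alone guarantees an inflated figure, which is exactly the "built-in" part of the bias Huff describes.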
The Well Chosen Average
In this chapter, Huff discusses the various tricks a researcher can use for manipulation when using an average to describe a statistical fact. His main idea is that anyone who uses an average must clearly understand all three types of average (the mean, the median, and the mode), since the same set of data can produce three different values depending on which average is used for the calculation (Huff). Huff states that if a neighborhood consists mostly of pensioners and retirees, the incomes of two or three millionaires living in the same neighborhood are likely to inflate the average when only the arithmetic mean of the neighborhood's income is calculated. The median, on the other hand, gives the value that lies exactly in the middle. Huff therefore claims that the median provides a more precise reflection of the sample than the mean, since the mean tends to conceal information. To further demonstrate how a published fact can be manipulated when the average is not qualified, the author uses the example of the average pay of an employee in a corporation, which can be interpreted to mean different things to different people. In other words, every scenario within a given context requires an individual to qualify the type of average used in its description.
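Huff's neighborhood example can be checked with a few lines of arithmetic. The figures below are hypothetical (a handful of pension-level incomes plus two millionaires, not data from the book), but they show how the mean is dragged upward by the outliers while the median stays near a typical resident's income.

```python
# Hypothetical neighborhood incomes (illustrative, not Huff's figures):
# mostly pensioners and retirees, plus two millionaires.
incomes = [15_000, 18_000, 20_000, 22_000, 25_000, 1_000_000, 2_000_000]

mean = sum(incomes) / len(incomes)

incomes_sorted = sorted(incomes)
median = incomes_sorted[len(incomes_sorted) // 2]  # middle value (odd count)

print(f"mean:   {mean:,.0f}")    # inflated by the two millionaires
print(f"median: {median:,.0f}")  # close to a typical resident's income
```

Here the mean is roughly 443,000 while the median is 22,000, which is Huff's point: an unqualified "average income" of 443,000 would badly misrepresent the neighborhood.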
The Little Figures That Are Not There
The chapter mainly discusses the process through which sample data are often picked in a way likely to prove a given result, especially in the advertising of consumer products. According to Huff, even when the statistics cited favor a particular product, they also reveal some underlying tricks. First, the sample sizes are small and undergo particular treatments that change the expected outcomes and make the test results look fascinating (Huff). According to the author, the result of any study can be steered toward the researcher's desire by hiding the prevailing conditions of the environment. Huff gives the example of tossing a coin: a coin tossed only ten times may come up heads 80% of the time, but if the tossing is repeated many times, the probability settles at 50% for both heads and tails. Therefore, to determine whether results have been collected in a valid way, the author suggests the use of a test of significance, which indicates whether a result reflects a real difference rather than mere chance. The author further discusses how incorrectly labeled axes lead to misleading charts.
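Huff's coin-toss argument is easy to reproduce. The sketch below (an illustration, not code from the book) compares a run of only ten tosses, which can stray far from 50% heads, with a run of 100,000 tosses, which settles very close to it; this is why tiny samples can be made to "prove" almost anything.

```python
import random

random.seed(0)

def heads_fraction(n_tosses):
    """Simulate n fair coin tosses and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

# A run of only 10 tosses can easily land far from 50%...
small = heads_fraction(10)

# ...while 100,000 tosses settle very close to the true 50%.
large = heads_fraction(100_000)

print(f"10 tosses:      {small:.0%} heads")
print(f"100,000 tosses: {large:.1%} heads")
```

An advertiser who reports only a lucky ten-toss run (or its product-testing equivalent) is exploiting exactly this small-sample variability.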
Much Ado about Practically Nothing
In this chapter, Huff discusses the need to express a sample result together with its range and its measurement error. He illustrates that sample results may at times be close, and the difference between them may be meaningless, since the probable error range may be far greater than the difference between the sample results. For example, the ranking of over 600 American colleges by Forbes magazine was achieved through a complex combination of different factors weighted for more or less influence (Vedder & Ewalt). The chapter also discusses the process of data collection and states that when the collected measurements are all combined, the probable error is likely to increase, an aspect Huff illustrates with the example of measuring a corn field.
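The point about combined measurements can be sketched numerically. The figures below are hypothetical (ten measurements with an assumed probable error of ±3 units each, not Huff's corn-field numbers): when the measurements are summed, the error of the total grows too, whether the individual errors merely combine at random or all fall in the same direction.

```python
import math

# Hypothetical: ten plot measurements, each with a probable error of
# +/- 3 units (illustrative numbers, not from the book).
measurements = [100.0] * 10
error_each = 3.0

total = sum(measurements)

# Worst case: all errors fall in the same direction and simply add up.
worst_case_error = error_each * len(measurements)

# Independent random errors combine in quadrature, which still grows
# with the number of measurements being combined.
independent_error = math.sqrt(len(measurements)) * error_each

print(f"total: {total:.0f} +/- {independent_error:.1f} (independent errors)")
print(f"       up to  +/- {worst_case_error:.1f} (worst case)")
```

Either way, a reported total that omits this growing error range suggests far more precision than the underlying measurements actually support, which is Huff's complaint.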