Why the standard deviation would likely not be a reliable measure of variability

  1. Explain why the standard deviation would likely not be a reliable measure of variability for a distribution of data that includes at least one extreme outlier.
  2. Suppose that you collect a random sample of 250 salaries for the salespersons employed by a large PC manufacturer. Furthermore, assume that you find that two of these salaries are considerably higher than the others in the sample. Before analyzing this data set, should you delete the unusual observations? Explain why or why not.
  3. If two variables are highly correlated, does this imply that changes in one cause changes in the other? If not, give at least one example from the real world that illustrates what else could cause a high correlation.
  4. Suppose you have data on student achievement in high school for each of many school districts. In spreadsheet format, the school district is in column A, and various student achievement measures are in columns B, C, and so on. If you find fairly low correlations (magnitudes from 0 to 0.4, say) between the variables in these achievement columns, what exactly does this mean?
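The point behind question 1 is easy to see numerically: because the standard deviation squares each deviation from the mean, a single extreme value can dominate it. Here is a minimal sketch with hypothetical salary figures (thousands of dollars); the numbers are made up for illustration:

```python
import statistics

# Hypothetical salaries (in $1000s); the last value is an extreme outlier.
salaries = [52, 55, 48, 60, 57, 50, 53, 900]

sd_with = statistics.stdev(salaries)        # dominated by the outlier
sd_without = statistics.stdev(salaries[:-1])  # typical spread of the rest

print(f"SD with outlier:    {sd_with:.1f}")   # roughly 300
print(f"SD without outlier: {sd_without:.1f}")  # roughly 4

# A robust alternative such as the median is barely affected:
print(statistics.median(salaries), statistics.median(salaries[:-1]))
```

One outlier inflates the standard deviation by nearly two orders of magnitude here, which is why robust measures (median, interquartile range) are often preferred for such distributions.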
