What Are Inferential Statistics?
- Inferential statistics refer to certain procedures that allow researchers to make inferences about a population based on data obtained from a sample.
- The term "probability," as used in research, refers to the predicted relative frequency with which a given event will occur.
Sampling Error
- The term "sampling error" refers to the variations in sample statistics that occur as a result of repeated sampling from the same population.
The Distribution of Sample Means
- A sampling distribution of means is a frequency distribution resulting from plotting the means of a very large number of samples drawn from the same population.
- The standard error of the mean is the standard deviation of a sampling distribution of means. The standard error of the difference between means is the standard deviation of a sampling distribution of differences between sample means.
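These two ideas can be illustrated with a small simulation (the population values, sample size, and number of samples below are arbitrary, not from the text): drawing many samples from one population and taking the standard deviation of their means reproduces, approximately, the familiar formula SEM = s / sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 100,000 test scores (values are illustrative only).
population = rng.normal(loc=75, scale=10, size=100_000)

n = 30               # size of each sample
num_samples = 2_000  # number of repeated samples

# Draw many samples from the same population and record each sample mean.
sample_means = np.array([
    rng.choice(population, size=n, replace=False).mean()
    for _ in range(num_samples)
])

# The standard deviation of this distribution of sample means is the
# standard error of the mean; it should be close to sigma / sqrt(n).
empirical_sem = sample_means.std(ddof=1)
theoretical_sem = population.std(ddof=0) / np.sqrt(n)
print(f"SEM from repeated sampling: {empirical_sem:.3f}")
print(f"SEM from sigma / sqrt(n):   {theoretical_sem:.3f}")

# The standard error of the difference between two independent means
# combines the two standard errors: SE_diff = sqrt(SE1**2 + SE2**2).
```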
Confidence Intervals
- A confidence interval is a region extending both above and below a sample statistic (such as a sample mean) within which a population parameter (such as the population mean) can be said to fall, with a specified probability of being in error.
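A minimal sketch of a 95% confidence interval for a population mean, based on the t distribution; the 25 scores below are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of 25 test scores (values are illustrative only).
sample = np.array([72, 68, 75, 80, 77, 69, 74, 71, 78, 73,
                   76, 70, 79, 74, 72, 81, 67, 75, 73, 76,
                   74, 70, 77, 72, 75])

n = len(sample)
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)   # estimated standard error of the mean

# Critical t value for a 95% interval (2.5% in each tail), df = n - 1.
t_crit = stats.t.ppf(0.975, df=n - 1)

ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem
print(f"Sample mean: {mean:.2f}")
print(f"95% confidence interval for the population mean: ({ci_low:.2f}, {ci_high:.2f})")
```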
Hypothesis Testing
- Statistical hypothesis testing is a way of determining the probability that an obtained sample statistic would occur, given a hypothetical population parameter.
- A research hypothesis specifies the nature of the relationship the researcher thinks exists in the population.
- The null hypothesis typically specifies that there is no relationship in the population.
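As a minimal sketch of this logic (the scores and the hypothesized population mean of 100 are assumptions for illustration, not from the text), the one-sample t-test below asks how probable the obtained sample mean would be if the null hypothesis were true.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of 10 scores; the null hypothesis states that the
# population mean is 100 (all numbers here are illustrative only).
sample = np.array([105, 99, 108, 112, 96, 107, 103, 110, 101, 104])

# One-sample t-test: how probable is a sample mean this far from 100
# if the population mean really were 100 (i.e., if the null were true)?
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

print(f"Sample mean: {sample.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means a sample like this would rarely arise through
# sampling error alone if the null hypothesis were true, so the null
# hypothesis would be rejected; a large p-value means it would be retained.
```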
Significance Levels
- The term "significance level" (or "level of significance"), as used in research, refers to the probability that a sample statistic as extreme as the one obtained would occur purely as a result of sampling error (i.e., if the null hypothesis were true).
- The significance levels most commonly used in educational research are the .05 and .01 levels.
- Statistical significance and practical significance are not necessarily the same. Just because a result is statistically significant does not mean that it is practically (i.e., educationally) significant.
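The contrast between statistical and practical significance can be made concrete with a sketch like the one below: with very large (hypothetical) samples, a trivially small difference is highly statistically significant, yet an effect-size measure such as Cohen's d (introduced here only for illustration) shows that it matters little in practical terms.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two hypothetical groups with a very small true difference but very large n
# (all numbers are illustrative only).
group_a = rng.normal(loc=100.0, scale=15, size=20_000)
group_b = rng.normal(loc=100.8, scale=15, size=20_000)

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: the difference in means relative to the pooled standard deviation.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p = {p_value:.2e}")           # almost certainly below .05 and .01 here
print(f"Cohen's d = {cohens_d:.2f}")  # around 0.05: a very small effect
```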
Tests of Statistical Significance
- A one-tailed test of significance involves the use of probabilities based on only one side (tail) of a sampling distribution because the research hypothesis is a directional hypothesis.
- A two-tailed test, on the other hand, involves the use of probabilities based on both sides of a sampling distribution because the research hypothesis is a nondirectional hypothesis.
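In recent versions of SciPy the choice between the two is made through the `alternative` argument of a test function; the data below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical "experimental" and "control" groups (values are illustrative only).
experimental = rng.normal(loc=82, scale=10, size=30)
control = rng.normal(loc=78, scale=10, size=30)

# Two-tailed test: nondirectional hypothesis ("the two means differ").
t2, p_two = stats.ttest_ind(experimental, control, alternative="two-sided")

# One-tailed test: directional hypothesis ("the experimental mean is higher").
t1, p_one = stats.ttest_ind(experimental, control, alternative="greater")

print(f"Two-tailed p = {p_two:.4f}")
print(f"One-tailed p = {p_one:.4f}")  # roughly half the two-tailed p here
```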
Parametric Tests for Quantitative Data
- A parametric statistical test requires various kinds of assumptions about the nature of the population from which the samples involved in the research study were taken.
- Some of the commonly used parametric techniques for analyzing quantitative data include the t-test for means, ANOVA, ANCOVA, MANOVA, and the t-test for r.
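A few of these techniques as they appear in SciPy, applied to made-up data: the independent-samples t-test, one-way ANOVA, and the test of a correlation coefficient (Pearson r), whose p-value is based on a t statistic. ANCOVA and MANOVA are usually run with a package such as statsmodels and are omitted from this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical scores for three groups and one pair of related measures
# (all values are illustrative only).
g1 = rng.normal(70, 10, 25)
g2 = rng.normal(75, 10, 25)
g3 = rng.normal(73, 10, 25)
x = rng.normal(50, 8, 25)
y = 0.5 * x + rng.normal(0, 6, 25)

# t-test for a difference between two means.
print(stats.ttest_ind(g1, g2))

# One-way ANOVA for differences among three or more means.
print(stats.f_oneway(g1, g2, g3))

# Test of a correlation coefficient (the p-value is based on a t statistic).
print(stats.pearsonr(x, y))
```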
Parametric Tests for Categorical Data
- The most common parametric technique for analyzing categorical data is the t-test for differences in proportions.
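A minimal sketch of a test for a difference between two proportions, using made-up counts; it is computed here as a z statistic based on the pooled proportion, which is how the test is usually carried out with samples of this size.

```python
from math import sqrt
from scipy import stats

# Hypothetical counts: 45 of 80 respondents in group 1 and 30 of 75 in
# group 2 answered "yes" (numbers are illustrative only).
x1, n1 = 45, 80
x2, n2 = 30, 75

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

# Standard error of the difference between the two proportions (pooled).
se_diff = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))

z = (p1 - p2) / se_diff
p_value = 2 * stats.norm.sf(abs(z))   # two-tailed

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```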
Nonparametric Tests for Quantitative Data
- A nonparametric statistical technique makes few, if any, assumptions about the nature of the population from which the samples in the study were taken.
- Some of the commonly used nonparametric techniques for analyzing quantitative data are the Mann-Whitney U test, the Kruskal-Wallis one-way analysis of variance, the sign test, and the Friedman two-way analysis of variance.
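Three of these tests as they appear in SciPy, applied to made-up ratings (a sign test is not shown; it can be carried out as a binomial test on the signs of paired differences).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical ratings for three groups (or three repeated conditions);
# all values are illustrative only.
a = rng.integers(1, 10, 20)
b = rng.integers(2, 11, 20)
c = rng.integers(1, 9, 20)

# Mann-Whitney U test: two independent groups.
print(stats.mannwhitneyu(a, b))

# Kruskal-Wallis one-way analysis of variance: three or more independent groups.
print(stats.kruskal(a, b, c))

# Friedman two-way analysis of variance: three or more repeated measures.
print(stats.friedmanchisquare(a, b, c))
```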
Nonparametric Tests for Categorical Data
- The chi-square test is the nonparametric technique most commonly used to analyze categorical data.
- The contingency coefficient is a descriptive statistic indicating the degree of relationship that exists between two categorical variables.
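A brief sketch using a made-up 2 x 3 table of counts: SciPy's chi2_contingency runs the chi-square test, and the contingency coefficient C = sqrt(chi2 / (chi2 + N)) can be computed from its output.

```python
import numpy as np
from scipy import stats

# Hypothetical 2 x 3 contingency table of counts (illustrative only),
# e.g., two groups crossed with three response categories.
observed = np.array([[30, 20, 10],
                     [15, 25, 20]])

chi2, p_value, dof, expected = stats.chi2_contingency(observed)

# Contingency coefficient: C = sqrt(chi2 / (chi2 + N)).
n_total = observed.sum()
contingency_coefficient = np.sqrt(chi2 / (chi2 + n_total))

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
print(f"Contingency coefficient C = {contingency_coefficient:.2f}")
```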
Power of a Statistical Test
- The power of a statistical test for a particular set of data is the likelihood of identifying a difference between population parameters when such a difference in fact exists.
- Parametric tests are generally, but not always, more powerful than nonparametric tests.
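Power can be estimated by simulation: repeatedly draw samples from populations that really do differ and count how often a test rejects the null hypothesis at the chosen significance level. The sketch below (population values, sample size, and number of trials are arbitrary) estimates the power of the t-test and of the Mann-Whitney U test on the same normally distributed data, where the parametric test is typically slightly more powerful.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n, n_trials, alpha = 30, 2_000, 0.05
true_shift = 9   # the real difference between the two population means

t_rejections = 0
u_rejections = 0

for _ in range(n_trials):
    # Draw two samples from normal populations that really do differ.
    a = rng.normal(100, 15, n)
    b = rng.normal(100 + true_shift, 15, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        t_rejections += 1
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        u_rejections += 1

# Estimated power = proportion of trials in which the null was rejected.
print(f"Estimated power of the t-test:         {t_rejections / n_trials:.2f}")
print(f"Estimated power of the Mann-Whitney U: {u_rejections / n_trials:.2f}")
```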