Tests of correlation and association determine whether two variables move together. Pearson's Correlation measures linear relationships, while the Chi-square test evaluates the independence of categorical variables (e.g., does gender affect voting preference?).
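Both tests are available in SciPy. The following is a minimal sketch of each, using made-up illustrative data (the study-hours scores and the voting counts are hypothetical, not from the text):

```python
import numpy as np
from scipy import stats

# Pearson's correlation: do two numeric variables move together linearly?
hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
exam_score = np.array([52, 55, 61, 64, 70, 72, 78, 83])  # illustrative
r, p_corr = stats.pearsonr(hours_studied, exam_score)

# Chi-square test of independence: is voting preference independent of gender?
# Rows: gender; columns: candidate A vs. candidate B (hypothetical counts).
table = np.array([[45, 55],
                  [60, 40]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"Pearson r = {r:.3f}, p = {p_corr:.4f}")
print(f"Chi-square = {chi2:.2f} (df={dof}), p = {p_chi:.4f}")
```

Note that `chi2_contingency` works on a contingency table of counts, while `pearsonr` takes the two numeric variables directly.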
The p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. Generally, a p-value less than 0.05 suggests the result is "statistically significant."

Choosing the Right Tool
Regardless of which of the 100 tests is used, almost all follow a unified logic. The Null Hypothesis (H0): the assumption that there is no effect or difference. The Alternative Hypothesis (H1): the claim that there is a significant effect.
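This H0/H1 logic can be sketched with a one-sample t-test in SciPy; the sample values, the hypothesized mean of 100, and the 0.05 threshold are all illustrative assumptions:

```python
from scipy import stats

# H0: the population mean equals 100 (no effect).
# H1: the population mean differs from 100.
sample = [102, 98, 105, 110, 99, 104, 108, 101]  # hypothetical data
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```

Every test in the catalog swaps in a different statistic, but the decision rule at the end is the same.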
The sheer volume of available tests exists because real-world data is messy. You might need a test for circular data, a test for outliers (the Grubbs' test), or a test for the equality of variances (Levene's test). Selecting the wrong test, such as using a parametric test on highly non-normal data, can lead to "Type I errors" (false positives) or "Type II errors" (false negatives).
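Of the specialized tests named above, Levene's test is a convenient one to sketch, since it often gates the choice between equal-variance and unequal-variance procedures. The two groups below are invented for illustration:

```python
from scipy import stats

# Hypothetical measurements: one tight group, one widely spread group.
group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]  # low spread
group_b = [9.5, 14.2, 11.0, 13.8, 8.9, 14.5]    # high spread

# Levene's test: H0 is that the groups have equal variances.
stat, p = stats.levene(group_a, group_b)

# A small p-value argues against procedures (like the classic
# equal-variance t-test) that assume homogeneous variances.
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
```

Running a check like this first is one concrete way to avoid the parametric-test-on-unsuitable-data mistake described above.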
Parametric tests (like the t-test or ANOVA) assume the data follows a specific distribution, usually the normal distribution. Non-parametric tests (like the Mann-Whitney U or Wilcoxon signed-rank) make fewer assumptions and are used for skewed data or small samples.
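The contrast can be seen by running both kinds of test on the same skewed data. This sketch assumes hypothetical reaction-time samples where one group contains a large outlier:

```python
from scipy import stats

# Right-skewed sample: mostly fast responses plus one slow outlier.
group_a = [1.2, 1.4, 1.3, 1.5, 1.6, 9.8]  # illustrative
group_b = [2.1, 2.4, 2.2, 2.6, 2.5, 2.3]  # illustrative

# Parametric: independent-samples t-test, which compares means and
# assumes roughly normal data.
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Non-parametric: Mann-Whitney U, which compares ranks instead of
# means and is far less sensitive to the outlier.
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```

Because the t-test's mean is dragged around by the single value 9.8 while the rank-based test is not, the two p-values can disagree on the same data.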
These are the workhorses of research. A one-sample t-test compares a group to a known value, while an independent-samples t-test compares two distinct groups. For three or more groups, the F-test (ANOVA) is used.
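The three workhorse comparisons map directly onto three SciPy calls. All of the group data below is invented for illustration:

```python
from scipy import stats

# Three hypothetical groups of measurements.
g1 = [5.1, 4.9, 5.3, 5.0, 5.2]
g2 = [5.8, 6.1, 5.9, 6.2, 6.0]
g3 = [6.9, 7.1, 7.0, 6.8, 7.2]

# One-sample t-test: compare one group to a known value (here, 5.0).
t1, p1 = stats.ttest_1samp(g1, popmean=5.0)

# Independent-samples t-test: compare two distinct groups.
t2, p2 = stats.ttest_ind(g1, g2)

# One-way ANOVA (F-test): compare three or more groups at once.
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

print(f"one-sample p = {p1:.4f}")
print(f"two-sample p = {p2:.4f}")
print(f"ANOVA F = {f_stat:.1f}, p = {p_anova:.6f}")
```

Using ANOVA rather than repeated pairwise t-tests for three or more groups keeps the overall Type I error rate under control.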
Tests like the Kolmogorov-Smirnov or Shapiro-Wilk check whether a dataset fits a theoretical distribution, which is often a prerequisite for more complex modeling.

The Logic of Hypothesis Testing