Parametric and non-parametric tests are methods used to analyze and interpret data in order to draw conclusions about a population based on a sample. These tests help researchers determine whether the differences observed between groups, or the relationships between variables in a sample, are likely to have occurred by chance or are statistically significant. In this blog post, you will learn about the most commonly used statistical tests.

Parametric tests are used when working with data that follows a specific distribution, such as the normal distribution. These tests involve making assumptions about the underlying population, such as that the data is normally distributed. Examples of parametric tests include t-tests and ANOVA.

Non-Parametric tests, on the other hand, are used when we cannot make assumptions about the underlying population distribution. These tests do not require any specific parameters to be estimated and can be used with a wider range of data types. Examples of non-parametric tests include the Wilcoxon signed-rank test and the Kruskal-Wallis test.


### Parametric tests

A parametric test assumes that the data follow a specific distribution, most commonly the normal distribution. Based on sample data, these tests are used to draw conclusions about population parameters such as the mean or variance. Typical parametric tests include:

#### t-test

A t-test is a statistical test used to evaluate whether there is a significant difference between the means of two groups. It is based on the t-distribution, a probability distribution that arises when the sample size is small or the population variance is unknown.

T-tests come in two varieties:

##### Independent-samples t-test

The independent-samples t-test is used when the two groups being compared contain different participants, i.e., the groups are independent of one another. This kind of t-test compares the means of the two groups to see whether they differ significantly.

##### Paired-samples t-test

The paired-samples t-test is used to compare two groups that are related or matched, such as when the same participants are assessed twice under different conditions. This kind of t-test examines the differences between the paired observations to evaluate whether there is a significant difference between the two conditions.

To perform a t-test, you calculate the t-statistic and compare it to a critical value from the t-distribution. The difference between the two means is deemed statistically significant if the absolute value of the calculated t-statistic exceeds the critical value.

T-tests are frequently used to compare the means of two groups and ascertain whether there is a statistically significant difference between them in a variety of study domains, including psychology, medicine, and business.
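As a concrete illustration, here is a minimal sketch of both varieties using `scipy.stats`; all of the scores below are made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical exam scores for two independent groups of students
group_a = np.array([72, 85, 78, 90, 66, 81, 75, 88])
group_b = np.array([68, 74, 70, 82, 63, 77, 71, 69])

# Independent-samples t-test: do the two group means differ?
t_ind, p_ind = stats.ttest_ind(group_a, group_b)
print(f"independent: t = {t_ind:.3f}, p = {p_ind:.3f}")

# Paired-samples t-test: the same subjects measured before and after
before = np.array([60, 65, 70, 72, 68, 75])
after = np.array([66, 70, 74, 75, 69, 80])
t_rel, p_rel = stats.ttest_rel(before, after)
print(f"paired: t = {t_rel:.3f}, p = {p_rel:.3f}")
```

If the p-value falls below the chosen significance level (commonly 0.05), the difference between the means is considered statistically significant.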

#### Analysis of variance

Analysis of variance (ANOVA) is a statistical test used to compare the means of three or more groups. ANOVA tests whether there is a significant difference among the group means; if there is, follow-up (post-hoc) tests are then used to identify which groups differ from one another.

ANOVA tests come in a variety of forms, including:

- One-way ANOVA: used when there is one independent variable (or factor) with three or more levels. For instance, a one-way ANOVA could compare the mean test scores of students taught with different methods (e.g. lecture, discussion, online).
- Two-way ANOVA: used when two independent variables (or factors) are examined simultaneously. For instance, a two-way ANOVA could compare the mean test scores of students taught with different methods (e.g. lecture, discussion, online) at different levels of experience (e.g. novice, intermediate, expert).
- Repeated-measures ANOVA: used when the same subjects are measured under several conditions or at several points in time. For instance, a repeated-measures ANOVA could compare participants' mean test scores before and after a training programme.

To perform an ANOVA, you compute the F-statistic and compare it to a critical value from the F-distribution. The difference between the group means is regarded as statistically significant if the calculated F-value exceeds the critical value.

ANOVA is an effective tool for comparing means across several groups, but its results are reliable only when certain assumptions are met, such as normality and homogeneity of variance.
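A one-way ANOVA can be sketched in a few lines with `scipy.stats.f_oneway`; the scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical test scores under three teaching methods
lecture = np.array([70, 72, 68, 75, 71])
discussion = np.array([78, 82, 80, 76, 79])
online = np.array([74, 73, 77, 72, 75])

# One-way ANOVA: is there a significant difference among the three means?
f_stat, p_value = stats.f_oneway(lecture, discussion, online)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

A small p-value only tells you that at least one group mean differs; post-hoc tests (e.g. Tukey's HSD) are then needed to identify which groups.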

#### Regression analysis

Regression analysis is a statistical technique used to examine the relationship between one or more independent variables and a dependent variable. It is frequently employed in the social sciences, psychology, and economics to evaluate data and generate predictions.

Linear regression is a form of regression analysis that assumes a linear relationship between the independent variable(s) and the dependent variable. In other words, it assumes that changes in the independent variable(s) produce proportional changes in the dependent variable.

In a linear regression analysis, the slope and intercept of the line that best fits the data are estimated. The slope gives the change in the dependent variable for a one-unit increase in the independent variable, while the intercept gives the value of the dependent variable when the independent variable is zero. The relationship between the independent and dependent variables is described by a linear equation:

$$Y=\beta_0+\beta_1X+\epsilon$$
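To make this concrete, here is a minimal sketch of fitting that equation by ordinary least squares with `scipy.stats.linregress`, using made-up education/income figures:

```python
import numpy as np
from scipy import stats

# Hypothetical data: years of education (X) vs. income in $1000s (Y)
x = np.array([10, 12, 12, 14, 16, 16, 18, 20])
y = np.array([30, 34, 38, 41, 48, 50, 56, 60])

# Estimate the slope (beta_1) and intercept (beta_0) of the best-fit line
result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
print(f"r^2 = {result.rvalue ** 2:.3f}")
```

The fitted slope is the estimated change in income for each additional year of education, under the linearity assumption above.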

Regression analysis comes in several forms, including linear regression, multiple regression, polynomial regression, and logistic regression. Linear regression, the most widely used, assumes a linear relationship between the independent and dependent variables. Polynomial regression is employed when that relationship is not linear, while logistic regression is used when the dependent variable is binary (i.e., has two possible outcomes).

Regression analysis can be used in many sectors, including economics, finance, marketing, psychology, and healthcare, to evaluate and understand the relationships between variables and to make predictions about future outcomes.

#### Pearson correlation

Pearson correlation is a statistical measure of the strength and direction of the linear relationship between two continuous variables. It is denoted by “r” and takes a value between -1 and 1: a value of -1 represents a perfect negative correlation, a value of 1 a perfect positive correlation, and a value of 0 no linear correlation.

Pearson correlation is based on the covariance between the two variables, which gauges how much the variables vary together. The covariance is then normalised by the product of the standard deviations of the two variables; this normalisation makes the Pearson correlation scale-independent.
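In symbols, this normalisation gives:

$$r=\frac{\operatorname{cov}(X,Y)}{\sigma_X\,\sigma_Y}$$

where cov(X, Y) is the covariance and σX and σY are the standard deviations of the two variables.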

Pearson correlation is frequently used when analysing the associations between variables in research. For instance, it can be used to investigate the relationship between age and cognitive ability, or between income and education. It is also used to identify potential confounding variables, i.e. variables that could affect the relationship between the two variables under study.

Pearson correlation measures only the strength of the linear relationship between two variables; it does not capture non-linear relationships. Moreover, a strong correlation between two variables does not necessarily indicate that one variable causes the other: correlation does not imply causation.
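As an example, here is a minimal sketch using `scipy.stats.pearsonr` on hypothetical hours-studied vs. exam-score data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours studied vs. exam score
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
scores = np.array([52, 55, 61, 64, 70, 72, 79, 83])

# Pearson correlation coefficient and its p-value
r, p_value = stats.pearsonr(hours, scores)
print(f"r = {r:.3f}, p = {p_value:.5f}")
```

An r close to 1 here reflects a strong positive linear relationship, but says nothing about whether studying causes higher scores.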

It’s vital to remember that parametric tests produce accurate results only when certain conditions are met, such as normality and homogeneity of variance. If these assumptions do not hold, non-parametric tests may be employed instead.

### Non-parametric tests

Non-parametric statistical tests do not rely on any assumptions about the distribution of the data being studied. They are employed when the data do not meet the assumptions required for parametric tests, such as normality or homogeneity of variance.

Common nonparametric tests are as follows:

#### Mann-Whitney U test

The Mann-Whitney U test, also known as the Wilcoxon rank-sum test, is a non-parametric statistical test used to compare differences between two independent groups. It is employed when the data do not meet the normality or homogeneity-of-variance assumptions required for parametric tests such as the t-test.

The Mann-Whitney U test works by comparing the ranks of the data across the two groups. The data from both groups are ranked together, and the sum of the ranks for each group is computed. The test statistic is then derived from these rank sums.

The null hypothesis for the Mann-Whitney U test is that there is no difference between the two groups, while the alternative hypothesis is that there is a difference. The test statistic is compared to a critical value from a table or calculated using software, and the p-value is then calculated to determine the statistical significance of the results.

The Mann-Whitney U test is frequently employed in several disciplines, including psychology, the social sciences, and medicine. It can be used to compare variables measured on ordinal, interval, or ratio scales, such as the median income of two groups or the outcomes of two different treatments for a group of patients.
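The test can be sketched with `scipy.stats.mannwhitneyu`; the skewed outcome scores below are made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed outcome scores for two independent groups
treatment = np.array([12, 15, 14, 19, 22, 30, 45])
control = np.array([8, 10, 9, 11, 13, 12, 16])

# Mann-Whitney U test (two-sided): do the two distributions differ?
u_stat, p_value = stats.mannwhitneyu(treatment, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

Because the test works on ranks, the outlier (45) does not distort the result the way it would distort a t-test on the raw values.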

#### Wilcoxon signed-rank test

The Wilcoxon signed-rank test is a non-parametric statistical test used to compare the differences between two related samples. It is utilised when the data are paired and do not meet the normality assumption.

The Wilcoxon signed-rank test works on the differences between the paired observations. It first computes the differences, then ranks their absolute values. The test statistic is calculated from the rank sums of the positive and negative differences. The null hypothesis is that the median difference is zero; the alternative hypothesis is that it is not.

To establish the statistical significance of the results, the test statistic is compared to a critical value obtained from a table or computed using software. If the p-value is smaller than the chosen significance level, the null hypothesis is rejected and it is concluded that there is a significant difference between the two related samples.

The Wilcoxon signed-rank test is frequently employed in a variety of disciplines, including the social sciences, psychology, and medicine. It can be used to compare ordinal, interval, or ratio variables, such as measurements on a group of patients before and after a treatment is administered.
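A before/after comparison can be sketched with `scipy.stats.wilcoxon`; the pain scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical pain scores for the same patients before and after treatment
before = np.array([7, 6, 8, 5, 9, 7, 6, 8])
after = np.array([5, 4, 6, 5, 7, 6, 4, 5])

# Wilcoxon signed-rank test on the paired differences
w_stat, p_value = stats.wilcoxon(before, after)
print(f"W = {w_stat}, p = {p_value:.4f}")
```

Note that by default `wilcoxon` discards zero differences (here the pair 5/5) before ranking.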

#### Kruskal-Wallis test

The Kruskal-Wallis test is a non-parametric statistical test used to compare the medians of three or more independent groups; it extends the Mann-Whitney U test to more than two groups. It is employed when the data are ordinal or skewed, or when the data do not meet the assumption of normality.

For the Kruskal-Wallis test, the data from all groups are ranked together, and the sum of the ranks for each group is computed. The test statistic is then calculated from the discrepancy between the observed and expected rank sums. The null hypothesis is that there is no difference between the group medians; the alternative hypothesis is that there is a difference.

To establish the statistical significance of the results, the test statistic is compared to a critical value obtained from a table or computed using software. If the p-value is smaller than the chosen significance level, the null hypothesis is rejected and it is concluded that there is a significant difference between the medians of the groups.

The Kruskal-Wallis test is frequently employed in several disciplines, including psychology, the social sciences, and medicine. It can be used to compare ordinal or interval variables, such as the median income of various groups or the effect of various treatments on a group of patients. If the Kruskal-Wallis test indicates a significant difference between the groups, post-hoc tests such as the Mann-Whitney U test or Dunn’s test can be used to identify which groups differ.
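For example, a minimal sketch with `scipy.stats.kruskal` on hypothetical income data for three groups:

```python
import numpy as np
from scipy import stats

# Hypothetical incomes (in $1000s) for three independent groups
group1 = np.array([25, 28, 30, 32, 35])
group2 = np.array([40, 42, 45, 48, 50])
group3 = np.array([30, 33, 36, 38, 41])

# Kruskal-Wallis H test: do the group medians differ?
h_stat, p_value = stats.kruskal(group1, group2, group3)
print(f"H = {h_stat:.3f}, p = {p_value:.4f}")
```

As with ANOVA, a significant result here would call for post-hoc pairwise comparisons to locate the differing groups.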

#### Friedman test

The Friedman test is a non-parametric statistical test used to compare the medians of three or more related groups. It is employed when the data are ordinal or skewed, or when the data do not meet the assumption of normality.

For the Friedman test, each participant’s data are ranked across the conditions, and the sum of the ranks for each condition is computed. The test statistic is then calculated from the discrepancy between the observed and expected rank sums. The null hypothesis is that there is no difference between the medians of the related groups; the alternative hypothesis is that there is a difference.

To assess whether the results are statistically significant, the test statistic is compared to a critical value taken from a table or computed using software, and the p-value is obtained. If the p-value is smaller than the chosen significance level, the null hypothesis is rejected and it is concluded that there is a significant difference between the medians of the related groups.

The Friedman test is frequently applied in a variety of disciplines, including psychology, the social sciences, and medicine. It can be used to compare ordinal or interval variables, such as the effectiveness of various therapies on a group of patients over time, or the performance of the same group on various tasks. If the Friedman test reveals a difference, post-hoc tests such as the Wilcoxon signed-rank test can be used to identify which related groups differ significantly.
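The test can be sketched with `scipy.stats.friedmanchisquare`; the ratings below, given by the same six subjects under three conditions, are made up for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical ratings given by the same six subjects under three conditions
cond_a = np.array([4, 5, 3, 4, 5, 4])
cond_b = np.array([2, 3, 2, 3, 2, 3])
cond_c = np.array([3, 4, 4, 3, 4, 5])

# Friedman test: do the medians of the related conditions differ?
chi2, p_value = stats.friedmanchisquare(cond_a, cond_b, cond_c)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}")
```

Each array holds one condition, with positions aligned by subject, mirroring the within-subjects design the test requires.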

Non-parametric tests are generally less powerful than parametric tests, meaning they may be less able to detect significant differences when such differences do exist. However, they are more robust to outliers and other violations of assumptions, which makes them a valuable tool in the many circumstances where the assumptions of parametric tests are not met.