With inferential statistics, we try to reach conclusions that extend beyond the immediate data. For example, we use inferential statistics to try to infer from sample data what a population might think. Or we use inferential statistics to judge the probability that an observed difference between groups is a dependable one, or one that might have happened by chance in this study. Thus, we use inferential statistics to make inferences from our data to more general conditions; we use descriptive statistics simply to describe what happens in our data.

Perhaps one of the simplest inferential tests is used when you want to compare the average performance of two groups on a single measure to see if there is a difference. You may want to know whether eighth-grade boys and girls differ in math test scores, or whether a program group differs from a control group on an outcome measure. Whenever you want to compare the average performance of two groups, you should consider the t-test for differences between groups.
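For example, here is a minimal sketch of such a comparison in Python, using scipy’s independent-samples t-test; the scores are invented purely for illustration:

```python
# A minimal sketch: comparing mean math scores of two groups with an
# independent-samples t-test. The scores below are made-up illustration data.
from scipy import stats

boys_scores = [72, 85, 78, 90, 66, 81, 77, 88]
girls_scores = [80, 79, 92, 74, 86, 83, 91, 70]

# Two-sided test of the null hypothesis that both groups have equal means
t_stat, p_value = stats.ttest_ind(boys_scores, girls_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value (e.g., below 0.05) suggests the observed difference is
# unlikely to have occurred by chance alone.
```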

The General Linear Model

Most of the major inferential statistics come from a general family of statistical models known as the General Linear Model. This includes the t-test, analysis of variance (ANOVA), analysis of covariance (ANCOVA), regression analysis, and many of the multivariate methods such as factor analysis, multidimensional scaling, cluster analysis, and discriminant function analysis. Given the importance of the General Linear Model, it is a good idea for any serious social researcher to become familiar with its workings. The discussion of the General Linear Model here is very elementary and considers only the simplest straight-line model. However, it will familiarize you with the idea of the linear model and help prepare you for the more complex analyses described below.

One of the keys to understanding how groups are compared lies in the notion of the “dummy” variable. The name does not suggest that we are using unintelligent variables or, worse, that the analyst who uses them is a “dummy”! Perhaps these variables would be better described as “proxy” variables. Essentially, a dummy variable is one that uses discrete numbers, usually 0 and 1, to represent the different groups in a study. Dummy variables are a simple idea that enables you to do quite complicated things. For example, by including a single dummy variable in a model, I can model two separate lines (one for each treatment group) with a single equation. A brief sketch of how this works follows.
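Here is a minimal sketch (with invented data and my own variable names) showing how one regression equation with a 0/1 dummy variable encodes two group-specific lines:

```python
# A minimal sketch (invented data): one regression equation that encodes two
# group-specific lines via a 0/1 dummy variable z.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)       # e.g., a pretest score
z = rng.integers(0, 2, n)       # dummy variable: 0 = control, 1 = treatment
# True model: control line y = 2 + 1.0*x; treatment line y = 5 + 1.5*x
y = 2 + 1.0 * x + 3 * z + 0.5 * x * z + rng.normal(0, 1, n)

# y = b0 + b1*x + b2*z + b3*(x*z): z shifts the intercept, x*z tilts the slope
X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # approximately [2, 1.0, 3, 0.5]
```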

One of the most important analyses in evaluating program outcomes is comparing the program group with the comparison group on the outcome variable(s). Research designs fall into two major types: experimental and quasi-experimental. Because the analyses differ for each, they are presented separately.

Experimental analysis

The simple two-group posttest-only randomized experiment is usually analyzed with the simple t-test or one-way ANOVA. Factorial experimental designs are usually analyzed with the Analysis of Variance (ANOVA) model. Randomized block designs use a special form of the ANOVA blocking model that uses dummy-coded variables to represent the blocks. The Analysis of Covariance experimental design uses, naturally, the Analysis of Covariance statistical model.
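As an illustration, here is a minimal one-way ANOVA sketch in Python, with invented outcome scores for three randomly assigned groups:

```python
# A minimal sketch (invented outcome data): a posttest-only randomized
# experiment with three groups, analyzed with one-way ANOVA.
from scipy import stats

group_a = [23, 25, 29, 31, 27]
group_b = [30, 33, 29, 35, 32]
group_c = [22, 20, 26, 24, 25]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# With only two groups, f_oneway and the t-test agree: F equals t squared.
```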

Quasi-experimental analysis

Quasi-experimental designs differ from experimental designs in that they do not use random assignment to place units (e.g., people) into program groups. The lack of random assignment in these designs tends to complicate their analysis considerably. For example, to analyze the nonequivalent groups design (NEGD) we have to adjust the pretest scores for measurement error in what is often called a reliability-corrected analysis of covariance (ANCOVA) model.
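One common form of this correction (a minimal sketch, assuming a known or estimated reliability coefficient such as Cronbach’s alpha) pulls each pretest score toward the pretest mean in proportion to the unreliability of the measure:

```python
# A minimal sketch of the reliability correction for pretest scores:
#   x_adj = x_mean + reliability * (x - x_mean)
import numpy as np

pretest = np.array([45.0, 52.0, 38.0, 61.0, 49.0])  # invented scores
reliability = 0.80                                  # assumed reliability estimate

x_adj = pretest.mean() + reliability * (pretest - pretest.mean())
print(x_adj)
# The adjusted scores (not the raw pretest) are then used as the
# covariate in the ANCOVA model.
```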

In the regression-discontinuity (RD) design, we must be especially concerned about curvilinearity and model misspecification. Consequently, we tend to use a conservative analysis approach based on polynomial regression that starts by overfitting the likely true function and then reduces the model based on the results. The regression point displacement (RPD) design has only a single treated unit. Nevertheless, the analysis of the RPD design is based directly on the traditional ANCOVA model.
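A rough sketch of that overfit-then-reduce strategy, with invented data and variable names of my own, might look like this:

```python
# A rough sketch of the conservative RD strategy: start with a deliberately
# over-specified polynomial model, then drop higher-order terms that add
# nothing. All data and names here are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
pre = rng.uniform(0, 100, 200)
cutoff = 50.0
treated = (pre >= cutoff).astype(float)   # assignment determined by the cutoff
pre_c = pre - cutoff                      # center the pretest at the cutoff
post = 10 + 0.5 * pre_c + 5 * treated + rng.normal(0, 2, 200)

# Over-specified model: polynomial terms and their interactions with treatment
X = np.column_stack([pre_c, pre_c**2, treated, treated * pre_c, treated * pre_c**2])
fit = sm.OLS(post, sm.add_constant(X)).fit()
print(fit.params.round(2))
print(fit.pvalues.round(3))  # inspect which higher-order terms are negligible
```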

When you have investigated these various analytic models, you will see that they all come from the same family: the General Linear Model. Understanding that model will help you navigate the complexities of data analysis in social and applied research contexts.

Descriptive statistics vs. inferential statistics

Descriptive statistics allow you to describe a dataset, while inferential statistics allow you to make inferences based on a dataset.

Descriptive statistics

Using descriptive statistics, you can report the characteristics of your data (each is illustrated in the short sketch after this list):

The distribution refers to the frequency of each value.

Central tendency refers to the averages of the values.

Variability refers to the dispersion of values.
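For instance, a minimal sketch with a small invented dataset:

```python
# A minimal sketch: reporting distribution, central tendency, and
# variability for a small invented dataset.
import numpy as np
from collections import Counter

scores = [3, 4, 4, 5, 5, 5, 6, 7, 7, 9]

print(Counter(scores))                         # distribution: frequency of each value
print(np.mean(scores), np.median(scores))      # central tendency
print(np.var(scores, ddof=1), np.std(scores, ddof=1))  # variability (sample)
```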

In descriptive statistics there is no uncertainty: statistics accurately describe the data that has been collected. If data are collected from an entire population, these descriptive statistics can be directly compared with those of other populations.

Example of Descriptive Statistics

You collect data on SAT scores from all 11th graders in a school over three years.

You can use descriptive statistics to get a quick overview of school results in those years. You can then directly compare the average SAT score with the average scores of other schools.

Inferential statistics

Most of the time, you can only get data from samples, because it is too difficult or expensive to collect data from the entire population you are interested in.

While descriptive statistics can only summarize the characteristics of a sample, inferential statistics uses the sample to make reasonable guesses about the general population.

With inferential statistics, it is important to use random and unbiased sampling methods. If the sample is not representative of the population, valid statistical inferences cannot be made.

Inferential Statistics Example

You randomly select a sample of 11th graders in your state and collect data about their SAT scores and other characteristics.

You can use inferential statistics to make estimates and test hypotheses about the entire population of 11th graders in the state based on data from your sample.

Sampling error in inferential statistics

Since the size of a sample is always smaller than the size of the population, a part of the population is not captured by the sample data. This creates a sampling error, which is the difference between the true population values (called parameters) and the measured sample values (called statistics).

Sampling error occurs every time a sample is used, even if it is random and unbiased. For this reason, there is always some uncertainty in inferential statistics. However, the use of probabilistic sampling methods reduces this uncertainty.

Estimation of population parameters from sample statistics

The characteristics of the samples and populations are described by numbers called statistics and parameters:

A statistic is a measure that describes the sample (for example, the sample mean).

A parameter is a measure that describes the entire population (for example, the population mean).

Sampling error is the difference between a parameter and the corresponding statistic. As in most cases the actual population parameter is not known, inferential statistics can be used to estimate these parameters so as to take into account sampling error.

There are two important types of estimates that can be made about the population: point estimates and interval estimates.

A point estimate is a single-value estimate of a parameter. For example, the sample mean is a point estimate of the population mean.

An interval estimate provides a range of values in which the parameter is expected to be located. A confidence interval is the most common type of interval estimation.

Both types of estimates are important to get a clear idea of where a parameter is likely to be found.

Confidence intervals

A confidence interval uses variability around a statistic to obtain an interval estimate for a parameter. Confidence intervals are useful for estimating parameters because they take into account sampling error.

While a point estimate gives you a single precise value for the parameter you are interested in, a confidence interval tells you how much uncertainty surrounds that point estimate. The two are best used together.

Each confidence interval is associated with a confidence level. A confidence level tells you the percentage of intervals that would contain the parameter if the study were repeated many times.

A 95% confidence interval means that if you repeated the study with a new sample in exactly the same way 100 times, you could expect the computed interval to contain the true parameter in about 95 of those replications.

Although we can say that the interval will contain the parameter a certain percentage of the time, we cannot say for certain whether any particular interval contains the actual population parameter. This is because the real value of the population parameter cannot be known without collecting data from the entire population.

However, with random sampling and an adequate sample size, the confidence interval can reasonably be expected to contain the parameter a certain percentage of the time.

Example of Point Estimate and Confidence Interval

You want to know the average number of paid vacation days that employees of an international company receive. After collecting survey responses from a random sample, you calculate a point estimate and a confidence interval.

Your point estimate of the population mean of paid vacation days is the sample mean: 19 days of paid vacation.

With random sampling, a 95% confidence interval [16 – 22] means you can be reasonably sure that the average number of vacation days is between 16 and 22.
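A minimal sketch of how such a point estimate and a t-based confidence interval could be computed (the survey responses below are invented):

```python
# A minimal sketch: point estimate and 95% t-based confidence interval
# for the mean, using invented vacation-day survey responses.
import numpy as np
from scipy import stats

days = np.array([15, 22, 18, 20, 17, 23, 19, 21, 16, 19])

point_estimate = days.mean()                 # sample mean: 19.0
ci_low, ci_high = stats.t.interval(
    0.95,                                    # confidence level
    df=len(days) - 1,
    loc=point_estimate,
    scale=stats.sem(days),                   # standard error of the mean
)
print(point_estimate, (ci_low, ci_high))
```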

Hypothesis testing

Hypothesis testing is a formal process of statistical analysis that uses inferential statistics. The goal of hypothesis testing is to compare populations or assess relationships between variables using samples.

Hypotheses, or predictions, are tested using statistical tests. Statistical tests also estimate sampling error in order to make valid inferences.

Statistical tests can be parametric or non-parametric. Parametric tests are considered more statistically powerful because they are more likely to detect an effect if one exists.

To achieve this power, parametric tests make assumptions that include the following:

The population from which the sample comes follows a normal distribution of the scores

The sample size is large enough to represent the population

The variances, a measure of dispersion, of each group being compared are similar

When the data fail to meet any of these assumptions, non-parametric tests are more appropriate. Non-parametric tests are known as “distribution-free tests” because they assume nothing about the distribution of the population data.
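As an illustration, here is a minimal sketch of checking two of these assumptions on invented data, using the Shapiro-Wilk test for normality and Levene’s test for equality of variances:

```python
# A minimal sketch: checking two parametric assumptions before choosing a test.
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.2, 12.9, 13.7, 12.5]  # invented data
group_b = [15.0, 14.1, 16.3, 15.8, 14.9, 15.5, 16.0]

print(stats.shapiro(group_a))          # p > 0.05: no evidence against normality
print(stats.shapiro(group_b))
print(stats.levene(group_a, group_b))  # p > 0.05: variances look similar
# If these checks fail, a non-parametric test is the safer choice.
```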

Statistical Tests

Statistical tests come in three forms: comparison, correlation, and regression tests.

Comparison tests

Comparison tests assess whether there are differences in the means, medians, or score rankings of two or more groups.

To decide which test fits your goal, consider whether your data meets the conditions for parametric testing, the number of samples, and the measurement levels of your variables.

Means can only be computed for interval or ratio data, while medians and rankings are more appropriate measures for ordinal data.
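For example, with ordinal ratings a rank-based comparison such as the Mann-Whitney U test is a safer choice than a t-test; here is a minimal sketch with invented ratings:

```python
# A minimal sketch: comparing two groups of ordinal ratings (1-5 scale)
# with the rank-based Mann-Whitney U test.
from scipy import stats

ratings_a = [3, 4, 2, 5, 4, 3, 4, 2]   # invented ordinal ratings
ratings_b = [4, 5, 5, 3, 4, 5, 4, 5]

u_stat, p_value = stats.mannwhitneyu(ratings_a, ratings_b)
print(f"U = {u_stat}, p = {p_value:.3f}")
```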

Correlation tests

Correlation tests determine the degree of association between two variables.

Although Pearson’s r is the most statistically powerful test, Spearman’s rho is suitable for interval and ratio variables when the data do not follow a normal distribution.

The chi-square test of independence is the only one of these that can be used with nominal variables.
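A minimal sketch of these three tests on invented data:

```python
# A minimal sketch: Pearson, Spearman, and chi-square tests of association.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # invented paired measurements
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.3])

print(stats.pearsonr(x, y))    # parametric: linear association
print(stats.spearmanr(x, y))   # rank-based: monotonic association

# Chi-square test of independence for two nominal variables
table = np.array([[20, 15], [10, 25]])          # invented contingency table
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
```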

Regression tests

Regression tests demonstrate whether changes in predictor variables cause changes in an outcome variable. You can decide which regression test to use based on the number and types of variables you have as predictors and outcomes.

Most commonly used regression tests are parametric. If your data are not normally distributed, you can apply data transformations.

Data transformations help make your data normally distributed using mathematical operations, such as taking the square root of each value.
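For example, here is a minimal sketch of a simple linear regression and a square-root transformation, on invented data:

```python
# A minimal sketch: simple linear regression on invented data, plus a
# square-root transformation for a skewed outcome.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.2, 4.1, 6.3, 7.9, 10.2, 11.8, 14.1, 16.0])

result = stats.linregress(x, y)
print(result.slope, result.intercept, result.pvalue)

skewed = np.array([1.0, 2.0, 2.5, 3.0, 9.0, 16.0, 25.0, 36.0])
transformed = np.sqrt(skewed)   # often pulls in a long right tail
```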


