The field of statistics exists because it is generally impossible to collect data from every individual of interest (the population). Our only option is to collect data from a subset (a sample) of those individuals, yet what we really want to know is the truth about the population. Understanding the concepts of parametric and non-parametric statistics is essential for this. Quantities such as means, standard deviations, and proportions are all important values, and they are called “parameters” when we speak of a population.

Since we generally cannot obtain data for the entire population, we cannot know the parameter values for that population. However, we can calculate estimates of these quantities for our sample. When calculated from sample data, these quantities are called statistics. Thus, a statistic estimates a parameter.
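As a quick illustration, the sketch below simulates a hypothetical population, draws a sample from it, and shows the sample mean (a statistic) acting as an estimate of the population mean (a parameter). The population values, sample size, and random seed are all invented for this example.

```python
# Minimal sketch: a statistic (sample mean) estimates a parameter (population mean).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "population": 1,000,000 values we normally could not observe in full.
population = rng.normal(loc=170.0, scale=10.0, size=1_000_000)
population_mean = population.mean()          # the parameter (usually unknown)

# In practice we only see a sample drawn from that population.
sample = rng.choice(population, size=200, replace=False)
sample_mean = sample.mean()                  # the statistic that estimates the parameter

print(f"Parameter (population mean): {population_mean:.2f}")
print(f"Statistic (sample mean):     {sample_mean:.2f}")
```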

Understanding the Parametric and Non-Parametric concepts

Several fundamental statistical concepts are a useful prerequisite for understanding both terms. These foundations include random variables, probability distributions, parameters, population, sample, sampling distributions, and the central limit theorem. Parametric statistical procedures rely on assumptions about the shape of the distribution of the underlying population and about the parameters (i.e., means and standard deviations) of that assumed distribution. Nonparametric statistical procedures rely on few or no assumptions about the shape or parameters of the distribution of the population from which the sample was drawn.

Definition of both Terms

If you have ever discussed an analysis plan with a statistician, you have probably heard both terms but may not have understood exactly what they mean. Broadly, a statistical procedure can be called nonparametric if its desirable properties hold, at least to a reasonable approximation, under assumptions that are quite general in nature.

For most practical purposes, nonparametric analysis can be defined as a class of statistical procedures that do not rely on assumptions about the shape or form of the probability distribution from which the data were drawn. Parametric studies, in contrast, are often used to analyze the performance of a system across different parameter sets (sensitivity analysis), and a natural follow-up question is whether one can go further and find the best configuration directly. Optimization methods can help by searching the design space efficiently and automatically to find an optimal solution.

Understanding Non-Parametric Statistics

Nonparametric statistics refers to statistical methods in which the data are not assumed to come from prescribed models determined by a small number of parameters; examples of such models include the normal distribution model and the linear regression model. Nonparametric statistics often work with ordinal data, meaning data based not on numeric magnitudes but on a ranking or ordering of categories. For example, a survey recording consumer preferences on a scale ranging from “dislike” to “like” produces ordinal data.
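As a hedged sketch of how ordinal survey data can be analysed nonparametrically, the example below applies Spearman's rank correlation to invented Likert-style ratings (1 = "dislike" … 5 = "like"). The ratings and product versions are assumptions made only for illustration.

```python
# Rank-based (nonparametric) analysis of hypothetical ordinal survey ratings.
from scipy.stats import spearmanr

# Invented ordinal ratings from 8 consumers for two product versions.
ratings_version_a = [5, 4, 4, 3, 5, 2, 4, 3]
ratings_version_b = [4, 4, 3, 3, 5, 1, 4, 2]

# Spearman's correlation works on ranks, so it only uses the ordering of the
# responses, not any assumption that the 1-5 scale has equal spacing.
rho, p_value = spearmanr(ratings_version_a, ratings_version_b)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```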

Nonparametric statistics make no assumptions about the sample size or about whether the observed data are quantitative. In particular, they do not assume that the data are drawn from a normal distribution; instead, the shape of the distribution is estimated from the data. While there are many situations in which normality can reasonably be assumed, there are also scenarios in which the actual data-generating process is far from normally distributed.

What does Non-Parametric Statistics include?

It includes descriptive statistics, statistical models, inference, and statistical tests. The model structure of nonparametric models is not specified a priori but is determined from the data. This does not mean that such models are completely devoid of parameters, but rather that the number and nature of the parameters are flexible and not fixed in advance. A histogram is an example of a nonparametric estimate of a probability distribution.
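A minimal sketch of the histogram idea is shown below: the data are simulated (and deliberately non-normal), and the histogram estimates the density without assuming any particular distributional form.

```python
# A histogram as a nonparametric estimate of a probability density.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)   # clearly non-normal simulated data

# With density=True, np.histogram returns bin heights that integrate to 1,
# i.e. a piecewise-constant estimate of the probability density.
densities, bin_edges = np.histogram(data, bins=20, density=True)

for left, right, d in zip(bin_edges[:-1], bin_edges[1:], densities):
    print(f"[{left:5.2f}, {right:5.2f}) density ~ {d:.3f}")
```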

Special Considerations

Non-parametric statistics have gained recognition because of their ease of use. As the need for strict parametric assumptions is relaxed, the methods become applicable to a greater variety of data and tests. This type of statistics can be used without knowing the mean, sample size, standard deviation, or an estimate of any other related parameter when such information is unavailable. Because non-parametric statistics make fewer assumptions about the sample data, their application has a broader scope than that of parametric statistics. However, in cases where a parametric test would be appropriate, nonparametric methods are less efficient. This is because non-parametric statistics discard some of the information available in the data (for example, by working with ranks rather than the observed values), unlike parametric statistics.
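The hedged sketch below applies a parametric t-test and a nonparametric Mann-Whitney U test to the same simulated normal data; the group means, sizes, and seed are invented for illustration. Because the Mann-Whitney test only uses the ranks of the observations, it discards some information and is somewhat less efficient than the t-test when the data really are normal.

```python
# Parametric vs nonparametric test applied to the same simulated (normal) data.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

t_stat, t_p = ttest_ind(group_a, group_b)        # parametric: uses the values themselves
u_stat, u_p = mannwhitneyu(group_a, group_b)     # nonparametric: uses only the ranks

print(f"t-test:        p = {t_p:.4f}")
print(f"Mann-Whitney:  p = {u_p:.4f}")
```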

Understanding Parametric Statistics

Parametric analysis, also called sensitivity analysis, is the study of how different geometric or physical parameters, or both, influence the solution of a problem. It is used to evaluate a range of values for an intervention (the independent variable). For example, to determine the range of values for “waiting time”, we would carry out the analysis using intervals of 1 minute, 5 minutes, 10 minutes, and so on. The analysis can be thought of as a search for “how much” of the intervention is needed for it to be effective.
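A minimal sketch of such a sweep is shown below. The `outcome` function is purely hypothetical; it stands in for whatever model or experiment measures the effect of the intervention at each waiting-time setting.

```python
# Hypothetical parametric (sensitivity) sweep over the "waiting time" variable.
def outcome(wait_minutes: float) -> float:
    # Invented response: effectiveness rises with waiting time but saturates.
    return 100 * wait_minutes / (wait_minutes + 5)

# Sweep the independent variable over the values of interest.
for wait in [1, 5, 10, 20, 30]:
    print(f"waiting time = {wait:2d} min -> effect = {outcome(wait):5.1f}")
```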

What does Parametric Statistics include?

It includes parameters such as the mean, standard deviation, Pearson’s correlation, variance, and so on. This form of statistics uses the observed data to estimate the parameters of an assumed distribution. Under parametric statistics, the data are often assumed to come from a normal distribution with unknown parameters μ (the population mean) and σ² (the population variance), which are then estimated using the sample mean and the sample variance. Researchers have generally used this kind of analysis of the raw data to evaluate experimental results for statistical significance. When the data are not normally distributed, a data transformation or a nonparametric analysis is often recommended.
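The hedged sketch below estimates μ and σ² from a simulated sample and checks the normality assumption before relying on a parametric analysis. The data, the Shapiro-Wilk test, and the log transform are assumptions chosen only for illustration; other checks and transformations are equally valid.

```python
# Estimating the normal parameters from a sample and checking the normality assumption.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(7)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=100)   # skewed, non-normal data

mu_hat = sample.mean()                # estimate of the population mean
sigma2_hat = sample.var(ddof=1)       # unbiased estimate of the population variance
print(f"mu_hat = {mu_hat:.3f}, sigma2_hat = {sigma2_hat:.3f}")

# Shapiro-Wilk test of normality: a small p-value suggests the data are not normal.
_, p_raw = shapiro(sample)
_, p_log = shapiro(np.log(sample))    # a log transform often helps with right skew
print(f"normality p-value (raw) = {p_raw:.4f}, (log-transformed) = {p_log:.4f}")
```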

Simple explanation of this type of analysis

An easy way to understand this is to think of the example of baking a cake. The independent variable, the one we are going to manipulate, is the temperature. What happens to the cake if we bake it at 200 degrees? What happens if we bake it at 300, 400, or 500 degrees? As we change the temperature, we discover the differential effects of this change on the finished cake. At one end of the continuum the cake does not bake, and at the other end it burns. Sometimes there is more than one independent variable we want to manipulate; in that case, we change only one independent variable at a time. Suppose we have identified 350 degrees as the best temperature for baking the cake. Should we then bake it for 10 minutes, 30 minutes, 50 minutes, and so on?
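As a playful, purely hypothetical sketch of this one-factor-at-a-time idea, the code below holds everything else fixed while varying a single variable; the `cake_quality` scoring function is invented for the example.

```python
# One-factor-at-a-time sweep, following the cake example.
def cake_quality(temperature_f: float, minutes: float) -> float:
    # Invented scoring function: peaks near 350 degrees F and 30 minutes.
    return max(0.0, 100 - 0.01 * (temperature_f - 350) ** 2 - 0.2 * (minutes - 30) ** 2)

# Step 1: vary temperature only, with baking time held at 30 minutes.
for temp in [200, 300, 350, 400, 500]:
    print(f"{temp} F for 30 min -> quality {cake_quality(temp, 30):.1f}")

# Step 2: fix the best temperature (350 F here) and vary baking time.
for minutes in [10, 30, 50]:
    print(f"350 F for {minutes} min -> quality {cake_quality(350, minutes):.1f}")
```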

Special Considerations

This analysis is a common exercise used by production engineers in well modeling and decision making. In a typical manual review, the engineer is expected to narrow down the possibilities when selecting a configuration that represents the current operating state of the well. Flat sensitivity curves indicate that the well’s production rate is insensitive to changes in the injection rate; in such cases, the operator can save gas by reducing the injection rate without any significant impact on production. Curved lines indicate both the possibility of improving the oil production rate by finding an optimal injection rate and the potential to lose production by injecting at the wrong rate, whether through insufficient or excessive injection.
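The sketch below is entirely hypothetical: the gas-lift performance curve is made up and only illustrates the idea of reading a sensitivity curve and locating an optimal injection rate on it.

```python
# Hypothetical injection-rate sensitivity curve and its optimum.
import numpy as np

injection_rate = np.linspace(0.1, 5.0, 50)                      # invented units
# Invented response: production improves with injection up to a point, then declines.
production = 800 * injection_rate * np.exp(-0.5 * injection_rate)

best_idx = int(np.argmax(production))
print(f"Optimal injection rate ~ {injection_rate[best_idx]:.2f} "
      f"(production ~ {production[best_idx]:.0f})")
```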

Deciding between both types of analysis

Here is a summary of the main points and how they might affect the statistical analyses.

  • Parametric and non-parametric methods are the two broad classifications of statistical procedures.
  • Parametric tests are based on assumptions about the underlying distribution of the population from which the sample was taken.
  • Nonparametric tests are not based on assumptions about the shape or parameters of the distribution of the underlying population.
  • If the data deviates strongly from the assumptions of one of the procedures, continuing to use it could lead to incorrect conclusions.
  • The normality assumption matters most for small samples, where it is also hardest to verify; non-parametric tests are usually a good option for such data.
  • It can be difficult to decide which of the two procedures to use in some cases. Non-parametric procedures are generally less powerful than the corresponding parametric procedure when the data really are normal.
  • Interpreting non-parametric procedures can also be more difficult than interpreting parametric procedures; a sketch of one common way to choose between the two follows this list.
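The hedged sketch below shows one common (not the only) decision rule: check normality first, then choose between a parametric t-test and a nonparametric Mann-Whitney U test. The 0.05 threshold and the simulated groups are assumptions made only for this illustration.

```python
# Choosing between a parametric and a nonparametric two-sample test.
import numpy as np
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

def compare_groups(a, b, alpha=0.05):
    # If either group looks clearly non-normal, fall back to the rank-based test.
    if shapiro(a).pvalue < alpha or shapiro(b).pvalue < alpha:
        _, p = mannwhitneyu(a, b)
        return "Mann-Whitney U", p
    _, p = ttest_ind(a, b)
    return "t-test", p

rng = np.random.default_rng(3)
group_a = rng.normal(50, 5, size=25)
group_b = rng.exponential(scale=50, size=25)   # skewed group triggers the fallback

test_name, p_value = compare_groups(group_a, group_b)
print(f"Chosen test: {test_name}, p = {p_value:.4f}")
```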

In case of any doubt, contact us. At Online-tesis.com we are here to fulfill your dream.

