Important statistical principles



Post by miriam » Sun Mar 25, 2007 2:17 am

Normal Distribution This is the bell-shaped distribution that arises when measuring many naturally distributed attributes (e.g. height, weight, IQ), where the frequency of any particular value falls off as you move away from the mean. Wikipedia entry here.
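As a quick illustration (a Python sketch using only the standard library; the IQ mean of 100 and SD of 15 are the conventional test norms), roughly 68% of a normal distribution falls within one standard deviation of the mean:

```python
from statistics import NormalDist

# Model IQ scores as normally distributed with mean 100 and SD 15
iq = NormalDist(mu=100, sigma=15)

# Proportion of the population scoring between 85 and 115 (one SD either side)
within_1sd = iq.cdf(115) - iq.cdf(85)
print(round(within_1sd, 3))  # 0.683
```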

Parametric Statistics These are the most powerful statistical tests, but they rely on certain assumptions, including that the data are normally distributed. Wikipedia entry here.

Significance – the probability of a result at least this extreme occurring by chance alone, assuming the null hypothesis is true. We would normally want this no higher than 5%, but the lower it is the better. Significance levels apply per test, so if you do a lot of different calculations the probability of a Type 1 error accumulates. That is why you apply a correction (e.g. Bonferroni) to reduce the threshold you use for significance when doing multiple tests. Significance relates closely to statistical power, balancing the likelihood of missing something real against concluding something false. Wikipedia entry here.
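The Bonferroni correction mentioned above simply divides the threshold by the number of tests. A minimal Python sketch (the p-values here are made up purely for illustration):

```python
# Bonferroni correction: divide the significance threshold by the number
# of tests, so the family-wise Type 1 error rate stays at about alpha.
alpha = 0.05
n_tests = 10
corrected_alpha = alpha / n_tests  # 0.005

# Hypothetical p-values from ten tests (only four shown)
p_values = [0.001, 0.02, 0.004, 0.30]
significant = [p < corrected_alpha for p in p_values]
print(corrected_alpha)  # 0.005
print(significant)      # [True, False, True, False]
```

Note how p = 0.02, which would pass an uncorrected 5% threshold, no longer counts as significant once ten tests are being run.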

A Type 1 error is the probability of incorrectly rejecting the null hypothesis – i.e. concluding that the difference between two groups (e.g.) is significant when it actually is not. When rejecting a null hypothesis at a given significance level (e.g. the 5% level – p < 0.05) we accept up to that percentage (5% here) chance of making a Type 1 error. A smaller reported p-value, e.g. p = 0.003, means the observed result would arise by chance only that often (0.3% here) if the null hypothesis were true. The take-home message is that the lower the number, the smaller the risk of a Type 1 error, so the more confident you can be that a difference you report as significant is indeed present.
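You can see the 5% false-positive rate directly by simulation. This is a Python sketch, not a standard routine: it assumes a simple two-sample z-test with known standard deviation, and both groups are drawn from the same distribution, so every "significant" result is a Type 1 error:

```python
import math
import random

random.seed(0)  # fixed seed so the simulation is repeatable
n, sigma, trials = 30, 1.0, 2000
false_positives = 0

for _ in range(trials):
    # Null hypothesis is TRUE here: both groups come from the same population
    a = [random.gauss(0.0, sigma) for _ in range(n)]
    b = [random.gauss(0.0, sigma) for _ in range(n)]
    # Two-sample z-test with known sigma (normal approximation)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    if abs(z) > 1.96:  # 5% two-tailed critical value
        false_positives += 1

rate = false_positives / trials
print(rate)  # close to 0.05, as the 5% threshold predicts
```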

A Type 2 error is the probability of incorrectly retaining the null hypothesis – i.e. concluding that the difference between two groups (e.g.) is not significant, when actually it is. This is often due to an insufficient sample size.
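The link to sample size can also be shown by simulation. In this Python sketch (same hypothetical z-test as above, with a genuine half-standard-deviation difference between the groups), a small sample misses the real effect most of the time, while a larger sample rarely does:

```python
import math
import random

random.seed(1)  # fixed seed so the simulation is repeatable
sigma, delta, trials = 1.0, 0.5, 2000  # true effect: half a standard deviation

def miss_rate(n):
    """Fraction of experiments that miss a real effect (Type 2 error rate)."""
    misses = 0
    for _ in range(trials):
        a = [random.gauss(0.0, sigma) for _ in range(n)]
        b = [random.gauss(delta, sigma) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / (sigma * math.sqrt(2 / n))
        if abs(z) <= 1.96:  # non-significant despite a real difference
            misses += 1
    return misses / trials

m_small = miss_rate(10)   # around 0.8: the effect is usually missed
m_large = miss_rate(100)  # well under 0.2: the effect is usually found
print(m_small, m_large)
```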

Power is the probability of detecting an effect that is really there (one minus the Type 2 error rate). It is how you balance the risk of Type 1 and Type 2 errors – ensuring your results are meaningful and that real effects are neither missed nor over-valued. Power calculations inform the sample size needed to do effective research. Wikipedia entry here.

Effect Size A measure of the magnitude of a difference or relationship – the degree to which differences in the dependent variable are due to the independent variable, independent of sample size. The larger the sample size, the less likely it is that sampling error caused the observed differences in the dependent variable.
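One common effect size measure is Cohen's d, the difference between group means in pooled-standard-deviation units. A minimal Python sketch (the two groups of scores are invented for illustration):

```python
import math

def cohens_d(a, b):
    """Cohen's d: standardised mean difference using the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / pooled_sd

group_a = [4, 5, 6, 5, 4, 6]  # hypothetical control scores
group_b = [6, 7, 8, 7, 6, 8]  # hypothetical treatment scores
d = cohens_d(group_a, group_b)
print(round(d, 3))  # 2.236 – a very large effect by Cohen's benchmarks
```

By convention, d around 0.2 is a small effect, 0.5 medium and 0.8 large.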

