Statistically significant is a term used in research to describe results that are unlikely to be explained by chance alone — in other words, evidence that one variable really did affect another, rather than the apparent effect being a coincidence.
There are many statistical tests, and which one scientists run on their data depends on what they are looking for. Are they testing whether one thing (like a new drug) has one specific effect (e.g., just on heart rate), or whether it affects several variables (e.g., heart rate, blood pressure, and breathing rate)?
There are many ways of analysing data, but whatever the method, the results will be judged either statistically significant or NOT statistically significant. How?
A statistical test produces several numerical values, and for significance we look at the p-value, a probability that always falls between 0 and 1. Before running the test, the researcher chooses a significance threshold for the p-value; the stricter they want to be, the lower the threshold, but usually it is 0.05.
For the results to be significant, the p-value must be below 0.05 (in this example). This is usually written as p < 0.05.
If the p-value is less than 0.05, it means there is less than a 5% chance of seeing results this extreme if chance alone were at work. This tells us it is very unlikely the results are a coincidence, so we conclude there was a real, statistically significant effect.
If the p-value is greater than 0.05, there is more than a 5% chance that results this extreme could have arisen by chance alone, so we cannot rule out coincidence, and we cannot be confident that there was a direct cause and effect.
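To make this concrete, here is a minimal sketch in Python of one simple way a p-value can be computed: a permutation test on the difference in means between two groups. The heart-rate numbers are invented purely for illustration, and a real study would typically use an established test (such as a t-test) from a statistics package.

```python
import random

# Hypothetical data: resting heart rates (bpm) for a control group
# and a group given a new drug. These numbers are made up.
control = [72, 75, 71, 78, 74, 76, 73, 77]
treated = [68, 66, 70, 65, 69, 67, 71, 64]

def p_value_permutation(a, b, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    The p-value is the fraction of random relabellings of the pooled
    data that produce a mean difference at least as extreme as the
    one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_a = pooled[:len(a)]
        perm_b = pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_permutations

p = p_value_permutation(control, treated)
print(f"p = {p:.4f}")
print("significant (p < 0.05)" if p < 0.05 else "not significant")
```

With these made-up data the two group means differ by 7 bpm, so almost no random relabelling matches the observed difference and the p-value comes out well below 0.05: the drug's apparent effect would be judged statistically significant.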