Question
I need some guidance with my project. We are supposed to conduct a real-world application of hypothesis testing. My hypothesis is that the warmer the temperature for the year, the greater the precipitation (rainfall). Would this be a double hypothesis test? Does this seem like a valid experiment, or am I wasting my time?
This is along the lines of global warming, and an explanation that the increase in rainfall and the catastrophes happening can be attributed to global warming.
I cannot conduct the experiment; I have to use existing data from databases. I am most familiar with environmental data, which is why I took this route.
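For concreteness, here is a minimal sketch of how this "warmer year implies more rain" hypothesis might be tested against existing yearly records. The file name climate.csv and the column names year, mean_temp_c, and precip_mm are placeholders for whatever database is actually used, and a Pearson correlation is only one of several reasonable choices.

```python
# A minimal sketch, assuming yearly records with one temperature value and
# one precipitation total per year. File and column names are placeholders.
import pandas as pd
from scipy import stats

data = pd.read_csv("climate.csv")       # hypothetical yearly records
temp = data["mean_temp_c"]              # average temperature per year
precip = data["precip_mm"]              # total precipitation per year

# Pearson correlation: H0 says there is no linear association (r = 0);
# H1 (the claim) says warmer years have more precipitation (r > 0).
r, p_two_sided = stats.pearsonr(temp, precip)
p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2

print(f"r = {r:.3f}, one-sided p = {p_one_sided:.4f}")
if p_one_sided < 0.05:
    print("Reject H0: data are consistent with 'warmer years -> more rain'.")
else:
    print("Fail to reject H0 at the 5% level.")
```

A one-tailed test is used here because the hypothesis states a direction (warmer implies wetter); a two-tailed test would only ask whether any association exists.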
[Attached screenshot: an Excel worksheet ("Sheet1") with yearly data, including a "Year" column]

Explanation / Answer
The results of a scientific investigation often contain much more data, or information, than the researcher needs. This unprocessed material is called raw data.
To analyze the data sensibly, the raw data is processed into "output data". There are many ways to do this, but essentially the scientist organizes and summarizes the raw data into a more manageable form. Any such collection of organized information may be called a "data set".
Researchers may then apply different statistical methods to analyze and understand the data better (and more accurately). Depending on the research, the scientist may also use statistics descriptively or for exploratory analysis.
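As an illustration of summarizing raw data into a more manageable data set, the following sketch uses pandas' descriptive statistics; the yearly values are invented placeholders.

```python
# A small illustration of turning raw data into summarized "output data"
# with descriptive statistics. The numbers below are made up.
import pandas as pd

raw = pd.DataFrame({
    "year":      [2010, 2011, 2012, 2013, 2014],
    "temp_c":    [14.2, 14.5, 14.1, 14.8, 15.0],
    "precip_mm": [880, 910, 850, 960, 990],
})

# Organize and summarize the raw data: count, mean, spread, quartiles.
summary = raw[["temp_c", "precip_mm"]].describe()
print(summary)
```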
Reliability and Experimental Error
Statistical tests make use of data from samples. The results are then generalized to the wider population.
Contrary to what some might believe, errors in research are an essential part of significance testing. Ironically, the possibility of a research error is what makes the research scientific in the first place. If a hypothesis cannot be falsified (e.g., because it rests on circular logic), it is not testable, and thus not scientific, by definition.
If a hypothesis is testable, it must be open to the possibility of being wrong. Statistically, this opens up the possibility of experimental errors in the results, due to random errors or other problems with the research. Experimental errors can be broken down into Type-I errors (false positives) and Type-II errors (false negatives). ROC curves are used to visualize the trade-off between the true-positive rate (sensitivity) and the false-positive rate.
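As a purely illustrative sketch (not tied to the climate data above), an ROC curve can be computed from known outcomes and predicted scores; the labels and scores below are invented.

```python
# A minimal ROC sketch: the curve traces the true-positive rate (sensitivity)
# against the false-positive rate as the decision threshold is varied.
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]                    # actual outcomes (made up)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9]   # predicted scores (made up)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("False-positive rates:", fpr)
print("True-positive rates: ", tpr)
print("Area under the curve:", roc_auc_score(y_true, y_score))
```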
A power analysis of a statistical test can determine how many samples the test needs in order to reject a false null hypothesis at an acceptable significance level.
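A minimal power-analysis sketch, assuming an independent-samples t-test: given an assumed effect size, a significance level, and a desired power, statsmodels can solve for the required sample size per group. The effect size of 0.5 is an arbitrary placeholder.

```python
# Power analysis for an independent-samples t-test (sketch).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed Cohen's d
                                   alpha=0.05,       # Type-I error rate
                                   power=0.8)        # 1 - Type-II error rate
print(f"Samples needed per group: {n_per_group:.1f}")
```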
The margin of error is related to the confidence interval, and to the relationship between statistical significance, sample size, and expected results. The effect size estimates the strength of the relationship between two variables in a population. It can help determine the sample size needed to generalize the results to the whole population.
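The sketch below illustrates two of these quantities with invented samples: Cohen's d as one common effect-size estimate, and the margin of error of a 95% confidence interval for a sample mean.

```python
# Effect size (Cohen's d) and margin of error, using made-up samples.
import numpy as np
from scipy import stats

group_a = np.array([14.1, 14.4, 14.0, 14.6, 14.3])
group_b = np.array([14.8, 15.1, 14.9, 15.3, 15.0])

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

# Margin of error for the mean of group_a at 95% confidence (t distribution).
sem = stats.sem(group_a)
margin = stats.t.ppf(0.975, df=len(group_a) - 1) * sem

print(f"Cohen's d: {cohens_d:.2f}")
print(f"Mean of group_a: {group_a.mean():.2f} ± {margin:.2f}")
```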
Replicating the research of others is also essential for understanding whether the results can be generalized or were simply due to a random "outlier experiment". Replication can help identify both random errors and systematic errors (test validity).
Cronbach's Alpha is used to measure the internal consistency or reliability of a test score.
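Cronbach's alpha can be computed directly from the item variances and the variance of the total score, as in this small sketch with invented item scores.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
import numpy as np

# rows = respondents, columns = test items (made-up scores)
items = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
total_var = items.sum(axis=1).var(ddof=1)       # variance of each respondent's total
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```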
Replicating the experiment or research helps ensure that the results are statistically reliable.
When the results contain outliers, what you often see on replication is regression towards the mean, which can leave the experimental and control groups no longer statistically different.
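A short simulation can make this concrete: observations selected because they looked extreme on a first noisy measurement tend to fall back toward the population mean when measured again. All numbers here are simulated.

```python
# Regression toward the mean: select apparent outliers on one noisy
# measurement, then remeasure the same cases with fresh noise.
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.normal(100, 10, size=10_000)            # stable underlying values
first = true_values + rng.normal(0, 10, size=10_000)      # first noisy measurement
second = true_values + rng.normal(0, 10, size=10_000)     # independent remeasurement

extreme = first > 120                                     # select apparent outliers
print(f"Mean on first measurement (selected group): {first[extreme].mean():.1f}")
print(f"Mean on second measurement (same group):    {second[extreme].mean():.1f}")
# The second mean falls back toward 100, even though nothing about the cases changed.
```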