#### Question: Interpreting Correlations

A meta-analysis (Anderson & Bushman, 2001) reported that the average correlation between time spent playing video games (X) and engaging in aggressive behavior (Y) in a set of 21 well-controlled experimental studies was *r*+ = .19. This correlation was judged to be statistically significant. In your own words, what can you say about the nature of the relationship? Write a one-page response to this question.

#### Helpful Context to Answer the Question

In many data analyses, it is desirable to compute a coefficient of association. Coefficients of association are quantitative measures of the amount of relationship between two variables. Ultimately, most techniques can be reduced to a coefficient of association and expressed as the amount of relationship between the variables in the analysis. There are many types of coefficients of association. They express the mathematical association in different ways, usually based on assumptions about the data. The most common coefficient of association you will encounter is the Pearson product-moment correlation coefficient (symbolized as the italicized r), and it is the only coefficient of association that can safely be referred to as simply the “correlation coefficient”. It is common enough so that if no other information is provided, it is reasonable to assume that is what is meant.

Correlation coefficients are numbers that give information about the strength of the relationship between two variables, such as two different test scores from a sample of participants. The coefficient ranges from -1 through +1. Coefficients between 0 and +1 indicate a positive relationship between the two scores, such as high scores on one test tending to come from people with high scores on the second. The other possible relationship, which is every bit as useful, is a negative correlation between -1 and 0. A negative correlation has no less predictive power; the difference is that high scores on one measure are associated with low scores on the other.

An example of the kinds of measures that might correlate negatively is absences and grades: people with more absences are expected to have lower grades. When a correlation is said to be significant, it can be shown that the correlation is significantly different from zero in the population. A correlation of zero means no relationship between variables. A correlation other than zero means the variables are related. As the coefficient gets further from zero (toward +1 or -1), the relationship becomes stronger.

#### Interpreting Correlation: Magnitude and Sign

Interpreting a Pearson’s **correlation coefficient** (*r*XY) requires an understanding of two concepts:

- Magnitude.
- Sign (+/-).

The **magnitude** refers to the strength of the linear relationship between Variable X and Variable Y.

The *r*XY ranges in value from -1.00 to +1.00. To determine magnitude, ignore the sign of the correlation; the absolute value of *r*XY indicates the extent to which Variable X and Variable Y are linearly related. For correlations close to 0, there is little or no linear relationship. As the correlation approaches either -1.00 or +1.00, the magnitude of the correlation increases. Therefore, for example, the magnitude of *r* = -.65 is greater than the magnitude of *r* = +.25 (|.65| > |.25|).
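
The text works in SPSS, but the magnitude-and-sign logic can be sketched in a few lines of Python with NumPy (an illustrative stand-in; the absence and exam-score data below are made up):

```python
import numpy as np

# Hypothetical data: absences rise while exam scores fall,
# so we expect a strong *negative* correlation.
hours_absent = np.array([0, 1, 2, 3, 4, 5, 6, 7])
exam_score = np.array([95, 92, 88, 85, 80, 78, 74, 70])

r = np.corrcoef(hours_absent, exam_score)[0, 1]
print(round(r, 2))  # strongly negative, near -1.00

# Magnitude ignores the sign: |-.65| > |+.25|
print(abs(-0.65) > abs(0.25))  # True
```

The sign tells us the direction (here, negative); taking the absolute value isolates the magnitude.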

In contrast to magnitude, the **sign** of a non-zero correlation is either **negative** or **positive**.

These labels are not interpreted as “bad” or “good.” Instead, the sign represents the slope of the linear relationship between X and Y. A **scatter plot** is a two-dimensional graph with dots representing each paired (X, Y) score, and it is used to visualize the slope of this linear relationship. Interpreting scatter plots is necessary to check the assumptions of correlation discussed below.

A positive correlation indicates that, as values of X increase, the values of Y also increase (for example, grip strength and arm strength). You may wish to view examples of positive and negative correlations displayed in scatter plots.

#### Assumptions of Correlation

All inferential statistics, including correlation, operate under assumptions that are checked prior to calculating them in SPSS. Violations of assumptions can lead to erroneous inferences regarding a null hypothesis. The first assumption is **independence of observations **for X and Y scores. The measurement of individual X and Y scores should not be influenced by errors in measurement or problems in research design (for example, a student completing an IQ test should not be looking over the shoulder of another student taking that test; his or her IQ score should be independent). This assumption is not statistical in nature; it is controlled by using reliable and valid instruments and by maintaining proper research procedures to maintain independence of observations.

The second assumption is that, for Pearson’s *r*, X and Y are quantitative, and that each variable is **normally distributed**. Other correlations discussed below do not require this assumption, but Pearson’s *r *is the most widely used and reported type of correlation. It is therefore important to check this assumption when calculating Pearson’s *r *in SPSS. This assumption is checked by a visual inspection of X and Y histograms and calculations of skew and kurtosis values.
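
The text checks this assumption in SPSS via histograms plus skew and kurtosis values; a minimal Python sketch of the same check, using SciPy on simulated (made-up) scores, might look like this:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Simulate roughly normal, IQ-like scores (illustrative data only).
rng = np.random.default_rng(0)
x = rng.normal(loc=100, scale=15, size=500)

# A common rule of thumb: skew and (excess) kurtosis near 0 suggest
# the normality assumption for Pearson's r is tenable.
print(round(skew(x), 2), round(kurtosis(x), 2))  # both close to 0
```

Large absolute skew or kurtosis values would flag a violation, just as a lopsided histogram would in SPSS.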

The third assumption of correlation is that X, Y scores are linearly related. Correlation does not detect strong curvilinear relationships. This assumption is checked by a visual inspection of the X, Y scatter plot.

The fourth assumption of correlation is that the X, Y scores should not have extreme **bivariate** outliers that influence the magnitude of the correlation. Bivariate outliers are also detected by a visual examination of a scatter plot. Outliers can dramatically influence the magnitude of the correlation, which sometimes leads to errors in null hypothesis testing. Bivariate outliers are particularly problematic when the sample size is small; for this reason, an N of at least 100 is suggested for studies that report correlations.
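
The influence of a single bivariate outlier is easy to demonstrate numerically. In this hedged sketch (Python/NumPy as a stand-in for SPSS, with simulated data), two unrelated variables show a near-zero correlation until one extreme (X, Y) point is appended:

```python
import numpy as np

# Two unrelated variables: r should be near 0.
rng = np.random.default_rng(42)
x = rng.normal(size=30)
y = rng.normal(size=30)
r_clean = np.corrcoef(x, y)[0, 1]

# Append one extreme bivariate outlier far from the cloud of points.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r_outlier = np.corrcoef(x_out, y_out)[0, 1]

# The single outlier inflates the correlation dramatically.
print(round(r_clean, 2), round(r_outlier, 2))
```

This is why a scatter-plot inspection matters: the inflated coefficient reflects one point, not a genuine linear relationship.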

The fifth assumption of correlation is that the variability in Y scores is uniform across levels of X. This requirement is referred to as the homogeneity of variance assumption, which is usually difficult to assess in scatter plots with a small sample size. This assumption is typically emphasized when checking the homogeneity of variance for a t-test or analysis of variance (ANOVA) studied later in the course.

#### Hypothesis Testing of Correlation

The **null hypothesis** for correlation predicts no significant linear relationship between X and Y, or *H*0: *r*XY = 0. A **directional alternative hypothesis** for correlation is either an expected significant positive relationship (*H*1: *r*XY > 0) or a significant negative relationship (*H*1: *r*XY < 0). A **non-directional alternative hypothesis** would simply predict that the correlation is significantly different from 0, but it does not stipulate the sign of the relationship (*H*1: *r*XY ≠ 0). For correlation, as well as the t tests and ANOVA studied later in the course, the standard alpha level for rejecting the null hypothesis is set to .05. SPSS output for a correlation showing a *p* value of less than .05 indicates that the null hypothesis should be rejected; there is a significant relationship between X and Y. A *p* value greater than .05 indicates that the null hypothesis should not be rejected; there is not a significant relationship between X and Y.

#### Effect Size in Correlation

Even if the null hypothesis is rejected, how large is the association between X and Y? To provide additional context, the interpretation of all inferential statistics, including correlation, should include an estimate of **effect size**. An effect size is articulated along a continuum from “small” to “medium” to “large.”

An effect size for correlation is an estimate of the strength of association between X and Y in unit-free terms (that is, effect size estimation is independent of how X and Y are originally measured). Another advantage is that an effect size is calculated independently from the sample size of the study, as any non-zero correlation will be significant if the sample size is large enough. The effect size for correlation is calculated as *r*2 (pronounced “r-square”), and it is simply the squared value of *r*. For example, *r* = .50 results in an effect size of *r*2 = .25 (.50 × .50 = .25).
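
The effect-size arithmetic is a one-liner; this illustrative Python sketch mirrors the worked example from the text:

```python
# Squaring r gives the proportion of variance in Y predictable from X.
r = 0.50
r_squared = r ** 2
print(r_squared)  # 0.25, i.e., about 25% of the variance
```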

Roughly speaking, a correlation less than or equal to .10 is “small,” a correlation between .10 and .25 is “medium,” and a correlation above .25 is “large” (Warner, 2013).

#### Alternative Correlation Coefficients

The most widely used correlation is referred to as Pearson’s *r*. Pearson’s *r* is calculated between X and Y variables that are measured on either the interval or ratio scale of measurement (for example, height and weight). There are other types of correlation that depend on other scales of measurement for X and Y. A **point biserial** correlation (*r*pb) is calculated when one variable is dichotomous (for example, gender) and the other variable is interval/ratio data (for example, weight). If both variables are ranked (ordinal) data, the correlation is referred to as Spearman’s *r* (*r*s). Although the underlying scales of measurement differ from the standard Pearson’s *r*, *r*pb and *r*s values are both calculated between -1.00 and +1.00 and are interpreted similarly.
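
Both alternative coefficients named above are available in SciPy; this hedged sketch (the group and weight data are made up for illustration) shows each on the kind of data the text describes:

```python
import numpy as np
from scipy.stats import pointbiserialr, spearmanr

# Point biserial: one dichotomous variable, one interval/ratio variable.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g., two categories
weight = np.array([60, 62, 65, 63, 80, 78, 82, 85])
r_pb, p_pb = pointbiserialr(group, weight)

# Spearman's r: both variables are ranked (ordinal) data.
rank_a = np.array([1, 2, 3, 4, 5, 6, 7, 8])
rank_b = np.array([2, 1, 4, 3, 6, 5, 8, 7])  # mostly agreeing ranks
r_s, p_s = spearmanr(rank_a, rank_b)

# Both coefficients fall between -1.00 and +1.00, like Pearson's r.
print(round(r_pb, 2), round(r_s, 2))
```

Interpretation of magnitude and sign proceeds exactly as for Pearson’s *r*.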

If both variables are dichotomous, the correlation is referred to as phi (φ). A final test of association is referred to as chi-square. Phi and chi-square are studied in Advanced Inferential Statistics.

#### Correlation – Application

We will apply our understanding of correlation in the third IBM SPSS assessment. For the remaining data analysis assessments in this course, we will use the Data Analysis and Application (DAA) template. The DAA is separated into five sections:

#### Section 1: Data File Description

#### Section 2: Testing Assumptions

#### Section 3: Research Question, Hypotheses, and Alpha Level

#### Section 4: Interpretation

#### Section 5: Conclusions

#### Proper Reporting of Correlations

Reporting a correlation in proper APA style requires an understanding of the following elements: the statistical notation for a Pearson’s correlation (*r*), the degrees of freedom, the correlation coefficient, the probability value, and the effect size. Consider the following example:

Only the correlation between organizational commitment (OC) and organizational citizenship behavior (OCB) was statistically significant, *r*(110) = +.22, p < .05 (two-tailed). The *r*2 was .05; thus, only about 5% of the variance in OC scores could be predicted from OCB scores; this is a weak positive relationship.

#### r, Degrees of Freedom, and Correlation Coefficient

The statistical notation for Pearson’s correlation is *r*, and following it is the degrees of freedom for this statistical test (for example, 110). The degrees of freedom for Pearson’s *r* is N − 2, so there were 112 participants in the sample cited above (112 − 2 = 110). Note that SPSS output for Pearson’s *r* provides N, so you must subtract 2 from N to correctly report degrees of freedom. Next is the actual correlation coefficient, including the sign. After the correlation coefficient is the probability value.
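
The degrees-of-freedom arithmetic and the APA-style string can be sketched in a few lines of Python (an illustration only; the *r* and *p* values are the hypothetical ones from the example above):

```python
# Degrees of freedom for Pearson's r: subtract 2 from the sample size.
N = 112
df = N - 2  # 110

# Hypothetical output values from the example report.
r, p = 0.22, 0.02

# APA style drops the leading zero for values bounded by +/-1.
report = (
    f"r({df}) = +{format(r, '.2f').lstrip('0')}, "
    f"p = {format(p, '.2f').lstrip('0')} (two-tailed)"
)
print(report)  # r(110) = +.22, p = .02 (two-tailed)
```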

#### Probability Values

Prior to the widespread use of SPSS and other statistical software programs, *p* values were often calculated by hand. The convention in reporting *p* values was to simply state “*p* < .05” to reject the null hypothesis and “*p* > .05” to not reject the null hypothesis. However, SPSS provides an “exact” probability value, so it should be reported instead.

Hypothetical examples would be “*p* = .02” to reject the null hypothesis and “*p* = .54” to not reject the null hypothesis (round exact *p* values to two decimal places). One confusing point of SPSS output is that highly significant *p* values are reported as “.000,” because SPSS only reports probability values out to three decimal places. Remember that there is a “1” out there somewhere, such as *p* = .000001, as there is always some small chance that the null hypothesis is true. When SPSS reports a *p* value of “.000,” report “*p* < .001” and reject the null hypothesis.

The “(two-tailed)” notation after the *p* value indicates that the researcher was testing a non-directional alternative hypothesis (*H*1: *r*XY ≠ 0). He or she did not have any a priori justification to test a directional hypothesis of the relationship between organizational commitment and organizational citizenship behavior. In terms of alpha level, the region of rejection was therefore 2.5% on the left side of the distribution and 2.5% on the right side of the distribution (2.5% + 2.5% = 5%, or an alpha level of .05). A “(one-tailed)” notation indicates a directional alternative hypothesis. In this case, all 5% of the region of rejection is established on either the left side (negative; *H*1: *r*XY < 0) or the right side (positive; *H*1: *r*XY > 0) of the distribution. A directional hypothesis must be justified prior to examining the results. In this course, we will always specify a two-tailed (non-directional) test, which is more conservative relative to a one-tailed test. The advantage is that a non-directional test detects relationships or differences on either side of the distribution, which is recommended in exploratory research.
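
The relationship between the two-tailed and one-tailed regions of rejection can be made concrete via the t statistic that underlies the test of *r* (a hedged sketch using SciPy and the example's values of *r* = .22, N = 112; the text itself relies on SPSS output):

```python
import math
from scipy.stats import t as t_dist

# The t statistic behind the significance test of Pearson's r.
r, N = 0.22, 112
df = N - 2
t = r * math.sqrt(df) / math.sqrt(1 - r**2)

# Two-tailed: 2.5% of alpha in each tail. One-tailed: all 5% in one tail.
p_two_tailed = 2 * t_dist.sf(abs(t), df)
p_one_tailed = t_dist.sf(t, df)
print(round(p_two_tailed, 3), round(p_one_tailed, 3))
```

Note that the two-tailed *p* is exactly double the one-tailed *p*, which is why the two-tailed test is the more conservative choice.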

#### Effect Size

Effect sizes provide additional context for the strength of the relationship in correlation. Effect sizes are important because any non-zero correlation will be statistically significant if the sample size is large enough. After the probability value is stated, provide the *r*2 effect size and interpret it as small, medium, or large. It is good form to report the effect size for both significant and nonsignificant statistics for meta-analyses (that is, statistical studies that combine the results across multiple independent research studies), but in journal articles where space is limited, authors will often just report effect sizes for statistics that reject the null hypothesis.

##### References

Lane, D. M. (2013). *HyperStat online statistics textbook*. Retrieved from http://davidmlane.com/hyperstat/index.html

Warner, R. M. (2013). *Applied statistics: From bivariate through multivariate techniques* (2nd ed.). Thousand Oaks, CA: Sage Publications.
