# Relationship between correlation coefficient and regression line

### Difference Between Correlation and Regression in Statistics - Data Science Central

The correlation coefficient measures the "tightness" of a linear relationship: intuitively, the easier it is to draw a line of best fit through a scatterplot, the stronger the correlation. The coefficient of correlation is the "R" value given in the summary table of the regression output, and R squared is also called the coefficient of determination. In correlation analysis, we estimate a sample correlation coefficient; a correlation close to zero suggests no linear association between the two variables.

The regression line, also known as the least squares line, is a plot of the expected value of the dependent variable for all values of the independent variable. Technically, it is the line that minimizes the squared residuals. The regression line is the one that best fits the data on a scatterplot, and using the regression equation, the dependent variable may be predicted from the independent variable.

The slope of the regression line, b, is defined as the rise divided by the run. The y-intercept, a, is the point where the regression line crosses the y-axis.
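As a concrete sketch, the slope and intercept of the least squares line can be computed directly from these definitions. The data values below are made up for illustration, not taken from any real experiment:

```python
# Least-squares slope (b) and y-intercept (a) for illustrative (x, y) pairs.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# b = sum((x - x̄)(y - ȳ)) / sum((x - x̄)²) -- the "rise over run" of the fitted line
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)
a = y_bar - b * x_bar  # the fitted line always passes through (x̄, ȳ)


def sse(slope, intercept):
    """Sum of squared residuals for a candidate line y = intercept + slope * x."""
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))


# The fitted line minimizes the squared residuals: perturbing it only makes SSE worse.
assert sse(b, a) <= sse(b + 0.1, a)
assert sse(b, a) <= sse(b, a + 0.1)
```

The two assertions at the end check the "minimizes the squared residuals" property numerically: nudging either the slope or the intercept away from the least-squares values increases the sum of squared residuals.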

## Difference Between Correlation and Regression

The slope and y-intercept are incorporated into the regression equation.

**Nature of the data.** The data for both regression and correlation consist of pairs of the form (x, y). The independent variable x is determined by the experimenter.

This means that the experimenter has control over the variable during the experiment; in our experiment, for example, the temperature was controlled. The dependent variable y is the effect that is observed during the experiment. It is assumed that the values obtained for the dependent variable result from the changes in the independent variable.

Regression and correlation analyses will determine the nature of this relationship, if any, and the strength of the relationship. All of the (x, y) pairs can be considered to form a population, and in some experiments numerous observations of y are taken at each value of x. The quantity r, or perhaps more usefully R-squared, is a measure of how much of the variability in the dependent variable can be accounted for by differences in the independent variable.
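The "variability accounted for" interpretation can be checked numerically: for simple linear regression, R-squared equals the fraction of the total variability in y that the fitted line explains, and it also equals r². A small sketch with made-up data:

```python
# R-squared as the proportion of variability in y explained by x.
# The data below are illustrative, not from any real experiment.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 2.1, 2.8, 4.2, 4.9, 6.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Sample correlation coefficient r = Cov(x, y) / (s_x * s_y)
cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)
var_x = sum((x - x_bar) ** 2 for x in xs) / (n - 1)
var_y = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
r = cov / (var_x**0.5 * var_y**0.5)

# Fit the least-squares line and split the variability of y.
b = cov / var_x
a = y_bar - b * x_bar
ss_total = sum((y - y_bar) ** 2 for y in ys)          # total variability of y
ss_resid = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))  # unexplained part

r_squared = 1 - ss_resid / ss_total
assert abs(r_squared - r**2) < 1e-9  # R² = r² in simple linear regression
```

The final assertion confirms that the variance-explained definition of R² (1 minus the ratio of residual to total sum of squares) agrees with the square of the correlation coefficient.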

The analogous measure for a dichotomous exposure and a dichotomous outcome would be the attributable proportion. It is always important to evaluate the data carefully before computing a correlation coefficient, and graphical displays are particularly useful for exploring associations between variables. The figure below shows four hypothetical scenarios in which one continuous variable is plotted along the X-axis and the other along the Y-axis.

Scenario 3 might depict a lack of association (r approximately 0) between the extent of media exposure in adolescence and the age at which adolescents initiate sexual activity.

### Regression and Correlation

**Example: Correlation of Gestational Age and Birth Weight.** A small study is conducted involving 17 infants to investigate the association between gestational age at birth, measured in weeks, and birth weight, measured in grams.

We wish to estimate the association between gestational age and infant birth weight. In this example, birth weight is the dependent variable and gestational age is the independent variable.

The data are displayed in a scatter diagram in the figure below. Each point represents an (x, y) pair, in this case the gestational age, measured in weeks, and the birth weight, measured in grams. Note that the independent variable is on the horizontal axis (X-axis) and the dependent variable is on the vertical axis (Y-axis).

The scatter plot shows a positive or direct association between gestational age and birth weight. Infants with shorter gestational ages are more likely to be born with lower weights and infants with longer gestational ages are more likely to be born with higher weights.

The formula for the sample correlation coefficient is

$$r = \frac{\mathrm{Cov}(x, y)}{s_x \, s_y}$$

where Cov(x, y) is the covariance of x and y, defined as

$$\mathrm{Cov}(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1}$$

and $s_x^2$ and $s_y^2$ are the sample variances of x and y, defined as

$$s_x^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}, \qquad s_y^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1}$$

The variances of x and y measure the variability of the x scores and y scores around their respective sample means, considered separately.

The covariance measures the variability of the x,y pairs around the mean of x and mean of y, considered simultaneously.