# Simple Linear Regression

Simple linear regression is a statistical method used to summarize and quantify the association between two continuous, quantitative variables. It provides a measure of how strong a linear relationship can be computed from the data. Simple linear regression involves one variable, x, known as the predictor variable, and another variable, denoted y, known as the response variable.

A discussion of simple linear regression is expected to cover the distinction between a deterministic relationship and a statistical relationship, the concept of least squares, and the interpretation of b0 and b1, the estimated regression coefficients. There is also the distinction between the population regression line and the estimated regression line.

This linearity is measured using the correlation coefficient (r), which ranges from -1 to 1. The strength of the association is judged from the value of r (https://onlinecourses.science.psu.edu/stat501/node/250).

## History of simple linear regression

Karl Pearson established a rigorous treatment of an applied statistical measure known as the Pearson Product Moment Correlation.
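The correlation coefficient r mentioned above can be computed directly from paired data. A minimal sketch, using only the standard library and made-up numbers (the data are purely illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired samples x and y."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Sum of cross-deviations and the two sums of squared deviations
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data give r = 1; noisier data give |r| < 1
print(pearson_r([1, 2, 3], [2, 4, 6]))                        # 1.0
print(round(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]), 3))  # 0.775
```

Values of r near -1 or 1 indicate a strong linear association, and values near 0 a weak one.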

This work grew from the ideas of Sir Francis Galton, who originated the modern notions of correlation and regression. Galton contributed to biology, psychology, and applied statistics. His fascination with genetics and heredity provided the initial inspiration that led to regression and the Pearson Product Moment Correlation.

The thinking that encouraged the development of the Pearson Product Moment Correlation began with the vexing problem of heredity: understanding how closely the features of one generation of living things are exhibited in the next. Galton took the approach of using the sweet pea to examine characteristic similarities (Bravais, 1846).

The sweet pea was chosen because it is self-fertilizing: daughter plants show genetic differences from the mother without the influence of a second parent, which avoids the statistical problem of assessing the genetic contribution of both parents. The first insight about regression came from a two-dimensional diagram plotting the sizes, the independent variable being the mother peas and the dependent variable being the daughter peas.

He used this representation of the data to show what statisticians today call regression. From his plot he realised that the median weight of daughter seeds from a particular size of mother seed approximately described a straight line with positive slope less than 1. "Thus he naturally reached a straight regression line, and the constant variability for all arrays of one character for a given character of a second. It was, perhaps, best for the progress of the correlational calculus that this simple special case should be promulgated first. It is so easily grasped by the beginner" (Pearson 1930, p. 5).

This was later generalised to the more complex case called multiple regression (Galton, 1894).

## Importance of linear regression

Statistics usually uses the term linear regression when interpreting the association in data from a particular survey, research study, or experiment. The linear relationship is used in modelling: modelling one explanatory variable x against a response variable y requires the simple linear regression approach.

Simple linear regression is broadly useful in both methodology and practical application. The simple linear regression model is not used in statistics alone; it is applied in much biological, social science, and environmental research. Simple linear regression is important because it gives an indication of what is to be expected, mostly for monitoring and corrective purposes in some disciplines (plaza, April 20, 2011).

## Description of linear regression

The simple linear regression model is described by Y = β0 + β1x + ε; this is the mathematical way of writing the simple linear regression of y on x. The equation gives a clear idea of how x is associated with y, and it includes an error term, denoted ε. The term ε accounts for the variability in y that the linear relationship with x cannot explain; linear regression then gives us the strength of the association between the two variables x and y.
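To make this concrete, the least-squares estimates b0 and b1 of β0 and β1 can be computed from paired data. A minimal sketch, assuming the model Y = β0 + β1x + ε and using made-up data (purely illustrative):

```python
def fit_simple_linear_regression(x, y):
    """Least-squares estimates (b0, b1) for the model y = b0 + b1*x + error."""
    n = len(x)
    mx = sum(x) / n                                       # mean of x
    my = sum(y) / n                                       # mean of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-deviations
    sxx = sum((a - mx) ** 2 for a in x)                   # squared deviations of x
    b1 = sxy / sxx        # slope
    b0 = my - b1 * mx     # intercept: the fitted line passes through (mx, my)
    return b0, b1

# Made-up example data
x = [1, 2, 3, 4, 5]
y = [52, 55, 61, 64, 68]
b0, b1 = fit_simple_linear_regression(x, y)
print(b0, b1)  # fitted line: E(y) = 47.7 + 4.1x
```

These estimates minimise the sum of squared vertical deviations between the observed y values and the fitted line.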

The parameters β0 and β1 describe the population regression line. The model for the mean response is E(y) = β0 + β1x, where β0 is the intercept, β1 is the slope, and E(y) is the mean of y at a given value of x. For hypothesis testing, the null hypothesis H0 assumes there is no linear association between the two variables, and the alternative H1 assumes there is a linear association.

## Background of simple linear regression

Galton used descriptive statistics in order to generalise his work across different heredity problems.

When the opportunity came to conclude the analysis of these data, he realised that if the degree of association between the variables was held constant, then the slope of the regression line could be described once the variability of the two measures was known. Galton assumed he was estimating a single heredity constant that would generalise across multiple inherited characteristics.

He wondered why, if such a constant existed, the observed slopes in the parent-child plots varied so much across these characteristics. Noticing the variation in variability between the generations, he arrived at the idea that the variation in the regression slopes he obtained was due solely to variation in variability between the various sets of measurements.

In modern terms, this principle can be illustrated by assuming a constant correlation coefficient but varying the standard deviations of the two variables involved. From his plots he found the correlation in each data set. He then observed three data sets: in the first, the standard deviation of Y is the same as that of X; in the second, the standard deviation of Y is less than that of X; and in the third, the standard deviation of Y is greater than that of X.

The correlation remains constant across the three data sets even though the slope of the line changes as an outcome of the differences in variability between the two variables. He used the rudimentary regression equation y = r(Sy/Sx)x to describe the relationship between his paired variables. He used an estimated value of r, because he had no way of calculating it. The (Sy/Sx) expression is a correction factor that adjusts the slope according to the variability of the measures.
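Galton's observation, that r stays fixed while the slope r(Sy/Sx) tracks the ratio of variabilities, can be reproduced numerically. A minimal sketch, using made-up paired data and rescaling y to mimic his three data sets (smaller, equal, and larger spread of Y relative to X):

```python
import statistics as st

def pearson_r(x, y):
    """Pearson correlation between paired samples x and y."""
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
# Shrink, keep, and stretch the spread of y while leaving x alone
for scale in (0.5, 1.0, 2.0):
    ys = [scale * v for v in y]
    r = pearson_r(x, ys)
    slope = r * st.stdev(ys) / st.stdev(x)   # Galton's y = r(Sy/Sx)x slope
    print(round(r, 3), round(slope, 3))      # r stays 0.775; slope: 0.3, 0.6, 1.2
```

The correlation is unchanged by rescaling y, while the slope scales in proportion to Sy/Sx, exactly the behaviour Galton noticed across his data sets.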

He also realised that the ratio of the variabilities of the two measures was the key factor in determining the slope of the regression line.

## The uses of simple linear regression

Simple linear regression is a typical statistical data analysis strategy. It is used to determine the extent to which there is a linear relationship between a dependent variable and one or more independent variables. The dependent variable must be measured on a continuous scale (e.g. a 0-100 test score), while the independent variable(s) can be measured on either a categorical (e.g. male versus female) or continuous measurement scale.

There are a few other assumptions that the data must fulfil in order to qualify for simple linear regression. Simple linear regression is like correlation in that its purpose is to measure to what degree there is a linear relationship between two variables.

The real difference between the two is that correlation makes no distinction between the two variables. Specifically, the purpose of simple linear regression is to "predict" the value of the dependent variable based on the values of one or more independent variables (https://www.statisticallysignificantconsulting.com/RegressionAnalysis.htm).

References

- Bravais, A. (1846), "Analyse Mathematique sur les Probabilites des Erreurs de Situation d'un Point," Memoires par divers Savans, 9, 255-332.
- Duke, J. D. (1978), "Tables to Help Students Grasp Size Differences in Simple Correlations," Teaching of Psychology, 5, 219-221.
- FitzPatrick, P. J. (1960), "Leading British Statisticians of the Nineteenth Century," Journal of the American Statistical Association, 55, 38-70.
- Galton, F. (1894), Natural Inheritance (5th ed.), New York: Macmillan and Company.
- Ghiselli, E. E. (1981), Measurement Theory for the Behavioral Sciences, San Francisco: W. H. Freeman.
- Goldstein, M. D., and Strube, M. J. (1995), "Understanding Correlations: Two Computer Exercises," Teaching of Psychology, 22, 205-206.
- Karylowski, J. (1985), "Regression Toward the Mean Effect: No Statistical Background Required," Teaching of Psychology, 12, 229-230.
- Paul, D. B. (1995), Controlling Human Heredity, 1865 to the Present, Atlantic Highlands, N.J.: Humanities Press.
- Pearson, E. S. (1938), Mathematical Statistics and Data Analysis (2nd ed.), Belmont, CA: Duxbury.
- Pearson, K. (1896), "Mathematical Contributions to the Theory of Evolution. III. Regression, Heredity and Panmixia," Philosophical Transactions of the Royal Society of London, 187, 253-318.
- Pearson, K. (1922), Francis Galton: A Centenary Appreciation, Cambridge University Press.
- Pearson, K. (1930), The Life, Letters and Labors of Francis Galton, Cambridge University Press.
- Williams, R. H. (1975), "A New Method for Teaching Multiple Regression to Behavioral Science Students," Teaching of Psychology, 2, 76-78.
- https://onlinecourses.science.psu.edu/stat501/node/250
- https://www.statisticallysignificantconsulting.com/RegressionAnalysis.htm