Counseling
Q:
3+ When we are using a two-tailed hypothesis test, the null hypothesis is of the form
Q:
2 I want to test the hypothesis that children who experience daycare before the age of 3 do better in school than those who do not experience daycare. I have just described the
a) alternative hypothesis.
b) research hypothesis.
c) experimental hypothesis.
d) all of the above
Q:
1+ In one-sample tests of means we
a) compare one sample mean with another.
b) compare one sample mean against a population mean.
c) compare two sample means with each other.
d) compare a set of population means.
Q:
58 Based on the previous regression equation you just created, estimate cancer anxiety given the following values.
a) social support = 100; general anxiety = 50
b) social support = 25; general anxiety = 7
Q:
57 Given the information in the following table, create the corresponding regression equation.
Q:
56 If you wanted to identify mothers who needed a parenting intervention to enhance sensitivity and could only collect two pieces of information from each family due to time and costs, which of the measures in the previous example would you select? Why?
Q:
55 How do the regression results vary from the simple correlations presented below? Explain why this may be the case.
Q:
54 How much variability in maternal sensitivity is accounted for by the set of predictors?
Q:
53 Which individual predictors are significantly associated with maternal sensitivity?
Q:
52 Are the set of predictors significantly associated with maternal sensitivity?
Q:
51 Write a sentence explaining the analysis presented in the following table (i.e., what are the predictor variables, what is the criterion variable).
Q:
50 Based on the same formula (Ŷ = .75X - .40Z + 5), calculate the missing predictor variables based on the following information.
a) Ŷ = 100; X = 0
b) Ŷ = 0; Z = -20
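Items like this can be checked by rearranging the equation for the unknown value. A minimal Python sketch, assuming the equation from the item (Ŷ = .75X - .40Z + 5); the function names are illustrative:

```python
# Rearranging Y-hat = .75*X - .40*Z + 5 for whichever value is missing.
# Coefficients come from the item; function names are illustrative.

def solve_for_z(y_hat, x):
    # y_hat = .75*x - .40*z + 5  ->  z = (.75*x + 5 - y_hat) / .40
    return (0.75 * x + 5 - y_hat) / 0.40

def solve_for_x(y_hat, z):
    # y_hat = .75*x - .40*z + 5  ->  x = (y_hat + .40*z - 5) / .75
    return (y_hat + 0.40 * z - 5) / 0.75

print(solve_for_z(100, 0))   # part (a): about -237.5
print(solve_for_x(0, -20))   # part (b): about -17.33
```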
Q:
49 Estimate Ŷ based on the equation Ŷ = .75X - .40Z + 5 using the following values.
a) X = 10; Z = 0
b) X = 0; Z = 0
c) X = 20; Z = 100
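Substituting values into a regression equation is straightforward arithmetic; a quick Python check of the three cases, using the coefficients given in the item:

```python
# Y-hat = .75*X - .40*Z + 5, with the coefficients given in the item.
def y_hat(x, z):
    return 0.75 * x - 0.40 * z + 5

for x, z in [(10, 0), (0, 0), (20, 100)]:
    print(f"X={x}, Z={z} -> Y-hat={y_hat(x, z)}")
# a) 12.5   b) 5.0   c) -20.0
```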
Q:
48 Individual predictors cannot be individually associated with the criterion variable if R is not different from 0 (i.e., if the entire model is not significant).
Q:
47 R² can range from -1 to 1.
Q:
46 Multiple regression examines the degree of association between any predictor and the criterion variable controlling for other predictors in the equation.
Q:
45 Stepwise regression procedures capitalize on chance.
Q:
44 Any association that was significant as a simple correlation will be significant in a multiple regression equation predicting the same criterion variable.
Q:
43 Multiple regression allows you to examine the degree of association between individual independent variables and the criterion variable AND the degree of association between the set of independent variables and the criterion variable.
Q:
42 In multiple regression, the criterion variable is predicted by more than one independent variable.
Q:
41 In a regression predicting adolescent delinquent behavior from gender, the number of delinquent peers in the social network, and parental undercontrol, R² = .60. This means each of the variables accounted for 36% of the variability in delinquent behavior.
Q:
40 Multiple regression means there is more than one criterion variable.
Q:
39 Multicollinearity occurs when the predictor variables are highly correlated with one another.
Q:
38 A table in which each variable is correlated with every other variable is called
a) a multivariate table
b) an intercorrelation matrix
c) a contingency table
d) a pattern matrix
Q:
37 We want to predict a person's happiness from the following variables: degree of optimism, success in school, and number of close friends. What type of statistical test can tell us whether these variables predict a person's happiness?
a) factorial ANOVA
b) multiple comparison
c) regression
d) multiple regression
Q:
36 Multiple regression analysis yielded the following regression equation:
Predicted Happiness = .36 friends - .13 stress + 1.23
Which of the following is true?
a) Happiness increases as Friends increase.
b) Happiness increases as Stress increases.
c) Happiness decreases as Friends and Stress increase.
d) none of the above
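The signs of the coefficients can be checked directly; a small Python sketch using the equation from the item (the particular friends/stress values are arbitrary):

```python
# Predicted Happiness = .36*friends - .13*stress + 1.23 (from the item).
def happiness(friends, stress):
    return 0.36 * friends - 0.13 * stress + 1.23

# Holding stress constant, one more friend raises the prediction by .36;
# holding friends constant, one more unit of stress lowers it by .13.
print(happiness(6, 2) > happiness(5, 2))  # True
print(happiness(5, 3) < happiness(5, 2))  # True
```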
Q:
35+ The following regression equation was found for a sample of college students.
predicted happiness = 32.8 GPA + 17.3 pocket money + 7.4
Which of the following can be concluded?
a) The correlation between pocket money and happiness is larger than the correlation between GPA and happiness.
b) GPA is less useful than pocket money in predicting happiness.
c) For students with no pocket money, a one-unit increase in GPA will increase the value of predicted happiness by 32.8 units.
d) The r-squared value for GPA must be greater than the r-squared value for pocket money.
Q:
34 A multiple regression analysis was used to test the values of visual acuity, swing power, and cost of clubs for predicting golf scores. The regression analysis showed that visual acuity and swing power predicted significant amounts of the variability in golf scores, but cost of clubs did not. What can be concluded from these results?
a) Cost of clubs and golf scores are not correlated.
b) Cost of clubs adds predictive value above and beyond the predictive value of visual acuity and swing power.
c) The regression coefficient of cost of clubs is equal to zero.
d) Removing cost of clubs from the overall model will not reduce the model's R² value significantly.
Q:
33 The example in the text predicting distress in cancer patients used distress at an earlier time as one of the predictors. This was done
a) because the authors wanted to be able to report a large correlation.
b) because the authors wanted to see what effect earlier distress had.
c) because the authors wanted to look at the effects of self-blame after controlling for initial differences in distress.
d) because the authors didn't care about self-blame, but wanted to control for it.
Q:
32 The example in Chapter 11 of predicting weight from height and sex showed that
a) adding sex as a predictor accounted for an important source of variability.
b) there is a much stronger relationship between height and weight in males than in females.
c) sex is not a useful predictor in this situation.
d) we cannot predict very well, even with two predictors.
Q:
31+ The text generally recommended against formal procedures for finding an optimal regression procedure because
a) those procedures don't work.
b) those procedures pay too much attention to chance differences.
c) the statistical software won't handle those procedures.
d) all of the above
Q:
30 Many of the procedures for finding an optimal regression equation (whatever that means) are known as
a) hunting procedures.
b) trial and error procedures.
c) trialwise procedures.
d) stepwise procedures.
Q:
29+ If you drop a predictor from the regression equation
a) the correlation could increase.
b) the correlation will probably go down.
c) the correlation could stay the same.
d) both b and c
Q:
28 If the overall analysis of variance is NOT significant
a) we need to look particularly closely at the tests on the individual variables.
b) it probably doesn't make much sense to look at the individual variables.
c) the multiple correlation is too large to worry about.
d) none of the above
Q:
27 The Analysis of Variance section in computer results for multiple regression
a) compares the means of several variables.
b) tests the overall significance of the regression.
c) tests the significance of each predictor.
d) compares the variances of the variables.
Q:
26 In an example in Chapter 10 we found that the relationship between how a student evaluated a course and that student's expected grade was significant. In this chapter Grade was not a significant predictor. The difference is
a) we had a new set of data.
b) grade did not predict significantly once the other predictors were taken into account.
c) the other predictors were correlated with grade.
d) both b and c
Q:
25+ If we know that a regression coefficient is statistically significant, we know that
a) it is positive.
b) it is not 0.0.
c) it is not 1.0.
d) it is large.
Q:
24 The statistical tests on regression coefficients are usually
a) t tests.
b) z tests.
c) F tests.
d) r tests.
Q:
23+ When testing null hypotheses about multiple regression we
a) only look at the significance test on the overall multiple correlation.
b) have a separate significance test for each predictor and for overall significance.
c) don't have to worry about significance testing.
d) know that if one predictor is significant, the others won't be.
Q:
22 If we predict anxiety from stress and intrusive thoughts, and if the multiple regression is significant, that means that
a) the regression coefficient for stress will be significant.
b) the regression coefficient for intrusive thoughts will be significant.
c) both variables will be significant predictors.
d) We can't tell.
Q:
21+ If the multiple correlation is high, we would expect to have _______ residuals than if the multiple correlation is low.
a) smaller
b) larger
c) the same as
d) We can't tell.
Q:
20 If we find all of the residuals when predicting our obtained values of Y from the regression equation, the sum of squared residuals would be expected to be _______ the sum of the squared residuals for a new set of data.
a) less than
b) greater than
c) the same as
d) We can't tell.
Q:
19 Suppose that in the previous question another subject had a predicted score of 10.3, and actually obtained a score of 12.4. For this subject the residual score would be
a) 2.1
b) -0.7
c) 12.4
d) 0.0
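A residual is simply the obtained score minus the predicted score (Y - Ŷ); in Python:

```python
# Residual = obtained score minus predicted score (Y - Y-hat).
obtained, predicted = 12.4, 10.3
residual = obtained - predicted
print(round(residual, 1))  # 2.1
```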
Q:
18+ If our regression equation is Ŷ = 0.75(age) + 0.50(experience) - 0.10(grade point average) - 2.0, and if our first subject had scores of 16, 4, and 3.0 on those three variables, respectively, then that subject's predicted score would be
a) 11.7
b) 10
c) 16
d) -3
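The predicted score comes from substituting the subject's values into the equation; a Python check using the coefficients from the item:

```python
# Y-hat = 0.75*age + 0.50*experience - 0.10*GPA - 2.0 (from the item).
def predicted_score(age, experience, gpa):
    return 0.75 * age + 0.50 * experience - 0.10 * gpa - 2.0

print(round(predicted_score(16, 4, 3.0), 1))  # 11.7
```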
Q:
17+ If two variables taken together account for 65% of the variability in Y, and a third variable has a simple squared correlation with Y of .10, then adding that variable to the equation will allow us to account for
a) 65% of the variability in Y.
b) 75% of the variability in Y.
c) 10% of the variability in Y.
d) at least 65% of the variability in Y.
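The point behind this item, that in-sample R² can never go down when a predictor is added, can be demonstrated with ordinary least squares on made-up data; a NumPy sketch (the data and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))           # invented predictors
y = 1.5 * x1 + 0.8 * x2 + 0.3 * x3 + rng.normal(size=n)

def r_squared(y, *predictors):
    # Least-squares fit with an intercept column, then R^2 = 1 - SSres/SStot.
    X = np.column_stack([np.ones_like(y), *predictors])
    y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_two = r_squared(y, x1, x2)
r2_three = r_squared(y, x1, x2, x3)
print(r2_two, r2_three)   # adding x3 can only hold R^2 steady or raise it
```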
Q:
16 In simple correlation a squared correlation coefficient tells us the percentage of variability in Y associated with variability in X. In multiple regression, the squared multiple correlation coefficient
a) has the same kind of meaning.
b) has no meaning.
c) overestimates the degree of variance accounted for.
d) underestimates the degree of variance accounted for.
Q:
15 The multiple correlation of several variables with a dependent variable is
a) less than the largest individual correlation.
b) equal to the correlation of the dependent variable to the values predicted by the regression equation.
c) noticeably less than the correlation of the dependent variable to the values predicted by the regression equation.
d) It could take on any value.
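Option (b) can be verified numerically: the multiple R equals the simple Pearson correlation between Y and the values the regression predicts. A NumPy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=(2, n))                # invented predictors
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# Fit Y on X1 and X2 (with an intercept) and form the predicted values.
X = np.column_stack([np.ones(n), x1, x2])
preds = X @ np.linalg.lstsq(X, y, rcond=None)[0]

multiple_r = np.corrcoef(y, preds)[0, 1]        # correlation of Y with Y-hat
print(multiple_r)
```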
Q:
14+ If two variables are each correlated significantly with the dependent variable, then the multiple correlation will be
a) the sum of the two correlations.
b) the sum of the two correlations squared.
c) no less than the larger of the two individual correlations.
d) It could take on any value.
Q:
13 If we have three predictors and they are all individually correlated with the dependent variable, we know that
a) each of them will play a significant role in the regression equation.
b) each of them must be correlated with each other.
c) each regression coefficient will be significantly different from zero.
d) none of the above
Q:
12+ If you have a number of scores that are outliers you should
a) throw them out.
b) run the analysis with and without them, to see what difference they make.
c) try to identify what is causing those scores to be outliers.
d) both b and c
Q:
11 In multiple regression an outlier is one that
a) is reasonably close to the regression surface.
b) is far from the regression surface.
c) is extreme on at least one variable.
d) will necessarily influence the final result in an important way.
Q:
10 Before running a multiple regression, it is smart to look at the distribution of each variable. We do this because
a) we want to see that the distributions are not very badly skewed.
b) we want to look for extreme scores.
c) we want to pick up obvious coding errors.
d) all of the above
Q:
9+ When we speak of the correlations among the independent variables, we are speaking of
a) homoscedasticity.
b) multicollinearity.
c) independence.
d) multiple correlation.
Q:
8 If we want to compare the contribution of several predictors to the prediction of a dependent variable, we can get at least a rough idea by comparing
a) the regression coefficients.
b) the standardized regression coefficients.
c) the variances of the several variables.
d) the simple Pearson correlations of each variable with the dependent variable.
Q:
7 If one independent variable has a larger coefficient than another, this means
a) that the variable with the larger coefficient is a more important predictor.
b) that the variable with the larger coefficient is a more statistically significant predictor.
c) that the variable with the larger coefficient contributes more to predicting the variability in the criterion.
d) We can't say anything about relative importance or significance from what is given here.
Q:
6+ In the previous question, a student who scored 0 on both X1 and X2 would be expected to have a dependent variable score of
a) 0.
b) 3.5.
c) 12.
d) the mean of Y.
Q:
5+ Given the following regression equation (Ŷ = 3.5 X1 + 2 X2 + 12), the coefficient for X1 would mean that
a) two people who differ by one point on X1 would differ by 3.5 points on Ŷ.
b) two people who differ by one point on X1 would differ by 3.5 points on Ŷ, assuming that they did not differ on X2.
c) X1 causes a 3.5 unit change in the dependent variable.
d) X1 is more important than X2.
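The "holding the other predictor constant" reading can be confirmed by direct substitution; a Python sketch using the item's equation (the particular X1/X2 values are arbitrary):

```python
# Y-hat = 3.5*X1 + 2*X2 + 12 (from the item).
def y_hat(x1, x2):
    return 3.5 * x1 + 2 * x2 + 12

# Two people one point apart on X1 but identical on X2 differ by 3.5 on Y-hat.
print(y_hat(4, 7) - y_hat(3, 7))  # 3.5
# With X1 = X2 = 0, the prediction is just the intercept.
print(y_hat(0, 0))  # 12.0
```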
Q:
4 In multiple regression the intercept is usually denoted as
a) a
b) b1
c) b0
d) 0
Q:
3+ In the previous question the intercept would be
a) 1.0
b) 0.0
c) 3.0
d) There would be no way to know.
Q:
2+ Assume that we generated a prediction just by adding together the number of stressful events you report experiencing over the last month, the number of close friends you have, and your score on a measure assessing how much control you feel you have over events in your life (i.e., prediction = stress + friends + control). The regression coefficient for stressful events would be
a) 1.0
b) 4.0
c) 0.0
d) There is no way to know.
Q:
1 The difference between multiple regression and simple regression is that
a) multiple regression can have more than one dependent variable.
b) multiple regression can have more than one independent variable.
c) multiple regression does not produce a correlation coefficient.
d) both b and c
Q:
20+ When we think in terms of standardized data, the slope represents
a) the change in X for a one unit change in Y.
b) the number of standard deviations Ŷ will differ for a one standard deviation difference in X.
c) the height of the regression line.
d) 0.
Q:
19 When we have standardized data, the slope will be denoted as
Q:
18+ When we standardize paired data we
a) divide everything by the standard deviation of X.
b) convert X to a T score.
c) convert both X and Y to z scores.
d) subtract the mean from each value of X and Y.
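Standardizing means converting both X and Y to z scores; once that is done, the least-squares slope equals Pearson r. A NumPy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)                      # invented paired data
y = 0.6 * x + rng.normal(size=500)

# Convert both X and Y to z scores (mean 0, standard deviation 1).
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# For standardized data the slope reduces to the mean cross-product,
# which is exactly Pearson r; the intercept is 0.
slope = (zx * zy).mean()
r = np.corrcoef(x, y)[0, 1]
print(slope, r)   # the two values agree
```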
Q:
17 In the equation Ŷ = 12.6 X + 5
a) a difference of one unit in X will lead to a 5 point difference in the prediction.
b) Ŷ will decrease as X increases.
c) the correlation is certain to be significant.
d) a difference of one unit in X will lead to a 12.6 point difference in the prediction.
Q:
16 The symbols a and b are frequently referred to as
a) regression coefficients.
b) constants.
c) slopes.
d) regression correlations.
Q:
15 In calculating the regression coefficients we square the errors of prediction because
a) statisticians square everything.
b) the sum of the errors would always be 0 for a great many lines we could draw.
c) squaring makes the errors more striking.
d) little errors are more important than big errors.
Q:
14 The notation (Y - Ŷ) represents
a) our best prediction.
b) the regression line.
c) the predicted value.
d) error in prediction.
Q:
13 The "best fitting line" is that regression line that
a) minimizes the errors of prediction.
b) minimizes each squared error of prediction.
c) minimizes the sum of squared errors of prediction.
d) hits the most points as it goes through the scatterplot.
Q:
12 The notation Ŷ is used instead of Y
a) to indicate that the answer is only approximate.
b) to indicate that we have an equation for a straight line.
c) to indicate that the result is a prediction.
d) because this is a mathematical equation.
Q:
11+ In the previous problem your best estimate of the intercept relating total earnings to the hours worked is
a) -10.
b) 0.
c) 10.
d) We have no idea.
Q:
10+ Suppose that you sell ice cream from a cart on the street. After you pay the ice cream supplier, the regression line that predicts your ice cream profits from the number of hours you work has a slope of 15. But the man who owns the cart charges you $5 per hour in rent. How much money will you earn per hour?
a) $15
b) $10
c) $5
d) nothing
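The arithmetic here is just the profit slope minus the hourly rent:

```python
profit_slope = 15   # predicted profit per hour worked, from the regression
rent_per_hour = 5   # the cart owner's hourly charge
print(profit_slope - rent_per_hour)  # 10
```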
Q:
9 If we have a regression line predicting the amount of improvement in your performance as a function of the amount of tutoring you receive, an intercept of 12 would mean that
a) you need to have 12 hours of tutoring to get an A.
b) if you don't have any tutoring, the best you can do is a grade of 12.
c) even without tutoring you will improve.
d) tutoring helps.
Q:
8 When the slope of the regression line is positive, the line goes from
a) upper left to lower right.
b) lower left to upper right.
c) the line is flat.
d) It depends on the intercept.
Q:
7 In the equation for a straight line used in the text, the slope is represented by
a) a
b) b
c) X
d) Y
Q:
6 In the equation for a straight line used in the text, the intercept is represented by
a) a
b) b
c) X
d) Y
Q:
5+ The equation for a straight line is an equation of the form
a) Y = bX - a
b) Y = bX
c) Y = bX² + a
d) Y = bX + a
Q:
4+ When we have considerable spread of the points about the regression line, the slope of that line will be _______ the slope of a similar line when there is less scatter.
a) less than
b) more than
c) the same as
d) more extreme than
Q:
3+ If the correlation between X and Y is negative, the slope of the regression equation must be
a) negative.
b) positive.
c) non-significant.
d) It could be either a or b.
Q:
2 When we make a prediction using a regression equation, our prediction is _______ on X.
a) dependent
b) conditional
c) correlated
d) both a and b