Solution Manual for Probability and Statistics, 4th Edition



SOLUTIONS MANUAL (ONLINE ONLY)
Mark Schervish, Carnegie Mellon University

PROBABILITY AND STATISTICS, FOURTH EDITION
Morris DeGroot, Carnegie Mellon University
Mark Schervish, Carnegie Mellon University


Contents

Preface

1 Introduction to Probability
  1.2 Interpretations of Probability
  1.4 Set Theory
  1.5 The Definition of Probability
  1.6 Finite Sample Spaces
  1.7 Counting Methods
  1.8 Combinatorial Methods
  1.9 Multinomial Coefficients
  1.10 The Probability of a Union of Events
  1.12 Supplementary Exercises

2 Conditional Probability
  2.1 The Definition of Conditional Probability
  2.2 Independent Events
  2.3 Bayes' Theorem
  2.4 The Gambler's Ruin Problem
  2.5 Supplementary Exercises

3 Random Variables and Distributions
  3.1 Random Variables and Discrete Distributions
  3.2 Continuous Distributions
  3.3 The Cumulative Distribution Function
  3.4 Bivariate Distributions
  3.5 Marginal Distributions
  3.6 Conditional Distributions
  3.7 Multivariate Distributions
  3.8 Functions of a Random Variable
  3.9 Functions of Two or More Random Variables
  3.10 Markov Chains
  3.11 Supplementary Exercises

4 Expectation
  4.1 The Expectation of a Random Variable
  4.2 Properties of Expectations
  4.3 Variance
  4.4 Moments
  4.5 The Mean and the Median


  4.6 Covariance and Correlation
  4.7 Conditional Expectation
  4.8 Utility
  4.9 Supplementary Exercises

5 Special Distributions
  5.2 The Bernoulli and Binomial Distributions
  5.3 The Hypergeometric Distributions
  5.4 The Poisson Distributions
  5.5 The Negative Binomial Distributions
  5.6 The Normal Distributions
  5.7 The Gamma Distributions
  5.8 The Beta Distributions
  5.9 The Multinomial Distributions
  5.10 The Bivariate Normal Distributions
  5.11 Supplementary Exercises

6 Large Random Samples
  6.1 Introduction
  6.2 The Law of Large Numbers
  6.3 The Central Limit Theorem
  6.4 The Correction for Continuity
  6.5 Supplementary Exercises

7 Estimation
  7.1 Statistical Inference
  7.2 Prior and Posterior Distributions
  7.3 Conjugate Prior Distributions
  7.4 Bayes Estimators
  7.5 Maximum Likelihood Estimators
  7.6 Properties of Maximum Likelihood Estimators
  7.7 Sufficient Statistics
  7.8 Jointly Sufficient Statistics
  7.9 Improving an Estimator
  7.10 Supplementary Exercises

8 Sampling Distributions of Estimators
  8.1 The Sampling Distribution of a Statistic
  8.2 The Chi-Square Distributions
  8.3 Joint Distribution of the Sample Mean and Sample Variance
  8.4 The t Distributions
  8.5 Confidence Intervals
  8.6 Bayesian Analysis of Samples from a Normal Distribution
  8.7 Unbiased Estimators
  8.8 Fisher Information
  8.9 Supplementary Exercises


9 Testing Hypotheses
  9.1 Problems of Testing Hypotheses
  9.2 Testing Simple Hypotheses
  9.3 Uniformly Most Powerful Tests
  9.4 Two-Sided Alternatives
  9.5 The t Test
  9.6 Comparing the Means of Two Normal Distributions
  9.7 The F Distributions
  9.8 Bayes Test Procedures
  9.9 Foundational Issues
  9.10 Supplementary Exercises

10 Categorical Data and Nonparametric Methods
  10.1 Tests of Goodness-of-Fit
  10.2 Goodness-of-Fit for Composite Hypotheses
  10.3 Contingency Tables
  10.4 Tests of Homogeneity
  10.5 Simpson's Paradox
  10.6 Kolmogorov-Smirnov Tests
  10.7 Robust Estimation
  10.8 Sign and Rank Tests
  10.9 Supplementary Exercises

11 Linear Statistical Models
  11.1 The Method of Least Squares
  11.2 Regression
  11.3 Statistical Inference in Simple Linear Regression
  11.4 Bayesian Inference in Simple Linear Regression
  11.5 The General Linear Model and Multiple Regression
  11.6 Analysis of Variance
  11.7 The Two-Way Layout
  11.8 The Two-Way Layout with Replications
  11.9 Supplementary Exercises

12 Simulation
  12.1 What Is Simulation?
  12.2 Why Is Simulation Useful?
  12.3 Simulating Specific Distributions
  12.4 Importance Sampling
  12.5 Markov Chain Monte Carlo
  12.6 The Bootstrap
  12.7 Supplementary Exercises

R Code for Two Text Examples


Preface

This manual contains solutions to all of the exercises in Probability and Statistics, 4th edition, by Morris DeGroot and myself. I have preserved most of the solutions to the exercises that existed in the 3rd edition. Certainly errors have been introduced, and I will post any errors brought to my attention on my web page

http://www.stat.cmu.edu/~mark/

along with errors in the text itself. Feel free to send me comments.

For instructors who are familiar with earlier editions, I hope that you will find the 4th edition at least as useful. Some new material has been added, and little has been removed. Assuming that you will be spending the same amount of time using the text as before, something will have to be skipped. I have tried to arrange the material so that instructors can choose what to cover and what not to cover based on the type of course they want. This manual contains commentary on specific sections right before the solutions for those sections. This commentary is intended to explain special features of those sections and help instructors decide which parts they want to require of their students. Special attention is given to more challenging material and how the remainder of the text does or does not depend upon it.

To teach a mathematical statistics course for students with a strong calculus background, one could safely cover all of the material for which one could find time. The Bayesian sections include 4.8, 7.2, 7.3, 7.4, 8.6, 9.8, and 11.4. One can choose to skip some or all of this material if one desires, but that would be ignoring one of the unique features of the text. The more challenging material in Sections 7.7–7.9 and 9.2–9.4 is really only suitable for a mathematical statistics course. One should try to make time for some of the material in Sections 12.1–12.3 even if it meant cutting back on some of the nonparametrics and two-way ANOVA.

To teach a more modern statistics course, one could skip Sections 7.7–7.9, 9.2–9.4, 10.8, and 11.7–11.8. This would leave time to discuss robust estimation (Section 10.7) and simulation (Chapter 12). Section 3.10 on Markov chains is not actually necessary even if one wishes to introduce Markov chain Monte Carlo (Section 12.5), although it is helpful for understanding what this topic is about.

Using Statistical Software

The text was written without reference to any particular statistical or mathematical software. However, there are several places throughout the text where references are made to what general statistical software might be able to do. This is done for at least two reasons. One is that different instructors who wish to use statistical software while teaching will generally choose different programs. I didn't want the text to be tied to a particular program to the exclusion of others. A second reason is that there are still many instructors of mathematical probability and statistics courses who prefer not to use any software at all.

Given how pervasive computing is becoming in the use of statistics, the second reason above is becoming less compelling. Given the free and multiplatform availability and the versatility of the environment R, even the first reason is becoming less compelling. Throughout this manual, I have inserted pointers to which R functions will perform many of the calculations that would formerly have been done by hand when using this text. The software can be downloaded for Unix, Windows, or Mac OS from

http://www.r-project.org/

That site also has manuals for installation and use. Help is also available directly from within the R environment.

Many tutorials for getting started with R are available online. At the official R site there is the detailed manual

http://cran.r-project.org/doc/manuals/R-intro.html

that starts simple and has a good table of contents and lots of examples. However, reading it from start to finish is not an efficient way to get started. The sample sessions should be most helpful.

One major issue with using an environment like R is that it is essentially programming. That is, students who have never programmed seriously before are going to have a steep learning curve. Without going into the philosophy of whether students should learn statistics without programming, the field is moving in the direction of requiring programming skills. People who want only to understand what a statistical analysis

is about can still learn that without being able to program. But anyone who actually wants to do statistics as part of their job will be seriously handicapped without programming ability. At the end of this manual is a series of heavily commented R programs that illustrate many of the features of R in the context of a specific example from the text.

Mark J. Schervish


Chapter 1: Introduction to Probability

1.2 Interpretations of Probability

Commentary

It is interesting to have the students determine some of their own subjective probabilities. For example, let X denote the temperature at noon tomorrow outside the building in which the class is being held. Have each student determine a number x1 such that the student considers the following two possible outcomes to be equally likely: X ≤ x1 and X > x1. Also, have each student determine numbers x2 and x3 (with x2 < x3) such that the student considers the following three possible outcomes to be equally likely: X ≤ x2, x2 < X < x3, and X ≥ x3. Determinations of more than three outcomes that are considered to be equally likely can also be made. The different values of x1 determined by different members of the class should be discussed, and also the possibility of getting the class to agree on a common value of x1.

Similar determinations of equally likely outcomes can be made by the students in the class for quantities such as the following ones, which were found in the 1973 World Almanac and Book of Facts: the number of freight cars that were in use by American railways in 1960 (1,690,396), the number of banks in the United States which closed temporarily or permanently in 1931 on account of financial difficulties (2,294), and the total number of telephones which were in service in South America in 1971 (6,137,000).

1.4 Set Theory

Solutions to Exercises

1. Assume that x ∈ B^c. We need to show that x ∈ A^c. We shall show this indirectly. Assume, to the contrary, that x ∈ A. Then x ∈ B because A ⊂ B. This contradicts x ∈ B^c. Hence x ∈ A is false and x ∈ A^c.

2. First, show that A ∩ (B ∪ C) ⊂ (A ∩ B) ∪ (A ∩ C). Let x ∈ A ∩ (B ∪ C). Then x ∈ A and x ∈ B ∪ C. That is, x ∈ A and either x ∈ B or x ∈ C (or both). So either (x ∈ A and x ∈ B) or (x ∈ A and x ∈ C) or both. That is, either x ∈ A ∩ B or x ∈ A ∩ C. This is what it means to say that x ∈ (A ∩ B) ∪ (A ∩ C). Thus A ∩ (B ∪ C) ⊂ (A ∩ B) ∪ (A ∩ C). Basically, running these steps backwards shows that (A ∩ B) ∪ (A ∩ C) ⊂ A ∩ (B ∪ C).

3. To prove the first result, let x ∈ (A ∪ B)^c. This means that x is not in A ∪ B. In other words, x is neither in A nor in B. Hence x ∈ A^c and x ∈ B^c. So x ∈ A^c ∩ B^c. This proves that (A ∪ B)^c ⊂ A^c ∩ B^c. Next, suppose that x ∈ A^c ∩ B^c. Then x ∈ A^c and x ∈ B^c. So x is neither in A nor in B, so it can't be in A ∪ B. Hence x ∈ (A ∪ B)^c. This shows that A^c ∩ B^c ⊂ (A ∪ B)^c. The second result follows from the first by applying the first result to A^c and B^c and then taking complements of both sides.
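Although the software pointers in this manual use R, the set identities in Exercises 2 and 3 are easy to sanity-check on small finite sets. Here is an illustrative sketch, not part of the original manual, using Python's built-in set operations with a finite set U standing in for the sample space S:

```python
# Sanity check of Exercise 2 (distributive law) and Exercise 3
# (De Morgan's law) on small finite sets.
U = set(range(10))   # a small universe playing the role of S
A = {0, 1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 6, 8}

# Exercise 2: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)

# Exercise 3: (A ∪ B)^c = A^c ∩ B^c, complements taken relative to U
assert U - (A | B) == (U - A) & (U - B)

print("both identities hold")
```

The particular sets A, B, and C are arbitrary; any choice works, since the identities hold for all sets.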

4. To see that A ∩ B and A ∩ B^c are disjoint, let x ∈ A ∩ B. Then x ∈ B, hence x ∉ B^c and so x ∉ A ∩ B^c. So no element of A ∩ B is in A ∩ B^c, hence the two events are disjoint. To prove that A = (A ∩ B) ∪ (A ∩ B^c), we shall show that each side is a subset of the other side. First, let x ∈ A. Either x ∈ B or x ∈ B^c. If x ∈ B, then x ∈ A ∩ B. If x ∈ B^c, then x ∈ A ∩ B^c. Either way, x ∈ (A ∩ B) ∪ (A ∩ B^c). So every element of A is an element of (A ∩ B) ∪ (A ∩ B^c) and we conclude that A ⊂ (A ∩ B) ∪ (A ∩ B^c). Finally, let x ∈ (A ∩ B) ∪ (A ∩ B^c). Then either x ∈ A ∩ B, in which case x ∈ A, or x ∈ A ∩ B^c, in which case x ∈ A. Either way x ∈ A, so every element of (A ∩ B) ∪ (A ∩ B^c) is also an element of A and (A ∩ B) ∪ (A ∩ B^c) ⊂ A.

5. To prove the first result, let x ∈ (∪_i A_i)^c. This means that x is not in ∪_i A_i. In other words, for every i ∈ I, x is not in A_i. Hence for every i ∈ I, x ∈ A_i^c. So x ∈ ∩_i A_i^c. This proves that (∪_i A_i)^c ⊂ ∩_i A_i^c. Next, suppose that x ∈ ∩_i A_i^c. Then x ∈ A_i^c for every i ∈ I. So for every i ∈ I, x is not in A_i. So x can't be in ∪_i A_i. Hence x ∈ (∪_i A_i)^c. This shows that ∩_i A_i^c ⊂ (∪_i A_i)^c. The second result follows from the first by applying the first result to A_i^c for i ∈ I and then taking complements of both sides.

6. (a) Blue card numbered 2 or 4.
(b) Blue card numbered 5, 6, 7, 8, 9, or 10.
(c) Any blue card or a red card numbered 1, 2, 3, 4, 6, 8, or 10.
(d) Blue card numbered 2, 4, 6, 8, or 10, or red card numbered 2 or 4.
(e) Red card numbered 5, 7, or 9.

7. (a) These are the points not in A, hence they must be either below 1 or above 5. That is, A^c = {x : x < 1 or x > 5}.
(b) These are the points in either A or B or both. So they must be between 1 and 5 or between 3 and 7. That is, A ∪ B = {x : 1 ≤ x ≤ 7}.
(c) These are the points in B but not in C. That is, B ∩ C^c = {x : 3 < x ≤ 7}. (Note that B ⊂ C^c.)
(d) These are the points in none of the three sets, namely A^c ∩ B^c ∩ C^c = {x : 0 < x < 1 or x > 7}.
(e) These are the points in the answer to part (b) and in C. There are no such values and (A ∪ B) ∩ C = ∅.

8. Blood type A reacts only with anti-A, so type A blood corresponds to A ∩ B^c. Type B blood reacts only with anti-B, so type B blood corresponds to A^c ∩ B. Type AB blood reacts with both, so A ∩ B characterizes type AB blood. Finally, type O reacts with neither antigen, so type O blood corresponds to the event A^c ∩ B^c.

9. (a) For each n, B_n = B_{n+1} ∪ A_n, hence B_n ⊃ B_{n+1} for all n. For each n, C_{n+1} ∩ A_n = C_n, so C_n ⊂ C_{n+1}.
(b) Suppose that x ∈ ∩_{n=1}^∞ B_n. Then x ∈ B_n for all n. That is, x ∈ ∪_{i=n}^∞ A_i for all n. For n = 1, there exists i ≥ n such that x ∈ A_i. Assume to the contrary that there are at most finitely many i such that x ∈ A_i. Let m be the largest such i. For n = m + 1, we know that there is i ≥ n such that x ∈ A_i. This contradicts m being the largest i such that x ∈ A_i. Hence, x is in infinitely many A_i. For the other direction, assume that x is in infinitely many A_i. Then, for every n, there is a value of j > n such that x ∈ A_j, hence x ∈ ∪_{i=n}^∞ A_i = B_n for every n and x ∈ ∩_{n=1}^∞ B_n.
(c) Suppose that x ∈ ∪_{n=1}^∞ C_n. That is, there exists n such that x ∈ C_n = ∩_{i=n}^∞ A_i, so x ∈ A_i for all i ≥ n. So, there are at most finitely many i (a subset of 1, . . . , n − 1) such that x ∉ A_i. Finally, suppose that x ∈ A_i for all but finitely many i. Let k be the last i such that x ∉ A_i. Then x ∈ A_i for all i ≥ k + 1, hence x ∈ ∩_{i=k+1}^∞ A_i = C_{k+1}. Hence x ∈ ∪_{n=1}^∞ C_n.
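The partition of A into the disjoint pieces A ∩ B and A ∩ B^c proved in Exercise 4 can also be checked on concrete finite sets. An illustrative sketch, not part of the original manual, with a finite set U standing in for S:

```python
# Check Exercise 4: A ∩ B and A ∩ B^c are disjoint, and their union is A.
U = set(range(12))   # finite universe standing in for S
A = {1, 3, 5, 7, 9}
B = {0, 1, 2, 3}
Bc = U - B           # complement of B relative to U

assert (A & B).isdisjoint(A & Bc)   # the two pieces share no element
assert (A & B) | (A & Bc) == A      # together they recover A

print("partition identity verified")
```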

10. (a) All three dice show even numbers if and only if all three of A, B, and C occur. So, the event is A ∩ B ∩ C.
(b) None of the three dice show even numbers if and only if all three of A^c, B^c, and C^c occur. So, the event is A^c ∩ B^c ∩ C^c.
(c) At least one die shows an odd number if and only if at least one of A^c, B^c, and C^c occurs. So, the event is A^c ∪ B^c ∪ C^c.
(d) At most two dice show odd numbers if and only if at least one die shows an even number, so the event is A ∪ B ∪ C. This can also be expressed as the union of the three events of the form A ∩ B ∩ C^c where exactly one die shows odd, together with the three events of the form A ∩ B^c ∩ C^c where exactly two dice show odd, together with the event A ∩ B ∩ C where no dice show odd.
(e) We can enumerate all the sums that are no greater than 5: 1 + 1 + 1, 2 + 1 + 1, 1 + 2 + 1, 1 + 1 + 2, 2 + 2 + 1, 2 + 1 + 2, and 1 + 2 + 2. The first of these corresponds to the event A1 ∩ B1 ∩ C1, the second to A2 ∩ B1 ∩ C1, etc. The union of the seven such events is what is requested, namely
(A1 ∩ B1 ∩ C1) ∪ (A2 ∩ B1 ∩ C1) ∪ (A1 ∩ B2 ∩ C1) ∪ (A1 ∩ B1 ∩ C2) ∪ (A2 ∩ B2 ∩ C1) ∪ (A2 ∩ B1 ∩ C2) ∪ (A1 ∩ B2 ∩ C2).

11. (a) All of the events mentioned can be determined by knowing the voltages of the two subcells. Hence the following set can serve as a sample space:
S = {(x, y) : 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5},
where the first coordinate is the voltage of the first subcell and the second coordinate is the voltage of the second subcell. Any more complicated set from which these two voltages can be determined could serve as the sample space, so long as each outcome could at least hypothetically be learned.
(b) The power cell is functional if and only if the sum of the voltages is at least 6. Hence, A = {(x, y) ∈ S : x + y ≥ 6}. It is clear that B = {(x, y) ∈ S : x = y} and C = {(x, y) ∈ S : x > y}. The power cell is not functional if and only if the sum of the voltages is less than 6. It needs less than one volt to be functional if and only if the sum of the voltages is greater than 5. The intersection of these two is the event D = {(x, y) ∈ S : 5 < x + y < 6}. The restriction "∈ S" that appears in each of these descriptions guarantees that the set is a subset of S. One could leave off this restriction and add the two restrictions 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5 to each set.
(c) The description can be worded as "the power cell is not functional, and needs at least one more volt to be functional, and both subcells have the same voltage." This is the intersection of A^c, D^c, and B. That is, A^c ∩ D^c ∩ B. The part of D^c in which x + y ≥ 6 is not part of this set because of the intersection with A^c.
(d) We need the intersection of A^c (not functional) with C^c (second subcell at least as big as first) and with B^c (subcells are not the same). In particular, C^c ∩ B^c is the event that the second subcell is strictly higher than the first. So, the event is A^c ∩ B^c ∩ C^c.

1.5 The Definition of Probability

Solutions to Exercises

1. Define the following events:
A = {the selected ball is red},
B = {the selected ball is white},
C = {the selected ball is either blue, yellow, or green}.

We are asked to find Pr(C). The three events A, B, and C are disjoint and A ∪ B ∪ C = S. So 1 = Pr(A) + Pr(B) + Pr(C). We are told that Pr(A) = 1/5 and Pr(B) = 2/5. It follows that Pr(C) = 2/5.

2. Let B be the event that a boy is selected, and let G be the event that a girl is selected. We are told that B ∪ G = S, so G = B^c. Since Pr(B) = 0.3, it follows that Pr(G) = 0.7.

3. (a) If A and B are disjoint, then B ⊂ A^c and B ∩ A^c = B, so Pr(B ∩ A^c) = Pr(B) = 1/2.
(b) If A ⊂ B, then B = A ∪ (B ∩ A^c) with A and B ∩ A^c disjoint. So Pr(B) = Pr(A) + Pr(B ∩ A^c). That is, 1/2 = 1/3 + Pr(B ∩ A^c), so Pr(B ∩ A^c) = 1/6.
(c) According to Theorem 1.4.11, B = (B ∩ A) ∪ (B ∩ A^c). Also, B ∩ A and B ∩ A^c are disjoint so, Pr(B) = Pr(B ∩ A) + Pr(B ∩ A^c). That is, 1/2 = 1/8 + Pr(B ∩ A^c), so Pr(B ∩ A^c) = 3/8.

4. Let E1 be the event that student A fails and let E2 be the event that student B fails. We want Pr(E1 ∪ E2). We are told that Pr(E1) = 0.5, Pr(E2) = 0.2, and Pr(E1 ∩ E2) = 0.1. According to Theorem 1.5.7, Pr(E1 ∪ E2) = 0.5 + 0.2 − 0.1 = 0.6.

5. Using the same notation as in Exercise 4, we now want Pr(E1^c ∩ E2^c). According to Theorems 1.4.9 and 1.5.3, this equals 1 − Pr(E1 ∪ E2) = 0.4.

6. Using the same notation as in Exercise 4, we now want Pr([E1 ∩ E2^c] ∪ [E1^c ∩ E2]). These two events are disjoint, so
Pr([E1 ∩ E2^c] ∪ [E1^c ∩ E2]) = Pr(E1 ∩ E2^c) + Pr(E1^c ∩ E2).
Use the reasoning from part (c) of Exercise 3 above to conclude that
Pr(E1 ∩ E2^c) = Pr(E1) − Pr(E1 ∩ E2) = 0.4,
Pr(E1^c ∩ E2) = Pr(E2) − Pr(E1 ∩ E2) = 0.1.
It follows that the probability we want is 0.5.

7. Rearranging terms in Eq. (1.5.1) of the text, we get
Pr(A ∩ B) = Pr(A) + Pr(B) − Pr(A ∪ B) = 0.4 + 0.7 − Pr(A ∪ B) = 1.1 − Pr(A ∪ B).
So Pr(A ∩ B) is largest when Pr(A ∪ B) is smallest and vice-versa. The smallest possible value for Pr(A ∪ B) occurs when one of the events is a subset of the other. In the present exercise this could only happen if A ⊂ B, in which case Pr(A ∪ B) = Pr(B) = 0.7, and Pr(A ∩ B) = 0.4. The largest possible value of Pr(A ∪ B) occurs when either A and B are disjoint or when A ∪ B = S. The former is not possible since the probabilities are too large, but the latter is possible. In this case Pr(A ∪ B) = 1 and Pr(A ∩ B) = 0.1.

8. Let A be the event that a randomly selected family subscribes to the morning paper, and let B be the event that a randomly selected family subscribes to the afternoon paper. We are told that Pr(A) = 0.5, Pr(B) = 0.65, and Pr(A ∪ B) = 0.85. We are asked to find Pr(A ∩ B). Using Theorem 1.5.7 in the text we obtain
Pr(A ∩ B) = Pr(A) + Pr(B) − Pr(A ∪ B) = 0.5 + 0.65 − 0.85 = 0.3.
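The inclusion-exclusion arithmetic in Exercises 4 through 6 and Exercise 8 can be verified mechanically. An illustrative sketch, not part of the original manual (the manual's own software pointers use R; Python is used here for the same arithmetic):

```python
# Arithmetic checks for Exercises 4-6 and 8.
p1, p2, p12 = 0.5, 0.2, 0.1              # Pr(E1), Pr(E2), Pr(E1 ∩ E2)

p_union = p1 + p2 - p12                  # Exercise 4: Pr(E1 ∪ E2)
p_neither = 1 - p_union                  # Exercise 5: Pr(E1^c ∩ E2^c)
p_exactly_one = (p1 - p12) + (p2 - p12)  # Exercise 6

# Exercise 8: inclusion-exclusion solved for the intersection.
pA, pB, pAorB = 0.5, 0.65, 0.85
p_both = pA + pB - pAorB

print(round(p_union, 2), round(p_neither, 2),
      round(p_exactly_one, 2), round(p_both, 2))   # → 0.6 0.4 0.5 0.3
```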

9. The required probability is
Pr(A ∩ B^c) + Pr(A^c ∩ B) = [Pr(A) − Pr(A ∩ B)] + [Pr(B) − Pr(A ∩ B)] = Pr(A) + Pr(B) − 2 Pr(A ∩ B).

10. Theorem 1.4.11 says that A = (A ∩ B) ∪ (A ∩ B^c). Clearly the two events A ∩ B and A ∩ B^c are disjoint. It follows from Theorem 1.5.6 that Pr(A) = Pr(A ∩ B) + Pr(A ∩ B^c).

11. (a) The set of points for which (x − 1/2)^2 + (y − 1/2)^2 < 1/4 is the interior of a circle that is contained in the unit square. (Its center is (1/2, 1/2) and its radius is 1/2.) The area of this circle is π/4, so the area of the remaining region (what we want) is 1 − π/4.
(b) We need the area of the region between the two lines y = 1/2 − x and y = 3/2 − x. The remaining area is the union of two right triangles with base and height both equal to 1/2. Each triangle has area 1/8, so the region between the two lines has area 1 − 2/8 = 3/4.
(c) We can use calculus to do this. We want the area under the curve y = 1 − x^2 between x = 0 and x = 1. This equals
∫₀¹ (1 − x^2) dx = [x − x^3/3]₀¹ = 2/3.
(d) The area of a line is 0, so the probability of a line segment is 0.

12. The events B1, B2, . . . are disjoint, because the event B1 contains the points in A1, the event B2 contains the points in A2 but not in A1, the event B3 contains the points in A3 but not in A1 or A2, etc. By this same reasoning, it is seen that ∪_{i=1}^n A_i = ∪_{i=1}^n B_i and ∪_{i=1}^∞ A_i = ∪_{i=1}^∞ B_i. Therefore,
Pr(∪_{i=1}^n A_i) = Pr(∪_{i=1}^n B_i) and Pr(∪_{i=1}^∞ A_i) = Pr(∪_{i=1}^∞ B_i).
However, since the events B1, B2, . . . are disjoint,
Pr(∪_{i=1}^n B_i) = ∑_{i=1}^n Pr(B_i) and Pr(∪_{i=1}^∞ B_i) = ∑_{i=1}^∞ Pr(B_i).

13. We know from Exercise 12 that
Pr(∪_{i=1}^n A_i) = ∑_{i=1}^n Pr(B_i).

Chapter 1. Introduction to Probability

Furthermore, from the definition of the events $B_1, \ldots, B_n$ it is seen that $B_i \subset A_i$ for $i = 1, \ldots, n$. Therefore, by Theorem 1.5.4, $\Pr(B_i) \leq \Pr(A_i)$ for $i = 1, \ldots, n$. It now follows that
$$\Pr\left(\bigcup_{i=1}^n A_i\right) \leq \sum_{i=1}^n \Pr(A_i).$$
(Of course, if the events $A_1, \ldots, A_n$ are disjoint, there is equality in this relation.)

For the second part, apply the first part with $A_i$ replaced by $A_i^c$ for $i = 1, \ldots, n$. We get
$$\Pr\left(\bigcup A_i^c\right) \leq \sum_{i=1}^n \Pr(A_i^c). \tag{S.1.1}$$
Exercise 5 in Sec. 1.4 says that the left side of (S.1.1) is $\Pr\left(\left[\bigcap A_i\right]^c\right)$. Theorem 1.5.3 says that this last probability is $1 - \Pr\left(\bigcap A_i\right)$. Hence, we can rewrite (S.1.1) as
$$1 - \Pr\left(\bigcap A_i\right) \leq \sum_{i=1}^n \Pr(A_i^c).$$
Finally, take one minus both sides of the above inequality (which reverses the inequality) to produce the desired result.

14. First, note that the probability of type AB blood is $1 - (0.5 + 0.34 + 0.12) = 0.04$, by using Theorems 1.5.2 and 1.5.3.

(a) The probability of blood reacting to anti-A is the probability that the blood is either type A or type AB. Since these are disjoint events, the probability is the sum of the two probabilities, namely $0.34 + 0.04 = 0.38$. Similarly, the probability of reacting with anti-B is the probability of being either type B or type AB, $0.12 + 0.04 = 0.16$.

(b) The probability that both antigens react is the probability of type AB blood, namely 0.04.

1.6 Finite Sample Spaces

Solutions to Exercises

1. The safe way to obtain the answer at this stage of our development is to count that 18 of the 36 outcomes in the sample space yield an odd sum. Another way to solve the problem is to note that regardless of what number appears on the first die, there are three numbers on the second die that will yield an odd sum and three numbers that will yield an even sum. Either way the probability is 1/2.

2. The event whose probability we want is the complement of the event in Exercise 1, so the probability is also 1/2.

3. The only differences greater than or equal to 3 that are available are 3, 4, and 5. These large differences only occur for the six outcomes in the upper right and the six outcomes in the lower left of the array in Example 1.6.5 of the text. So the probability we want is $1 - 12/36 = 2/3$.

4. Let $x$ be the proportion of the school in grade 3 (the same as grades 2–6). Then $2x$ is the proportion in grade 1, and $1 = 2x + 5x = 7x$. So $x = 1/7$, which is the probability that a randomly selected student will be in grade 3.

5. The probability of being in an odd-numbered grade is $2x + x + x = 4x = 4/7$.

6. Assume that all eight possible combinations of faces are equally likely. Only two of those combinations have all three faces the same, so the probability is 1/4.

7. The possible genotypes of the offspring are $aa$ and $Aa$, since one parent will definitely contribute an $a$, while the other can contribute either $A$ or $a$. Since the parent who is $Aa$ contributes each possible allele with probability 1/2 each, the probabilities of the two possible offspring are each 1/2 as well.

8. (a) The sample space contains 12 outcomes: (Head, 1), (Tail, 1), (Head, 2), (Tail, 2), etc.

(b) Assume that all 12 outcomes are equally likely. Three of the outcomes have Head and an odd number, so the probability is 1/4.

1.7 Counting Methods

Commentary

If you wish to stress computer evaluation of probabilities, then there are programs for computing factorials and log-factorials. For example, in the statistical software R, there are functions factorial and lfactorial that compute these. If you cover Stirling's formula (Theorem 1.7.5), you can use these functions to illustrate the closeness of the approximation.

Solutions to Exercises

1. Each pair of starting day and leap year/no leap year designation determines a calendar, and each calendar corresponds to exactly one such pair. Since there are seven days and two designations, there are a total of $7 \times 2 = 14$ different calendars.

2. There are 20 ways to choose the student from the first class, and no matter which is chosen, there are 18 ways to choose the student from the second class. No matter which two students are chosen from the first two classes, there are 25 ways to choose the student from the third class. The multiplication rule can be applied to conclude that the total number of ways to choose the three members is $20 \times 18 \times 25 = 9000$.

3. This is a simple matter of permutations of five distinct items, so there are $5! = 120$ ways.

4. There are six different possible shirts, and no matter what shirt is picked, there are four different slacks. So there are 24 different combinations.

5. Let the sample space consist of all four-tuples of dice rolls. There are $6^4 = 1296$ possible outcomes. The outcomes with all four rolls different consist of all of the permutations of six items taken four at a time. There are $P_{6,4} = 360$ of these outcomes. So the probability we want is $360/1296 = 5/18$.

6. With six rolls, there are $6^6 = 46656$ possible outcomes. The outcomes with all different rolls are the permutations of six distinct items. There are $6! = 720$ outcomes in the event of interest, so the probability is $720/46656 = 0.01543$.

7. There are $20^{12}$ possible outcomes in the sample space. If the 12 balls are to be thrown into different boxes, the first ball can be thrown into any one of the 20 boxes, the second ball can then be thrown into any one of the other 19 boxes, etc. Thus, there are $20 \cdot 19 \cdot 18 \cdots 9$ possible outcomes in the event. So the probability is $20!/[8!\,20^{12}]$.
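Two of the Sec. 1.7 answers above can be confirmed numerically. This Python sketch (illustrative only) verifies Exercise 5 by enumerating all $6^4$ four-tuples of rolls, and evaluates the Exercise 7 probability $20!/[8!\,20^{12}]$ exactly:

```python
from fractions import Fraction
from itertools import product
from math import factorial

# Exercise 5: probability that four rolls of a fair die are all different.
favorable = sum(1 for r in product(range(1, 7), repeat=4) if len(set(r)) == 4)
print(Fraction(favorable, 6**4))  # 5/18

# Exercise 7: probability that 12 balls thrown at random into 20 boxes
# all land in different boxes, i.e. 20!/(8! * 20**12).
p = Fraction(factorial(20), factorial(8) * 20**12)
print(float(p))
```

The enumeration finds $P_{6,4} = 360$ favorable outcomes, matching the permutation count in the solution.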

8. There are $7^5$ possible outcomes in the sample space. If the five passengers are to get off at different floors, the first passenger can get off at any one of the seven floors, the second passenger can then get off at any one of the other six floors, etc. Thus, the probability is
$$\frac{7 \cdot 6 \cdot 5 \cdot 4 \cdot 3}{7^5} = \frac{360}{2401}.$$

9. There are $6!$ possible arrangements in which the six runners can finish the race. If the three runners from team A finish in the first three positions, there are $3!$ arrangements of these three runners among these three positions, and there are also $3!$ arrangements of the three runners from team B among the last three positions. Therefore, there are $3! \times 3!$ arrangements in which the runners from team A finish in the first three positions and the runners from team B finish in the last three positions. Thus, the probability is $(3!\,3!)/6! = 1/20$.

10. We can imagine that the 100 balls are randomly ordered in a list, and then drawn in that order. Thus, the required probability in part (a), (b), or (c) of this exercise is simply the probability that the first, fiftieth, or last ball in the list is red. Each of these probabilities is the same, $r/100$, because of the random order of the list.

11. In terms of factorials, $\binom{n}{k} = \frac{n!}{k!(n-k)!}$. Since we are assuming that $n$ and $n-k$ are large, we can use Stirling's formula to approximate both of them. The approximation to $n!$ is $(2\pi)^{1/2} n^{n+1/2} e^{-n}$, and the approximation to $(n-k)!$ is $(2\pi)^{1/2} (n-k)^{n-k+1/2} e^{-n+k}$. The approximation to the ratio is the ratio of the approximations because the ratio of each approximation to its corresponding factorial converges to 1. That is,
$$\frac{n!}{k!(n-k)!} \approx \frac{(2\pi)^{1/2} n^{n+1/2} e^{-n}}{k!\,(2\pi)^{1/2} (n-k)^{n-k+1/2} e^{-n+k}} = \frac{e^{-k} n^k}{k!} \left(1 - \frac{k}{n}\right)^{-(n-k+1/2)}.$$
Further simplification is available if one assumes that $k$ is small compared to $n$, that is, $k/n \approx 0$. In this case, the last factor is approximately $e^k$, and the whole approximation simplifies to $n^k/k!$. This makes sense because, if $n/(n-k)$ is essentially 1, then the product of the $k$ largest factors in $n!$ is essentially $n^k$.

1.8 Combinatorial Methods

Commentary

This section ends with an extended example called "The Tennis Tournament". This is an application of combinatorics that uses a slightly subtle line of reasoning.

Solutions to Exercises

1. We have to assign 10 houses to one pollster, and the other pollster will get to canvass the other 10 houses. Hence, the number of assignments is the number of combinations of 20 items taken 10 at a time, $\binom{20}{10} = 184{,}756$.

2. The ratio of $\binom{93}{30}$ to $\binom{93}{31}$ is $31/63 < 1$, so $\binom{93}{31}$ is larger.
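The Commentary for Sec. 1.7 suggests using software (R's factorial and lfactorial) to illustrate Stirling's formula. A Python equivalent can do the same, and also spot-check the binomial values in Exercises 1–2 above and the $n^k/k!$ approximation from Exercise 11 of Sec. 1.7; this is an illustrative sketch, with math.lgamma playing the role of lfactorial:

```python
from math import comb, factorial, lgamma, log, pi

# Binomial coefficients from Sec. 1.8, Exercises 1 and 2.
print(comb(20, 10))                   # 184756
print(comb(93, 31) > comb(93, 30))    # True: the ratio is 63/31 > 1

# Exercise 11 of Sec. 1.7: for k small relative to n,
# C(n, k) is approximately n**k / k!.
n, k = 1000, 5
exact = comb(n, k)
approx = n**k / factorial(k)
print(approx / exact)                 # close to 1

# Stirling's approximation to log(m!); lgamma(m + 1) is the exact value
# (the Python analogue of R's lfactorial).
def stirling_log_factorial(m):
    return 0.5 * log(2 * pi) + (m + 0.5) * log(m) - m

print(stirling_log_factorial(100) / lgamma(101))   # very close to 1
```

Working on the log scale avoids overflow for large factorials, which is exactly why lfactorial/lgamma exist.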

3. Since $93 = 63 + 30$, the two numbers are the same.

4. Let the sample space consist of all subsets (not ordered tuples) of the 24 bulbs in the box. There are $\binom{24}{4} = 10626$ such subsets. There is only one subset that has all four defectives, so the probability we want is $1/10626$.

5. The number is $\frac{4251!}{97!\,4154!} = \binom{4251}{97}$, an integer.

6. There are $\binom{n}{2}$ possible pairs of seats that $A$ and $B$ can occupy. Of these pairs, $n-1$ pairs comprise two adjacent seats. Therefore, the probability is
$$\frac{n-1}{\binom{n}{2}} = \frac{2}{n}.$$

7. There are $\binom{n}{k}$ possible sets of $k$ seats to be occupied, and they are all equally likely. There are $n-k+1$ sets of $k$ adjacent seats, so the probability we want is
$$\frac{n-k+1}{\binom{n}{k}} = \frac{(n-k+1)!\,k!}{n!}.$$

8. There are $\binom{n}{k}$ possible sets of $k$ seats to be occupied, and they are all equally likely. Because the circle has no start or end, there are $n$ sets of $k$ adjacent seats, so the probability we want is
$$\frac{n}{\binom{n}{k}} = \frac{(n-k)!\,k!}{(n-1)!}.$$

9. This problem is slightly tricky. The total number of ways of choosing the $n$ seats that will be occupied by the $n$ people is $\binom{2n}{n}$. Offhand, it would seem that there are only two ways of choosing these seats so that no two adjacent seats are occupied, namely
$$X0X0 \ldots X0 \quad\text{and}\quad 0X0X \ldots 0X.$$
Upon further consideration, however, $n-1$ more ways can be found, namely
$$X00X0X \ldots 0X,\quad X0X00X0X \ldots 0X,\ \text{etc.}$$
Therefore, the total number of ways of choosing the seats so that no two adjacent seats are occupied is $n+1$. The probability is $(n+1)/\binom{2n}{n}$.
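The counting claim in Exercise 9, that exactly $n+1$ of the $\binom{2n}{n}$ seatings avoid two adjacent occupied seats, can be verified by brute force for small $n$. A Python sketch (not part of the manual):

```python
from itertools import combinations
from math import comb

# Exercise 9 of Sec. 1.8: among the C(2n, n) ways to choose n occupied
# seats in a row of 2n seats, count those with no two adjacent.
def non_adjacent_choices(n):
    return sum(
        1
        for chosen in combinations(range(2 * n), n)
        if all(b - a > 1 for a, b in zip(chosen, chosen[1:]))
    )

# The solution claims the count is always n + 1.
for n in range(1, 7):
    assert non_adjacent_choices(n) == n + 1

# For n = 5 the probability is (n + 1) / C(2n, n) = 6/252.
print(non_adjacent_choices(5), comb(10, 5))   # 6 252
```

Since combinations yields the chosen seats in sorted order, the adjacency test only needs to compare consecutive entries.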