Diffstat (limited to 'zhilova/final.tex')
-rw-r--r-- | zhilova/final.tex | 179
1 files changed, 179 insertions, 0 deletions
diff --git a/zhilova/final.tex b/zhilova/final.tex
new file mode 100644
index 0000000..10b93f7
--- /dev/null
+++ b/zhilova/final.tex
@@ -0,0 +1,179 @@
\newfam\rsfs
\newfam\bbold
\def\scr#1{{\fam\rsfs #1}}
\def\bb#1{{\fam\bbold #1}}
\let\oldcal\cal
\def\cal#1{{\oldcal #1}}
\font\rsfsten=rsfs10
\font\rsfssev=rsfs7
\font\rsfsfiv=rsfs5
\textfont\rsfs=\rsfsten
\scriptfont\rsfs=\rsfssev
\scriptscriptfont\rsfs=\rsfsfiv
\font\bbten=msbm10
\font\bbsev=msbm7
\font\bbfiv=msbm5
\textfont\bbold=\bbten
\scriptfont\bbold=\bbsev
\scriptscriptfont\bbold=\bbfiv

\def\E{\bb E}
\def\P{\bb P}
\newcount\qnum
\def\fr#1#2{{#1\over #2}}
\def\var{\mathop{\rm var}\nolimits}
\def\cov{\mathop{\rm cov}\nolimits}
\def\dd#1#2{\fr{\partial #1}{\partial #2}}
\def\bmatrix#1{\left[\matrix{#1}\right]}

\def\problem#1{\vskip0pt plus 2in\goodbreak
\vskip0pt plus -2in\medskip\noindent{\bf #1)}\smallskip\penalty500}
\def\part#1{\goodbreak\smallskip\noindent{\bf (#1)}\penalty500\par}

\problem{1}
\part{a}
\let\truebeta\beta
\def\beta{\hat\truebeta}
$${\cal L}(\beta;X_1,\ldots,X_n) = \prod_{i=1}^{n} {1\over
\Gamma(\alpha)\beta^\alpha}X_i^{\alpha-1}e^{-X_i/\beta}.$$
$$\ell(\beta) = \ln{\cal L}(\beta) = -n\ln(\Gamma(\alpha)) - \alpha n\ln\beta
+ \sum_{i=1}^n [(\alpha-1)\ln(X_i) - X_i/\beta].$$
$${d\ell(\beta)\over d\beta} = -{\alpha n\over\beta} + \sum_{i=1}^n
{X_i\over\beta^2} = 0 \to \sum_{i=1}^n X_i = \beta \alpha n \to \beta
= \overline X/\alpha = \overline X/4,$$
where $\overline X = \fr1n \sum_{i=1}^n X_i.$

\part{b}
\let\beta\truebeta
$$\E(\overline X) = \fr1n\sum_{i=1}^n \E X_i = \E X_1,$$
so $\E\hat\beta = \E X_1 / \alpha.$
$$\E X_1 = \int_0^\infty {x^\alpha e^{-x/\beta}\over\Gamma(\alpha)\beta^\alpha} dx
= \alpha\beta\int_0^\infty {x^\alpha e^{-x/\beta}\over
\Gamma(\alpha+1)\beta^{\alpha+1}} dx
= \alpha\beta,$$
because the second integrand is the gamma pdf for $\alpha' =
\alpha+1,$ so it integrates to one. Therefore the estimator is
unbiased: $\E\hat\beta = \alpha\beta/\alpha = \beta.$

\part{c}
Since $\E\hat\beta = \beta,$ Chebyshev's inequality gives consistency
provided $\var\hat\beta$ is finite and tends to zero as $n\to\infty.$

The mgf of $X_k$ is $(1-\beta t)^{-\alpha},$ so the mgf of
$Y = \sum_{i=1}^n X_i$ is $(1-\beta t)^{-n\alpha}.$ Hence
$$Y \sim \Gamma(\alpha n, \beta) \to \overline X \sim \Gamma(\alpha n,
\beta/n) \to \hat\beta \sim \Gamma(\alpha n, \beta/(n\alpha)),$$
by change of variable in the gamma distribution.%
\footnote{*}{If $X\sim \Gamma(\alpha, \beta)$ has pdf
$f(x) = {1\over\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta},$ then
$kX$ has pdf
$\fr1k f(x/k) = {1\over\Gamma(\alpha)(k\beta)^\alpha}x^{\alpha-1}e^{-x/(k\beta)},$
giving $kX \sim \Gamma(\alpha, k\beta).$}
The gamma distribution $\Gamma(\alpha,\beta)$ has variance
$\alpha\beta^2,$ so the variance of $\hat\beta$ is $\alpha n
(\beta/(n\alpha))^2 = \beta^2/(n\alpha),$ which is finite and tends to
zero, so the estimator is consistent.
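
Alternatively, as a cross-check that uses only $\var X_1 =
\alpha\beta^2$ and no mgf argument,
$$\var\hat\beta = \var\left({\overline X\over\alpha}\right) =
{\var X_1\over n\alpha^2} = {\beta^2\over n\alpha},$$
agreeing with the gamma computation above.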

\part{d}

Since we have already proven $\E(\hat\beta) = \beta$ and obtained
$\var\hat\beta = \beta^2/(n\alpha),$ the Central Limit Theorem gives
$$\sqrt n(\hat\beta - \beta) \to {\cal N}(0, n\var(\hat\beta)) =
{\cal N}(0, \beta^2/\alpha)$$
in distribution.

\problem{2}

\part{a}

With $\theta = 0,$ by Student's theorem,
$${\overline X\over S/\sqrt{n}} \sim t(n-1).$$
Taking $t_{1-\alpha}(n-1)$ to be the $1-\alpha$ quantile of the
t-distribution with $n-1$ degrees of freedom, we reject when ${\overline
X\over S/\sqrt{n}} \geq t_{1-\alpha}(n-1).$

Substituting in our given values ($\overline X = .8,$ $n = 25,$ $S^2 =
2.56,$ and $t_{.95}(24) = 1.711$ for $\alpha = .05$),
$${.8\over 1.6/\sqrt{25}} \geq 1.711 \to 2.5 \geq 1.711,$$
so we reject the null hypothesis.

\part{b}

\def\cdf{{\bf T}}
Letting $\cdf$ be the cdf of the t-distribution with $n-1$ degrees of
freedom, we use the following result of Student's theorem,
$${\overline X - \theta\over S/\sqrt n} \sim t(n-1),$$
to get the probability that this statistic lands in the critical
region. Treating $S$ as fixed at its observed value, so that
$S/\sqrt n = 1.6/5 = .32,$
$$\gamma_C(\theta) = \P({\overline X\over S/\sqrt n} > 1.711) =
\P({\overline X-\theta\over S/\sqrt n} > 1.711-{\theta\over S/\sqrt n})
$$$$
= 1-\cdf(1.711-{\theta\over S/\sqrt n}) =
1-\cdf(1.711-{\theta\over .32})
= \cdf({25\theta\over8} - 1.711),
$$
using the symmetry of the t-distribution in the last step.

\problem{3}

\part{a}

Each $X_i$ is uniform on $[0,1],$ so its cdf is $x$ on $[0,1]$ (zero
for $x<0$ and one for $x>1$).
With $X_{(1)} = \min(X_1,\ldots,X_6)$ and $X_{(6)} =
\max(X_1,\ldots,X_6),$ independence gives
$$\P(X_{(1)} \geq x) = \P(X_1 \geq x)\P(X_2 \geq x)\cdots\P(X_6 \geq x)
= (1-x)^6 \to \P(X_{(1)} \leq x) = 1-(1-x)^6,$$
$$\P(X_{(6)} \leq x) = \P(X_1 \leq x)\P(X_2 \leq x)\cdots\P(X_6 \leq x)
= x^6.$$

Differentiating gives the pdfs of $X_{(1)}$ and $X_{(6)},$
respectively, $6(1-x)^5$ and $6x^5$ on $[0,1].$

\part{b}

Integrating by parts,
$$\E(X_{(1)} + X_{(6)}) = \int_0^1 x(6(1-x)^5 + 6x^5)\, dx =$$$$
\left[ x(-(1-x)^6 + x^6) \right]_0^1 - \int_0^1 (-(1-x)^6 + x^6)\, dx =
1 - \fr17\left[(1-x)^7 + x^7\right]_0^1 = 1 - \fr17(1-1) = 1.
$$

This makes sense by symmetry. The maximum of the sample has the same
pdf as the minimum reflected about $1/2,$ so we expect $\E X_{(6)} =
1 - \E X_{(1)}.$

\problem{4}

\part{a}

$$Y := 2X_2 - X_1 = \pmatrix{-1&2&0}{\bf X} = t^T{\bf X}.$$
$$\E Y = t^T\E{\bf X} = -1(-1) + 2(2) = 5.$$
$$\var Y = \E(t^T{\bf X}{\bf X}^Tt) - \E(t^T{\bf X})\E({\bf X}^Tt) =
t^T(\var {\bf X})t =
\bmatrix{-1&2&0}\bmatrix{4&-1&0\cr-1&1&0\cr0&0&2}\bmatrix{-1\cr2\cr0} =
12.
$$

\def\cdf{{\bf\Phi}}
Where $\cdf$ is the cdf of the standard normal distribution, we
standardize $2X_2-X_1$ and get
$$\P(2X_2 > X_1 + 3) = \P(2X_2 - X_1 - 5 > -2) = \P\left({2X_2 - X_1 -
5\over\sqrt{12}} > {-2\over\sqrt{12}}\right) = 1-\cdf(-1/\sqrt{3})
\approx .718.$$

\part{b}

If $X,Y \sim {\cal N}(0,1)$ and $X\perp Y,$ then $X^2 + Y^2 \sim
\chi^2(2)$ by the definition of $\chi^2.$

For our problem, $Y^TY \sim \chi^2(2)$ if $Y$ is a multivariate normal
with
$$\E Y = \pmatrix{0\cr0} \qquad \var Y = \pmatrix{1&0\cr0&1}.$$

Since $X_1$ and $X_3$ are uncorrelated,
$$\var \pmatrix{X_1\cr X_3} = \pmatrix{4&0\cr0&2} \to Y =
\pmatrix{1/2&0\cr0&1/\sqrt2}\pmatrix{X_1\cr X_3} + \mu,$$
with $A = (\var (X_1,X_3)^T)^{-1/2},$ following $A = \Sigma^{-1/2}$
(3.5.12) in the textbook.

$$\E Y = \bmatrix{-1/2\cr\sqrt2} + \mu = 0 \to \mu =
\bmatrix{1/2\cr -\sqrt2}\qquad A = \bmatrix{1/2&0\cr0&1/\sqrt2}$$
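
As a check, this $A$ indeed standardizes the covariance:
$$\var Y = A\,\var\pmatrix{X_1\cr X_3}\,A^T =
\pmatrix{1/2&0\cr0&1/\sqrt2}\pmatrix{4&0\cr0&2}\pmatrix{1/2&0\cr0&1/\sqrt2}
= \pmatrix{1&0\cr0&1}.$$

\bye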