Diffstat (limited to 'houdre')
-rw-r--r--  houdre/hw7.tex  135
1 file changed, 135 insertions, 0 deletions
diff --git a/houdre/hw7.tex b/houdre/hw7.tex
new file mode 100644
index 0000000..0d4a65e
--- /dev/null
+++ b/houdre/hw7.tex
@@ -0,0 +1,135 @@
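+% Font setup: script (rsfs) and blackboard-bold (msbm) families for \scr and \bb.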
+\newfam\rsfs
+\newfam\bbold
+\def\scr#1{{\fam\rsfs #1}}
+\def\bb#1{{\fam\bbold #1}}
+\let\oldcal\cal
+\def\cal#1{{\oldcal #1}}
+\font\rsfsten=rsfs10
+\font\rsfssev=rsfs7
+\font\rsfsfiv=rsfs5
+\textfont\rsfs=\rsfsten
+\scriptfont\rsfs=\rsfssev
+\scriptscriptfont\rsfs=\rsfsfiv
+\font\bbten=msbm10
+\font\bbsev=msbm7
+\font\bbfiv=msbm5
+\textfont\bbold=\bbten
+\scriptfont\bbold=\bbsev
+\scriptscriptfont\bbold=\bbfiv
+
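+% Probability notation, question counter, and display-layout macros.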
+\def\Pr{\bb P}
+\def\E{\bb E}
+\newcount\qnum
+\def\q{\afterassignment\qq\qnum=}
+\def\qq{\qqq{\number\qnum}}
+\def\qqq#1{\bigskip\goodbreak\noindent{\bf#1)}\smallskip}
+\def\align#1{\vcenter{\halign{$\displaystyle##\hfil$\tabskip1em&&
+ $\hfil\displaystyle##$\cr#1}}}
+\def\fr#1#2{{#1\over #2}}
+\def\var{\mathop{\rm var}\nolimits}
+\def\cov{\mathop{\rm cov}\nolimits}
+\def\infint{\int_{-\infty}^\infty}
+\def\pa#1#2{\partial#1/\partial#2}
+
+\q2
+
+%Let the mgf of $X_i$ be $M_{X_i}(x).$
+%The sum of $n$ $X_i$ has mgf $M_{X_i}(x)^n.$
+%This has variance $(M_{X_i}(x)^n)''(0)-(M_{X_i}(x)^n)'(0)^2 =
+%(nM_{X_i}(x)^{n-1}M_{X_i}'(x))'(0)-n^2\mu^2 =
+%n(n-1)M_{X_i}(0)^{n-2}M_{X_i}'(0)^2 + nM_{X_i}(0)^{n-1}M_{X_i}''(0)
+%-n^2\mu^2 = n(n-1)\mu^2 + n\sigma^2 - n^2\mu^2 = n\sigma^2 - n\mu^2.$
+
+%The mgf of $\overline X$ is $$M_{\overline X}(x)=e^{tn^{-1}}M_{X_i}(x)^n.
+%\var\overline X = (e^{tn^{-1}}(n^{-1}M_{X_i}(x)^n+nM_{X_i}(x)^{n-1}M_{X_i}'(x)^n))' $$
+
+$$\E\left(\fr1{n-1}\sum_{i=1}^n(X_i-\overline X)^2\right)
+= \E\left(\fr1{n-1}\sum_{i=1}^n(X_i-\mu - (\overline X -
+\mu))^2\right)$$$$
+= \E\left(\fr1{n-1}\sum_{i=1}^n((X_i-\mu)^2-2(X_i-\mu)(\overline X-\mu)
++ (\overline X-\mu)^2)\right)$$$$
+= \E\left(\fr1{n-1}\Bigl(\sum_{i=1}^n(X_i-\mu)^2-n(\overline X-\mu)^2\Bigr)\right)
+= \fr1{n-1}\bigl(n\sigma^2 - n\E(\overline X-\mu)^2\bigr),$$
+since $\sum_{i=1}^n(X_i-\mu) = n(\overline X-\mu),$ so the cross terms
+contribute $-2n(\overline X-\mu)^2.$
+Writing $\overline X-\mu = {1\over n}((X_1-\mu)+\cdots+(X_n-\mu)),$ this
+becomes
+$$\fr1{n-1}\Bigl(n\sigma^2 - {1\over n}
+\E\bigl(((X_1-\mu)+\cdots+(X_n-\mu))^2\bigr)\Bigr)$$$$
+= \fr1{n-1}\Bigl(n\sigma^2 - {1\over n}
+\bigl(\E((X_1-\mu)^2)+\cdots+\E((X_n-\mu)^2) + \E((X_1-\mu)(X_2-\mu))+\cdots\bigr)\Bigr)$$$$
+= \fr1{n-1}(n\sigma^2 - \sigma^2) = \sigma^2.$$
+Note $\E((X_i-\mu)(X_j-\mu)) = \E(X_i-\mu)\E(X_j-\mu) = 0,$ when $i\neq
+j,$ by independence.
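+
+As an aside (not asked in the problem), the same computation shows why
+the factor $1/(n-1)$ is needed: dividing by $n$ instead gives
+$$\E\left(\fr1n\sum_{i=1}^n(X_i-\overline X)^2\right)
+= \fr1n(n\sigma^2-\sigma^2) = {n-1\over n}\sigma^2 < \sigma^2,$$
+a biased estimator of $\sigma^2.$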
+
+\q3
+
+By linearity and symmetry, $1 = \E(S_n/S_n) = n\E(X_1/S_n),$ so
+$\E(X_1/S_n) = 1/n.$  Hence, for $m\leq n,$
+$\E(S_m/S_n) = m\E(X_1/S_n) = m/n.$
+The symmetry argument only covers the $X_k$ that appear in $S_n,$ so for
+$k>n$ it need not be true that $\E(X_k/S_n) = \E(X_1/S_n),$ and the
+equality need not hold for $m>n.$
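+
+A sketch of what happens for $m>n$ (not part of the original argument;
+it assumes the $X_i$ are positive with finite mean, as the ratio
+$S_m/S_n$ already suggests): since $X_{n+1},\ldots,X_m$ are independent
+of $S_n,$
+$$\E\left({S_m\over S_n}\right)
+= 1 + \E\left({X_{n+1}+\cdots+X_m\over S_n}\right)
+= 1 + (m-n)\E(X_1)\E\left({1\over S_n}\right)
+\geq 1 + {m-n\over n} = {m\over n},$$
+using Jensen's inequality $\E(1/S_n)\geq1/\E(S_n)=1/(n\E X_1),$ with
+equality only when $S_n$ is a.s.\ constant.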
+
+\q8
+
+Conditional on $N=n,$ the sum's mgf is the product of its components'
+mgfs, namely $M_{X_1}(x)^n.$
+For any nonnegative integer-valued random variable $U,$
+$M_U(\ln(s)) = \E(e^{\ln(s)U}) = \E(s^U) = G_U(s),$ so the mgf of the
+random sum is $G_N(M_{X_1}(x)) = M_N(\ln(M_{X_1}(x))).$
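+
+Spelling out the conditioning step (writing $S$ here for the random
+sum, a label not used in the problem, and assuming as in the problem
+that $N$ is independent of the $X_i$):
+$$M_S(x) = \E(e^{xS}) = \sum_{n=0}^\infty\Pr(N=n)\E(e^{xS}\mid N=n)
+= \sum_{n=0}^\infty\Pr(N=n)M_{X_1}(x)^n = G_N(M_{X_1}(x)).$$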
+
+\q10
+
+Expanding $\cov(X,Y) = \E(XY)-\E(X)\E(Y)$ gives
+$$\cov(X,Y) = \E((X_1+\cdots+X_n)(Y_1+\cdots+Y_n))-\E(X_1+\cdots+X_n)
+\E(Y_1+\cdots+Y_n),$$
+which becomes, by linearity,
+$$\cov(X,Y) = \E(X_1Y_1)+\cdots+\E(X_1Y_n)+\cdots+\E(X_nY_n) - (\E X_1 +
+\cdots + \E X_n)(\E Y_1 + \cdots + \E Y_n)$$
+$$= \sum_{i=1}^n(\E(X_iY_i) - \E(X_i)\E(Y_i)) = \sum_{i=1}^n\cov(X_i,Y_i),$$
+since $\E(X_iY_j) = \E(X_i)\E(Y_j)$ for $i\neq j$ by independence, so
+the off-diagonal terms cancel.
+
+By this identity, since the games are independent and identically
+distributed, $\cov(X,Y) = n\cov(X_1,Y_1).$
+$$\cov(X_1,Y_1) = \E(X_1Y_1) - \E(X_1)\E(Y_1) = 2pq - (2p^2 + 2pq)(2q^2 + 2pq)$$$$=
+2pq - 4p^2q^2 - 4p^3q - 4p^2q^2 - 4pq^3 = 2pq - 4pq(2pq+p^2+q^2) = -2pq,$$
+since $2pq+p^2+q^2 = (p+q)^2 = 1.$
+So $\cov(X,Y) = -2npq.$
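+
+Equivalently (using only $p+q=1$): $\E X_1 = 2p^2+2pq = 2p$ and
+$\E Y_1 = 2q^2+2pq = 2q,$ so
+$$\cov(X_1,Y_1) = 2pq - (2p)(2q) = -2pq.$$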
+
+\q14
+
+The time to collect all coupons is the sum of independent geometric
+random variables with success probabilities $p=c/c,(c-1)/c,\ldots,1/c.$
+The moment generating function of a geometric distribution on
+$\{1,2,\ldots\}$ is
+$$\sum_{x=1}^\infty e^{tx}\Pr(X=x) = \sum_{x=1}^\infty e^{tx}pq^{x-1} =
+pe^t\sum_{x=1}^\infty (qe^t)^{x-1} = {pe^t\over 1-qe^t},\qquad qe^t<1.$$
+
+The moment generating function of the collecting time is therefore the
+product of these geometric moment generating functions: with
+$p=(c-k+1)/c$ in the $k$th factor,
+$$\prod_{k=1}^c{((c-k+1)/c)e^t\over1-((k-1)/c)e^t}
+={c!\,e^{ct}\over c^c}\,
+{1\over\left(1-{e^t\over c}\right)\left(1-{2e^t\over c}\right)\cdots\left(1-{(c-1)e^t\over c}\right)}.$$
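+
+As a quick sanity check (not part of the problem): for $c=2$ the
+collecting time is $1$ plus a geometric variable with $p={1\over2},$ so
+its mgf is $e^t\cdot{e^t/2\over1-e^t/2} = {e^{2t}\over2-e^t},$ which
+agrees with the product formula
+${2!\,e^{2t}\over2^2}\cdot{1\over1-e^t/2} = {e^{2t}\over2-e^t}.$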
+
+\q17
+
+Take $\phi(t) = {1\over1+t^2}$ and apply the inversion formula:
+$${1\over2\pi}\infint e^{-itx}\phi(t)\,dt =
+{1\over2}\cdot{1\over\pi}\infint {e^{it(-x)}\over1+t^2}\,dt =
+{1\over2}e^{-|-x|} =
+{1\over2}e^{-|x|} = f(x),$$
+based on 7.80, so the characteristic function of $f(x)={1\over2}e^{-|x|}$
+is $\phi(t) = {1\over1+t^2}.$
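+
+A direct check (not required by the problem): splitting the integral at
+zero,
+$$\infint e^{itx}\,{1\over2}e^{-|x|}\,dx
+= {1\over2}\left(\int_{-\infty}^0 e^{(1+it)x}\,dx
++ \int_0^\infty e^{(it-1)x}\,dx\right)
+= {1\over2}\left({1\over1+it}+{1\over1-it}\right) = {1\over1+t^2}.$$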
+
+\q18
+
+The characteristic function of the Cauchy distribution is $e^{-|t|}.$
+$$\phi_{A_n}(t) = \E(e^{itn^{-1}(X_1+\cdots+X_n)}) =
+\phi_{X_1}(t/n)^n = \left(e^{-|t|/n}\right)^n = e^{-|t|} = \phi_{X_1}(t),$$
+so $A_n$ has the same Cauchy distribution as $X_1$ for every $n.$
+
+\q20
+
+%deeply unsure
+
+By applying the partition theorem to $M(t) = \E(e^{tX}),$ we trivially
+get $$M(t) = \sum_{k=1}^\infty \Pr(N=k)\E(e^{tX}|N=k).$$
+
+Given these specific distributions, conditional on $N=k,$
+$$\Pr(X\leq x) = \Pr(U_1\leq x)\cdots\Pr(U_k\leq x) = x^k,\qquad
+f_{X|N=k}(x) = kx^{k-1},\qquad 0\leq x\leq1,$$
+so, using $\E(e^{tX}) = \sum_{j=0}^\infty {t^j\over j!}\E(X^j),$
+$$\E(e^{tX}|N=k) = \sum_{j=0}^\infty {t^j\over j!}\int_0^1 x^j\,kx^{k-1}\,dx
+= \sum_{j=0}^\infty {t^j\over j!}\,{k\over k+j}.$$
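+
+A quick check (not in the problem statement): for $k=1,$ $X$ is uniform
+on $[0,1],$ and indeed
+$$\sum_{j=0}^\infty {t^j\over j!}\,{1\over1+j}
+= \sum_{j=0}^\infty {t^j\over (j+1)!} = {e^t-1\over t}
+= \int_0^1 e^{tx}\,dx.$$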
+
+\bye