author	Holden Rohrer <hr@hrhr.dev>	2022-01-05 00:02:12 -0500
committer	Holden Rohrer <hr@hrhr.dev>	2022-01-05 00:02:12 -0500
commit	6ce1db1f867c545ebdb1afb705580514b356f883 (patch)
tree	cfcc54ce75bb434a7857a68e4151872e1e4c941c
parent	6cf6c1977c1cda2a7c123368630cc7703ddc5172 (diff)
more math stuff
-rw-r--r--	.gitignore	3
-rw-r--r--	li/Makefile	19
-rw-r--r--	li/hw10.tex	157
-rw-r--r--	li/hw9.tex	185
-rw-r--r--	li/matlab_hw.py	512
-rw-r--r--	zhilova/final.tex	179
-rw-r--r--	zhilova/midterm2.tex	146
7 files changed, 1109 insertions, 92 deletions
diff --git a/.gitignore b/.gitignore
index 48b55b0..dedd592 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,5 @@
*.log
*.pdf
+*.ps
+li/output.txt
+li/insurance.csv
diff --git a/li/Makefile b/li/Makefile
new file mode 100644
index 0000000..76cbdb8
--- /dev/null
+++ b/li/Makefile
@@ -0,0 +1,19 @@
+.POSIX:
+
+final.pdf: matlab_hw.pdf output.pdf
+	pdftk matlab_hw.pdf output.pdf cat output final.pdf
+
+matlab_hw.pdf: matlab_hw.ps
+ ps2pdf matlab_hw.ps
+
+output.pdf: output.ps
+ ps2pdf output.ps
+
+matlab_hw.ps: matlab_hw.py
+ vim -R matlab_hw.py -c "hardcopy > matlab_hw.ps | q"
+
+output.ps: output.txt
+ vim -R output.txt -c "hardcopy > output.ps | q"
+
+output.txt: matlab_hw.py
+ python3 matlab_hw.py > output.txt
diff --git a/li/hw10.tex b/li/hw10.tex
new file mode 100644
index 0000000..2b372de
--- /dev/null
+++ b/li/hw10.tex
@@ -0,0 +1,157 @@
+\def\bmatrix#1{\left[\matrix{#1}\right]}
+\def\dmatrix#1{\left|\matrix{#1}\right|}
+\def\fr#1#2{{#1\over #2}}
+
+ {\bf Section 6.1}
+
+\noindent{\bf 5.}
+
+$$A = \bmatrix{1&b\cr b&9}.$$
+
+{\it (a)}
+$a_{11} = 1 > 0,$
+and $\det A = 9 - b^2 > 0$ when $-3 < b < 3.$
+
+{\it (b)}
+
+$$A = \bmatrix{1&0\cr b&1}\bmatrix{1&b\cr 0&9-b^2}
+ = \bmatrix{1&0\cr b&1}\bmatrix{1&0\cr 0&9-b^2}\bmatrix{1&b\cr 0&1}
+$$
+
+{\it (c)}
+
+$(x-v)^TA(x-v) + c = x^TAx - 2x^TAv + v^TAv + c$ lets us generate
+arbitrary first- and zeroth-order terms while retaining the same
+second-order terms ($v^TAx = x^TAv$ by symmetry), since $x^TAx$ is the
+only second-order term.
+
+$$\fr12(x^2+2bxy+9y^2)-y = \fr12(x^TAx - 2x^TAv + v^TAv + c) \Rightarrow
+y = x^TAv - \fr12(v^TAv + c) \Rightarrow
+y = x^TAv,\quad c = -v^TAv.$$
+
+$$y = x^TAv \Rightarrow \bmatrix{0\cr1} = Av \Rightarrow
+v = \bmatrix{-b/(9-b^2)\cr 1/(9-b^2)},$$
+by row reduction.
+
+The minimum is
+$$\fr12c = -\fr12v^TAv = -\fr12v^T\bmatrix{0\cr1} = \fr1{2(b^2-9)},$$
+when $A$ is positive definite ($|b| < 3$).
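+(As a check, for $b=1$: $v = (-1/8, 1/8)^T,$ and the formula gives
+$\fr1{2(1-9)} = -\fr1{16},$ matching $-\fr12v^TAv = -\fr12\cdot\fr18.$)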
+
+{\it (d)}
+
+There is no minimum if $b=3$ because $x^2+2bxy+9y^2 = x^2+6xy+9y^2 =
+(x+3y)^2$ is zero where $y = -x/3,$ giving nonzero values of $y$ with a
+zero quadratic component and therefore arbitrarily small values of
+$\fr12(x^2+2bxy+9y^2) - y.$
+
+ {\bf Section 6.2}
+
+\noindent{\bf 4.}
+
+Since $A$ is positive definite, each eigenvalue $\lambda > 0$ of $A$
+with eigenvector $v$ gives $A^2v = \lambda^2v,$ so $A^2$ has eigenvalues
+$\lambda^2 > 0$ and no others (because the eigenvectors of $A^2,$ which
+is symmetric, span $R^n$).
+
+Similarly, $A^{-1}$ is symmetric ($(A^{-1}A)^T = I \to (A^{-1})^TA = I
+\to (A^{-1})^T = A^{-1}$), so its eigenvectors will span the codomain,
+and $Av = \lambda v \to A^{-1}Av = A^{-1}\lambda v \to (1/\lambda)v =
+A^{-1}v,$ and $\lambda > 0 \to 1/\lambda > 0,$ so it is also
+positive-definite.
+
+\noindent{\bf 24.}
+
+Letting $s=0$ gives us a set of eigenvalues with minimum
+$\lambda_{\rm min},$ and the set of eigenvalues for arbitrary $s$ has
+minimum $\lambda_{\rm min}+s > 0 \to s > -\lambda_{\rm min}.$
+$$
+\bmatrix{0&-4&-4\cr -4&0&-4\cr -4&-4&0} \to \lambda = 4,-8 \to s > 8.$$
+$$
+\bmatrix{0&3&0\cr 3&0&4\cr 0&4&0} \to \lambda = -5,0,5 \to s > 5.$$
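+(For the second matrix, $\det(A-\lambda I) = -\lambda^3 +
+(3^2+4^2)\lambda,$ with roots $0$ and $\pm5.$)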
+
+\noindent{\bf 27.}
+
+$$A = CC^T = \bmatrix{3&0\cr1&2}\bmatrix{3&1\cr0&2} = \bmatrix{10&3\cr
+3&5}.$$
+
+$$A = \bmatrix{4&8\cr8&25} = \bmatrix{2&0\cr 4&3}\bmatrix{2&4\cr 0&3}.$$
+(This was found by inspection, starting from the LU decomposition with
+$U = \bmatrix{4&8\cr0&9},$ which gives $\sqrt D =
+\bmatrix{2&0\cr0&3}.$)
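+(Check: $\bmatrix{2&0\cr4&3}\bmatrix{2&4\cr0&3} =
+\bmatrix{4&8\cr8&16+9} = \bmatrix{4&8\cr8&25}.$)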
+
+\noindent{\bf 31.}
+
+$A$ has eigenvalue $3$ with eigenvectors $(1,-1,0)^T$ and
+$(1,1,-2)^T.$
+$$x^TAx = 3/2(x-y)^2+1/2(x+y-2z)^2.$$
+
+$$x^TBx = x^T\bmatrix{x+y+z\cr x+y+z\cr x+y+z} = (x+y+z)^2.$$
+
+\goodbreak
+ {\bf Section 6.3}
+
+\noindent{\bf 12.}
+
+{\it (a)}
+
+If $A' = 4A,$ then $\Sigma' = 4\Sigma,$ because $(A')^TA' =
+4A^T\cdot4A = 16A^TA,$ so the singular values (the square roots of the
+eigenvalues of $A^TA$) are $4$ times those in $\Sigma.$
+
+{\it (b)}
+
+The SVD of $A^T$ is $V\Sigma U^T,$ by $(AB)^T = B^TA^T$ and $\Sigma^T =
+\Sigma$ for square $A$ ($V$ and $U$ remain orthogonal because the
+transpose of an orthogonal matrix is orthogonal).
+The SVD of $A^{-1} = A^+$ is $V\Sigma^{-1} U^T,$ because the
+pseudoinverse is the complete inverse for an invertible matrix.
+
+\noindent{\bf 15.}
+
+$$A = \bmatrix{1&1&1&1} = \bmatrix{1}\bmatrix{2&0&0&0}
+	\bmatrix{1/2&1/2&1/2&1/2\cr
+	-1/\sqrt2&1/\sqrt2&0&0\cr
+	-1/\sqrt6&-1/\sqrt6&2/\sqrt6&0\cr
+	-1/\sqrt{12}&-1/\sqrt{12}&-1/\sqrt{12}&3/\sqrt{12}\cr}$$
+(the last three rows must be an orthonormal completion of the first).
+
+$$A^+ = \bmatrix{1/2&-1/\sqrt2&-1/\sqrt6&-1/\sqrt{12}\cr
+	1/2&1/\sqrt2&-1/\sqrt6&-1/\sqrt{12}\cr
+	1/2&0&2/\sqrt6&-1/\sqrt{12}\cr
+	1/2&0&0&3/\sqrt{12}\cr}\bmatrix{1/2\cr 0\cr 0\cr 0}
+	\bmatrix{1}
+	= \bmatrix{1/4\cr1/4\cr1/4\cr1/4}.
+$$
+
+$$B = \bmatrix{0&1&0\cr 1&0&0} = \bmatrix{1&0\cr0&1}\bmatrix{1&0&0\cr0&1&0}
+ \bmatrix{0&1&0\cr1&0&0\cr0&0&1}
+$$
+
+$$B^+ = \bmatrix{0&1&0\cr 1&0&0\cr 0&0&1}\bmatrix{1&0\cr0&1\cr0&0}
+ \bmatrix{1&0\cr0&1}
+= \bmatrix{0&1\cr1&0\cr0&0}$$
+
+$$C = \bmatrix{1&1\cr 0&0} =
+\bmatrix{1&0\cr0&1}\bmatrix{\sqrt2&0\cr0&0}\bmatrix{1/\sqrt2&1/\sqrt2\cr1/\sqrt2&-1/\sqrt2}$$
+
+$$C^+ =
+\bmatrix{1/\sqrt2&1/\sqrt2\cr1/\sqrt2&-1/\sqrt2}\bmatrix{1/\sqrt2&0\cr0&0}
+\bmatrix{1&0\cr0&1} =
+\bmatrix{1/2&0\cr1/2&0}$$
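+
+(Check: $CC^+C = \bmatrix{1&1\cr0&0}\bmatrix{1/2&0\cr1/2&0}
+\bmatrix{1&1\cr0&0} = \bmatrix{1&0\cr0&0}\bmatrix{1&1\cr0&0} = C,$ as a
+pseudoinverse requires.)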
+
+\noindent{\bf 18.}
+
+With $r_i$ as the $i$th row of $A,$ $\hat x = c_1r_1 + c_3r_3.$
+
+$$A^TA\hat x = A^Tb =
+A^T\bmatrix{1&0&0\cr1&0&0\cr1&1&1}(c_1\bmatrix{1\cr0\cr0} +
+c_3\bmatrix{1\cr1\cr1}) =
+\bmatrix{1&1&1\cr0&0&1\cr0&0&1}(c_1\bmatrix{1\cr1\cr1} +
+c_3\bmatrix{1\cr1\cr3}) = $$$$
+c_1\bmatrix{3\cr1\cr1} + c_3\bmatrix{5\cr3\cr3}
+= \bmatrix{1&1&1\cr0&0&1\cr0&0&1}\bmatrix{0\cr2\cr2} =
+\bmatrix{4\cr2\cr2} = (1/2)\bmatrix{3\cr1\cr1} +
+(1/2)\bmatrix{5\cr3\cr3} \to \hat x = \bmatrix{1\cr1/2\cr1/2}
+$$
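+
+(Check: $A^TA\hat x = \bmatrix{3&1&1\cr1&1&1\cr1&1&1}
+\bmatrix{1\cr1/2\cr1/2} = \bmatrix{4\cr2\cr2} = A^Tb.$)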
+
+\bye
diff --git a/li/hw9.tex b/li/hw9.tex
new file mode 100644
index 0000000..ee99439
--- /dev/null
+++ b/li/hw9.tex
@@ -0,0 +1,185 @@
+\def\bmatrix#1{\left[\matrix{#1}\right]}
+\def\dmatrix#1{\left|\matrix{#1}\right|}
+\def\fr#1#2{{#1\over #2}}
+
+ {\bf Section 5.4}
+
+\noindent{\bf 8.}
+
+$${dr\over dt} = 4r-2w.$$
+$${dw\over dt} = r+w.$$
+$${du\over dt} = \bmatrix{4&-2\cr1&1}u.$$
+
+The system has eigenvalues $\lambda = 2, 3,$ with eigenvectors
+$(1, 1)^T$ and $(2, 1)^T$ respectively.
+
+{\it (a)} This system is unstable because all eigenvalues are greater
+than zero.
+
+{\it (b)} $$\bmatrix{300\cr 200} = 100\bmatrix{1\cr 1} +
+100\bmatrix{2\cr 1},$$
+giving
+$$u = \bmatrix{1&2\cr 1&1}\bmatrix{e^{2t}&0\cr 0&e^{3t}}\bmatrix{100\cr100} =
+\bmatrix{100e^{2t} + 200e^{3t}\cr 100e^{2t} + 100e^{3t}}.$$
+
+{\it (c)}
+
+The dominant growth is $e^{3t},$ so the ratio will eventually be
+2 to 1.
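+(Indeed, ${100e^{2t}+200e^{3t}\over 100e^{2t}+100e^{3t}} =
+{e^{-t}+2\over e^{-t}+1} \to 2$ as $t\to\infty.$)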
+
+\noindent{\bf 24.}
+
+$v+w$ is constant if $${d\over dt}(v+w) = {dv\over dt} + {dw\over dt} =
+w-v + v-w = 0.$$
+
+$${du\over dt} = \bmatrix{-1&1\cr1&-1}u,$$
+with eigenvalues $-2$ and $0$ and corresponding eigenvectors $(1, -1)^T$
+and $(1, 1)^T$ respectively. This gives a particular solution (with
+$v(0) = 30$ and $w(0) = 10$)
+$$u = \bmatrix{1&1\cr1&-1}\bmatrix{20\cr10e^{-2t}} = \bmatrix{1\cr
+1}20 + \bmatrix{1\cr -1}10e^{-2t}.$$
+
+\noindent{\bf 42.}
+
+The reason this (falsely) appears to work is that the right side of
+the matrix is still multiplying from
+$$u = \bmatrix{x\cr y},$$ while the left side is supposed to be
+$${du\over dt} = \bmatrix{dy/dt\cr dx/dt}.$$
+In order to make this correct, there should also be a column exchange
+(changing $u$ to represent $(y, x)^T,$)
+giving
+$$\bmatrix{2&-2\cr-4&0},$$
+which is unstable.
+
+ {\bf Section 5.5}
+
+\noindent{\bf 11.}
+
+$$P = 0\bmatrix{1/\sqrt2\cr -1/\sqrt2}\bmatrix{1/\sqrt2&-1/\sqrt2}
+	+ 1\bmatrix{1/\sqrt2\cr 1/\sqrt2}\bmatrix{1/\sqrt2&1/\sqrt2}.$$
+
+$$Q = 1\bmatrix{1/\sqrt2\cr -1/\sqrt2}\bmatrix{1/\sqrt2&-1/\sqrt2}
+	-1\bmatrix{1/\sqrt2\cr 1/\sqrt2}\bmatrix{1/\sqrt2&1/\sqrt2}.$$
+
+$$R = 5\bmatrix{2/\sqrt5\cr 1/\sqrt5}\bmatrix{2/\sqrt5&1/\sqrt5}
+ -5\bmatrix{1/\sqrt5\cr -2/\sqrt5}\bmatrix{1/\sqrt5&-2/\sqrt5}.$$
+
+\noindent{\bf 13.}
+
+{\it (a)}
+
+The eigenvectors $u,$ $v,$ and $w$ are pairwise orthogonal because $A$
+is symmetric.
+
+{\it (b)}
+The null space is $\mathop{\rm sp}(u)$ because $u$ is, by definition,
+in $\mathop{\rm null}(A-0I),$ and the column space is $\mathop{\rm
+sp}(v,w),$ because a multiple of an eigenvector is a valid output of
+the matrix.
+The left null space is also $\mathop{\rm sp}(u)$ by
+orthogonality to the column space, and the row space is $\mathop{\rm
+sp}(v,w)$ by orthogonality to the null space.
+
+{\it (c)}
+
+No. $x+u$ is also a solution: $A(x+u) = Ax + Au = v + w + 0.$
+
+{\it (d)}
+
+If $b\in\mathop{\rm sp}(v,w),$ this equation has a solution, by
+definition of the column space.
+
+{\it (e)}
+
+$S^TS = I,$ because the vectors are orthogonal and of unit length, so
+the dot product of each with the others is zero and the dot product of
+each with itself is one. Thus, $S^{-1} = S^T.$
+
+$$S^{-1}AS = \Lambda,$$
+where $\Lambda$ is the diagonal matrix of eigenvalues
+$$\bmatrix{0&0&0\cr 0&1&0\cr 0&0&2}.$$
+
+\noindent{\bf 14.}
+
+$A$ is orthogonal (all columns have norm one and are orthogonal to each
+other), permutation (all rows and columns have exactly one non-zero
+entry which is a one) and therefore invertible, diagonalizable (on the
+complex numbers), and Markov.
+By row reducing $A-\lambda I,$ we get
+$$\bmatrix{-\lambda&1&0&0\cr
+ 0&-\lambda&1&0\cr
+ 0&0&-\lambda&1\cr
+ 0&0&0&-\lambda+\lambda^{-3}},$$
+which has determinant equal to product of the diagonal, or
+characteristic polynomial
+$$\lambda^4 - 1 = 0,$$
+giving $A$ eigenvalues of the fourth roots of unity, $\{1,-1,i,-i\}.$
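+(Equivalently, $A$ is a 4-cycle, so $A^4 = I$ and every eigenvalue
+satisfies $\lambda^4 = 1.$)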
+
+$B$ is a projection (because $B^2 = B$ and $B$ is symmetric), Hermitian
+(real symmetric), rank-1 (because all columns are identical),
+diagonalizable (by symmetry), and Markov (all columns sum to one).
+Rank-1/singular excludes orthogonal and invertible.
+$B$ has eigenvalues $0$ and $1.$
+
+ {\bf Section 5.6}
+
+\noindent{\bf 30.}
+
+$M$ is the matrix which transforms the eigenvectors of $A$ to the
+corresponding eigenvectors (based on eigenvalues) of $B.$
+
+{\it (a)}
+$$M = M^{-1} = \bmatrix{0&1\cr1&0}.$$
+$$B = M^{-1}AM =
+\bmatrix{0&1\cr1&0}\bmatrix{0&1\cr0&1}\bmatrix{0&1\cr1&0} =
+\bmatrix{0&1\cr0&1}\bmatrix{0&1\cr1&0} = \bmatrix{1&0\cr1&0}.$$
+
+{\it (b)}
+
+$$M = M^{-1} = \bmatrix{1&0\cr0&-1}.$$
+$$B = M^{-1}AM =
+\bmatrix{1&0\cr0&-1}\bmatrix{1&1\cr1&1}\bmatrix{1&0\cr0&-1} =
+\bmatrix{1&0\cr0&-1}\bmatrix{1&-1\cr1&-1} =
+\bmatrix{1&-1\cr-1&1}.$$
+
+{\it (c)}
+
+$$A = \bmatrix{1&2\cr3&4}$$
+has eigenvalues given by $\lambda^2 - 5\lambda - 2 = 0 \to \lambda = 5/2\pm
+\sqrt{33}/2,$
+corresponding to eigenvectors $(\fr16(-3+\sqrt33), 1)^T$ and
+$(\fr16(-3-\sqrt33),1)^T$ in decreasing order.
+$$B = \bmatrix{4&3\cr2&1},$$
+with the same eigenvalues but corresponding eigenvectors (also in
+decreasing order of eigenvalue)
+$(\fr14(3+\sqrt33),1)$ and
+$(\fr14(3-\sqrt33),1).$
+
+This corresponds to
+$$M = \bmatrix{3/2&3/2\cr0&1}.$$
+For this $M,$ $B = MAM^{-1}.$
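+(Check: $MA = \bmatrix{3/2&3/2\cr0&1}\bmatrix{1&2\cr3&4} =
+\bmatrix{6&9\cr3&4}$ and $M^{-1} = \bmatrix{2/3&-1\cr0&1},$ so
+$MAM^{-1} = \bmatrix{4&3\cr2&1} = B.$)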
+
+\noindent{\bf 32.}
+
+There is the zero family (with one matrix), the family with all ones
+(with one matrix), the permutation matrix $\bmatrix{0&1\cr1&0}$ (with
+a family of one), and the identity matrix (with one in its family).
+
+Then, there are the families with two members: those with one nonzero
+entry on the antidiagonal, like
+$$\bmatrix{0&1\cr0&0},$$
+those with a full diagonal and one entry on the antidiagonal, like
+$$\bmatrix{1&1\cr0&1},$$
+and those with a full antidiagonal and one entry on the diagonal, like
+$$\bmatrix{1&1\cr1&0}.$$
+
+Then, there is the largest family, with six members, consisting of one
+entry on the diagonal and zero or one entry on the antidiagonal, like
+$$\bmatrix{1&1\cr0&0}.$$
+
+I found these distinct families by counting the possible
+characteristic polynomials, with the trace taking a value $0,1,2$ and
+the determinant a value $-1,0,1,$ and then splitting matrices that
+share a repeated-eigenvalue characteristic polynomial by their number
+of independent eigenvectors.
+
+\bye
diff --git a/li/matlab_hw.py b/li/matlab_hw.py
index 3473934..cf20709 100644
--- a/li/matlab_hw.py
+++ b/li/matlab_hw.py
@@ -1,12 +1,22 @@
import numpy as np
+from time import perf_counter
import scipy.linalg
+import pandas as pd
+import sympy
+
+timer_problems = True
+regression_problems = True
+np.set_printoptions(precision=3, suppress=True)
+# if you want to run the regression problems, you need this dataset:
+# `insurance.csv` in the current directory from
+# https://www.kaggle.com/mirichoi0218/insurance/version/1
-print("Section 1.3: #30")
+print("\nSection 1.3: #30")
coeff = (
- [[ 1, 1, 1 ],
- [ 1, 2, 2 ],
- [ 2, 3, -4]]
+ ( 1, 1, 1 ),
+ ( 1, 2, 2 ),
+ ( 2, 3, -4)
)
print(
np.linalg.solve(
@@ -20,7 +30,7 @@ np.linalg.solve(
[7, 10, 3])
)
-print("Section 1.3: #32")
+print("\nSection 1.3: #32")
dimension = 3
tot = [0]*3
@@ -31,7 +41,7 @@ for x in range(ct):
tot[i] += U[i][i]
print([x/ct for x in tot]);
-print("Section 1.4: #21")
+print("\nSection 1.4: #21")
A = [[.5, .5], [.5, .5]]
B = [[1, 0], [0, -1]]
@@ -52,20 +62,66 @@ print("C^3")
print(np.linalg.matrix_power(C, 3))
print("C^k = 0")
-print("Section 1.4: #59")
+print("\nSection 1.4: #59")
+
+v = [3,4,5]
+A = np.identity(3)
+print(f"A * v = {np.matmul(A, v)}")
+print(f"v' * v = {np.dot(v,v)}")
+print(f"v * A = {np.matmul(v, A)}")
+
+print("\nSection 1.4: #60")
+
+print(f"Av = {np.matmul(np.ones((4,4)), np.ones((4,1)))}")
+print(f"Bw = {np.matmul(np.identity(4)+np.ones((4,4)), np.zeros((4,1))+2*np.ones((4,1)))}")
+
+print("\nSection 1.6: #12")
+
+tridiag_mask = np.identity(11);
+i,j = np.indices(tridiag_mask.shape)
+tridiag_mask[i==j+1] = 1
+tridiag_mask[i==j-1] = 1
+tridiag_preserved = True
+triangle_preserved = True
+sym_preserved = True
+
+for x in range(1000):
+ a = np.random.rand(11,11)
+ inv = np.linalg.inv(a*tridiag_mask)
+ if (inv != inv*tridiag_mask).any():
+ tridiag_preserved = False
+
+ triu = np.triu(a)
+ inv = np.linalg.inv(triu)
+ if (inv != np.triu(inv)).any() and (inv != np.tril(inv)).any():
+ triangle_preserved = False
+
+    sym = np.transpose(triu) + triu - triu*np.identity(11)
+    # actually invert the symmetric matrix before testing symmetry
+    inv = np.linalg.inv(sym)
+    if not np.allclose(inv, np.transpose(inv)):
+        sym_preserved = False
-print("A * v = [3, 4, 5]")
-print("v' * v = 50")
-print(f"v * A = {np.matmul([3,4,5], np.identity(3))}")
+if triangle_preserved:
+ print("Triangular matrices' inverses are triangular.")
+else:
+ print("Triangular matrices' inverses are not triangular.")
-print("Section 1.4: #60")
+if sym_preserved:
+    print("Symmetric matrices' inverses are symmetric.")
+else:
+    print("Symmetric matrices' inverses are not symmetric.")
-print("Av = [4. 4. 4. 4.]")
-print("Bw = [10. 10. 10. 10.]")
+if tridiag_preserved:
+ print("Tridiagonal matrices' inverses are tridiagonal.")
+else:
+ print("Tridiagonal matrices' inverses are not tridiagonal.")
-print("Section 1.6: #12")
+print("Entries being integers isn't preserved. This is trivially shown"
+" the 1x1 [ 2. ] matrix's inverse [ 1/2. ]")
+print("Entries being fractions (rationals) is preserved because the"
+" matrix augmented with an identity and then reduced only requires"
+" rational multiplications.")
-print("Section 1.6: #32")
+print("\nSection 1.6: #32")
print("the inverse of the first matrix is")
print(np.linalg.inv(5*np.identity(4) - np.ones((4,4))))
@@ -75,9 +131,9 @@ print(np.matmul(np.ones((4,4)),np.ones((4,4))))
print("so we can solve this like the following equation: (a*eye +\n\
b*ones)(c*eye + d*ones) = ac*eye + (ad+bc+dimension*bd)*ones = eye")
print("c = 1/a. d = -b/a/(a+dimension*b), giving for the next matrix")
-print("a = 1/6, d = 6/(1/6+5*-1)") # WRONG
+print("a = 1/6, d = 6/(1/6+5*-1)") # WRONG (TODO?)
-print("Section 1.6: #47")
+print("\nSection 1.6: #47")
print("I'm not sure. This is Python. It says (in either case):")
print("numpy.linalg.LinAlgError: Singular matrix")
@@ -86,54 +142,49 @@ print(np.linalg.solve(np.ones((4, 4)), np.random.rand(4, 1)))
print(np.linalg.solve(np.ones((4, 4)), np.ones((4, 1))))
'''
-''' #68
-dimension = 500
-northwest = np.random.rand(dimension, dimension)
-for x in range(0, dimension):
- for y in range(x+1, dimension):
- northwest[y][x] = 0
-print(northwest)
-'''
-print("Section 1.6: #69")
-
-
-from time import perf_counter
-matrix = np.random.rand(500, 500)
-start = perf_counter()
-np.linalg.inv(matrix)
-first_time = perf_counter() - start
-print("500x500 inverse:", str(first_time) + "s")
-matrix = np.random.rand(1000, 1000)
-start = perf_counter()
-np.linalg.inv(matrix)
-second_time = perf_counter() - start
-print("1000x1000 inverse:", str(second_time) + "s")
-print("factor:", second_time / first_time)
-
-print("Section 1.6: #70")
-
-dimension = 1000
-I = np.identity(dimension)
-A = np.random.rand(dimension, dimension)
-U = np.random.rand(dimension, dimension)
-for x in range(0, dimension):
- for y in range(x+1, dimension):
- U[y][x] = 0
-
-start = perf_counter()
-np.linalg.inv(U)
-print("inverse of U:", str(perf_counter() - start) + "s")
-start = perf_counter()
-np.linalg.solve(U, I)
-print("U\I:", str(perf_counter() - start) + "s")
-start = perf_counter()
-np.linalg.inv(A)
-print("inverse of A:", str(perf_counter() - start) + "s")
-start = perf_counter()
-np.linalg.solve(A, I)
-print("A\I:", str(perf_counter() - start) + "s")
-
-print("Section 1.6: #71")
+if timer_problems:
+ print("\nSection 1.6: #69")
+
+ matrix = np.random.rand(500, 500)
+ start = perf_counter()
+ np.linalg.inv(matrix)
+ first_time = perf_counter() - start
+ print("500x500 inverse:", str(first_time) + "s")
+ matrix = np.random.rand(1000, 1000)
+ start = perf_counter()
+ np.linalg.inv(matrix)
+ second_time = perf_counter() - start
+ print("1000x1000 inverse:", str(second_time) + "s")
+ print("factor:", second_time / first_time)
+ print("This matches theory. We also expect that almost every random"
+ " matrix is invertible")
+
+ print("\nSection 1.6: #70")
+
+ dimension = 1000
+ I = np.identity(dimension)
+ A = np.random.rand(dimension, dimension)
+ U = np.random.rand(dimension, dimension)
+ for x in range(0, dimension):
+ for y in range(x+1, dimension):
+ U[y][x] = 0
+
+ start = perf_counter()
+ np.linalg.inv(U)
+ print("inverse of U:", str(perf_counter() - start) + "s")
+ start = perf_counter()
+ np.linalg.solve(U, I)
+ print("U\I:", str(perf_counter() - start) + "s")
+ start = perf_counter()
+ np.linalg.inv(A)
+ print("inverse of A:", str(perf_counter() - start) + "s")
+ start = perf_counter()
+ np.linalg.solve(A, I)
+ print("A\I:", str(perf_counter() - start) + "s")
+
+ print("There is no significant performance deviation in Python.")
+
+print("\nSection 1.6: #71")
def proof(dim):
L = np.identity(dim) - np.diag([x/(1+x) for x in range(1,dim)], -1)
@@ -141,19 +192,17 @@ def proof(dim):
Linv = np.linalg.inv(L)
print("L^{-1}:")
print(Linv)
- right = True
for j in range(1,dim+1):
for i in range(1,j+1):
if not np.isclose(Linv[j-1][i-1], i/j):
print("the assertion in the book is wrong at",i,j)
- right = False
- if right:
- print("The book is right")
+ return
+ print("The book is right")
proof(4)
proof(5)
-print("Section 2.2: #33")
+print("\nSection 2.2: #33")
print("For the first matrix:")
A = [[1, 3, 3], [2, 6, 9], [-1, -3, 3]]
@@ -170,17 +219,36 @@ print(scipy.linalg.null_space(A))
print("particular solution:")
print(np.linalg.lstsq(A, b, rcond=None)[0])
-print("Section 2.2: #35")
-print("TODO")
+print("\nSection 2.2: #35")
+# b is, trivially, in the column space.
+
+def restrictions(A):
+    # b is in col(A) iff y.b = 0 for every y spanning the left null
+    # space null(A^T); each basis vector y gives one restriction on b
+    for y in np.transpose(scipy.linalg.null_space(np.transpose(A))):
+        print(' + '.join(
+            f"{y[i]:.3f}*b_{i}"
+            for i in range(len(y))
+            if not np.isclose(y[i], 0)
+        ) + ' = 0.')
+
print("For the first system:")
A = [[1, 2], [2, 4], [2, 5], [3, 9]]
+restrictions(A)
-print("Section 2.2: #36")
+print("For the second system:")
+A = [[1,2,3], [2,4,6], [2,5,7], [3,9,12]]
+restrictions(A)
+
+print("\nSection 2.2: #36")
print("(a)")
A = np.array([[1, 2, 1], [2, 6, 3], [0, 2, 5]])
print("Basis of the column space:")
-print(scipy.linalg.orth(A))
+print(sympy.Matrix(A).columnspace())
print("Multiplying the rows by this gives zero (for this matrix, \
equivalent to [0 0 0]^T)")
print(scipy.linalg.null_space(A.transpose()))
@@ -188,12 +256,12 @@ print(scipy.linalg.null_space(A.transpose()))
print("(b)")
A = np.array([[1, 1, 1], [1, 2, 4], [2, 4, 8]])
print("Basis of the column space:")
-print(scipy.linalg.orth(A))
+print(sympy.Matrix(A).columnspace()) # basis from the original columns
print("Multiplying the rows by this gives zero (for this matrix, \
equivalent to [0 2 -1]^T)")
print(scipy.linalg.null_space(A.transpose()))
-print("Section 2.3: #2")
+print("\nSection 2.3: #2")
print("Largest possible number of independent vectors is the dimension \
of the space spanned by these vectors, or the number of vectors in the \
@@ -202,9 +270,9 @@ v = [[ 1, 1, 1, 0, 0, 0],
[-1, 0, 0, 1, 1, 0],
[ 0, -1, 0, -1, 0, 1],
[ 0, 0, -1, 0, -1, -1]]
-print(len(scipy.linalg.orth(v)))
+print(np.linalg.matrix_rank(v))
-print("Section 2.3: #5")
+print("\nSection 2.3: #5")
print("(a)")
@@ -222,17 +290,277 @@ v = [[ 1, 2, -3],
if len(scipy.linalg.orth(v)) == 3: print("they are independent")
else: print("they are not independent")
-print("Section 2.3: #13")
-# probably out of spirit of the matlab hw
-print("This one can be done easily without a calculator because the \
- echelon matrix has already been computed.")
-print("The dimensions of all of these is 2, and the row spaces are the \
- same.")
-
-print("Section 2.3: #16")
-print("TODO")
-
-print("Section 2.3: #18")
-print("TODO")
-
-print("Section 2.3: #24")
+print("\nSection 2.3: #13")
+print("U is the echelon martix of A, and for any matrix and its echelon"
+" form, the dimensions of these spacees are the same, and the row"
+" spaces are equal.")
+
+A = [[1,1,0], [1,3,1], [3,1,-1]]
+U = [[1,1,0], [0,2,1], [0,0,0]]
+print(f"(a) dim col A = {np.linalg.matrix_rank(A)}")
+print(f"(b) dim row A = {np.linalg.matrix_rank(A)}")
+print(f"(c) dim col U = {np.linalg.matrix_rank(U)}")
+print(f"(d) dim row U = {np.linalg.matrix_rank(U)}")
+stack = np.vstack((A, U))
+print("Since the row spaces are the same, the dimension of the space"
+" both their rows span should be 2, like the dimension of each matrix's"
+f" own row space. The dimension is {np.linalg.matrix_rank(stack)}.")
+
+print("\nSection 2.3: #16")
+v = np.transpose([[1,1,0,0], [1,0,1,0], [0,0,1,1], [0,1,0,1]])
+rank = np.linalg.matrix_rank(v)
+if rank == v.shape[0] and rank == v.shape[1]:
+ print("The matrix is full rank, so the rows are linearly "
+ "independent.")
+# Finding the generic solution to Ax = 0 on a computer can also be
+# written as scipy.linalg.null_space
+if scipy.linalg.null_space(v).size == 0:
+    print("Ax = 0 has only the trivial solution, so the columns are"
+          " linearly independent, by definition.")
+
+print("\nSection 2.3: #18")
+w = np.transpose([[1,1,0], [2,2,1], [0,0,2]])
+b = [3,4,5]
+lstsq = np.linalg.lstsq(w, b, rcond=None)[0]
+print("We get least squares solution:")
+print(lstsq)
+print("Multiplying back out:")
+mul = np.matmul(w, lstsq)
+print(mul)
+if np.allclose(mul, b):
+ print("which is b, so there is a solution.")
+else:
+ print("which isn't b, so there isn't a solution.")
+
+w = [[1,2,0], [2,5,0], [0,0,2], [0,0,0]]
+rank = np.linalg.matrix_rank(w)
+if rank == 3:
+ print("The vectors span R^3, so a solution exists for all b.")
+else:
+ print("The vectors don't span R^3, so some b don't have a solution")
+
+print("\nSection 2.3: #24")
+v = [1,-2,3] # vT x = 0
+
+print("Basis for the solution set:")
+print(scipy.linalg.null_space([v]))
+
+print("Intersection with the xy plane:") # xy plane -> z = 0
+u = [0,0,1] # uT x = 0
+print(scipy.linalg.null_space([v,u]))
+
+print(f"Basis of V-perp: {v}")
+
+if regression_problems:
+ df = pd.read_csv('insurance.csv')
+ df['smoker'] = df['smoker'].replace({'yes': 1, 'no': 0})
+ df['sex'] = df['sex'].replace({'female': 1, 'male': 0})
+ regions = df['region'].unique()
+ for region in df['region'].unique():
+ df[region] = df.region.apply(lambda r: r == region)
+ del df['region']
+
+ vals = df.to_numpy()
+ # Index(['age', 'sex', 'bmi', 'children', 'smoker', 'charges', 'southwest', 'southeast', 'northwest', 'northeast'],
+ A = [(1, row[0], 1.1**row[0], row[1], row[2], row[3], row[4],
+ row[6], row[7], row[8], row[9]) for row in vals]
+ # exponential and linear model of age, and linear models of
+ # everything else
+ b = [row[5] for row in vals]
+ x, e, rank, s = np.linalg.lstsq(A, b, rcond=None)
+    print(f"With error {e},\n"
+ f"charges = {x[0]} + {x[1]}*age + {x[2]}*1.1^age + {x[3]}*female + "
+ f"{x[4]}*bmi + {x[5]}*children + {x[6]}*smoker + "
+ f"{x[7]}*southwest + {x[8]}*southeast + {x[9]}*northwest + "
+ f"{x[10]}*northeast")
+
+ print("Expected charges for a 45-year-old woman in the northeast "
+ "who doesn't smoke and has no children at BMI of 27 is "
+ f"{x[0] + x[1]*45 + x[2]*1.1**45 + x[3] + x[4]*27 + x[10]}")
+
+
+print("\nSection 3.3: #32")
+b = (0,8,8,20)
+t = np.array((0,1,3,4))
+# mostly given by np linalg lstsq
+A = np.transpose((t, np.ones(t.shape)))
+x, e, rank, s = np.linalg.lstsq(A, b, rcond=None)
+hatb = np.matmul(A,x)
+print(f"b = {x[0]}t + {x[1]}")
+print(f"Approximations {hatb} have error {hatb-b} with min. error "
+ f"E^2 = {e}")
+
+print("\nSection 3.3: #33")
+p = (1,5,13,17)
+for arow, prow in zip(A, p):
+    print(f"{arow[0]}C + {arow[1]}D = {prow}")
+x, e, rank, s = np.linalg.lstsq(A, p, rcond=None)
+print(f"We get exact solution p = {x[0]}t + {x[1]} with error {e}.")
+
+print("\nSection 3.3: #34")
+e = np.array(b) - np.array(p)
+print(f"A^Te = {np.matmul(np.transpose(A), e)}, so e "
+ "is orthogonal to every column of A.")
+print(f"The error ||e|| is {np.sqrt(np.dot(e,e))}.")
+
+print("\nSection 3.3: #35")
+def buildcoeff(coeff):
+ return ' + '.join(f"{c}*x_{i}" for i,c in enumerate(coeff))
+
+A = np.transpose((t*t, t, np.ones(t.shape)))
+eq = np.matmul( np.transpose(A), b )
+for coeff, of in zip(np.matmul(np.transpose(A), A), eq):
+ print(f"{buildcoeff(coeff)} = {of}")
+
+print("\nSection 3.3: #36")
+A = np.transpose((t*t*t, t*t, t, np.ones(t.shape)))
+x, e, rank, s = np.linalg.lstsq(A, b, rcond=None)
+p = np.matmul(A,x)
+print(f"p = b = {p}")
+print(f"e = p-b = 0 = {p-b}")
+
+print("\nSection 3.3: #37")
+st = sum(t)/len(t)
+sb = sum(b)/len(b)
+A = np.transpose((t, np.ones(t.shape)))
+x, e, rank, s = np.linalg.lstsq(A, b, rcond=None)
+print(f"hat b = {sb} = {np.dot((st, 1), x)} = C(hat t) + D")
+
+print("This is true because the first equation in A^TAp = A^Tb is "
+"(1,1,1,1)Ap = (1,1,1,1)b, so (1,1,1,1)Ap = (1,1,1,1) (Ct + D) = C(hat "
+"t) + D")
+
+print("\nSection 5.2: #29")
+A = ((.6, .4), (.4, .6))
+Aw,Av = np.linalg.eig(A)
+if all(l < 1 for l in Aw):
+    print("A^k converges to zero.")
+else:
+    print("A^k doesn't converge to zero.")
+B = ((.6, .9), (.1, .6))
+Bw,Bv = np.linalg.eig(B)
+if all(l < 1 for l in Bw):
+    print("B^k converges to zero.")
+else:
+    print("B^k doesn't converge to zero.")
+
+print("\nSection 5.2: #30")
+if any(l > 1 for l in Aw):
+ print("A^k diverges")
+else:
+ l = [0 if l < 1 else 1 for l in Aw] #Lambda has diagonal = l, 0 else
+ print(f"A^k converges to\n{np.matmul( np.matmul(Av, np.diag(l)), np.linalg.inv(Av) )}")
+
+print("\nSection 5.2: #31")
+for u in ((3, 1), (3, -1), (6,0)):
+ L = np.diag(Aw**10)
+ lim = np.matmul(np.matmul( np.matmul(Av, L), np.linalg.inv(Av) ), u)
+ print(f"Limit of u0 = {u}: {lim}")
+
+print("\nSection 5.2: #45")
+A = np.array( ((.6, .2),(.4,.8)) )
+Alim = np.array( ((1/3, 1/3),(2/3, 2/3)) )
+
+Aw,Av = np.linalg.eig(A)
+Alimw,Alimv = np.linalg.eig(Alim)
+
+print(f"These matrices have the same eigenvectors\n{Aw}\nand\n{Alimw},")
+print("The limit of eigenvalues of A^k is A^\infty, so A^100 is close "
+ "(exponents quickly approach their asymptotes).\n"
+ f"Eigenvalues of A are {Av}, and of A^\infty are {Alimv}")
+
+print("Section 5.3: #3")
+f = np.array(((1,1), (1,0)))
+k = f
+for i in range(1,5):
+ print(f"F^{i} =\n{k}")
+ k = np.matmul(k, f)
+
+n = 20
+fn = round(((1+np.sqrt(5))/2)**n/np.sqrt(5))
+print(f"F_{n} = {fn}")
+
+print("\nSection 5.3: #7")
+
+l = (2, 1)
+for x in range(9):
+ l = (l[1], l[0] + l[1])
+print(f"Exactly, {l[1]}")
+
+l1 = (1+np.sqrt(5))/2
+print(f"Approximately, L_{10} = {round(l1**10)}")
+
+print("Section 5.3: #16")
+A = np.array( ((0, -1), (1, 0)) )
+stdeig = np.linalg.eig(A)
+print(stdeig)
+I = np.eye(A.shape[0])
+est = ( I+A, np.linalg.inv(I-A), np.matmul( np.linalg.inv(I - A/2),
+ I+A/2 ) )
+names = ( "Forward", "Backward", "Centered" )
+for f,name in zip(est, names):
+ w,v = np.linalg.eig(f)
+ print(f"{name} approximation has eigenvalues {w}.")
+ if all(np.isclose(abs(e), 1) for e in w):
+ print(f"{name} stays on the circle because all the eigenvalues "
+ "have magnitude 1.")
+
+print("\nSection 5.3: #29")
+
+B = ((3, 2), (-5, -3))
+C = ((5, 7), (-3, -4))
+
+w,v = np.linalg.eig(B)
+print(f"B ({B}) has eigenvalues\n{w}, and B^4 has eigenvalues\n{w**4} ="
+ " (1, 1), meaning that it is the identity.")
+w,v = np.linalg.eig(C)
+print(f"C ({C}) has eigenvalues\n{w}, and C^3 has eigenvalues\n{w**3} ="
+ " (-1, -1), meaning that it is the skew identity -I.")
+
+print("\nSection 5.4: #28")
+stable = True
+for x in range(1000):
+# my'' + by' + ky = 0
+# u = (y,y')^T
+# u' = (y',y'')^T
+# my'' = - (ky + by')
+    m,b,k = (np.random.rand(), np.random.rand(), np.random.rand())
+    A = (( 0, 1),
+         (-k,-b))
+    M = (( 1, 0),
+         ( 0, m))
+    # Mu' = Au => u' = Bu
+    B = np.matmul( np.linalg.inv(M), A )
+    w,v = np.linalg.eig(B)
+    if any(e.real > 0 for e in w):
+        stable = False
+print("With random m, b, k > 0, no sampled system has an eigenvalue "
+      f"with positive real part (stable): {stable}")
+
+print("\nChecking that roots of characteristic polynomial = eigenvalues")
+lamda = sympy.symbols('lamda')
+for x in range(50):
+ npmat = np.random.rand(10, 10)
+ npmat += np.transpose(npmat) # make it symmetric for ease of proof
+ mat = sympy.Matrix(npmat)
+
+ poly = mat.charpoly(lamda)
+
+ w,v = np.linalg.eig(npmat)
+ roots = sorted(np.array(poly.nroots(), dtype='float64'))
+ w = sorted(w)
+ if not np.allclose(w, roots, atol=.001):
+ print("This property doesn't hold for all matrices")
+
+# Power method for eigenvalues and characteristic polynomials (see doc)
+
+print("\nPower Method for estimating the dominant eigen{vector,value}")
+matrix = np.random.rand(20, 20)
+w,v = np.linalg.eig(matrix)
+# the dominant eigenvalue of a positive matrix is real and positive
+# (Perron-Frobenius), so compare by magnitude and take the real part
+truedominant = max(w, key=abs).real
+
+# a random vector will almost never be in the span of a subset of
+# eigenvectors, so we will use np.random.rand
+vec = np.random.rand(20)
+for x in range(50): # 50 iterations of the power method
+ vec = vec / np.linalg.norm(vec)
+ prev = vec
+ vec = matrix @ vec
+ val = np.linalg.norm(vec) / np.linalg.norm(prev)
+print(f"power method gives a dominant value {val}, compared to true {truedominant}")
diff --git a/zhilova/final.tex b/zhilova/final.tex
new file mode 100644
index 0000000..10b93f7
--- /dev/null
+++ b/zhilova/final.tex
@@ -0,0 +1,179 @@
+\newfam\rsfs
+\newfam\bbold
+\def\scr#1{{\fam\rsfs #1}}
+\def\bb#1{{\fam\bbold #1}}
+\let\oldcal\cal
+\def\cal#1{{\oldcal #1}}
+\font\rsfsten=rsfs10
+\font\rsfssev=rsfs7
+\font\rsfsfiv=rsfs5
+\textfont\rsfs=\rsfsten
+\scriptfont\rsfs=\rsfssev
+\scriptscriptfont\rsfs=\rsfsfiv
+\font\bbten=msbm10
+\font\bbsev=msbm7
+\font\bbfiv=msbm5
+\textfont\bbold=\bbten
+\scriptfont\bbold=\bbsev
+\scriptscriptfont\bbold=\bbfiv
+
+\def\E{\bb E}
+\def\P{\bb P}
+\newcount\qnum
+\def\fr#1#2{{#1\over #2}}
+\def\var{\mathop{\rm var}\nolimits}
+\def\cov{\mathop{\rm cov}\nolimits}
+\def\dd#1#2{\fr{\partial #1}{\partial #2}}
+\def\bmatrix#1{\left[\matrix{#1}\right]}
+
+\def\problem#1{\vskip0pt plus 2in\goodbreak
+\vskip0pt plus -2in\medskip\noindent{\bf #1)}\smallskip\penalty500}
+\def\part#1{\goodbreak\smallskip\noindent{\bf (#1)}\penalty500\par}
+
+\problem{1}
+\part{a}
+\let\truebeta\beta
+\def\beta{\hat\truebeta}
+$${\cal L}(\beta;X_1,\ldots,X_n) = \prod_{i=1}^{n} {1\over
+\Gamma(\alpha)\beta^\alpha}X_i^{\alpha-1}e^{-X_i/\beta}$$
+$$\ell(\beta) = \ln{\cal L}(\beta) = -n\ln(\Gamma(\alpha)) - \alpha n\ln\beta +
+ \sum_{i=1}^n [(\alpha-1)\ln(X_i) - X_i/\beta].$$
+$${d\ell(\beta)\over d\beta} = -{\alpha n\over\beta} + \sum_{i=1}^n
+{X_i\over\beta^2} = 0 \to \sum_{i=1}^n X_i = \beta \alpha n \to \beta
+= \overline X/\alpha = \overline X/4,$$
+where $\overline X = \fr1n \sum_{i=1}^n X_i.$
+
+\part{b}
+\let\beta\truebeta
+$$\E(\overline X) = \fr1n\sum_{i=1}^n \E X = \E X,$$
+so $\E\hat\beta = \E X / \alpha.$
+$$\E X = \int_0^\infty {x^\alpha e^{-x/\beta}\over\Gamma(\alpha)\beta^\alpha} dx
+= \alpha\beta\int_0^\infty {x^\alpha e^{-x/\beta}\over
+\Gamma(\alpha+1)\beta^{\alpha+1}} dx
+= \alpha\beta,$$
+because the integrand is the gamma distribution for $\alpha' =
+\alpha+1.$ Therefore, the estimator is unbiased because
+$\E\hat\beta = \alpha\beta/\alpha = \beta.$
+
+\part{c}
+As $n\to\infty,$ $\hat\beta$ is approximately
+${\cal N}(\E\hat\beta, \var\hat\beta) = {\cal N}(\beta,
+\var\hat\beta),$ so if
+$\var \hat\beta\to0,$ the estimator is consistent.
+
+The mgf of $X_k$ is $(1-\beta t)^{-\alpha},$ so the mgf of
+$Y = \sum_{i=1}^n X_i$ is $(1-\beta t)^{-n\alpha}.$
+$$Y \sim \Gamma(\alpha n, \beta) \to \overline X \sim \Gamma(\alpha n,
+\beta/n) \to \hat\beta \sim \Gamma(\alpha n, \beta/n\alpha),$$
+by change of variable in the gamma distribution.%
+\footnote{*}{$X\sim \Gamma(\alpha, \beta)$ has pdf
+${1\over\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x\over\beta},$ so
+$kX$ has pdf
+${1\over\Gamma(\alpha)(k\beta)^\alpha}x^{\alpha-1}e^{-x\over k\beta},$
+giving $kX \sim \Gamma(\alpha, k\beta).$}
+The gamma distribution has variance $\alpha\beta^2,$ so the variance of
+$\hat\beta$ is $\beta^2/n\alpha,$ which tends to zero as $n\to\infty,$
+so the estimator is consistent.
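+(As a cross-check, independence of the $X_i$ gives the same variance
+directly: $\var\hat\beta = \var(\overline X)/\alpha^2 =
+(\alpha\beta^2/n)/\alpha^2 = \beta^2/n\alpha.$)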
+% to prove, and to find variance. actually, we don't need to find
+% variance for this problem, but we do for (d). find the lecture!!
+
+\part{d}
+
+Since we've already proven $\E(\hat\beta) = \beta$ and obtained a value
+for $\var\hat\beta,$ the Central Limit Theorem gives
+$\sqrt n(\hat\beta - \beta) \to {\cal N}(0, n\var(\hat\beta)) =
+{\cal N}(0, \beta^2/\alpha).$
+
+\problem{2}
+
+\part{a}
+
+With $\theta = 0,$ by Student's Theorem,
+$${\overline X\over S/\sqrt{n}} \sim t(n-1).$$
+Taking $t_{1-\alpha}(n-1)$ to be the $1-\alpha$ quantile of the
+t-distribution with $n-1$ degrees of freedom, we have a test ${\overline
+X\over S/\sqrt{n}} \geq t_{1-\alpha}(n-1).$
+
+Substituting in our given values ($\overline X = .8,$ $n = 25,$ $S^2 =
+2.56$),
+$${.8\over 1.6/\sqrt{25}} \geq 1.711 \to 2.5 \geq 1.711,$$
+so our data does pass the test.
+
+\part{b}
+
+\def\cdf{{\bf T}}
+Letting $\cdf$ be the cdf of a t-distribution with $n-1$ degrees of
+freedom, we use the following result of student's theorem
+$${\overline X - \theta\over S/\sqrt n} \sim t(n-1)$$
+to get the probability this statistic is in the critical set as
+$$\gamma_C(\theta) = \P({\overline X\over S/\sqrt n} > 1.711) =
+\P({\overline X-\theta\over S/\sqrt n} > 1.711-{\theta\over S/\sqrt n})
+$$$$
+= 1-\cdf(1.711-{\theta\over S/\sqrt n}) =
+1-\cdf(1.711-{\theta\over .32})
+= \cdf({25\theta\over8} - 1.711).
+$$
+
+\problem{3}
+
+\part{a}
+
+The cdf of $X_i$ on the support of its pdf, $[0,1],$ is $x$ (zero for
+$x<0$ and one for $x>1$).
+With $X_{(1)} = \min(X_1,\ldots,X_6)$ and $X_{(6)} =
+\max(X_1,\ldots,X_6),$
+$$\P(X_{(1)} \geq x) = \P(X_1 \geq x)\P(X_2 \geq x)\cdots\P(X_6 \geq x)
+= (1-x)^6 \to \P(X_{(1)} \leq x) = 1-(1-x)^6.$$
+$$\P(X_{(6)} \leq x) = \P(X_1 \leq x)\P(X_2 \leq x)\cdots\P(X_6 \leq x)
+= x^6.$$
+
+These give pdfs for $X_{(1)}$ and $X_{(6)},$ respectively, $6(1-x)^5$
+and $6x^5.$
+
+\part{b}
+
+$$\E(X_{(1)} + X_{(6)}) = \int_0^1 x(6(1-x)^5 + 6x^5) dx =$$$$
+\left[ x(-(1-x)^6 + x^6) \right]_0^1 - \int_0^1 -(1-x)^6 + x^6 dx =
+1 - \fr17\left[(1-x)^7 + x^7\right]_0^1 = 1 - \fr17(1-1) = 1.
+$$
+$$
+
+This makes sense by symmetry. The maximum value of the set has the same
+pdf as the minimum value reflected about .5, so we expect $\E X_{(6)} =
+1 - \E X_{(1)}.$
+
+\problem{4}
+
+\part{a}
+
+$$Y := 2X_2 - X_1 = \pmatrix{-1&2&0}{\bf X} = t^T{\bf X}.$$
+$$\E Y = t^T\E{\bf X} = -1(-1) + 2(2) = 5.$$
+$$\var Y = \E(t^TXX^Tt) - \E(t^TX)\E(X^Tt) = t^T(\var X)t =
+\bmatrix{-1&2&0}\bmatrix{4&-1&0\cr-1&1&0\cr0&0&2}\bmatrix{-1\cr2\cr0} =
+12.
+$$
+
+\def\cdf{{\bf\Phi}}
+Where $\cdf$ is the cdf of the standard normal distribution, we
+normalize $2X_2-X_1,$ to reach the standard normal and get
+$$\P(2X_2 > X_1 + 3) = \P(2X_2 - X_1 - 5 > -2) = \P((2X_2 - X_1 -
+5)/\sqrt{12} > -2/\sqrt{12}) = 1-\cdf(-1/\sqrt{3}) \approx .718.$$
+
+\part{b}
+
+If $X,Y \sim {\cal N}(0,1)$ and $X\perp Y,$ $X^2 + Y^2 \sim \chi^2(2)$
+by the definition of $\chi^2.$
+
+For our problem, $Y^TY \sim \chi^2(2)$ if $Y$ is a multivariate normal
+with
+$$\E Y = \pmatrix{0\cr0} \qquad \var Y = \pmatrix{1&0\cr0&1}.$$
+
+$$\var (X_1,X_3)^T = \pmatrix{4&0\cr0&2} \to Y =
+\pmatrix{1/2&0\cr0&1/\sqrt2}(X_1,X_3)^T + \mu,$$
+$A$ given by $(\var (X_1,X_3)^T)^{-1/2},$ from $A = \Sigma^{-1/2}$
+(3.5.12) in the textbook.
+
+$$\E Y = \bmatrix{-1/2\cr\sqrt2} + \mu = 0 \to \mu =
+\bmatrix{1/2\cr -\sqrt2}\qquad A = \bmatrix{1/2&0\cr0&1/\sqrt2}$$
+
+\bye
diff --git a/zhilova/midterm2.tex b/zhilova/midterm2.tex
new file mode 100644
index 0000000..8e74cf8
--- /dev/null
+++ b/zhilova/midterm2.tex
@@ -0,0 +1,146 @@
+\def\problem#1{\goodbreak\bigskip\noindent{\bf #1)}\smallskip\penalty500}
+\newfam\rsfs
+\newfam\bbold
+\def\scr#1{{\fam\rsfs #1}}
+\def\bb#1{{\fam\bbold #1}}
+\let\oldcal\cal
+\def\cal#1{{\oldcal #1}}
+\font\rsfsten=rsfs10
+\font\rsfssev=rsfs7
+\font\rsfsfiv=rsfs5
+\textfont\rsfs=\rsfsten
+\scriptfont\rsfs=\rsfssev
+\scriptscriptfont\rsfs=\rsfsfiv
+\font\bbten=msbm10
+\font\bbsev=msbm7
+\font\bbfiv=msbm5
+\textfont\bbold=\bbten
+\scriptfont\bbold=\bbsev
+\scriptscriptfont\bbold=\bbfiv
+
+\def\E{\bb E}
+\def\P{\bb P}
+\newcount\qnum
+\def\fr#1#2{{#1\over #2}}
+\def\var{\mathop{\rm var}\nolimits}
+\def\cov{\mathop{\rm cov}\nolimits}
+\def\dd#1#2{\fr{\partial #1}{\partial #2}}
+
+\problem{1}
+\noindent{\bf (a)}
+$$\E(Z) = \pmatrix{1\cr0}\qquad \var(Z) = \pmatrix{2&c\cr c&4}$$
+
+\noindent{\bf (b)}
+Covariance is a bilinear function, and $\var (kX) = \cov(kX,kX) =
+k^2\var X.$ Also, $\cov(c, X) = 0,$ if $c$ is a constant, regardless of
+$X.$ (because $\cov(c, X) = \E(cX) - \E(c)\E(X) = c\E(X) - c\E(X) = 0.$)
+With linearity, this means $\cov(X+c, Y+b) = \cov(X, Y).$
+$$\rho(2X-1, 5-Y) =
+{\cov(2X-1, 5-Y)\over\sqrt{\var(2X-1)}\sqrt{\var(5-Y)}} =
+{\cov(2X, -Y)\over\sqrt{\var(2X)}\sqrt{\var(-Y)}} = $$$$
+{-2\cov(X, Y)\over2\sqrt{\var(X)}\sqrt{\var(Y)}} =
+-{.5\over\sqrt{2}\sqrt{4}} =
+-{\sqrt2\over8}.$$
+
+\noindent{\bf (c)}
+
+$X$ and $Y$ are not necessarily independent from each other, although
+independence would give a covariance of 0 ($p_{XY}(x,y) = p_X(x)p_Y(y)
+\to \E(XY) = \E(X)\E(Y)$).
+
+Let $W \sim \cal N(0, 1),$ and $Z \sim 2\cal B(.5) - 1$ (i.e.\ $Z$ is
+$-1$ or $1$ with probability $.5$ each).
+$X := \sqrt2 W + 1$ and $Y := 2ZW.$
+These are strictly dependent because $Y = \sqrt2Z(X-1),$ so conditional
+on $X = x,$ $Y$ takes the values $\pm\sqrt2(x-1)$ with probability $.5$
+each, which is clearly not its marginal distribution (the marginal is
+normal, as can be verified using the symmetry of $W$).
+However, they have covariance 0:
+$$\E(XY) - \E X\E Y = \E((\sqrt2 W + 1)2ZW) - \E(\sqrt 2 W + 1)\E(2ZW)
+$$$$
+= \E(2\sqrt2 ZW^2) + \E(2ZW) - 1\cdot\E(2ZW)
+= 2\sqrt2\E(Z)\E(W^2) = 0,
+$$
+by the independence of $Z$ and $W$ and $\E Z = 0.$
+
+% wikipedia says no
+
+\problem{2}
+\noindent{\bf (a)}
+
+$Y_1 = 2X_2$ and $Y_2 = X_1 - X_2$ give us $X_2 = Y_1/2$ and
+$X_1 = Y_2 + Y_1/2.$ This lets us compute Jacobian
+$$J = \left|\matrix{\dd{x_1}{y_1} &\dd{x_1}{y_2}\cr
+	\dd{x_2}{y_1} &\dd{x_2}{y_2}}\right|
+	= \left|\matrix{1/2&1\cr1/2&0}\right| = -\fr12.$$
+$$g(y_1, y_2) = \left\{\vcenter{\halign{\strut$#$,&\quad$#$\cr
+ |J|2e^{-(y_2+y_1/2)}e^{-y_1/2} & 0 < y_2+y_1/2 < y_1/2\cr
+ 0 & {\rm elsewhere}\cr
+ }}\right.$$
+$y_2+y_1/2 < y_1/2 \to y_2 < 0 \to -y_2 > 0.$
+And $0 < y_2 + y_1/2 \to y_1 > -2y_2 > 0.$
+$$ = \left\{\vcenter{\halign{\strut$#$,&\quad$#$\cr
+	e^{-y_2}e^{-y_1}&y_1 > -2y_2 > 0\cr
+	0&{\rm elsewhere}\cr
+	}}\right.$$
+
+\noindent{\bf (b)}
+
+$$g(y_1) = \int_{-\infty}^\infty g(y_1,y_2)dy_2 =
+\bb I(y_1>0)\int_{-y_1/2}^0 e^{-y_1}e^{-y_2} dy_2 =
+\bb I(y_1>0)e^{-y_1}(e^{y_1/2}-1).$$
+$$g(y_2) = \int_{-\infty}^\infty g(y_1,y_2)dy_1 =
+\bb I(y_2<0)\int_{-2y_2}^\infty e^{-y_1}e^{-y_2} dy_1 =
+\bb I(y_2<0)e^{-y_2}e^{2y_2} = \bb I(y_2<0)e^{y_2}.$$
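+(Sanity check: $\int_0^\infty e^{-y_1}(e^{y_1/2}-1)dy_1 = 2-1 = 1$ and
+$\int_{-\infty}^0 e^{y_2}dy_2 = 1,$ so both marginals integrate to
+one.)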
+
+\noindent{\bf (c)}
+
+They are independent iff $g(y_1,y_2) = \bb I(y_1 > -2y_2 >
+0)e^{-y_1-y_2}$ equals $g(y_1)g(y_2) = \bb I(y_1 > 0)\bb I(y_2 < 0)
+h(y_1,y_2),$ where $h$ is the strictly positive product of exponentials
+computed above. They are dependent: at $y_1 = -y_2 = 1,$ the right
+indicators are satisfied but not the left one, and since $h$ is
+positive, the two sides differ.
+
+\problem{3}
+\noindent{\bf (a)}
+We start determining the mgf from the pdf of $X,$
+$p_X(x) =
+\fr1{\sqrt{2\pi}}e^{-\fr12x^2}.$
+$$\E(e^{tX}) =
+\int_{-\infty}^\infty e^{tx}\fr1{\sqrt{2\pi}}e^{-\fr12x^2} dx =
+\fr1{\sqrt{2\pi}} \int_{-\infty}^\infty e^{tx-\fr12x^2} dx = $$$$
+\fr1{\sqrt{2\pi}} \int_{-\infty}^\infty e^{tx-\fr12x^2-\fr12t^2 + \fr12t^2} dx =
+e^{\fr12t^2} \int_{-\infty}^\infty \fr1{\sqrt{2\pi}}e^{-\fr12(x-t)^2} dx
+= e^{\fr12t^2},
+$$
+by the final integrand being a normal pdf and therefore integrating to
+1.
+
+\noindent{\bf (b)}
+$$M_Y(t) = \E(e^{t(aX+b)}) = \E(e^{bt}e^{atX}) = e^{bt}\E(e^{atX}) =
+e^{bt}M_X(at) = e^{bt}e^{\fr12(at)^2}.$$
+
+\noindent{\bf (c)}
+
+Theorem 1.9.2 states that two probability distributions are identical
+if and only if their moment generating functions are equal in some
+vicinity of zero.
+The mgf of $Y$ corresponds to $\cal N(b, a^2),$ which has the following
+mgf (by computation from its pdf definition):
+$$\int_{-\infty}^\infty e^{tx}\fr1{a\sqrt{2\pi}}e^{-(x-b)^2\over
+2a^2} dx =
+\fr1{a\sqrt{2\pi}}\int_{-\infty}^\infty e^{2a^2tx - x^2 + 2bx - b^2\over
+2a^2} dx =$$$$
+\fr1{a\sqrt{2\pi}}\int_{-\infty}^\infty e^{-(b+a^2t)^2 +
+2(a^2t+b)x - x^2 - b^2 + (b+a^2t)^2\over 2a^2} dx =
+e^{(b+a^2t)^2-b^2\over 2a^2}\int_{-\infty}^\infty
+\fr1{a\sqrt{2\pi}}e^{-(x-b-a^2t)^2 \over 2a^2} dx =
+e^{bt}e^{(at)^2\over2},
+$$
+proving $Y \sim \cal N(b, a^2),$ because this function is convergent
+everywhere, and reusing the fact that the final integrand is a normal
+pdf, so it must integrate to 1.
+
+\bye