-rw-r--r--  .gitignore                  |   2
-rw-r--r--  li/04_vector_space          |  30
-rw-r--r--  li/05_linear_span           |  17
-rw-r--r--  li/06_basis                 |  40
-rw-r--r--  li/hw2.tex                  | 265
-rw-r--r--  li/hw3.tex                  | 378
-rw-r--r--  li/matlab_hw.py             | 183
-rw-r--r--  zhilova/04_events           |  89
-rw-r--r--  zhilova/05_random_variables |  48
-rw-r--r--  zhilova/06_ev               |  67
-rw-r--r--  zhilova/07_mgf              |  22
-rw-r--r--  zhilova/08_jensen           |  23
12 files changed, 1164 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..48b55b0
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+*.log
+*.pdf
diff --git a/li/04_vector_space b/li/04_vector_space
new file mode 100644
index 0000000..720f44b
--- /dev/null
+++ b/li/04_vector_space
@@ -0,0 +1,30 @@
+A vector space is a set V together with a binary addition operator and
+a scalar-by-vector multiplication operator, where V is closed under
+both operators. (V, +, *) denotes the space.
+V = R^n is one example of a vector space.
+
+A subspace is a subset of V s.t. (S, +, *) is a closed vector space.
+
+With A an mxn matrix,
+
+S = { x in R^n | Ax = 0 } is a subspace in R^n
+
+To prove this, it is sufficient to check:
+(i) \forall x,y \in S, x+y \in S
+ If Ax = 0, Ay = 0, A(x+y) = Ax + Ay = 0 + 0 = 0.
+(ii) \forall x\in S, \alpha\in R, \alpha x \in S.
+ A(\alpha*x)= \alpha(Ax) = \alpha*0 = 0.
+Therefore, S is a subspace of R^n.
+
+This is called the null space of A, null(A).
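A quick numerical sanity check of the two closure conditions, sketched with NumPy/SciPy (the 2x3 matrix here is an arbitrary example, not one from the notes):

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary 2x3 example; its null space is a subspace of R^3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

N = null_space(A)            # columns form an orthonormal basis of null(A)
x = N @ np.array([2.0])      # two vectors in null(A) (here dim null(A) = 1)
y = N @ np.array([-3.0])

# Closure conditions (i) and (ii) from the proof:
assert np.allclose(A @ (x + y), 0)      # A(x+y) = 0
assert np.allclose(A @ (5.0 * x), 0)    # A(ax) = 0
```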
+
+The solution set of AX = 0 is also a vector space when X ranges over
+M-by-M matrices (X \in R^{M \times M}).
+
+If A is invertible, AX = 0 \to A^{-1} AX = A^{-1} 0 \to X = 0.
+
+S = span{a1, a2, ..., a_n} is a subspace in R^m
+ Proof of subspace omitted.
+
+S = column space of A = C(A) = R(A) = span of the columns of A.
+R(A) stands for range.
+Ax = b has a solution iff b \in R(A).
diff --git a/li/05_linear_span b/li/05_linear_span
new file mode 100644
index 0000000..ed5fdf2
--- /dev/null
+++ b/li/05_linear_span
@@ -0,0 +1,17 @@
+ Linear Independence and Linear Span
+Solution set for Ax=0, Ax=b, and the dimension of the null space/rank
+of a matrix.
+
+Def. linear independence over vector space (V, +, *)
+Let v = {v1, ..., vk} be vectors in V. We say v is linearly independent
+iff whenever a1v1 + a2v2 + ... + akvk = 0, a1 = a2 = ... = ak = 0.
+
+When each v_i \in R^n, we can test for linear independence by writing
+
+[ v1 | v2 | ... | vk ][a1 a2 ... ak]^T = 0
+
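Stacking the vectors as columns makes this a rank computation; a small NumPy sketch (the specific vectors are made up for illustration):

```python
import numpy as np

# v is linearly independent iff [v1 | ... | vk] a = 0 forces a = 0,
# i.e. iff the stacked matrix has rank k.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]   # note v3 = v1 + v2
M = np.column_stack([v1, v2, v3])

rank = np.linalg.matrix_rank(M)
independent = rank == M.shape[1]
print(independent)  # False, since v3 is a combination of v1 and v2
```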
+Prove: basis of a vector space has the same dimension regardless of the
+vectors.
+
+Def basis: a linearly independent set of vectors which spans a vector
+space.
diff --git a/li/06_basis b/li/06_basis
new file mode 100644
index 0000000..2a103b7
--- /dev/null
+++ b/li/06_basis
@@ -0,0 +1,40 @@
+(V, +, *) vector space.
+
+Let v be {v1, ..., vn} where vi \in V.
+v is a basis of V if
+(1) v is linearly independent
+(2) span{v} = V.
+
+Lemma: Suppose v = {v1, ..., vj} is a linearly independent set in V and
+w = {w1, w2, ..., wk} s.t. span{w} = V. Then j \leq k.
+
+Since w spans V,
+
+[ v1 | v2 | ... | vj ] = [ w1 | w2 | ... | wk ] A,
+(call the left-hand matrix B and the matrix of w's C, so B = CA)
+where A is a k-by-j matrix.
+
+Suppose k < j.
+	Then Ax = 0 has nontrivial solutions because it has more unknowns
+	(j) than equations (k).
+ Therefore, there exists x0 s.t. Ax0 = 0 with x0 \neq 0.
+
+B = CA \to Bx0 = CAx0 = 0, but Bx0 = 0 with nontrivial x0 is not
+possible because the columns of B are linearly independent.
+Therefore, k \geq j.
+
+Thm: If v and w are each bases of V, then k = j by application of the
+previous lemma (k \geq j and j \geq k).
+Let dim V = k = j.
+
+Suppose that dim V = n.
+ Any linearly independent set in V will have at most n vectors, and
+ Any spanning set in V must have at least n vectors.
+Contrapositives:
+	Every set with more than n vectors in V is linearly dependent.
+	Every set with fewer than n vectors in V does not span V.
+(2') For a linearly independent set w which doesn't span V, there
+exists a vector v in V outside span{w}, and w \cup {v} is also
+linearly independent.
+	[I think this is equivalent to saying "if w is linearly
+	independent, then w1 \not\in span{w \setminus {w1}}"]
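The lemma can be illustrated numerically: any j > k vectors expressible through a spanning set of size k are dependent. A small sketch (the random matrices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((3, 3))        # columns w1, w2, w3 span R^3 (almost surely)
A = rng.random((3, 4))        # coefficient matrix: 4 vectors written as B = CA
B = W @ A                     # four vectors v1..v4 inside span{w1, w2, w3}

# A is 3x4, so Ax = 0 has a nontrivial solution; the four v's are dependent:
print(np.linalg.matrix_rank(B))  # at most 3, so 4 columns can't be independent
```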
diff --git a/li/hw2.tex b/li/hw2.tex
new file mode 100644
index 0000000..769cad3
--- /dev/null
+++ b/li/hw2.tex
@@ -0,0 +1,265 @@
+\def\bmatrix#1{\left[\matrix{#1}\right]}
+{\bf\noindent Section 1.6}
+{\bf\noindent 4.}
+{\it (a)}
+That means $A^{-1}$ exists s.t. $A^{-1}A = I.$
+$$AB = AC \to A^{-1}AB = A^{-1}AC \to B = C$$
+
+{\it (b)}
+$$B = \pmatrix{1 & 1\cr 1 & 0}$$
+$$C = \pmatrix{1 & 2\cr 1 & 1}$$
+
+$$AB = AC.$$ % double check
+
+{\bf\noindent 10.}
+
+$$A_1^{-1} = \bmatrix{0 & 0 & 0 & 1\cr 0 & 0 & 1/2 & 0\cr 0 & 1/3 & 0 &
+0\cr 1/4 & 0 & 0 & 0}$$
+
+$$A_2^{-1} = \bmatrix{1 & 0 & 0 & 0\cr 1/2 & 1 & 0 & 0\cr 0 & 2/3 & 1 &
+0\cr 0 & 0 & 3/4 & 1}$$
+
+\def\rowone{{d\over ad-bc}&{-b\over ad-bc}}
+\def\rowtwo{{-c\over ad-bc}&{a\over ad-bc}}
+$$A_3^{-1} = \bmatrix{\rowone & 0 & 0\cr \rowtwo & 0 & 0\cr 0 & 0 &
+\rowone \cr 0 & 0 & \rowtwo}$$
+
+{\bf\noindent 11.}
+
+If $A = I$ and $B = -I,$ both are invertible, but $A + B$ is not.
+
+If $A = \pmatrix{1&0\cr 0&0}$ and $B = \pmatrix{0&0\cr 0&1},$ $A + B =
+I$ is invertible, but neither $A$ nor $B$ are.
+
+$A = B = I$ makes $A + B,$ $A,$ and $B$ all invertible.
+$A^{-1} = B^{-1} = I \to A^{-1}(A+B)B^{-1} = B^{-1} + A^{-1} \to 2I =
+2I.$
+
+{\bf\noindent 12.}
+
+{\it (a)}
+
+This one remains.
+
+{\it (b)}
+
+This one remains.
+
+{\it (c)}
+
+This one remains.
+
+{\it (d)}
+
+This one doesn't remain.
+
+{\it (e)}
+
+This one remains.
+
+% I don't really have any coherent reasons for this.
+% But these all make a lot of sense to me.
+% This isn't graded, but I would like to understand it better. Office
+% hours?
+
+{\bf\noindent 23.}
+
+$$\bmatrix{10&20\cr 20&50}\bmatrix{x\cr y} = \bmatrix{1\cr 0} \to
+\bmatrix{10&20\cr 0&10}\bmatrix{x \cr y} = \bmatrix{1\cr -2}$$$$\to
+\bmatrix{1&2\cr 0&1}\bmatrix{x \cr y} = \bmatrix{1/10\cr -2/10} \to
+\bmatrix{1&0\cr 0&1}\bmatrix{x \cr y} = \bmatrix{5/10\cr -2/10}.
+$$
+
+$$\bmatrix{10&20\cr 20&50}\bmatrix{x\cr y} = \bmatrix{0\cr 1} \to
+\bmatrix{1&2\cr 0&1}\bmatrix{x\cr y} = \bmatrix{0\cr 1/10} \to
+\bmatrix{1&0\cr 0&1}\bmatrix{x\cr y} = \bmatrix{-2/10\cr 1/10}.
+$$
+
+$$A^{-1} = {1\over10}\bmatrix{5&-2\cr -2&1}$$
+
+{\bf\noindent 25.}
+
+{\it (a)}
+$$\bmatrix{\hbox{row 1}\cr\hbox{row 2}\cr\hbox{row 1 + row
+2}}\bmatrix{x\cr y\cr z} = \bmatrix{1\cr0\cr0}
+\to \bmatrix{\hbox{row 1}\cr\hbox{row 2}\cr0}\bmatrix{x\cr y\cr z} =
+\bmatrix{1\cr0\cr-1}.$$
+$0 \neq -1,$ so this is inconsistent.
+
+{\it (b)}
+
+If $b_3 = b_2 + b_1,$ this matrix admits a solution.
+
+{\it (c)}
+
+As already shown, row 3 becomes a zero row during elimination.
+
+{\bf\noindent 27.}
+
+$B = E_{12}A,$
+and the elementary matrix which switches the first two rows is by
+definition invertible ($E_{12}E_{12} = I$) so if $A$ is invertible, $B$ is
+also invertible, and $B^{-1} = A^{-1}E_{12}$ (I think this is a switch of
+columns).
+
+{\bf\noindent 36.}
+
+$$\bmatrix{2 & 1 & 0 & 1 & 0 & 0\cr
+           1 & 2 & 1 & 0 & 1 & 0\cr
+           0 & 1 & 2 & 0 & 0 & 1}
+\to
+\bmatrix{2 & 1 & 0 & 1 & 0 & 0\cr
+         0 & 1.5 & 1 & -.5 & 1 & 0\cr
+         0 & 1 & 2 & 0 & 0 & 1}
+\to
+\bmatrix{2 & 1 & 0 & 1 & 0 & 0\cr
+         0 & 1.5 & 1 & -.5 & 1 & 0\cr
+         0 & 0 & 4/3 & 1/3 & -2/3 & 1}
+$$$$\to
+\bmatrix{2 & 1 & 0 & 1 & 0 & 0\cr
+         0 & 1.5 & 0 & -.75 & 1.5 & -.75\cr
+         0 & 0 & 4/3 & 1/3 & -2/3 & 1}
+\to
+\bmatrix{2 & 0 & 0 & 1.5 & -1 & .5\cr
+         0 & 1.5 & 0 & -.75 & 1.5 & -.75\cr
+         0 & 0 & 4/3 & 1/3 & -2/3 & 1}
+$$
+
+Inverse is
+$$
+\bmatrix{.75 & -.5 & .25\cr
+         -.5 & 1 & -.5\cr
+         .25 & -.5 & .75}
+$$
+
+% {\bf\noindent 40.}
+
+{\bf\noindent Section 2.1}
+{\bf\noindent 2.}
+
+{\it (a)}
+
+This is. $k(0, b_2, b_3) = (0, kb_2, kb_3),$ and similar for addition.
+
+{\it (b)}
+
+This isn't. $2(0, 1, 0) = (0, 2, 0),$ which isn't in the subset.
+
+{\it (c)}
+
+This isn't. $(1, 1, 0) + (1, 0, 1) = (2, 1, 1),$ which isn't in the
+subset, even though its two starting components are.
+
+{\it (d)}
+
+This is, by definition.
+
+{\it (e)}
+
+Yes, this is the null space of a homogeneous equation with one row. It
+is a subspace.
+
+{\bf\noindent 7.}
+
+{\it (a)}
+
+This is not a subspace. The given sequence plus the given sequence with
+a zero prepended would sum to a sequence without a zero, meaning it is
+not closed under addition.
+
+{\it (b)}
+
+This is a subspace because the sums and multiples are closed.
+
+{\it (c)}
+
+$x_{j+1} \leq x_j \land y_{j+1} \leq y_j \to x_{j+1} + y_{j+1} \leq x_j
++ y_j,$
+and similar for multiplication.
+
+{\it (d)}
+
+Yes, the sum of two convergent sequences will tend to the sum of their
+limits, and each can be multiplied by a real.
+
+{\it (e)}
+
+Yes, the sum of two arithmetic series will be an arithmetic series, and
+they work with multiplications by a real.
+
+{\it (f)}
+
+This isn't because $x_1 = 1,$ $k = 2,$ and $x_1 = 1,$ $k = 3$ sum to
+$$2 + 5 + 13 + \cdots,$$ which is not a geometric sequence.
+
+{\bf\noindent 8.} % required
+
+The two rows are linearly independent, so with 3 unknowns, this forms a
+line (intersection of two equations/planes).
+
+{\bf\noindent 18.}
+
+{\it (a)}
+
+It's probably a line, but it could be a plane.
+
+{\it (b)}
+
+It's probably $(0, 0, 0)$, but it could be a line.
+
+% {\it (c)}
+
+% I don't know.
+
+{\bf\noindent 22.} % required
+
+{\it (a)}
+
+This matrix has a column space with basis $(1, 2, -1)^T,$ so it is only
+solvable if $b = (b_1, b_2, b_3)^T$ is a multiple of that vector.
+
+{\it (b)}
+
+These two columns are linearly independent, so any b within the span of
+them has a solution.
+
+{\bf\noindent 25.}
+
+The column space grows unless $b$ is already in it. If $A$ is the zero
+$3\times 1$ matrix and $b$ is the first column of $I_3,$ then the
+column space increases. If $A = b,$ then $b$ does not extend the column
+space. $Ax = b$ has a solution iff $b$ is already in the column space.
+
+{\bf\noindent 26.}
+
+They are not equal only if $B$ is not invertible (it is a necessary but
+not sufficient condition). If $A$ is invertible, like $A = I,$ then
+non-invertible $B$ gives $AB = B$ with smaller column space.
+
+{\bf\noindent 28.} % required
+
+{\it (a)}
+
+False. Counterexample: The complement of $(0, 0, 0)$ does not pass the
+test $0v \in V.$
+
+{\it (b)}
+
+True. The space with only the zero vector has a basis of only zero
+vectors.
+
+{\it (c)}
+
+True. Each of $2A$'s columns can be transformed into the columns of $A$
+by division by 2.
+
+{\it (d)}
+
+False. The column space of $-I$ is the full space, unlike the zero
+matrix it's ``based on.''
+
+{\bf\noindent 30.} % required
+
+$${\bf C}(A) = {\bf R}^9.$$
+
+\bye
diff --git a/li/hw3.tex b/li/hw3.tex
new file mode 100644
index 0000000..870daf0
--- /dev/null
+++ b/li/hw3.tex
@@ -0,0 +1,378 @@
+\def\bmatrix#1{\left[\matrix{#1}\right]}
+
+ {\noindent\bf Section 2.2}
+
+{\noindent\bf 12.}
+
+{\it (a)}
+
+This is correct. It's equal to the number of linearly independent
+rows/dimension of the row space/rank because if any of the non-zero rows
+were linearly dependent, they would have been eliminated to a zero row
+when forming $R.$
+
+{\it (b)}
+
+This is false. A zero matrix has rank zero but can have a nonzero number
+in this property if it has more columns than rows.
+
+{\it (c)}
+
+This is true. All columns are either pivot columns or free columns, and
+the rank is the number of pivot columns.
+
+{\it (d)}
+
+No. The following matrix has four ones but rank one:
+$$\bmatrix{1&1&1&1}$$
+
+{\noindent\bf 26.}
+
+The maximum rank of a matrix is the smaller of its number of rows and
+its number of columns, because the number of pivots is at most the
+number of rows and at most the number of columns. Therefore, $C$ and
+$A$ have at most rank 2, and $CA$ also has at most rank 2 (the column
+space of $C$ is a superset of the column space of $CA,$ which becomes
+obvious if they're treated like functions). $CA$ is a $3\times 3$
+matrix, and $I_3$ has rank 3, so $CA \neq I.$
+
+$AC = I$ if
+$$A = \bmatrix{1 & 0 & 0\cr
+ 0 & 1 & 0}$$
+ and
+$$C = \bmatrix{1 & 0\cr
+ 0 & 1\cr
+ 0 & 0}$$
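The claim can be checked directly in NumPy (same A and C as above):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])
C = A.T

print(A @ C)   # the 2x2 identity: AC = I is achievable
print(C @ A)   # a 3x3 matrix of rank 2, so CA cannot be I_3
```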
+
+{\noindent\bf 42.}
+
+If $Ax = b$ has infinitely many solutions, then there exist infinitely
+many solutions to $Ay = 0$: $y = x - x_0$ is one whenever $x_0$ is a
+particular solution to $Ax_0 = b.$ So if there exists one particular
+solution $x_1$ to $Ax_1 = b',$ there must be infinitely many, namely
+$A(x_1+y) = b'$ where $y$ is in the null space of $A$ as noted earlier.
+
+However, $Ax = b'$ could have zero solutions. The matrix
+$$A = \bmatrix{1&0\cr 0&0}$$
+does not include $b_0 = \bmatrix{0\cr 1}$ in its column space, so $Ax =
+b_0$ would have zero solutions even though $Ax = \bmatrix{1\cr 0}$ has
+an infinite number of solutions.
+
+\iffalse % practice problems
+
+{\noindent\bf 7.}
+
+$$R_3 = R_2 + R_1 \to c = 5 + 2.$$
+
+{\noindent\bf 9.}
+
+{\it (a)}
+
+$$\bmatrix{1&2&3&4\cr 0&0&1&2\cr 0&0&0&0}\bmatrix{x_1\cr x_2\cr x_3\cr
+x_4} = \bmatrix{0\cr 0\cr 0} \to x = \bmatrix{2\cr 0\cr -2\cr 1}x_4 +
+\bmatrix{-2\cr 1\cr 0\cr 0}x_2$$
+$$R = \bmatrix{1&2&0&-2\cr 0&0&1&2\cr 0&0&0&0}.$$
+$$Rx = 0 \to x = \bmatrix{2&0&-2&1}^Tx_4 + \bmatrix{-2&1&0&0}^Tx_2.$$
+
+{\it (b)}
+
+If the right-hand side is $(a, b, 0),$ the solution set will be the null
+space plus a particular solution. In the case of $U,$ a particular
+solution would be $(a - 3b, 0, b, 0).$
+
+{\noindent\bf 10.}
+
+$$\bmatrix{0&1&-1\cr 1&0&-1}x = \bmatrix{1\cr -2}.$$
+$$\bmatrix{0&1&-1\cr 1&0&-1\cr 1&1&-2}x = \bmatrix{1\cr -2\cr 0}.$$
+
+{\noindent\bf 14.}
+
+$$R_A = \bmatrix{1&2&0\cr 0&0&1\cr 0&0&0}.$$
+$$R_B = \bmatrix{1&2&0&1&2&0\cr 0&0&1&0&0&1\cr 0&0&0&0&0&0}.$$
+$$R_C = \bmatrix{1&2&0&0&0&0\cr 0&0&1&0&0&0\cr 0&0&0&0&0&0\cr
+0&0&0&1&2&0\cr 0&0&0&0&0&1\cr 0&0&0&0&0&0}.$$
+
+{\noindent\bf 21.}
+
+The rank $r$ is the number of pivot rows and the number of pivot
+columns, so the subset of these rows and columns would be an $r\times r$
+matrix. They are by definition linearly independent, so each spans/forms
+a basis for $R^r,$ giving them invertibility.
+
+{\noindent\bf 24.}
+
+The rank of $A$ is the same as the rank of $A^T,$ so
+$${\rm rank}(AB) \leq {\rm rank}(A) \to {\rm rank}((AB)^T) \leq
+{\rm rank}(A^T) \to {\rm rank}(B^TA^T) \leq {\rm rank}(A^T) \to
+{\rm rank}(AB) \leq {\rm rank}(B).$$
+
+{\noindent\bf 25.}
+
+
+
+{\noindent\bf 36.}
+
+{\it (a)}
+
+All vectors in $R^3$ are in the column space, so only the trivial
+combination of the rows of $A$ gives zero.
+
+{\it (b)}
+
+Only vectors where $b_3 = 2b_2$ are within the column space. This means
+that $2x_2 = -x_3$ gives a zero combination. % double check this.
+
+{\noindent\bf 40.}
+
+$x_5$ is a free variable, the zero vector isn't the only solution to
+$Ax=0,$ and if $Ax=b$ has a solution, then it has infinite solutions.
+
+{\noindent\bf 43.}
+
+{\it (a)}
+
+$q=6$ gives a rank of 1 for $B,$ and $q=3$ gives a rank of 1 for the
+first matrix.
+
+{\it (b)}
+
+$q = 7$ gives a rank of 2 for both matrices.
+
+{\it (c)}
+
+A rank of 3 is impossible for both matrices.
+
+{\noindent\bf 45.}
+% idk come back to this.
+
+{\it (a)}
+
+$r < n.$
+
+{\it (b)}
+
+$r > m.$ $r\geq n.$ % ???
+
+{\it (c)}
+
+$r < n.$
+
+{\it (d)}
+
+{\noindent\bf 53.}
+
+{\it (a)}
+
+False. The zero matrix has $n$ free variables.
+
+{\it (b)}
+
+True. If the linear function corresponding to the matrix can be
+inverted, it must have a trivial null space (i.e. the map is
+injective).
+
+{\noindent\bf 60.}
+
+$$\bmatrix{1&0&-2&-3\cr0&1&-2&-1}$$ has this nullspace.
+
+{\noindent\bf 61.}
+
+% simple enough to construct
+
+{\noindent\bf 62.}
+
+
+
+{\noindent\bf 63.}
+{\noindent\bf 64.}
+
+{\noindent\bf 65.}
+
+$$\bmatrix{0&0\cr 1&0}$$
+
+{\noindent\bf 66.}
+
+Dimension of null space is $n-r = 3-r,$ and dimension of column space is
+$r,$ so they cannot have the same dimension and therefore cannot be
+equal.
+
+\fi
+
+ {\noindent\bf Section 2.3}
+
+{\noindent\bf 22.}
+
+{\it (a)}
+
+They might not span ${\bf R}^4$ if, for example, they are all the zero
+vector, but they could span it, e.g. if four of them were the standard
+basis vectors $e_1$ to $e_4.$
+
+{\it (b)}
+
+They are not linearly independent, because an independent set in
+${\bf R}^4$ has at most 4 vectors.
+
+{\it (c)}
+
+Any four might be a basis for ${\bf R}^4,$ because they could be
+linearly independent and four vectors in ${\bf R}^4$ could span it.
+
+{\it (d)}
+
+$Ax = b$ might not have a solution. It could have a solution depending
+on $b,$ but if $A$ is, e.g., the zero matrix, $Ax = e_1$ has no
+solutions.
+
+{\noindent\bf 27.}
+
+The column space of $A$ has basis in $\{(1, 0, 1)^T, (3, 1, 3)^T\}$ and
+the column space of $U$ has basis in $\{(1, 0, 0)^T, (3, 1, 0)^T\}.$
+The two matrices have the same row space, based in $\{(1, 3, 2), (0, 1,
+1)\}.$
+They also have the same null space, based in $\{(-1, 1, -1)\}.$
+
+{\noindent\bf 32.}
+
+{\it (a)}
+
+The dimension is 3 because this is the set of vectors in ${\bf R}^4$
+under one linear constraint: $v_4 = -(v_3 + v_2 + v_1).$
+
+{\it (b)}
+
+The dimension is 0 because the identity matrix, by definition, only
+returns 0 when given 0.
+
+{\it (c)}
+
+The dimension is 16 because there are 16 unconstrained components.
+
+{\noindent\bf 36.}
+
+6 independent vectors satisfy $Ax=0$ by the rank theorem. $A^T$ has the
+same rank, so 53 independent vectors satisfy $A^Ty = 0.$
+
+{\noindent\bf 42.}
+
+$\{x^3, x^2, x, 1\}$ form a basis of the polynomials of degree up to 3,
+and this set restricted to those where $p(1) = 0$ has basis $\{x^3-1,
+x^2-1, x-1\}.$
+
+\iffalse % practice problems
+
+{\noindent\bf 7.}
+
+$$v_1 - v_2 + v_3 = w_2 - w_3 - w_1 + w_3 + w_1 - w_2 = 0,$$
+proving dependence of these vectors.
+
+{\noindent\bf 8.} % not an actual problem
+
+$$c_1v_1 + c_2v_2 + c_3v_3 = c_1(w_2 + w_3) + c_2(w_1 + w_3) + c_3(w_1 +
+w_2) = (c_2+c_3)w_1 + (c_1+c_3)w_2 + (c_1+c_2)w_3 = 0.$$
+Because the set of $w$ vectors are independent, this sum is only equal
+to zero if $c_2 + c_3 = 0 \to c_3 = -c_2,$ $c_1 + c_3 = 0 \to c_1 = -c_3
+= +c_2,$ and $c_1+c_2 = 0 \to c_2 = c_1 = 0 \to c_3 = 0.$
+
+{\noindent\bf 9.}
+
+{\it (a)}
+
+If $v_1$ to $v_3$ are linearly independent, the dimension of their span
+must be 3 (so the span equals $R^3$), and $v_4 \in R^3$ can be written
+as a combination of the other three.
+
+{\it (b)}
+
+$v_2 = kv_1$ where $k\in\bf R$
+
+{\it (c)}
+
+$0v_1 + k(0,0,0) = 0,$ giving a non-trivial combination with the value
+0.
+
+{\noindent\bf 12.}
+
+The vector $b$ is in the subspace spanned by the columns of $A$ when
+there is a solution to $Ax = b.$ The vector $c$ is in the row space of
+$A$ when there is a solution to $A^Tx = c$ or $x^TA = c.$
+
+The zero vector is in every space, so the rows may still be independent.
+(False)
+
+{\noindent\bf 13.}
+
+The dimensions of the column spaces and of the row spaces of $A$ and $U$
+are the same (2), the row spaces are the same between the two, and
+likewise the null spaces.
+
+{\noindent\bf 21.}
+
+% easy
+
+{\noindent\bf 23.}
+
+If they are linearly independent, the rank of $A$ is $n.$ If they span
+$R^m,$ the rank is $m.$ If they are a basis for $R^m,$ then both are
+true and $n = m.$
+
+{\noindent\bf 25.}
+
+{\it (a)}
+
+The columns are linearly independent, so there is no nontrivial linear
+combination equal to 0.
+
+{\it (b)}
+
+The columns of $A$ span $R^5,$ so there must be a linear combination
+(value of $x$) equal to $b.$
+
+{\noindent\bf 26.}
+
+{\it (a)}
+
+True. Thm in the book.
+
+{\it (b)}
+
+False. See 31.
+
+{\noindent\bf 31.}
+
+If we let $v_k = e_k,$ the subspace with basis $(0, 0, 1, 1)$ does not
+have a basis in the elementary vectors.
+
+{\noindent\bf 34.}
+
+% seems simple enough, don't know if I can do it.
+
+{\noindent\bf 35.}
+
+{\it (a)}
+
+False. The unit vector $e_1$'s single column is linearly independent,
+but except in $R,$ it doesn't span $R^k,$ and $e_1x = e_2$ has no
+solution.
+
+{\it (b)}
+
+True. The rank is at most $5,$ meaning there must be two free variables.
+
+{\noindent\bf 41.}
+
+{\it (a)}
+
+For dimension 1, $y_k = kx.$
+
+{\it (b)}
+
+For dimension 2, $y_1 = x^2,$ $y_2 = 2x,$ and $y_3 = 3x.$
+
+{\it (c)}
+
+For dimension 3, $y_k = x^k.$
+
+\fi
+
+\bye
diff --git a/li/matlab_hw.py b/li/matlab_hw.py
new file mode 100644
index 0000000..71b89b6
--- /dev/null
+++ b/li/matlab_hw.py
@@ -0,0 +1,183 @@
+import numpy as np
+import scipy.linalg
+
+print("Section 1.3: #30")
+
+coeff = (
+ [[ 1, 1, 1 ],
+ [ 1, 2, 2 ],
+ [ 2, 3, -4]]
+ )
+print(
+np.linalg.solve(
+ coeff,
+ [6, 11, 3])
+)
+
+print(
+np.linalg.solve(
+ coeff,
+ [7, 10, 3])
+)
+
+print("Section 1.3: #32")
+
+dimension = 3
+tot = [0]*3
+ct = 10000
+for x in range(ct):
+ P, L, U = scipy.linalg.lu(np.random.rand(dimension, dimension))
+ for i in range(dimension):
+ tot[i] += U[i][i]
+print([x/ct for x in tot])
+
+print("Section 1.4: #21")
+
+A = [[.5, .5], [.5, .5]]
+B = [[1, 0], [0, -1]]
+C = np.matmul(A, B)
+print("A^2")
+print(np.linalg.matrix_power(A, 2))
+print("A^3")
+print(np.linalg.matrix_power(A, 3))
+print("A^k = A")
+print("B^2")
+print(np.linalg.matrix_power(B, 2))
+print("B^3")
+print(np.linalg.matrix_power(B, 3))
+print("B^{2k} = B^2. B^{2k+1} = B.")
+print("C^2")
+print(np.linalg.matrix_power(C, 2))
+print("C^3")
+print(np.linalg.matrix_power(C, 3))
+print("C^k = 0")
+
+print("Section 1.4: #59")
+
+print("A * v = [3, 4, 5]")
+print("v' * v = 50")
+print(f"v * A = {np.matmul([3,4,5], np.identity(3))}")
+
+print("Section 1.4: #60")
+
+print("Av = [4. 4. 4. 4.]")
+print("Bw = [10. 10. 10. 10.]")
+
+print("Section 1.6: #12")
+
+print("Section 1.6: #32")
+
+print("the inverse of the first matrix is")
+print(np.linalg.inv(5*np.identity(4) - np.ones((4,4))))
+print("(meaning a = .4, b = .2)")
+print("ones(4)*ones(4) =")
+print(np.matmul(np.ones((4,4)),np.ones((4,4))))
+print("so we can solve this like the following equation: (a*eye +\n\
+b*ones)(c*eye + d*ones) = ac*eye + (ad+bc+dimension*bd)*ones = eye")
+print("c = 1/a. d = -b/a/(a+dimension*b), giving for the next matrix")
+print("for the next matrix, 6*eye(5) - ones(5): c = 1/6, d = 1/(6*(6-5)) = 1/6")
+
+print("Section 1.6: #47")
+
+print("I'm not sure. This is Python. It says (in either case):")
+print("numpy.linalg.LinAlgError: Singular matrix")
+'''
+print(np.linalg.solve(np.ones((4, 4)), np.random.rand(4, 1)))
+print(np.linalg.solve(np.ones((4, 4)), np.ones((4, 1))))
+'''
+
+''' #68
+dimension = 500
+northwest = np.random.rand(dimension, dimension)
+for x in range(0, dimension):
+ for y in range(x+1, dimension):
+ northwest[y][x] = 0
+print(northwest)
+'''
+print("Section 1.6: #69")
+
+
+from time import perf_counter
+matrix = np.random.rand(500, 500)
+start = perf_counter()
+np.linalg.inv(matrix)
+first_time = perf_counter() - start
+print("500x500 inverse:", str(first_time) + "s")
+matrix = np.random.rand(1000, 1000)
+start = perf_counter()
+np.linalg.inv(matrix)
+second_time = perf_counter() - start
+print("1000x1000 inverse:", str(second_time) + "s")
+print("factor:", second_time / first_time)
+
+print("Section 1.6: #70")
+
+dimension = 1000
+I = np.identity(dimension)
+A = np.random.rand(dimension, dimension)
+U = np.random.rand(dimension, dimension)
+for x in range(0, dimension):
+ for y in range(x+1, dimension):
+ U[y][x] = 0
+
+start = perf_counter()
+np.linalg.inv(U)
+print("inverse of U:", str(perf_counter() - start) + "s")
+start = perf_counter()
+np.linalg.solve(U, I)
+print("U\\I:", str(perf_counter() - start) + "s")
+start = perf_counter()
+np.linalg.inv(A)
+print("inverse of A:", str(perf_counter() - start) + "s")
+start = perf_counter()
+np.linalg.solve(A, I)
+print("A\\I:", str(perf_counter() - start) + "s")
+
+#print("Section 1.6: #71")
+
+print("Section 2.2: #33")
+
+print("For the first matrix:")
+A = [[1, 3, 3], [2, 6, 9], [-1, -3, 3]]
+b = [1, 5, 5]
+# complete solution = a particular solution plus the null space
+print(np.linalg.lstsq(A, b, rcond=None)[0])
+print(scipy.linalg.null_space(A))
+print("For the second matrix:")
+A = [[1, 3, 1, 2], [2, 6, 4, 8], [0, 0, 2, 4]]
+b = [1, 3, 1]
+print(np.linalg.lstsq(A, b, rcond=None)[0])
+print(scipy.linalg.null_space(A))
+
+print("Section 2.2: #35")
+
+print("For the first system:")
+import sympy
+A = [[1, 2], [2, 4], [2, 5], [3, 9]]
+print(sympy.Matrix(A).rref())  # returns (rref matrix, pivot column indices)
+
+print("Section 2.2: #36")
+
+print("(a)")
+A = np.array([[1, 2, 1], [2, 6, 3], [0, 2, 5]])
+print("Basis of the column space:")
+print(scipy.linalg.orth(A))
+print("Multiplying the rows by this gives zero (for this matrix, \
+equivalent to [0 0 0]^T)")
+print(scipy.linalg.null_space(A.transpose()))
+
+print("(b)")
+A = np.array([[1, 1, 1], [1, 2, 4], [2, 4, 8]])
+print("Basis of the column space:")
+print(scipy.linalg.orth(A))
+print("Multiplying the rows by this gives zero (for this matrix, \
+equivalent to [0 2 -1]^T)")
+print(scipy.linalg.null_space(A.transpose()))
+
+print("Section 2.3: #2")
+
+print("Section 2.3: #5")
+
+print("Section 2.3: #13")
+
+print("Section 2.3: #16")
+
+print("Section 2.3: #18")
+
+print("Section 2.3: #24")
diff --git a/zhilova/04_events b/zhilova/04_events
new file mode 100644
index 0000000..551d4cc
--- /dev/null
+++ b/zhilova/04_events
@@ -0,0 +1,89 @@
+Bayes' Theorem is useful for determining something like ``how likely is
+XYZ to have disease A if they pass test B?'' because it lets us convert
+conditionals in the other direction (e.g. test given disease).
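A tiny worked example of that direction-flip (the disease/test numbers are invented for illustration):

```python
# Hypothetical numbers: prior, sensitivity, and false-positive rate.
p_disease = 0.01                 # P(A)
p_pos_given_disease = 0.99       # P(B | A)
p_pos_given_healthy = 0.05       # P(B | not A)

# Law of total probability for the denominator, then Bayes' theorem.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # about 0.167 despite the 99% sensitivity
```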
+
+ Independent Random Events
+(C, \bb B, P) is a probability space
+With A, B \in \bb B and A, B \subseteq C, they are independent iff
+P(A\cap B) = P(A)P(B).
+
+A group of events Ai, ... An in \bb B is
+
+(1) pairwise independent iff P(A_i \cap A_j) = P(A_i)P(A_j) (i \neq j).
+
+(2) triplewise independent iff P(A_i \cap A_j \cap A_k) =
+P(A_i)P(A_j)P(A_k) (i \neq j \neq k \neq i).
+
+(3) mutually independent iff for every subset S of {A1, ..., An},
+P(intersection of S) = product of all P(A) where A in S.
+
+3 implies 2 and 1, but 2 doesn't imply 1.
+
+Independence can also be defined equivalently (when P(B) > 0) as:
+P(A | B) = P(A)
+
+A,B are conditionally independent if P(A\cap B | C) = P(A|C)P(B|C)
+
+ Random Variables
+
+[What lol]
+
+X = X(w) : C \mapsto D where D is the range of X.
+
+Inverse functions can exist, I guess.
+
+P_X(A) = P({all w : X(w) in A})
+
+Key Properties
+
+1) P_X(A) is a probability set function on D.
+2) P_X(A) \geq 0
+3) P_X(D) = 1
+4) P_X(empty) = 0
+5) P_X(A) = 1 - P_X(D \setminus A)
+6,7) monotonicity, sigma-additivity.
+
+ Discrete r.v. have countable domain.
+Ex: Binomial r.v.
+
+X ~ Binomial(n, p)
+n in N, p in (0,1)
+
+D = {0, 1, ... n}
+
+P(X = x) = (n choose x)p^x(1-p)^{n-x}
+
+X ~ Poisson(\lambda)
+
+D = {0, 1, 2, ...}.
+
+P(X = x) = \lambda^x e^{-\lambda}/x!
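Both pmfs can be sanity-checked with scipy.stats (the parameter values here are arbitrary):

```python
from scipy import stats

# Binomial(n, p): pmf over D = {0, ..., n} sums to exactly 1.
n, p = 10, 0.3
binom_total = sum(stats.binom.pmf(x, n, p) for x in range(n + 1))

# Poisson(lam): D is countably infinite; a long partial sum approaches 1.
lam = 4.0
pois_total = sum(stats.poisson.pmf(x, lam) for x in range(200))

print(binom_total, pois_total)  # both very close to 1.0
```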
+
+ Probability Mass Function (pmf)
+
+For r.v. with countable domain D,
+
+P_X(x) := P(X = x) (if x \in D, 0 otherwise)
+
+Properties of P_X(x), x \in D:
+ (Correspond directly to probability set function properties)
+
+1) Typically, P_X(x) > 0 \forall x \in D; \geq 0 is also acceptable.
+
+2) sum over all x of D P_x(x) gives 1.
+
+3) {X in A} equivalent to {w in C : X(w) in A}
+
+ r.v. of continuous type
+Ex: Let X uniformly take values in [0, 1].
+P(X in (a, b]) = b - a. 0 \leq a < b \leq 1.
+
+ Cumulative distribution type
+Defined for discrete and continuous type r.v.
+
+F_X(x) := P(X \leq x).
+
+F_X : R -> [0,1] [couldn't it be from any ordered domain?]
+1) 0 \leq F_X \leq 1
+2) non-decreasing
+3) right-continuous
diff --git a/zhilova/05_random_variables b/zhilova/05_random_variables
new file mode 100644
index 0000000..fbf8bc0
--- /dev/null
+++ b/zhilova/05_random_variables
@@ -0,0 +1,48 @@
+ Cumulative Distribution Function (CDF)
+Def: CDF of a r.v. X, taking values in R is
+F_X(x) = \Pr(X\leq x) = \Pr(X\in (-\infty, x] ) % to appease vim, ')'
+
+Th 1.5.1 (Properties of a CDF)
+0) 0 \leq F_X(x) \leq 1 \forall x \in R
+1) It is non-decreasing: if x_1 \leq x_2, then F_X(x_1) \leq F_X(x_2).
+2) F_X(x) -> 0 as x -> -\infty
+3) F_X(x) -> 1 as x -> +\infty
+4) F_X(x) is right-continuous.
+
+ Continuous R.V.
+Over an uncountable domain D like (0, 1), R.
+
+Let there be a CDF F_X(x) = P(X \leq x).
+
+Assume there exists f_X(x) := d/dx F_X(x), the probability density
+function.
+[discontinuities might be able to be resolved with a delta function]
+By the second fundamental theorem of calculus (?),
+F_X(x) = P(X \leq x) = \int_{-\infty}^x f_X(t) dt.
+
+In the discrete case, we have the pmf (probability mass function),
+where p_X(t) = P(X = t).
+
+P(a < X \leq b) = F_X(b) - F_X(a) for a < b.
+
+Examples:
+- Uniform Distribution
+X ~ U[a, b]
+f_X(x) = { 1/(b-a) for a \leq x \leq b
+         { 0 otherwise.
+
+- Exponential Distribution
+X ~ Exp(\lambda) \lambda > 0
+f_X(x) = { \lambda e^{-\lambda x}, x \geq 0
+ { 0 otherwise
+
+F_X(x) = { 1 - e^{-\lambda x}, x \geq 0
+ { 0 otherwise
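The pdf/CDF relationship for the exponential can be checked by numerical integration (the value of lambda is an arbitrary choice):

```python
import numpy as np
from scipy import integrate

# X ~ Exp(lam): integrating the pdf from 0 to x should match 1 - e^{-lam x}.
lam = 2.0
pdf = lambda t: lam * np.exp(-lam * t)

x = 1.5
cdf_numeric, _ = integrate.quad(pdf, 0, x)
cdf_closed = 1 - np.exp(-lam * x)

print(cdf_numeric, cdf_closed)  # agree to quadrature accuracy
```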
+
+- Normal Distribution
+X ~ N(\mu, \sigma^2) \mu \in R, \sigma^2 > 0.
+\sigma = stdev. \sigma^2 = variance. \mu = mean/center.
+
+f_X(x) = 1/\sqrt{2\pi \sigma^2} exp( - (x-\mu)^2 / {2\sigma^2} )
+
+F_X(x) = \int_{-\infty}^x f_X(t) dt
diff --git a/zhilova/06_ev b/zhilova/06_ev
new file mode 100644
index 0000000..c13c159
--- /dev/null
+++ b/zhilova/06_ev
@@ -0,0 +1,67 @@
+ Expectation/Expected Value/Mean Value/Average of an r.v.:
+ (Does not exist for all r.v.)
+We must assume that \int_{-\infty}^\infty |x|f_x(x) dx < \infty, so
+
+E(X) := \int_{-\infty}^\infty xf_x(x) dx
+= {\bb E} X = E X.
+
+If discrete,
+E(X) = \sum_{x\in D} xp_x(x)
+
+ Higher (order) moments of X
+moment of kth order := {\bb E}(X^k)
+Again, they do not always exist, but they do exist if {\bb E}(|X^k|)
+exists.
+
+ Variance/dispersion of X
+Var(X) = {\bb E}(X - {\bb E} X)^2
+aka quadratic deviation
+\def\exp{{\bb E}}
+
+Thm: [ proof in textbook ] (1)
+g : R \mapsto R.
+
+Let \int |g(x)| f_x(x) < \infty
+Therefore, \exp g(X) = \int_{-\infty}^\infty g(x)f_x(x) dx
+
+Ex:
+ \exp X^2 = \int x^2 f_x(x) dx
+ \exp(X-a) = \int (x-a) f_x(x) dx
+ \exp\sin X = \int sin x f_x(x) dx
+
+ Stdev
+Stdev := \sqrt{Var(x)}
+
+ Properties of E(x)
+1) Linearity
+ Where E(X), E(Y) exist, and a, b \in R
+ E(aX + bY) = aE(X) + bE(Y)
+ By thm (1), \int axf_x(x) dx = a \int xf_x(x) dx.
+2) E(a) = a
+3) If g(x) \geq 0, E(g(X)) \geq 0, regardless of X.
+
+Example application:
+Var(X)
+= E [X - E[X]]^2
+= E [ X^2 - 2X * E[X] + [E[X]]^2 ]
+= E[X^2] - 2E[X]^2 + [E[X]]^2
+ ^ linearity applied with E[X] as constant
+= E[X^2] - E[X]^2
+
+On the reals (by property 3),
+Var(X) \geq 0
+\to E(X^2) - E(X)^2 \geq 0
+\to E(X^2) \geq E(X)^2 [the inequality is strict unless X is constant]
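The identity Var(X) = E[X^2] - (E X)^2 holds exactly for sample moments too; a quick NumPy check (the distribution is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200_000)   # any r.v. works here

var_direct = np.mean((x - x.mean()) ** 2)        # E[(X - E X)^2]
var_identity = np.mean(x ** 2) - x.mean() ** 2   # E[X^2] - (E X)^2

print(var_direct, var_identity)  # equal up to floating-point error
```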
+
+More example:
+Var(aX) = E[(aX)^2] - (E[aX])^2
+        = E[a^2X^2] - (aE[X])^2
+        = a^2E[X^2] - a^2E[X]^2
+        = a^2 Var(X)
+
+Definitions:
+1) centering: X - \exp X. \exp[X - \exp X] = 0.
+2) rescaling: With c>0, cX. Var(cX) = c^2 Var X.
+3) centering and standardization: centering and rescaling s.t.
+Var(Y) = 1.
+ Y = (X - \exp X)/\sqrt{Var X}
diff --git a/zhilova/07_mgf b/zhilova/07_mgf
new file mode 100644
index 0000000..5d5a007
--- /dev/null
+++ b/zhilova/07_mgf
@@ -0,0 +1,22 @@
+ Moment-generating Function
+(Still technically lecture #6 but very different topic)
+X := real r.v.
+M_X(t) = E[e^{tX}] where t \in R.
+Defined if \int_{-\infty}^\infty e^{tx} f_x(x) dx < \infty
+ for t \in (-h, h) for some h > 0. [I can't remember why the region
+ of convergence is symmetric about 0, but I remember some thm. about
+ that]
+
+e^{tx} gives a nice Taylor series.
+For M_X(t) around 0,
+M_X(t) = M_X(0) + M_X'(0) t + M_X''(0)t^2/2 + M_X'''(0) t^3/3! + ...
+M_X^{(k)}(t) = {d^k\over dt^k} E[e^{tX}] = {d^k\over dt^k}
+\int_{-\infty}^\infty e^{tx} f_X(x) dx
+= \int_{-\infty}^\infty x^k e^{tx} f_X(x) dx
+= E[X^k e^{tX}]
+= [with t = 0] E[X^k].
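For example, for X ~ Exp(lam) the mgf is M_X(t) = lam/(lam - t) for t < lam; differentiating it numerically at t = 0 recovers E[X] = 1/lam and E[X^2] = 2/lam^2 (a central-difference sketch, lam chosen arbitrarily):

```python
lam = 3.0
M = lambda t: lam / (lam - t)   # mgf of Exp(lam), valid for t < lam

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)             # M'(0)  ~ E[X]   = 1/lam
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2     # M''(0) ~ E[X^2] = 2/lam^2

print(m1, 1 / lam)
print(m2, 2 / lam**2)
```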
+
+Why is it useful?
+Example: X ~ N(\mu, \sigma^2) *can* have its moments computed by
+integration-by-parts (probably table method), but the mgf can be used
+instead, which makes the determination easier.
diff --git a/zhilova/08_jensen b/zhilova/08_jensen
new file mode 100644
index 0000000..20a8158
--- /dev/null
+++ b/zhilova/08_jensen
@@ -0,0 +1,23 @@
+Definition: A fn. f : R -> R is called convex on an interval (a,b) if
+f(cx + dy) \leq cf(x) + df(y)
+\forall x, y \in (a, b)
+\forall c \in (0, 1), d = 1-c.
+f is concave iff -f is convex.
+
+Essentially stating that, between any x and y, the graph of f lies on
+or below the chord connecting (x, f(x)) and (y, f(y)) [or above in the
+concave case].
+
+Strictly convex: f(cx+dy) < cf(x) + df(y) for x \neq y.
+
+ Jensen's Inequality
+X - r.v., E|X| < \infty, E|f(X)| < \infty. Then
+ f(E X) \leq E(f(X)).
+If f is strictly convex, the inequality is strict unless X is a
+constant r.v.
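A numerical illustration with the strictly convex f(x) = e^x (the sample and distribution are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)   # a non-constant r.v.

lhs = np.exp(x.mean())         # f(E X)
rhs = np.exp(x).mean()         # E f(X)

print(lhs, rhs)  # lhs < rhs, strictly, since X is not constant
```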
+
+Further theorems:
+(1) If f is differentiable on (a,b),
+f is convex <=> f' is nondecreasing on (a,b).
+f is strictly convex <=> f' is strictly increasing on (a,b)
+(2) If f is twice differentiable on (a,b)
+f is convex <=> f'' \geq 0 on (a,b)
+f is strictly convex <=> f'' > 0 on (a,b)