author    Holden Rohrer <hr@hrhr.dev>    2021-08-31 17:06:06 -0400
committer Holden Rohrer <hr@hrhr.dev>    2021-08-31 17:06:06 -0400
commit    1e3c434c8b108a5abd9f6810d629c3ae83face98
tree      8beb1f8971653132970a84c254bf8814e1b4d489
parent    a3fb090ff96fc67ee054b15da005126ef578894c
added notes for math classes and the first non-computing homework
 li/03_inverse                   |  94
 li/hw1.tex                      | 166
 zhilova/03_probability_function |  59
3 files changed, 319 insertions, 0 deletions
diff --git a/li/03_inverse b/li/03_inverse
new file mode 100644
index 0000000..8ba7887
--- /dev/null
+++ b/li/03_inverse
@@ -0,0 +1,94 @@
+f : X -> Y
+g : Y -> X
+if inverse exists, g = f^{-1}
+g(f(x)) = x for all x \in X
+f(g(y)) = y for all y \in Y
+
+For simplicity's sake, we will require bijectivity to define the
+inverse, although degenerate cases (i.e. non-injective) can be defined.
+
+ Matrix Inverse
+A := mxn matrix.
+
+Ax where x is an nx1 vector. A can be considered as a function from R^n to R^m.
+
+Definition:
+nxn matrix A is invertible iff there exists B nxn such that AB = BA =
+I_n. A^{-1} := B.
+
+Thm: For nxn matrices A and B, if AB = I_n, then BA = I_n (so a
+one-sided inverse is automatically two-sided).
+
+A = [ a1 | a2 | ... an ]
+B = [ b1 | b2 | ... bn ]
+
+AB = [Ab1 | Ab2 | ... Abn ]
+
+Let e_i = [ 0 0 ... 1 ... 0 ] where 1 is in the ith position.
+This gives systems Ab1 = e1, Ab2 = e2 ...
+Each can be solved as a standard augmented system.
+However, we can solve them all at once with one augmented matrix:
+
+[A | e1 | e2 | e3 ... ] (*)
+Two possibilities:
+- n pivots (every column has pivot)
+ Reduced echelon form is I_n
+ Right matrix = B = A^{-1}
+- <n pivots (implies at least one row of zeroes at the bottom)
+  The right matrix is always invertible (it is a product of elementary
+  matrices, each of which is invertible), so at least one of the
+  systems in (*) has no solution, and A is not invertible.
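+
+A minimal numerical sketch of the [A | I_n] procedure above (my own
+code in Python/numpy, not from lecture; the function name and the
+partial-pivoting choice are mine):
+
+  import numpy as np
+
+  def inverse_by_elimination(A, tol=1e-12):
+      """Row-reduce [A | I_n] to [I_n | A^{-1}]; fail if A has < n pivots."""
+      A = np.asarray(A, dtype=float)
+      n = A.shape[0]
+      M = np.hstack([A, np.eye(n)])                   # the augmented matrix (*)
+      for col in range(n):
+          piv = col + np.argmax(np.abs(M[col:, col])) # best available pivot
+          if abs(M[piv, col]) < tol:
+              raise ValueError("fewer than n pivots: A is not invertible")
+          M[[col, piv]] = M[[piv, col]]               # row exchange
+          M[col] /= M[col, col]                       # scale pivot row to 1
+          for row in range(n):
+              if row != col:
+                  M[row] -= M[row, col] * M[col]      # clear rest of column
+      return M[:, n:]                                 # right block is A^{-1}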
+
+If we only use A_j + cA_i -> A_j where j > i to solve
+[ A | I_n ],
+we get [ U | L^{-1} ]
+
+U is invertible <=> all diagonal elements of U are non-zero
+<=> every column of U has a pivot
+L is always invertible, so A = LU is invertible iff U is invertible.
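+
+A short sketch of that restricted elimination (my own code; it assumes
+no row exchanges are ever needed, as above):
+
+  import numpy as np
+
+  def downward_elimination(A):
+      """Apply only A_j + c*A_i -> A_j (j > i) to [A | I_n]; return (U, Linv)."""
+      U = np.asarray(A, dtype=float).copy()
+      n = U.shape[0]
+      Linv = np.eye(n)
+      for i in range(n):
+          if U[i, i] == 0:
+              raise ValueError("zero pivot: a row exchange would be needed")
+          for j in range(i + 1, n):
+              c = -U[j, i] / U[i, i]
+              U[j] += c * U[i]          # the same row operation applied to
+              Linv[j] += c * Linv[i]    # both halves of [A | I_n]
+      return U, Linv
+
+  A = np.array([[2., 1.], [4., 5.]])
+  U, Linv = downward_elimination(A)
+  L = np.linalg.inv(Linv)
+  assert np.allclose(L @ U, A)          # A = LU
+  assert (np.diag(U) != 0).all()        # so U, and hence A, is invertible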
+
+ Transpose
+
+A := mxn matrix.
+A^T = B
+B := nxm where b_ji = a_ij
+
+A : R^n -> R^m
+B : R^m -> R^n (note: A^T is not, in general, the inverse of A)
+
+If A is invertible, then A^T is invertible, and
+(A^{-1})^T = (A^T)^{-1}
+But why?
+(1)
+If A, B are invertible, AB is invertible, and:
+ (AB)^{-1} = B^{-1}A^{-1} [why??] [this should verify the previous
+ identity]
+(2)
+(AB)^T = B^T A^T [could be proved by brute calculation]
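+
+One way to fill in the [why??] above (my own working, not from
+lecture):
+(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I_n, and likewise
+(B^{-1}A^{-1})(AB) = I_n, so (AB)^{-1} = B^{-1}A^{-1}.
+Then transposing AA^{-1} = I_n and applying (2) gives
+(A^{-1})^T A^T = I_n^T = I_n, so (A^T)^{-1} = (A^{-1})^T.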
+
+Definition: nxn matrix A is symmetric if A = A^T
+
+If A is symmetric and invertible, A = LU = LDL^T (Thm!)
+Then, D would be invertible. If A not invertible, U not invertible, and
+D doesn't need to be invertible.
+This is Cholesky decomposition. "Keeps the symmetry" (?)
+D is a diagonal (and therefore symmetric) matrix.
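+
+A quick numerical check of A = LDL^T (my own example, built from
+numpy's Cholesky factor A = CC^T and rescaled so L has unit diagonal):
+
+  import numpy as np
+
+  A = np.array([[4., 2.], [2., 3.]])      # symmetric positive definite
+  C = np.linalg.cholesky(A)               # A = C C^T, C lower triangular
+  d = np.diag(C)
+  L = C / d                               # unit lower triangular
+  D = np.diag(d**2)                       # diagonal, hence symmetric
+  assert np.allclose(L @ D @ L.T, A)      # "keeps the symmetry"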
+
+
+Chapter 2
+---------
+
+Vector space is a collection V of objects called vectors, a binary
+addition operator, and an operator to multiply a vector and a scalar
+(defined in R or C)
+
+(u + v) + w = u + (v + w)
+a(u + v) = au + av
++ some more rules: commutativity of +, (a+b)u = au + bu, a(bu) = (ab)u,
+  1u = u, and existence of a 0 vector and of additive inverses.
+
+V must be closed under + and scalar multiplication.
+
+Ex: Let V = polynomials of degree <= 2 (see the sketch below).
+Ex: Upper-triangular 2x2 matrices
+Ex: R^2
+Ex: Subspace of R^2
+Not ex: Line in R^2 not containing origin.
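+
+A tiny sketch of the first example, identifying a + bx + cx^2 with the
+coefficient vector (a, b, c) in R^3 (my own illustration):
+
+  import numpy as np
+
+  u = np.array([1., 2., 0.])    # 1 + 2x
+  v = np.array([0., -1., 3.])   # -x + 3x^2
+  w = u + v                     # 1 + x + 3x^2: still degree <= 2
+  s = 5 * u                     # 5 + 10x: still in V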
diff --git a/li/hw1.tex b/li/hw1.tex
new file mode 100644
index 0000000..1676065
--- /dev/null
+++ b/li/hw1.tex
@@ -0,0 +1,166 @@
+{\bf\noindent 10.}
+
+$(0, y_1)$, $(1, y_2)$, and $(2, y_3)$ lie on the same line if $(2, y_3)
+= k(1, y_2 - y_1) + (0, y_1) = (k, ky_2 - (k-1)y_1) \to y_3 = 2y_2 - y_1.$
+
+{\bf\noindent 11.}
+
+For $a = 2$ and $a = -2,$ the columns are linearly dependent, and there
+is a line of solutions.
+
+{\bf\noindent 15.}
+
+"Lines." "2." "the column vectors."
+
+{\bf\noindent 22.}
+
+If $(a,b)$ is a multiple of $(c,d),$ $c/a = d/b \to c/d = a/b,$ so $(a,
+c)$ is a multiple of $(b, d)$.
+
+{\bf\noindent 5.}
+
+$0$ gives no solutions. $20$ gives infinitely many solutions. These
+solutions include $(4, -2)$ and $(0, 5).$
+
+{\bf\noindent 8.}
+
+$k=3,$ $k=0,$ and $k=-3$ cause elimination to break down. $k=3$ makes
+the system inconsistent, so it has 0 solutions, $k=-3$ causes
+infinite solutions, and $k=0$ is consistent with 1 solution but requires
+a row exchange.
+
+{\bf\noindent 12.}
+
+If $d=10,$ a row exchange is required, giving a triangular system
+$$\pmatrix{2&5&1\cr 0&1&-1\cr 0&0&-1}\pmatrix{x\cr y\cr z} =
+\pmatrix{0\cr 3\cr 2}$$
+
+{\bf\noindent 19.}
+
+"Combination." $2x - y = 0$ cannot be solved.
+
+{\bf\noindent 28.}
+
+{\it (a)}
+
+False. If the second row doesn't start with a zero coefficient, then a
+multiple of row 1 will be (indirectly) subtracted from row 3 when row 2
+is subtracted from row 3.
+
+{\it (b)}
+
+False. After eliminating the $u$ column from the third row, a $v$
+``residue'' might remain.
+
+{\it (c)}
+
+True. The third row is already fully ``solved'' for back-substitution.
+
+{\bf\noindent 22.}
+
+{\it (a)}
+
+$$\pmatrix{1&0&0\cr -5&1&0\cr 0&0&1\cr}$$
+
+{\it (b)}
+
+$$\pmatrix{1&0&0\cr 0&1&0\cr 0&-7&1\cr}$$
+
+{\it (c)}
+
+$$\pmatrix{0&1&0\cr 0&0&1\cr 1&0&0\cr}$$
+
+{\bf\noindent 27.}
+
+$R_{31}$ should add 7 times row 1 to row 3. $E_{31}R_{31} = I_3.$
+
+{\bf\noindent 29.}
+
+{\it (a)}
+
+$$E_{13} = \pmatrix{1&0&1\cr 0&1&0\cr 0&0&1}$$
+
+{\it (b)}
+
+$$\pmatrix{1&0&1\cr 0&1&0\cr 1&0&1}$$
+
+{\it (c)}
+
+$$\pmatrix{2&0&1\cr 0&1&0\cr 1&0&1}$$
+
+{\bf\noindent 42.}
+
+{\it (a)}
+
+True.
+
+{\it (b)}
+
+False, they just have to be $m\times n$ and $n\times m.$
+
+{\it (c)}
+
+True, but they don't have the same dimensions.
+
+{\it (d)}
+
+False. This is only true if $B$ is invertible.
+
+{\bf\noindent 51.}
+
+$AX = I_3.$
+
+{\bf\noindent 6.}
+
+$$E^2 = \pmatrix{1&0\cr12&1}$$
+$$E^8 = \pmatrix{1&0\cr48&1}$$
+$$E^{-1} = \pmatrix{1&0\cr-6&1}$$
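+
+(These values are consistent with $E = \pmatrix{1&0\cr 6&1},$
+presumably the matrix given in the problem, since then
+$E^k = \pmatrix{1&0\cr 6k&1}$ for any integer $k.$)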
+
+{\bf\noindent 9.}
+
+{\it (a)}
+
+If none of $d_1,$ $d_2,$ or $d_3$ are zero, the product is nonsingular.
+% Prove it
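+
+(A possible argument, assuming the product has the form $LDV$ with $L$
+and $V$ unit triangular as in part {\it (b)}: $L$ and $V$ are always
+invertible, so the product is nonsingular exactly when $D$ is, i.e.\
+when no $d_i$ is zero.)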
+
+{\it (b)}
+
+Solving the first system by substitution gives $c = b.$
+
+Then we have $$Dd = c \to d = \pmatrix{0\cr0\cr 1/d_3}$$
+and $$Vx = d \to \pmatrix{1 & -1 & 0\cr 0 & 1 & -1 \cr 0 & 0 &
+1}\pmatrix{x_1\cr x_2\cr x_3} = d \to x_3 = x_2 = x_1 = 1/d_3.$$
+
+{\bf\noindent 19.}
+
+In the second matrix, $c=0$ requires a row exchange, and $c=3$ would
+make the matrix singular.
+
+In the first matrix, it is singular if $3b = 40-10a.$
+And it requires a row exchange if $a=4$ and $b\neq 0.$
+
+{\bf\noindent 31.}
+
+$$\pmatrix{1&1&0\cr 1&2&1\cr 0&1&2} = \pmatrix{1&0&0\cr 1&1&0\cr 0&1&1}
+\pmatrix{1&1&0\cr 0&1&1\cr 0&0&1} =
+\pmatrix{1&0&0\cr 1&1&0\cr 0&1&1}\pmatrix{1&0&0\cr0&1&0\cr0&0&1}\pmatrix{1&1&0\cr 0&1&1\cr 0&0&1}
+$$
+
+$$\pmatrix{a&a&0\cr a&a+b&b\cr 0&b&b+c} =
+\pmatrix{1&0&0\cr1&1&0\cr0&1&1}\pmatrix{a&a&0\cr 0&b&b\cr 0&0&c} =
+\pmatrix{1&0&0\cr1&1&0\cr0&1&1}\pmatrix{a&0&0\cr0&b&0\cr0&0&c}\pmatrix{1&1&0\cr 0&1&1\cr 0&0&1}
+$$
+
+{\bf\noindent 32.}
+
+$$Lc = b \to \pmatrix{1&0\cr4&1}c = \pmatrix{2\cr 11} \to c =
+\pmatrix{2\cr 3}.$$
+$$Ux = c \to \pmatrix{2&4\cr0&1}x = \pmatrix{2\cr 3} \to x =
+\pmatrix{-5\cr 3}.$$
+
+$$A = LU = \pmatrix{1&0\cr4&1}\pmatrix{2&4\cr0&1} =
+\pmatrix{2&4\cr8&17}.$$
+$$\pmatrix{2&4\cr8&17}x = \pmatrix{2\cr 11} \to \pmatrix{2&4\cr0&1}x =
+\underline{\pmatrix{2\cr3}} \to x = \pmatrix{-5\cr 3}$$
+
+\bye
diff --git a/zhilova/03_probability_function b/zhilova/03_probability_function
new file mode 100644
index 0000000..218e941
--- /dev/null
+++ b/zhilova/03_probability_function
@@ -0,0 +1,59 @@
+ The Probability Set Function
+
+P: B -> R
+
+B is a sigma-algebra on C.
+
+Properties:
+P(A) >= 0 \forall A in B
+P(C) = 1
+\forall A1, A2, A3, ... in B, if A_i \cap A_j = \empty for i != j,
+P(infinite union of A1, A2, ...) = sum over all j of P(A_j)
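+
+A tiny concrete instance (my own example, not from lecture): C =
+{1, ..., 6} for one die roll, B = all subsets of C, and P(A) = |A|/6
+satisfies all three properties.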
+
+Useful inequalities:
+Boole's inequality (thm 1.3.7): P(union of A1, A2, ...) <= P(A1) + P(A2) + ...
+(Derives from the inclusion-exclusion formula)
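+
+A quick empirical check of the inequality (simulated events; my own
+example, not from lecture):
+
+  import numpy as np
+
+  rng = np.random.default_rng(0)
+  events = rng.random((100000, 3)) < 0.2   # indicators of A1, A2, A3
+  p_union = events.any(axis=1).mean()      # empirical P(A1 u A2 u A3)
+  p_sum = events.mean(axis=0).sum()        # P(A1) + P(A2) + P(A3)
+  assert p_union <= p_sum                  # Boole's inequality holds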
+
+ Conditional Probability
+
+Let A, B be sets in \B (the sigma-algebra of events)
+Assume P(B) > 0 [because it wouldn't make sense to condition on an
+impossible event]
+
+P(A | B) = P(A \cap B) / P(B)
+
+P(* | B) : \B -> R [that's a new notation]
+
+Gives similar properties to the main probability function because it is
+a probability set function.
+
+P(A | B) >= 0
+P(C | B) = 1 [and P(B | B) = 1 ]
+P(* | B) is sigma-additive
+
+Sometimes, it's simpler to define P(A \cap B) = P(A | B) * P(B) like in
+a Markov chain.
+P(A \cap B_1 \cap B_2) = P(A | B_1 \cap B_2)P(B_1 | B_2)P(B_2).
+ Trivially proved by induction.
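+
+A small worked instance of the multiplication rule (my own example):
+drawing two cards without replacement, P(both aces) =
+P(2nd ace | 1st ace) * P(1st ace) = (3/51)(4/52) = 1/221.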
+
+ The Law of Total Probability
+
+Consider B_1, B_2, ... in B such that B_i, B_j are disjoint for i != j
+and the union of all the B_i is C.
+
+If P(B_i) > 0 for all i, then P(A) = \sum_{i=1}^\infty P(A | B_i) * P(B_i)
+
+ Proof
+
+For any i >= 1,
+P(A | B_i) * P(B_i) = P(A \cap B_i) [basic property of conditionals]
+A = A \cap C = A \cap (countable union of B_i) = (countable union of A
+\cap B_i).
+\to P(A) = P(countable union of A \cap B_i)
+\to P(A) = (countable sum of P(A \cap B_i)) [sigma-additivity: the sets
+  A \cap B_i are disjoint]
+\to P(A) = (countable sum of P(A | B_i)*P(B_i))
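+
+A minimal numerical sketch of the law (my own numbers, not from
+lecture):
+
+  import numpy as np
+
+  p_B = np.array([0.3, 0.7])           # P(B_1), P(B_2): a partition of C
+  p_A_given_B = np.array([0.9, 0.2])   # P(A | B_1), P(A | B_2)
+  p_A = np.sum(p_A_given_B * p_B)      # P(A) = sum_i P(A|B_i)P(B_i) = 0.41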
+
+ Bayes' Theorem
+P(B_i | A) = P(A | B_i) * P(B_i) / (sum over all B_j P(A | B_j)*P(B_j))
+
+Applies the law of total probability and the definition of conditional
+probability.
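+
+Continuing that sketch, the posterior follows directly (again my own
+numbers):
+
+  import numpy as np
+
+  p_B = np.array([0.3, 0.7])
+  p_A_given_B = np.array([0.9, 0.2])
+  posterior = (p_A_given_B * p_B) / np.sum(p_A_given_B * p_B)
+  # posterior ~ [0.659, 0.341], i.e. P(B_1 | A) and P(B_2 | A)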