admin
  ps2 due today
  ps3 out, due 10/16

today
  expander chernoff bound
  explicit constructions
  graph operations

expander chernoff

chernoff bound
  f:\bits^m\to[0,1]
  V_1,\ldots,V_t\in\bits^m iid uniform, X_i=f(V_i), X=\sum_i X_i
  Pr[|X-\E[X]|\ge\eps t]\le \exp(-\eps^2 t/4)

Q. similar result for expander walks?
  [[last time: saw the hitting property of expander walks, useful for randomness-efficient RP amplification; this will be used for randomness-efficient BPP amplification]]

expander chernoff bound
  G=(\bits^m,E) D-regular graph, spectral expansion \lambda
  f:\bits^m\to[0,1]
  V_1 uniform, V_{i+1}=random neighbor of V_i for i\ge 1
  X_i=f(V_i), X=\sum_i X_i
  Pr[|X-\E[X]|\ge(\eps+\lambda)t]\le \exp(-\eps^2 t/4)

Rmk: most useful if \lambda\approx\eps
  [[can get this by walking on G^k for k=\Theta(\log(1/\eps)), since \lambda(G^k)=\lambda(G)^k]]
  can also get Pr[|X-\E[X]|\ge\eps t]\le \exp(-\eps^2(1-\lambda)t/4) [[won't be crucial here]]

Pf
  want to bound the moment generating function \E[e^{rX}]
  [[no longer have independence, so can't split into the MGF of each X_i alone]]
  define the diagonal matrix P with P_{v,v}=e^{rf(v)}
  write u for the uniform distribution vector (u_v=1/N, N=2^m) and \mu=\E_{v\sim u}[f(v)], so \E[X]=\mu t

  lem: <1,P\pi>=\sum_v e^{rf(v)}\pi_v=\E_{v\sim\pi}[e^{rf(v)}] for any distribution \pi
  lem: <1,(PM)^{t-1}P\vec{u}>=\E[e^{rX}]
  lem [matrix decomposition]: M=\gamma J+\lambda E with \gamma=1-\lambda, ||E||\le 1, J the random walk matrix of the complete graph with self-loops (J_{v,w}=1/N)
  lem: <1,(PM)^{t-1}P\vec{u}>=<1,(PM)^t\vec{u}>\le ||1||\cdot||PM||^t\cdot||u||=\sqrt{N}\cdot||PM||^t\cdot(1/\sqrt{N})=||PM||^t
    [[M\vec{u}=\vec{u}, then Cauchy-Schwarz and the operator norm]]
  lem: ||PM||\le 1+(\mu+\lambda)r+O(r^2)\le e^{(\mu+\lambda)r+O(r^2)}
    ||PM||\le\gamma||PJ||+\lambda||PE||
    ||PE||\le||P||\le||P||_\infty=\max_v e^{rf(v)}\le e^r\le 1+r+r^2
    ||PJ||^2=||PJ\vec{u}||^2/||\vec{u}||^2=||P\vec{u}||^2/||\vec{u}||^2 [[J has rank 1 with image spanned by \vec{u}, so the sup is attained at \vec{u}; also J\vec{u}=\vec{u}]]
      =\sum_v(e^{rf(v)}/N)^2 / \sum_v(1/N)^2
      =(1/N)\sum_v e^{2rf(v)}
      =\E_{v\sim U}[e^{2rf(v)}]
      \le\E[1+2rf(v)+4r^2f(v)^2] if 2r\le 1/2 [[e^x\le 1+x+x^2 for x\le 1]]
      \le\E[1+2rf(v)+4r^2]
      =1+2r\E[f(v)]+4r^2
    so ||PJ||\le 1+r\E[f(v)]+2r^2 [[using \sqrt{1+x}\le 1+x/2]]
  \E[e^{rX}]\le||PM||^t\le e^{(\mu+\lambda)rt+O(r^2)t}, if r\le 1/4
  Markov on the MGF with r=\eps/4 => result [[this gives the upper tail; run the same argument on 1-f for the lower tail]]

samplers for f:\bits^m\to[0,1] [[estimate \E[f] to within \pm\eps with probability \ge 1-\delta]]

  construction           queries                       random bits
  independent            O(\log(1/\delta)/\eps^2)      m\cdot O(\log(1/\delta)/\eps^2)
  pairwise independent   O(1/(\delta\eps^2))           O(m+\log(1/\delta)+\log(1/\eps^2))
  expander walk          O(\log(1/\delta)/\eps^2)      m+O(\log(1/\delta)+\log(1/\eps)/\eps^2)
  optimal                O(\log(1/\delta)/\eps^2)      m+O(\log(1/\delta)+\log(1/\eps^2))

BPP amplification: error 1/3 -> error 2^{-k} [[take \eps=\Theta(1), \delta=2^{-k}]]

  construction           queries     random bits
  independent            O(k)        O(mk)
  pairwise independent   2^{O(k)}    O(m+k)
  expander walk          O(k)        m+O(k)
  optimal                O(k)        m+O(k)

explicit constructions

obs: to implement the expander walk we need an explicit expander

explicitness

monte-carlo: guess & verify
  [[spectral expansion can be verified]]
  [[random graphs are good expanders]]
  [[polynomial expected runtime]]
  [[suffices for algorithmic applications]]
  does not suffice for derandomization

weakly explicit: family G_1,\ldots,G_i,\ldots of D-regular graphs
  N_i sufficiently dense, e.g. N_i=2^i, G_i=([N_i],E_i)
  k\to full description of G_k in poly(N_k) steps [[no randomness here]]
  does not suffice for expander walks [[need an expander on \bits^m; writing down the whole graph would require 2^m time]]

strongly explicit [[implies weakly explicit]]
  Neigh:[N_k]\times[D]\to[N_k] computable in polylog(N_k) steps
  (v,i)\mapsto i-th neighbor of v in G_k
  [[suffices for expander walks]]

thm [Margulis, Gabber-Galil, Boppana, Jimbo-Maruoka]
  V=\Z_n\times\Z_n [[n may be composite!]]
  T_1=\begin{pmatrix}1&2\\0&1\end{pmatrix}, T_2=\begin{pmatrix}1&0\\2&1\end{pmatrix}, e_1=\begin{pmatrix}1\\0\end{pmatrix}, e_2=\begin{pmatrix}0\\1\end{pmatrix}
  [[these linear maps are invertible, even mod composite n, since det=1]]
  v\in V\mapsto (T_1v,\ T_2v,\ T_1v+e_1,\ T_2v+e_2,\ T_1^{-1}v,\ T_2^{-1}v,\ T_1^{-1}(v-e_1),\ T_2^{-1}(v-e_2))
  \lambda(G)\le 5\sqrt{2}/8<1
  [[rmk: strongly explicit, good spectral expansion; "elementary" proof of correctness via Fourier analysis; suffices for expander walks]]
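To make "strongly explicit" concrete, here is a minimal Python sketch (not from the lecture) of the neighbor map for the Margulis/Gabber-Galil graph above, together with the expander-walk sampler that uses it. The names neigh and walk_estimate are made up for illustration; in practice one would walk on a power of the graph to drive \lambda down to roughly \eps before applying the expander Chernoff bound.

```python
import random

def neigh(v, i, n):
    """i-th neighbor (0 <= i < 8) of v = (x, y) in Z_n x Z_n for the
    Margulis / Gabber-Galil graph; a constant number of arithmetic
    operations mod n, i.e. polylog(N) time for N = n^2."""
    x, y = v
    # T_1 v = (x + 2y, y),  T_2 v = (x, 2x + y); the inverses subtract instead.
    return [
        ((x + 2 * y) % n, y),          # T_1 v
        (x, (2 * x + y) % n),          # T_2 v
        ((x + 2 * y + 1) % n, y),      # T_1 v + e_1
        (x, (2 * x + y + 1) % n),      # T_2 v + e_2
        ((x - 2 * y) % n, y),          # T_1^{-1} v
        (x, (y - 2 * x) % n),          # T_2^{-1} v
        ((x - 1 - 2 * y) % n, y),      # T_1^{-1} (v - e_1)
        (x, (y - 1 - 2 * x) % n),      # T_2^{-1} (v - e_2)
    ][i]

def walk_estimate(f, n, t, rng=random):
    """Estimate E_v[f(v)] by averaging f over a length-t random walk:
    V_1 uniform in Z_n x Z_n, V_{i+1} a uniformly random neighbor of V_i."""
    v = (rng.randrange(n), rng.randrange(n))
    total = f(v)
    for _ in range(t - 1):
        v = neigh(v, rng.randrange(8), n)
        total += f(v)
    return total / t

# example use: estimate the density of points with small x-coordinate
# print(walk_estimate(lambda v: 1.0 if v[0] < 500 else 0.0, 1000, 10000))
```

Note how the sampler only needs 2\log n random bits for the start vertex plus 3 bits per step, which is the m+O(t) randomness cost in the table above.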
here: explicit construction via the zig-zag product
  [[why? "more conceptual" (matter of taste); also, the zig-zag product is used for USTCONN\in L]]

defn: an (N,D,\gamma) graph has N vertices, degree D, spectral gap \gamma [[i.e. \lambda=1-\gamma]]

graph operations

idea: G\to G' improving some parameters, damaging others
  [[make the graph bigger]]
  [[reduce the degree]]
  [[improve the expansion]]

  squaring:  (N,D,1-\lambda) -> (N,D^2,1-\lambda^2)                                   [[N same, D worse, \lambda better]]
  tensoring: (N,D,\gamma) -> (N^2,D^2,\gamma)                                          [[N better, D worse, \gamma same]]
  zig-zag:   (N,D_1,\gamma_1)+(D_1,D_2,\gamma_2) -> (ND_1,D_2^2,\gamma_1\gamma_2^2)    [[N better, D better, \gamma worse]]

=> explicit expanders [[apply them in the appropriate order, as we'll see]]

squaring

defn: the *square* of a graph G with random walk matrix M is the graph with random walk matrix M^2
lem: squaring: (N,D,1-\lambda) -> (N,D^2,1-\lambda^2)
  [[N is the same; the eigenvalues get squared, so we pick up \lambda twice; a two-step walk on G, so degree D^2]]

tensoring

defn [tensor product, aka Kronecker product]
  A\in\F^{n\times m}, B\in\F^{n'\times m'}
  A\otimes B\in\F^{nn'\times mm'}
  (A\otimes B)_{(i,i'),(j,j')}=A_{i,j}\cdot B_{i',j'}
  [[draw the block matrix]] [[also get the tensor product of vectors]]
lem: (A\otimes B)(x\otimes y)=(Ax)\otimes(By) [[proof is straightforward]]
Cor: <x\otimes y, x'\otimes y'>=<x,x'>\cdot<y,y'>

defn: G_i=(V_i,E_i) is (N_i,D_i,\gamma_i) for i\in\bits, with random walk matrix M_i
  G_0\otimes G_1 is the graph on (V_0\times V_1,E) with ((i,j),(k,l))\in E iff (i,k)\in E_0 and (j,l)\in E_1
  [[i.e., we are basically simulating walks in G_0 and G_1 simultaneously]]
lem: the random walk matrix of G_0\otimes G_1 is M_0\otimes M_1=(M_0\otimes I)(I\otimes M_1)=(I\otimes M_1)(M_0\otimes I)

thm: G_0 is (N_0,D_0,\gamma_0), G_1 is (N_1,D_1,\gamma_1) => G_0\otimes G_1 is (N_0N_1,D_0D_1,\min\{\gamma_0,\gamma_1\})
  [[intuitively, simulating both walks, we get the spectral gap of the worse walk]]
  [[see the numerical sanity check at the end of these notes]]
pf
  M_0 has orthonormal eigenbasis v_i with eigenvalues \lambda_i
  M_1 has orthonormal eigenbasis w_j with eigenvalues \mu_j
  lem: \{v_i\otimes w_j\} are orthonormal
    pf: <v_i\otimes w_j, v_{i'}\otimes w_{j'}>=<v_i,v_{i'}>\cdot<w_j,w_{j'}>, which is 1 if (i,j)=(i',j') and 0 otherwise
  lem: v_i\otimes w_j is an eigenvector of M_0\otimes M_1 with eigenvalue \lambda_i\mu_j
    pf: (M_0\otimes M_1)(v_i\otimes w_j)=(M_0v_i)\otimes(M_1w_j)=(\lambda_iv_i)\otimes(\mu_jw_j)=\lambda_i\mu_j\,(v_i\otimes w_j)
  => \{v_i\otimes w_j\} is an orthonormal eigenbasis for M_0\otimes M_1
  \lambda(M_0\otimes M_1)=\max_{\lambda_i\mu_j\ne 1}|\lambda_i\mu_j|=\max\{\lambda(G_0),\lambda(G_1)\}
  [[the max is attained by pairing the second eigenvalue of one graph with the eigenvalue 1 of the other]]

admin
  ps2 due
  ps3 due 10/16
today
  expander chernoff
  squaring, tensoring
next time
  zig-zag
  explicit expanders
  USTCONN\in L
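As a numerical sanity check on the squaring and tensoring lemmas (not part of the lecture), here is a short numpy sketch; it uses odd cycles as toy regular graphs, and the helper names cycle_walk_matrix and spectral_lambda are made up.

```python
import numpy as np

def cycle_walk_matrix(n):
    """Random walk matrix of the n-cycle: 2-regular, entries 1/2 on the two neighbors."""
    M = np.zeros((n, n))
    for v in range(n):
        M[v, (v - 1) % n] += 0.5
        M[v, (v + 1) % n] += 0.5
    return M

def spectral_lambda(M):
    """lambda(G): second largest absolute eigenvalue of the (symmetric) walk matrix."""
    eigs = np.sort(np.abs(np.linalg.eigvalsh(M)))[::-1]
    return eigs[1]

M0 = cycle_walk_matrix(7)   # a (7, 2, 1 - lambda_0) graph
M1 = cycle_walk_matrix(5)   # a (5, 2, 1 - lambda_1) graph

# squaring: same vertex set, degree squared, lambda squared
assert np.isclose(spectral_lambda(M0 @ M0), spectral_lambda(M0) ** 2)

# tensoring: N multiplies, D multiplies, lambda is the max of the two
lam_tensor = spectral_lambda(np.kron(M0, M1))
assert np.isclose(lam_tensor, max(spectral_lambda(M0), spectral_lambda(M1)))
print("squaring and tensoring checks pass")
```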