admin
  ps1 out, due 2pm 9/13
  I emailed about it; let me know if you didn't get that email

today
  approximate counting
  random walks

approximate counting

defn: "given" f, approximate \E[f] to within \pm\eps error, w/p \ge 1-\delta
  "given" as an oracle, or as a boolean ckt over AND, OR, NOT
  do example

oracle case: [[last time]]
  easy randomized algorithm; no deterministic algorithm
  not a language problem; it's an oracle problem

defn: \pm\eps approximate counting, CA^\eps
  input: (C,p)
  \E[C] \ge p+\eps => output yes
  \E[C] \le p-\eps => output no

rmk: this is a promise problem: \Pi_Y \sqcup \Pi_N \subseteq \bits^\star
  define prP, prBPP; these generalize language problems
  CA^\eps has essentially the same complexity as actually estimating \E[C] (via binary search); won't show here

rmk: easy randomized algo
Q: deterministic algo?

thm: CA^{1/6} is \prBPP-complete
pf:
  in \prBPP: via random sampling + Chernoff (see the sketch after this section)
  \prBPP-hard, i.e., we give a reduction:
    given (\Pi_Y,\Pi_N) \in \prBPP, give a map x -> (C_x, 1/2) s.t.
      x \in \Pi_Y => (C_x, 1/2) \in (CA^{1/6})_Y
      x \in \Pi_N => (C_x, 1/2) \in (CA^{1/6})_N
    the map is deterministic polytime, so an algo for CA^{1/6} can be used to solve any \prBPP problem
  the reduction: (\Pi_Y,\Pi_N) has a \prBPP algo A(x,r)
      x \in \Pi_Y => A(x,r)=1 w/p \ge 2/3 over r
      x \in \Pi_N => A(x,r)=1 w/p \le 1/3 over r
    define C_x(r) \eqdef A(x,r)
    (can convert TMs to ckts; see complexity books)

open: a BPP-complete (language) problem

multiplicative approximate counting
  more natural in applications

defn: (1\pm\eps) approximate counting, CA^{\times(1\pm\eps)}
  input: (C,p)
  \E[C] \ge (1+\eps)p => output yes
  \E[C] \le (1-\eps)p => output no

rmk: \NP-hard, even for CNFs
  CNF = AND of ORs of literals
  e.g., taking p ~= 0 distinguishes satisfiable from unsatisfiable formulas

thm [Karp-Luby '83]: \times(1\pm\eps) approximation for DNFs in poly(|\phi|, 1/\eps, \ln(1/\delta)) randomized time
rmk: DNF = OR of ANDs of literals; m clauses, n variables; SAT is easy for DNFs

pf: (see the sketch after this section)
  naive random sampling doesn't work
    draw picture: a small # of satisfying assignments in a large space, so samples miss them
  idea: map satisfying assignments into a new (smaller) space
    draw picture; draw actual picture
  key point: satisfying assignments of a single clause are easy to enumerate
    x_1 \wedge \neg x_3 \wedge x_5 -> 1*0*1*****
  define
    B = \{(\alpha,i): \alpha satisfies clause i of \phi\}
    A = \{(\alpha,i) \in B: i is the first clause that \alpha satisfies\}
      ("canonical explanation" of \alpha; note |A| = # satisfying assignments)
    draw table of satisfying assignments vs clauses: table = B, 1's = A
  lem: |A|/|B| \ge 1/m, as each row has at least one 1
    this says A is dense in B, so we can estimate |A|/|B| by sampling
  facts:
    |B| = \sum_i 2^{n-n_i}, where n_i = # variables in the i-th clause
      (after removing degenerate clauses, e.g., x \wedge \neg x)
    => can efficiently sample from B
    => since membership in A is decidable, can get a \pm\gamma approx \hat{\mu} to |A|/|B| in poly(n,m,1/\gamma) randomized time
    => taking \gamma = \eps/m: |\hat{\mu} - |A|/|B|| \le \eps/m \le \eps|A|/|B|,
       so \hat{\mu}|B| is a (1\pm\eps) approx to |A|, in poly(n,m,1/\eps) randomized time

later: techniques for an n^{\polylog}-time deterministic algo
  best known: (nm)^{\lg\lg(nm)} time for constant \eps; can even handle slightly subconstant \eps
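To make the sampling steps above concrete: a minimal Python sketch of additive \pm\eps estimation via random sampling + Chernoff, as used both for CA^{1/6} \in \prBPP and for the \pm\gamma step in Karp-Luby. The circuit is modeled as a Python predicate C on n-bit inputs (a stand-in for a boolean circuit); function names, constants, and default parameters are illustrative, not from lecture.

import math
import random

def estimate_mean(C, n, eps, delta):
    # Chernoff/Hoeffding: k = O(log(1/delta)/eps^2) samples give
    # |mu_hat - E[C]| <= eps with probability >= 1 - delta.
    k = math.ceil(2 * math.log(2 / delta) / eps**2)
    hits = 0
    for _ in range(k):
        x = [random.randrange(2) for _ in range(n)]  # uniform n-bit input
        hits += 1 if C(x) else 0
    return hits / k

def decide_CA(C, n, p, eps=1/6, delta=1/10):
    # Gap problem CA^eps: yes if E[C] >= p + eps, no if E[C] <= p - eps.
    # An estimate within eps/2 (w.h.p.) separates the two cases.
    mu_hat = estimate_mean(C, n, eps / 2, delta)
    return mu_hat >= p

# usage: C = majority of 3 bits, so E[C] = 1/2
print(decide_CA(lambda x: sum(x) >= 2, n=3, p=0.6))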
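And a minimal end-to-end sketch of the Karp-Luby estimator itself. The DNF representation (each clause a dict mapping a variable index to its required bit, e.g. x_1 \wedge \neg x_3 \wedge x_5 -> {0:1, 2:0, 4:1}) and all names are illustrative; degenerate clauses like x \wedge \neg x are assumed already removed (the dict representation cannot express them anyway).

import math
import random

def sample_B(phi, n):
    # B = {(alpha, i): alpha satisfies clause i}; |B| = sum_i 2^(n - n_i).
    # Pick clause i w.p. 2^(n - n_i)/|B|, then fill i's forced bits and
    # complete the remaining variables uniformly: (alpha, i) uniform in B.
    weights = [2 ** (n - len(cl)) for cl in phi]
    i = random.choices(range(len(phi)), weights=weights)[0]
    alpha = [random.randrange(2) for _ in range(n)]
    for var, bit in phi[i].items():
        alpha[var] = bit
    return alpha, i

def in_A(phi, alpha, i):
    # A = {(alpha, i) in B : i is the first clause alpha satisfies}.
    def sat(cl):
        return all(alpha[v] == b for v, b in cl.items())
    return sat(phi[i]) and all(not sat(phi[j]) for j in range(i))

def karp_luby(phi, n, eps, delta):
    # Estimate |A| = # satisfying assignments of phi: an additive
    # gamma = eps/m approx of |A|/|B| is a (1 +- eps) multiplicative
    # approx, since |A|/|B| >= 1/m; then multiply by the exact |B|.
    m = len(phi)
    gamma = eps / m
    k = math.ceil(2 * math.log(2 / delta) / gamma**2)
    hits = sum(1 for _ in range(k) if in_A(phi, *sample_B(phi, n)))
    size_B = sum(2 ** (n - len(cl)) for cl in phi)
    return (hits / k) * size_B

# usage: phi = (x0 AND NOT x1) OR (x1 AND x2) over n = 3 variables;
# it has 3 satisfying assignments
print(karp_luby([{0: 1, 1: 0}, {1: 1, 2: 1}], n=3, eps=0.1, delta=0.1))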
random walks

defn: STCONN
  input: (G,s,t), G=(V,E), |V|=n
  is there an s-t path?
recall: polytime algorithm via depth-first search, but it uses \Omega(n) space
Q: STCONN in small space?

model for logspace computation
  draw TM: read-only input tape, work tape, write-only output tape
  space = size of the work tape
  randomness = randomized states; if you want to remember randomness, you need to write it down

defn: L, RL, BPL
Q: L = RL = BPL?
  seems much easier than P vs BPP; there is actual progress on this

thm [Savitch]: STCONN \in DSPACE(\log^2 n) (but in time n^{O(\log n)})
thm [AKLLR79]: USTCONN \in RL
thm [Reingold '05]: USTCONN \in L [[will see later]]

pf (of [AKLLR79]):
  random walk algo (sketch at the end of these notes):
    v <- s
    repeat poly(n) times:
      pick a random neighbor u of v; v <- u
    if we ever saw t, output yes; else output no
  complexity:
    \log n bits to store the current vertex
    in logspace, can compute a random neighbor
  analysis:
    digraph = directed graph; allow parallel edges, allow self-loops
    defn: hitting time
      hit(G) \eqdef \max_{i,j} \min\{t : \Pr[\text{random walk starting at } i \text{ hits } j \text{ within } t \text{ steps}] \ge 1/2\}
    thm: G connected and undirected => hit(G) \le \poly(n)
    rmk: there exist directed G with hit(G) \ge \exp(n^{\Omega(1)}); pf: [hw]
    pf (of thm, via spectral graph theory):
      wlog G is non-bipartite and d-regular
        adding self-loops only increases the hitting time
        (will only handle regular graphs in this class)
      random walk matrix M \in \R^{n \times n}
        M_{i,j} = \Pr[j \to i] = (\# j \to i edges)/\deg(j)
        (reversed from the book: column vectors in class, row vectors in the book)
      lem: G regular and undirected => M symmetric
      thm [spectral thm]: A symmetric => A has an orthogonal eigenbasis
        [[ask who is comfortable with this]]
      lem: \pi a prob dist => M\pi is the prob dist after one step of the random walk
      obs: \vu \eqdef uniform dist = \frac{1}{n}\vec{1}; G regular => M\vu = \vu (eigenvalue 1)
      want: M\pi closer to \vu than \pi is (see the numerical check at the end of these notes)
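Returning to the random-walk algorithm above, a minimal Python sketch. The adjacency-list representation, names, and the concrete walk length are illustrative; the hitting-time theorem is what justifies some poly(n) bound (for connected undirected graphs, on the order of n^3 steps is a standard safe choice, and repeating the walk boosts the success probability).

import random

def ustconn(adj, s, t, steps):
    # adj: neighbor lists of an undirected graph (every vertex assumed
    # to have at least one neighbor, e.g., via self-loops).
    # Only O(log n) bits of state: the current vertex plus the counter.
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(adj[v])  # one step of the random walk
    return v == t

# usage: a path graph 0-1-2-3
adj = [[1], [0, 2], [1, 3], [2]]
print(ustconn(adj, 0, 3, steps=100 * 4**3))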
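A quick numerical check of the last claim, that applying M pulls a distribution toward \vu. The toy graph (a 4-cycle with a self-loop at each vertex, hence 3-regular and non-bipartite) is illustrative.

import numpy as np

n = 4
M = np.zeros((n, n))
for j in range(n):
    # self-loop plus the two cycle edges; deg(j) = 3
    for i in (j, (j - 1) % n, (j + 1) % n):
        M[i, j] += 1 / 3  # M[i, j] = Pr[j -> i] = (# j->i edges)/deg(j)

u = np.full(n, 1 / n)                # uniform distribution \vu
pi = np.array([1.0, 0.0, 0.0, 0.0])  # walk started at vertex 0
for step in range(5):
    print(step, np.linalg.norm(pi - u))  # distance shrinks every step
    pi = M @ pi                          # one step of the random walk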