\documentclass{tufte-handout}

% use larger type; default would be 10pt
\usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX)
\usepackage{geometry} % to change the page dimensions
\usepackage{graphicx}
\usepackage{amsmath,amsthm,url}

\newtheorem{lemma}{Lemma}
\newtheorem{definition}{Definition}
\newtheorem{proposition}{Proposition}

%%% PACKAGES
\usepackage{booktabs} % for much better looking tables
\usepackage{array} % for better arrays (eg matrices) in maths
\usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.)
\usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim
\usepackage[draft,inline,nomargin,index]{fixme}
\fxsetup{theme=colorsig,mode=multiuser,inlineface=\sffamily,envface=\itshape}
\FXRegisterAuthor{su}{asu}{Suresh}
\FXRegisterAuthor{ro}{aro}{Robert}

\newcommand{\determin}[1]{\mathcal{D}\left(#1\right)}
\newcommand{\randR}[1]{\mathcal{R}\left(#1\right)}
\newcommand{\equ}{\mathcal{E\!Q}}
\newcommand{\disjoint}{\ensuremath{\mathcal{DISJ}}}
\newcommand{\mP}{\mathcal{P}}
\newcommand{\gt}{\textsc{GT}}
%%% END Article customizations

%%% The "real" document content comes below...

\title{CS 7931: Lecture 2}
\author{Robert Christensen}
%\date{} % Activate to display a given date or no date (if empty),
% otherwise the current date is printed

\begin{document}
\maketitle

% Why does the size matter with median computations?
%
% The rationale that you remove the same amount from both sides
% only matters if it removes the same fraction at each iteration.
% Equality is necessary: $|n_{\text{Alice}}| = |n_{\text{Bob}}|$;
% almost equal does not work.

\section{A review of Equality}

% recording ~4:50
The set of inputs that end up at any leaf of the protocol tree forms a
combinatorial rectangle. Since a leaf produces a single output, all the
inputs in that rectangle must have the same answer. The protocol tree can
therefore be represented as a matrix, and we want to cover the matrix with
monochromatic rectangles.

\begin{marginfigure}
  \centering
  \includegraphics{figs/equal_rectangle}
\end{marginfigure}

In the case of equality, there is no way to cover two `$1$' answers with a
single monochromatic rectangle, so each `$1$' must be in its own rectangle.
There are $2^n$ inputs with answer `$1$', so the protocol tree must have at
least $2^n$ leaves and hence depth at least $\log_2 2^n$. For equality this
gives
\begin{equation*}
  \determin{\equ} \geq \log_2 2^n = n ,
\end{equation*}
so $\determin{\equ} = \Omega(n)$.

% Most lower bounds are proved using this method. The model is
% investigated in detail to understand what limitations exist
% for what shapes the model can create in the protocol tree.
% We use this information to trap the model into a difficulty.

% recording ~8:00
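To make the rectangle argument concrete, the following small brute-force
check (a sketch added for illustration; it is not part of any protocol)
builds the `$1$' entries of the equality matrix for $n = 3$ and confirms
that no two of them fit in a common monochromatic rectangle.

\begin{verbatim}
# Illustrative sketch: the 1-entries of the EQ matrix are exactly the
# diagonal pairs (x, x); any rectangle containing two of them also
# contains a 0-entry, so it cannot be monochromatic.
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))   # all n-bit strings

def eq(x, y):
    return 1 if x == y else 0

ones = [(x, x) for x in inputs]            # the 2^n 1-entries

for (x1, y1) in ones:
    for (x2, y2) in ones:
        if (x1, y1) != (x2, y2):
            # The rectangle {x1,x2} x {y1,y2} contains the crossed
            # entries (x1, y2) and (x2, y1), which are both 0 here.
            assert eq(x1, y2) == 0 or eq(x2, y1) == 0

print("2^n =", len(ones), "separate rectangles needed for the 1s")
\end{verbatim}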
\paragraph{Fooling set argument}
The fooling set argument is one of the basic methods for proving lower
bounds in communication complexity. The idea is to construct a witness set
of inputs that is hard to cover: no two of its elements can be placed in
the same monochromatic rectangle.

\begin{definition} \label{def:fooling}
A fooling set is a subset $S \subseteq X \times Y$ of inputs together with
a fixed value $z \in \{0,1\}$ such that:
\begin{itemize}
  \item $(x,y) \in S \implies f(x,y) = z$
  \item if $(x_1, y_1)$ and $(x_2, y_2)$ are distinct elements of $S$, \\
        then either $f(x_2, y_1) \neq z$ or $f(x_1, y_2) \neq z$
\end{itemize}
\end{definition}

% recording ~10:45
\begin{lemma} \label{lem:fooling}
If $S$ is a fooling set for $f$, then
\begin{equation*}
  \determin{f} \geq \log_2{|S|}
\end{equation*}
\end{lemma}

The proof of the above lemma follows essentially from the second property
above. It guarantees that any two elements of $S$ define a combinatorial
rectangle that is not monochromatic, and so no two elements of $S$ can be
covered by the same leaf. Note that $S$ is defined only with respect to the
function $f$ and is independent of any algorithm used to compute $f$. To
use the method, our only job is to find a set which satisfies the
properties in Definition~\ref{def:fooling}. We then look at the size of the
set $S$ and apply \autoref{lem:fooling} to get the lower bound.

\paragraph{Example: \textsc{Greater than}}
% recording ~14:30
We define the function $\gt(x,y)$ to be
\begin{equation*}
\gt(x,y) = \left\{
  \begin{array}{l l}
    1 & \quad \text{if } x \geq y\\
    0 & \quad \text{otherwise}
  \end{array} \right.
\end{equation*}

\begin{marginfigure}
  \centering
  \includegraphics{figs/ge_matrix_2}
  \caption{Example matrix for $\gt(x,y)$ when $|x| = |y| = 5$ with fooling
    set elements of size 2 drawn.}
\end{marginfigure}

%\sunote{Better to have just the figure as a PDF and use the caption for the
% text. This allows you to use the \gt command for the function name, to get a
% consistent look.}

According to the definition of $\gt(x,y)$, the lower triangle of the matrix
(including the diagonal) is `$1$' and the remaining entries in the upper
right are `$0$'. We take the fooling set to be the diagonal elements
$(x,x)$, all of which have the value $1$; for any two of them, the crossed
element in the `upper right' has the value $0$, so no monochromatic
rectangle can contain both.

%We can create a fooling set along the diagonal. All
%the elements along the diagonal have the value $1$.
%The `upper right' corner is $0$. We also include the
%elements where all the elements are $0$ exept the

The size of this fooling set is $|S| = 2^n$, so
$\determin{\gt} \geq \log_2 2^n = n$ follows directly, i.e.,
$\determin{\gt} = \Omega(n)$.
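As a quick sanity check (an illustrative sketch, not from the lecture),
the two fooling set properties for $\gt$ can be verified by brute force
for a small $n$:

\begin{verbatim}
# Verify that S = {(x, x)} is a fooling set for GT with z = 1.
n = 4

def gt(x, y):
    return 1 if x >= y else 0

S = [(x, x) for x in range(2 ** n)]

# Property 1: every element of S evaluates to z = 1.
assert all(gt(x, y) == 1 for (x, y) in S)

# Property 2: for any two distinct elements, at least one of the
# crossed pairs does not evaluate to 1 (here gt(min, max) == 0).
for (x1, y1) in S:
    for (x2, y2) in S:
        if (x1, y1) != (x2, y2):
            assert gt(x1, y2) != 1 or gt(x2, y1) != 1

print("|S| =", len(S), "so D(GT) >= log2|S| =", n)
\end{verbatim}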
% recording ~20:20
\paragraph{Example: \disjoint}
%\sunote{Didn't we define \disjoint in the last lecture notes?}
%\ronote{In response, we did define \disjoint, but in lecture 1
% we did not reason about the definition with as much detail.}
%
%We define $x$ and $y$ to be a vector of bits
%which define set membership.
%
%\begin{align*}
% & x,y \in [0,1]^n \\
%\text{and } & x_i \Leftrightarrow i \in S_x \\
% & y_i \Leftrightarrow i \in S_y
%\end{align*}
%
%the function $\disjoint(x,y)$ is defined as follows
%
%\begin{equation*}
%\disjoint(x,y) = \left\{
%  \begin{array}{l l}
%    1 & \quad \text{if $(x,y)$ are disjoint: } \left\{ \forall i, x_i \land y_i = 0 \right\} \\
%    0 & \quad \text{otherwise}
%  \end{array} \right.
%\end{equation*}

\begin{marginfigure}
  \begin{tabular}{l l | l l l l}
      &     & $0$ & $0$ & $1$ & $1$ \\
      &     & $0$ & $1$ & $0$ & $1$ \\ \hline
  $0$ & $0$ & $1$ & $1$ & $1$ & $1$ \\
  $0$ & $1$ & $1$ & $0$ & $1$ & $0$ \\
  $1$ & $0$ & $1$ & $1$ & $0$ & $0$ \\
  $1$ & $1$ & $1$ & $0$ & $0$ & $0$ \\
  \end{tabular} \\
  \vspace{1em}
  \caption{Matrix for $\disjoint(x,y)$ when $n=2$ showing which
    combinations result in a $1$ or a $0$.}
\end{marginfigure}

Let us observe one of the pairs in the fooling set using our example.
Taking $(x_1, y_1) = (01, 10)$ and $(x_2, y_2) = (10, 01)$, we can check
that both pairs belong to the fooling set. These pairs are interesting
because $(x_1,y_1) = (\overline{x_2}, \overline{y_2})$. In fact, the
fooling set for $\disjoint(x,y)$ consists of the complementary bit
patterns: it is the set of all pairs $(A, \overline{A})$.

% recording ~33:20
The fooling set is $S = \{(A, \overline{A}) : A \subseteq [n]\}$. Let us
look at the following properties.
\begin{align*}
  \disjoint(A, \overline{A}) &= 1 \\
  \disjoint(B, \overline{B}) &= 1 \\
  \disjoint(A, \overline{B}) &= 1~\text{iff}~B \supseteq A \\
  \disjoint(\overline{A}, B) &= 1~\text{iff}~A \supseteq B
\end{align*}
For a pair to violate the fooling set property, the bottom two values must
both be $1$. Since this requires $B \supseteq A$ and $A \supseteq B$, it
only happens when $A = B$. There are $2^n$ choices for $A$, so
$|S| = 2^n$ and $\determin{\disjoint} \geq \log_2 2^n = n$, i.e.,
$\determin{\disjoint} = \Omega(n)$.

\section{Generalizing Fooling Set Argument}

\begin{proposition}
Let $\mu$ be a probability distribution on $X \times Y$. If every
monochromatic rectangle $R$ has measure $\mu(R) \le t$\footnote{$t<1$},
then $\determin{f} \ge \log_2{\frac{1}{t}}$.
\end{proposition}

For example, given a fooling set $S$ for a function $f$, let us define
\begin{equation*}
\mu(x,y) = \left\{
  \begin{array}{l l}
    \frac{1}{|S|} & \quad \text{for $(x,y) \in S$} \\
    0             & \quad \text{otherwise}
  \end{array} \right.
\end{equation*}
Each monochromatic rectangle contains at most one element of the fooling
set, so the measure of any monochromatic rectangle is at most $1/|S|$.
Thus this generalizes the method based on fooling sets.

\section{Rank based lower bound}

Let us define a matrix
\begin{equation*}
  M_f = \{ m_{ij} = f(x_i, y_j) \mid x_i \in X, y_j \in Y \}
\end{equation*}

\begin{lemma}
$\determin{f} \geq \log{\text{rank} (M_f)}$
\end{lemma}

\begin{proof}
For each leaf $\ell$ of the protocol tree, define the indicator matrix of
its rectangle $R_{\ell}$:
\begin{equation*}
  (M_{\ell})_{xy} = \left\{
    \begin{array}{l l}
      1 & \text{if $(x,y) \in R_{\ell}$} \\
      0 & \text{otherwise}
    \end{array} \right.
\end{equation*}
Let $L$ be the set of all leaves whose label is $1$. Then
\begin{equation*}
  M_f = \sum_{\ell \in L} M_{\ell}
\end{equation*}
Each $M_{\ell}$ is the indicator of a combinatorial rectangle (an all
$1$'s block), so by the properties of rank
\begin{equation*}
  \forall \ell, \quad \text{rank}(M_{\ell}) = 1
\end{equation*}
If we have a collection of matrices $M_1 \ldots M_k$, then rank is
subadditive:
\begin{equation*}
  \text{rank}\left( \sum_{i=1}^k M_i \right) \le \sum_{i=1}^k \text{rank} \left( M_i \right)
\end{equation*}
Since we know the rank of each of the matrices $M_{\ell}$, this gives
\begin{equation*}
  \text{rank}\left( M_f \right) \le |L| \le 2^{\determin{f}} ,
\end{equation*}
because a protocol tree of depth $\determin{f}$ has at most
$2^{\determin{f}}$ leaves. Taking logarithms completes the proof.
\end{proof}

These tools can help us reason about the communication complexity of a
function using only algebraic tools rather than reasoning about a problem
combinatorially.
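To see the rank bound in action, here is a small numerical sketch (added
for illustration, using \texttt{numpy}): the matrix of $\equ$ on $n$-bit
inputs is the $2^n \times 2^n$ identity matrix, whose rank is $2^n$,
recovering $\determin{\equ} \geq n$.

\begin{verbatim}
# Rank lower bound for EQ: M_EQ is the 2^n x 2^n identity matrix.
import numpy as np

n = 4
M_eq = np.eye(2 ** n)    # M[i][j] = 1 iff i == j, i.e., EQ(i, j)

rank = int(np.linalg.matrix_rank(M_eq))
print("rank =", rank, "so D(EQ) >=", int(np.log2(rank)))  # 16, 4
\end{verbatim}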
\paragraph{Inner product using rank}
We define the function $IP(x,y)$ as
\begin{equation*}
  IP(x,y) = x \bullet y~\text{over GF($2$)}
\end{equation*}
This problem is very tricky to handle using the fooling set, as was
mentioned before. Using the rank argument makes reasoning about $IP(x,y)$
much easier. The matrix for this function is defined as
\begin{equation*}
  (M_{IP})_{i,j} = x_i \bullet y_j
\end{equation*}

We define the matrix $N = M_{IP}^2$. More explicitly, $N$ is defined as
\begin{equation*}
  N(y, y') = \sum_z \left(y \bullet z \right) \left(z \bullet y'\right)
\end{equation*}

\begin{marginfigure}
$
\overbrace{
\begin{bmatrix}
  0 & \rightarrow & & \\
  \downarrow & 2^{n-1} & & \\
  & & \ddots & \\
  & & & 2^{n-n} \\
\end{bmatrix}
}^{2^n}
$
\end{marginfigure}

We will be looking at the off-diagonal entries of the matrix. Let's say we
have $y$ and $y'$ which have $r$ bits in common. We need to make sure $z$
has $1$'s in all the $r$ bits; the rest we do not care about. We are
choosing some subset of the $r$ bits to be fixed to $1$ while the rest can
be picked to be anything else, which results in $2^n - 2^{n-r}$.

% \sunote{See the note that I provided in the discussion group and insert it here}

The statement that the rank of the IP matrix over GF($2$) is $2^n$ is not
true. A proof of this can be found in Sherstov's
notes\footnote{\url{http://www.cs.ucla.edu/~sherstov/teaching2012-winter/docs/lecture03.pdf}},
which we will briefly discuss here.

The first observation is that the rank argument works over any field, so
we can choose the field over which we evaluate the matrix rank. In
particular, the rank over the reals is at least as large as the rank over
any other field.

Recall that we would like to show that the inner product has deterministic
communication complexity $n$ via the rank method. So we need to show that
the rank of the induced matrix (where $M_{ij} = \langle x_i, y_j \rangle$)
is large; remember that IP is a function from pairs of $n$-bit strings to
$\{0, 1\}$.

First, consider the auxiliary matrix $M' = J - 2M$ (where $J$ is the
all-ones matrix). All this does is change $0$ to $1$ and $1$ to $-1$, so
that $M'_{ij} = (-1)^{\langle x_i, y_j \rangle}$, which is more convenient
to work with. Since $M = (J - M')/2$ and rank is subadditive,
$\text{rank}(M) \ge \text{rank}(M') - 1$. Now it is easy to see that $M'$
for one-bit strings is the Hadamard
matrix\footnote{$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$} and the
matrix for $n$-bit strings is merely the $n$-th order version of this (the
$n$-th tensor power of this matrix). But this matrix has full rank: in
particular, its rank is $2^n$. Therefore the rank of the IP matrix is at
least $2^n - 1$.
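The Hadamard structure is easy to check numerically. The following sketch
(added for illustration) builds the $\pm 1$ matrix $M'$ both directly and
as the $n$-fold tensor (Kronecker) power of the $2 \times 2$ Hadamard
matrix, and confirms it has full rank over the reals:

\begin{verbatim}
# The +/-1 matrix M'[x][y] = (-1)^<x,y> equals the n-fold Kronecker
# power of the 2x2 Hadamard matrix; its rank over the reals is 2^n.
import numpy as np

n = 4
H = np.array([[1, 1], [1, -1]])

M = np.array([[1]])
for _ in range(n):       # n-fold Kronecker (tensor) power of H
    M = np.kron(M, H)

def ip(x, y):            # inner product of bit strings over GF(2)
    return bin(x & y).count("1") % 2

M_direct = np.array([[(-1) ** ip(x, y) for y in range(2 ** n)]
                     for x in range(2 ** n)])

assert (M == M_direct).all()
print("rank =", np.linalg.matrix_rank(M))   # prints 2^n = 16
\end{verbatim}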
%recording ~1:00:00
\section{Randomized Algorithms}

Algorithms employing randomness make communication complexity much more
interesting and useful. The first question we need to ask before we study
randomness in relation to communication complexity is: who has the
randomness? Alice, Bob, or both? For now, let's assume Alice and Bob each
have their own \emph{private} source of randomness, indicated as $r_A$ and
$r_B$ for Alice's and Bob's random bits respectively. Each source of
random bits is known only to its owner; if the randomness is to be shared,
a communication penalty will have to be paid.

\begin{marginfigure}
  \centering
  \includegraphics{figs/random_AliceBob}
\end{marginfigure}

When using randomness, each node in the protocol tree will now take the
random bits into account. Each of Alice's nodes considers the triple
$(x, \Pi, r_A)$, where $x$ is Alice's private data, $\Pi$ is the
transcript of the communication so far (which determines the location in
the protocol tree), and $r_A$ are Alice's private random bits. The
randomness is over the choice of random bits, not over the choice of
inputs.

Using randomness, we can define different notions of what it means for a
protocol to compute $f$. The following are the different kinds of error a
randomized protocol can have:
\begin{description}
  \item[zero error]
    \begin{equation*}
      Pr\left( P(x,y) = f(x,y) \right) = 1
    \end{equation*}
  \item[$\epsilon$ error] or two-sided error
    \begin{equation*}
      Pr\left( P(x,y) = f(x,y) \right) \ge 1 - \epsilon
    \end{equation*}
  \item[$1$-sided error]
    \begin{align*}
      Pr(P(x,y) = 0 \mid f(x,y) = 0 ) &= 1 \\
      Pr(P(x,y) = 1 \mid f(x,y) = 1) &\ge 1 - \epsilon
    \end{align*}
\end{description}

% recording ~1:05:00
The worst case cost of a randomized protocol is the maximum number of bits
communicated over all possible inputs and internal random bits. The
average case cost is the expected number of bits communicated.

\begin{definition}
The complexity measures for each of the randomized protocols listed above:
\begin{itemize}
  \item for zero error, $R_0(f)$ is the minimum average number of bits
    used for computing $f$ with $0$ error.
  \item for $\epsilon$ error with $0 < \epsilon < 1/2$ (because the error
    is two-sided), $R_{\epsilon}(f)$ is the \emph{minimum worst case}
    number of bits for an $\epsilon$-error protocol. We denote
    $R(f) = R_{1/3}(f)$.
  \item for $1$-sided error with $0 < \epsilon < 1$, $R_{\epsilon}^1(f)$
    is the minimum worst case number of bits for a one-sided-error
    protocol. We denote $R^1(f) = R_{1/2}^1(f)$.
\end{itemize}
\end{definition}

% recording 1:09:00
\paragraph{An example} using equality with randomization. We define
$\equ(x,y) = 1$ if $x = y$. Last week in lecture 1 we showed
$\determin{\equ} = n+1$. Using randomization we will show
$\randR{\equ} = O(\log n)$, which is exponentially better.

The trick is to think of the input as polynomials rather than as a
sequence of bits. We define the inputs for Alice and Bob to be bit strings
$A$ and $B$ respectively:
\begin{align*}
  A &: a_{n-1} \ldots a_0 \\
  B &: b_{n-1} \ldots b_0
\end{align*}
We define the polynomials over $GF(p)$ (where $p$ is a prime with
$n^2 < p < 2n^2$) to be
\begin{align*}
  A(x) &= \sum a_i x^i \\
  B(x) &= \sum b_i x^i
\end{align*}

The randomized protocol is as follows. Alice chooses $r \in GF(p)$
uniformly at random and sends $r$ and $A(r)$ to Bob. Bob checks whether
$B(r) = A(r)$. If so, Bob declares that the inputs are equal; otherwise he
declares they are not equal.

This works because distinct polynomials can agree at only a small number
of points. Two distinct polynomials of degree at most $n-1$ can agree on
at most $n-1$ points\footnote{A nonzero polynomial of degree at most
$n-1$ over a field has at most $n-1$ roots.}.

\begin{marginfigure}
  \centering
  \includegraphics{figs/polynomial}
\end{marginfigure}

The size of $GF(p)$ is $\Theta(n^2)$, as described above. Using this
information we can bound the probability of error:
\begin{equation*}
  Pr(A(r) = B(r) \mid A \neq B) \le \frac{n}{|GF(p)|} \sim \frac{1}{n}
\end{equation*}

How much communication is needed? Alice must send $r$ and $A(r)$, each an
element of $GF(p)$ with $p < 2n^2$, so each takes roughly $2 \log n$ bits.
Bob must return one bit saying whether $A(r) = B(r)$. The total
communication complexity is therefore
\begin{equation*}
  2 \left( 2\log n \right) + 1 = O(\log n)
\end{equation*}
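A minimal simulation of this protocol (an illustrative sketch; the prime
and the helper names are chosen for the example, not taken from the
lecture) shows the error rate in practice:

\begin{verbatim}
# Sketch of the randomized equality protocol over GF(p), n^2 < p < 2n^2.
import random

def poly_eval(bits, r, p):
    # Evaluate sum(bits[i] * r^i) mod p via Horner's rule.
    acc = 0
    for b in reversed(bits):
        acc = (acc * r + b) % p
    return acc

def randomized_eq(a_bits, b_bits, p):
    # Alice sends (r, A(r)); Bob answers 1 iff B(r) == A(r).
    r = random.randrange(p)
    return 1 if poly_eval(b_bits, r, p) == poly_eval(a_bits, r, p) else 0

n = 8
p = 67                          # a prime with n^2 = 64 < p < 128 = 2n^2
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 1, 0, 0, 1, 1]    # differs from a in one position

# Distinct degree <= n-1 polynomials agree on at most n-1 of the p
# points, so the error rate is at most (n-1)/p, about 0.10 here.
trials = 10000
errors = sum(randomized_eq(a, b, p) for _ in range(trials))
print("empirical error rate:", errors / trials)
\end{verbatim}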
%\newpage
%\bibliographystyle{plain}
%\bibliography{L2}

\end{document}