diff --git a/lectures/lecture12/lecturenotes.tex b/lectures/lecture12/lecturenotes.tex
deleted file mode 100644
index 15d3324e1a0e45b2a50c72bbc80ed92f12f5de7b..0000000000000000000000000000000000000000
--- a/lectures/lecture12/lecturenotes.tex
+++ /dev/null
@@ -1,105 +0,0 @@
\documentclass[12pt]{article}

\input{../../texstuff/fonts.sty}
\input{../../texstuff/notepaper.sty}

\usepackage{outlines}

\title{CS6963 Lecture \#12}
\author{Robert Ricci}
\date{February 20, 2014}

\begin{document}

\maketitle

\begin{outline}

\1 Left over from last time

\1 Overall goal of experiment design:
 \2 Learn as much as possible from as few experiments as possible

\1 Some terminology
 \2 Response variable: outcome
 \3 \textit{Why call it a variable?}
 \3 \textit{Examples of non-performance variables?}
 \2 Factors: things you change
 \3 \textit{Why call them predictor variables?}
 \2 Primary / secondary factors
 \3 \textit{How to decide which ones to use?}
 \2 Replication: How many repetitions

\1 Important properties of experiment design
 \2 Every experiment should get you closer to answering one of the questions
 \2 You should be able to explain all behavior in the results; if not, you
 may need more experiments
 \2 Control all variables you can
 \2 Measure the variables you can't control

\1 Interacting factors
 \2 Understand which of your factors interact, and which are independent
 \2 Saves you a lot of time not running experiments that don't reveal more
 information
 \2 May take a few experiments to determine
 \2 A good (negative) example: the FV paper
 \2 If you know for sure they are independent, make sure to say so in the
 paper

\1 Common mistakes
 \2 Ignoring variation in experimental error
 \2 Not controlling params
 \2 One factor at a time experiments
 \2 Not isolating effects
 \2 Too many experiments
 \3 Break into several subevals to answer questions, evaluate particular
 pieces of the SUT

\1 Discussing design of lab 1
 \2 Our goal: Which variant of TCP should I run on my webserver?
 \2 Congestion control: cubic vs. newreno
 \2 SACK or no SACK? (orthogonal to CC algo)
 \2 ``Doesn't make a difference'' is an okay answer, but have to prove it.

\1 Questions to answer
 \2 Which provides my clients the best experience?
 \3 Low latency to load page
 \3 High throughput
 \2 Which allows me to serve more users?
 \3 Resources on server
 \3 Fairness between clients

\1 Metrics
 \2 Time to download one page
 \2 Error rate
 \2 Throughput
 \2 Jain's fairness index
 \2 Resource usage on server (eg. CPU)

\1 Parameters and factors
 \2 TCP parameters
 \2 Number of simultaneous clients
 \2 File size distribution
 \2 Which webserver?
 \2 Client load generator?
 \2 Packet loss
 \2 Client RTT
 \2 Client bandwidth

\1 Tools to use
 \2 Webserver: which one (apache 2.2?)
 \2 \texttt{tc}
 \2 Client workload generator (\texttt{httperf?})

\1 Experiments to run
 \2 Determine whether SACK and CC are interacting factors
 \2 Max out number of clients

\1 How to present results

\1 For next time
 \2 Finish paper analysis before class

\end{outline}

\end{document}
diff --git a/lectures/lecture13/lecturenotes.tex b/lectures/lecture13/lecturenotes.tex
index a16afd7d35901a1d6b43441fd41702455c22c355..15d3324e1a0e45b2a50c72bbc80ed92f12f5de7b 100644
--- a/lectures/lecture13/lecturenotes.tex
+++ b/lectures/lecture13/lecturenotes.tex
@@ -1,19 +1,13 @@
\documentclass[12pt]{article}
\usepackage[nomath]{fontspec}
\usepackage{sectsty}
\usepackage[margin=1.25in]{geometry}
\usepackage{outlines}
\usepackage{pdfpages}
+\input{../../texstuff/fonts.sty}
+\input{../../texstuff/notepaper.sty}
\setmainfont[Numbers=OldStyle,Ligatures=TeX]{Equity Text A}
\setmonofont{Inconsolata}
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
+\usepackage{outlines}
\title{CS6963 Lecture \#13}
+\title{CS6963 Lecture \#12}
\author{Robert Ricci}
\date{February 27, 2014}
+\date{February 20, 2014}
\begin{document}
@@ -21,62 +15,91 @@
\begin{outline}
\1 What we decided last time:
 \2 Remember, overall goal is to decide whether to use NewReno or cubic TCP on
 our webserver, and whether or not to enable SACK
 \2 SUT includes network and NIC, but not webserver or client
 \2 Questions to answer:
 \3 Which variant provides higher throughput / goodput?
 \3 Which variant gives lower delay?
 \3 Which provides better fairness between clients?
 \3 How many TCP sessions / clients can we support?
 \2 Secondary things to look at:
 \3 How many retransmissions are caused?
 \3 What is the utilization on the server NIC?
 \3 What is the interaction between the application and TCP?

\1 Things to decide for today:
 \2 What metrics will we use?
 \2 What are the parameters?
 \2 Which ones will we vary as factors?
 \3 How will we decide what values they should take on?
 \2 What will we use as our workload generator?
 \2 How will we collect measurements?
 \2 What will our major set of evaluations be?
 \2 How will we present results?

\0%
\begin{description}

\item[Client workload generator: Naveen, Binh] \hfill \\

 A tool for actually making HTTP requests; we should be reasonably confident that this tool does not itself cause bottlenecks or timing artifacts, since that would make it part of the system under test, which we decided we didn't want. This tool needs to be capable of, or scriptable to, making requests to URLs according to some distribution and with some timing model (eg. how long do I wait between page loads, and do I wait for the previous page to finish loading before loading the next one?)

\item[Server to respond to clients: Christopher, Hyunwook] \hfill \\

 Like the client, we need to be confident that the server introduces minimal effects on the system so that it does not become part of the system under test. We need to be able to serve objects of varying size according to some distribution.

\item[Tools for analyzing client to server communication: Ren, Philip] \hfill \\

 We decided that we would capture the response variables (bandwidth, latency, etc.) by capturing packets on the wire. So, we will need to decide what tools to use to capture these packets, and we will need to be able to compute the higher-level metrics from the raw traces that we collect.

\item[Data about distribution of web requests: Chaitu, Aisha] \hfill \\

 We need to cause the client program to make requests according to some distribution that is representative of what real webservers see; for example, what are the sizes of objects fetched, what is the time between objects being fetched, and what is the ratio of data downloaded from the server vs. data uploaded to it. As much as possible, we should use distributions gathered from real systems, so we should try to find studies, traces, datasets, etc.

\item[Data about distribution of client network performance: Junguk, Makito] \hfill \\

 We need to model the network conditions between the clients and the server: what is their round-trip latency to the server, what bandwidth do they have available, what packet loss rates do they see, etc. As with the distribution of requests, we should try to use distributions gathered from real networks, and should look for studies, etc. giving us distributions to use for these values.

\end{description}
+\1 Left over from last time
+
+\1 Overall goal of experiment design:
+ \2 Learn as much as possible from as few experiments as possible
+
+\1 Some terminology
+ \2 Response variable: outcome
+ \3 \textit{Why call it a variable?}
+ \3 \textit{Examples of non-performance variables?}
+ \2 Factors: things you change
+ \3 \textit{Why call them predictor variables?}
+ \2 Primary / secondary factors
+ \3 \textit{How to decide which ones to use?}
+ \2 Replication: How many repetitions
+
+\1 Important properties of experiment design
+ \2 Every experiment should get you closer to answering one of the questions
+ \2 You should be able to explain all behavior in the results; if not, you
+ may need more experiments
+ \2 Control all variables you can
+ \2 Measure the variables you can't control
+
+\1 Interacting factors
+ \2 Understand which of your factors interact, and which are independent
+ \2 Saves you a lot of time not running experiments that don't reveal more
+ information
+ \2 May take a few experiments to determine
+ \2 A good (negative) example: the FV paper
+ \2 If you know for sure they are independent, make sure to say so in the
+ paper
+
+\1 Common mistakes
+ \2 Ignoring variation in experimental error
+ \2 Not controlling params
+ \2 One factor at a time experiments
+ \2 Not isolating effects
+ \2 Too many experiments
+ \3 Break into several subevals to answer questions, evaluate particular
+ pieces of the SUT
+
+\1 Discussing design of lab 1
+ \2 Our goal: Which variant of TCP should I run on my webserver?
+ \2 Congestion control: cubic vs. newreno
+ \2 SACK or no SACK? (orthogonal to CC algo)
+ \2 ``Doesn't make a difference'' is an okay answer, but have to prove it.
+
+\1 Questions to answer
+ \2 Which provides my clients the best experience?
+ \3 Low latency to load page
+ \3 High throughput
+ \2 Which allows me to serve more users?
+ \3 Resources on server
+ \3 Fairness between clients
+
+\1 Metrics
+ \2 Time to download one page
+ \2 Error rate
+ \2 Throughput
+ \2 Jain's fairness index
+ \2 Resource usage on server (eg. CPU)
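As a quick illustrative sketch (mine, not from the original notes), Jain's fairness index over per-client throughputs $x_1, \ldots, x_n$ is $(\sum x_i)^2 / (n \sum x_i^2)$:

```python
def jains_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).

    Equals 1.0 when all clients get equal throughput, and falls
    toward 1/n as a single client dominates."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jains_index([10.0, 10.0, 10.0, 10.0]))  # every client gets an equal share
print(jains_index([40.0, 0.0, 0.0, 0.0]))     # one client gets everything
```

The index is scale-independent, so it works equally well on absolute throughputs or normalized shares.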
+
+\1 Parameters and factors
+ \2 TCP parameters
+ \2 Number of simultaneous clients
+ \2 File size distribution
+ \2 Which webserver?
+ \2 Client load generator?
+ \2 Packet loss
+ \2 Client RTT
+ \2 Client bandwidth
+
+\1 Tools to use
+ \2 Webserver: which one (apache 2.2?)
+ \2 \texttt{tc}
+ \2 Client workload generator (\texttt{httperf?})
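A minimal sketch of the kind of \texttt{tc} setup we might use for the client network conditions; the interface name, rate, delay, and loss values below are placeholders for illustration, not decisions the class made:

```shell
# Placeholder shaping setup (requires root). eth0, 100mbit, 50ms, and 0.5%
# are illustrative values only.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 50ms loss 0.5%
```

In a real experiment each client would likely get its own class and netem qdisc so that per-client RTT and loss can differ.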
+
+\1 Experiments to run
+ \2 Determine whether SACK and CC are interacting factors
+ \2 Max out number of clients
+
+\1 How to present results
\1 For next time
 \2 Read Chapter 14, linear regression
+ \2 Finish paper analysis before class
\end{outline}
\newpage

\includepdf[pages={1}]{boarddrawing.pdf}

\end{document}
diff --git a/lectures/lecture13/boarddrawing.pdf b/lectures/lecture14/boarddrawing.pdf
similarity index 100%
rename from lectures/lecture13/boarddrawing.pdf
rename to lectures/lecture14/boarddrawing.pdf
diff --git a/lectures/lecture14/lecturenotes.tex b/lectures/lecture14/lecturenotes.tex
index 2382cd8a6e2d42373f4183802be9d30be023f486..a16afd7d35901a1d6b43441fd41702455c22c355 100644
--- a/lectures/lecture14/lecturenotes.tex
+++ b/lectures/lecture14/lecturenotes.tex
@@ -4,15 +4,16 @@
\usepackage{sectsty}
\usepackage[margin=1.25in]{geometry}
\usepackage{outlines}
+\usepackage{pdfpages}
\setmainfont[Numbers=OldStyle,Ligatures=TeX]{Equity Text A}
\setmonofont{Inconsolata}
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#1}
+\title{CS6963 Lecture \#13}
\author{Robert Ricci}
\date{March 4, 2014}
+\date{February 27, 2014}
\begin{document}
@@ -20,101 +21,62 @@
\begin{outline}
\1 Today: How well does your data fit a line?
 \2 More complicated regressions exist, of course, but we'll stick with this one for now
 \2 Eyeballing is just not rigorous enough

\1 Basic model: $y_i = b_0 + b_1x_i + e_i$
 \2 $y_i$ is the prediction
 \2 $b_0$ is the yintercept
 \2 $b_1$ is the slope
 \2 $x_i$ is the predictor
 \2 $e_i$ is the error
 \2 \textit{Which of these are random variables?}
 \3 A: All but $x_i$; the $b$s are estimated from random variables, and $e$ is a difference of random variables
 \3 So, we can compute statistics on them

\1 Two criteria for getting $b$s
 \2 Zero total error
 \2 Minimize SSE (sum of squared errors)
 \2 Example of why one is not enough: two points, infinite lines with zero total error
 \2 Squared errors always positive, so this criterion alone could overshoot
 or undershoot

\1 Deriving $b_0$ is easy
 \2 Solve for $e_i$: $e_i = y_i - (b_0 + b_1 x_i)$
 \2 Take the mean over all $i$: $\overline{e} = \overline{y} - b_0 - b_1 \overline{x}$
 \2 Set the mean error to 0 to get $b_0 = \overline{y} - b_1 \overline{x}$
 \2 Now we just need $b_1$

\1 Deriving $b_1$ is harder
 \2 SSE = sum of errors squared over all $i$
 \2 We want a minimum value for this
 \2 It's a function with one local minimum
 \2 So we can differentiate and look for zero
 \2 $s_y^2 - 2b_1 s_{xy} + b_1^2 s_x^2$, then take the derivative
 \2 $s_{xy}$ is the covariance of $x$ and $y$ (see p. 181)
 \2 In the end, gives us $b_1 = \frac{s_{xy}}{s_x^2}$
 \3 Covariance of $x$ and $y$ divided by variance of $x$
 \3 $\frac{\sum{xy} - n \overline{x} \overline{y}}{\sum{x^2} - n(\overline{x})^2}$

\1 SS*
 \2 SSE = Sum of squared errors
 \2 SST = total sum of squares (TSS): difference from mean
 \2 SS0 = $n\overline{y}^2$: square $\overline{y}$, $n$ times
 \2 SSY = sum of the squares of all $y$, so SST = SSY - SS0
 \2 SSR = variation explained by the regression: SST - SSE

\1 Point of above: we can talk about two sources that explain variance: sum of
 squared difference from mean, and sum of errors
 \2 $R^2 = \frac{SSR}{SST}$
 \2 The ratio is the amount that was explained by the regression; close to 1 is good (1 is the max possible)
 \2 If the regression sucks, SSR will be close to 0

\1 Remember, our error terms and $b$s are random variables
 \2 We can calculate stddev, etc. on them
 \2 Variance is $s_e^2 = \frac{SSE}{n-2}$; this is MSE, the mean squared error
 \2 Confidence intervals, too
 \2 \textit{What do confidence intervals tell us in this case?}
 \3 A: Our confidence in how close to the true slope our estimate is
 \3 For example: How sure are we that two slopes are actually different
 \2 \textit{When would we want to show that the confidence interval for $b_1$ includes zero?}

\1 Confidence intervals for predictions
 \2 Confidence intervals tightest near middle of sample
 \2 If we go far out, our confidence is low, which makes intuitive sense
 \2 $s_e \big(\frac{1}{m} + \frac{1}{n} + \frac{(x_p - \overline{x})^2}{\sum{x^2} - n \overline{x}^2}\big)^\frac{1}{2}$
 \2 $s_e$ is the stddev of the error
 \2 $m$ is how many predictions we are making
 \2 $p$ is value at which we are predicting ($x$)
 \2 $x_p  \overline{x}$ is capturing difference from center of sample
 \2 \textit{Why is it smaller for larger $m$?}
 \3 Accounts for variance, assumption of normal distribution

\1 Residuals
 \2 AKA error values
 \2 We can expect several things from them if our assumptions about regressions are correct
 \2 They will not show trends: \textit{why would this be a problem}
 \3 Tells us that an assumption has been violated
 \3 If not randomly distributed for different $x$, tells us there is a systematic error at high or low values; error and predictor are not independent
 \2 QQ plot of error distribution vs. normal distribution
 \2 Want the spread of stddev to be constant across range
+\1 What we decided last time:
+ \2 Remember, overall goal is to decide whether to use NewReno or cubic TCP on
+ our webserver, and whether or not to enable SACK
+ \2 SUT includes network and NIC, but not webserver or client
+ \2 Questions to answer:
+ \3 Which variant provides higher throughput / goodput?
+ \3 Which variant gives lower delay?
+ \3 Which provides better fairness between clients?
+ \3 How many TCP sessions / clients can we support?
+ \2 Secondary things to look at:
+ \3 How many retransmissions are caused?
+ \3 What is the utilization on the server NIC?
+ \3 What is the interaction between the application and TCP?
+
+\1 Things to decide for today:
+ \2 What metrics will we use?
+ \2 What are the parameters?
+ \2 Which ones will we vary as factors?
+ \3 How will we decide what values they should take on?
+ \2 What will we use as our workload generator?
+ \2 How will we collect measurements?
+ \2 What will our major set of evaluations be?
+ \2 How will we present results?
+
+\0%
+\begin{description}
+
+\item[Client workload generator: Naveen, Binh] \hfill \\
+
+ A tool for actually making HTTP requests; we should be reasonably confident that this tool does not itself cause bottlenecks or timing artifacts, since that would make it part of the system under test, which we decided we didn't want. This tool needs to be capable of, or scriptable to, making requests to URLs according to some distribution and with some timing model (eg. how long do I wait between page loads, and do I wait for the previous page to finish loading before loading the next one?)
+
+\item[Server to respond to clients: Christopher, Hyunwook] \hfill \\
+
+ Like the client, we need to be confident that the server introduces minimal effects on the system so that it does not become part of the system under test. We need to be able to serve objects of varying size according to some distribution.
+
+\item[Tools for analyzing client to server communication: Ren, Philip] \hfill \\
+
+ We decided that we would capture the response variables (bandwidth, latency, etc.) by capturing packets on the wire. So, we will need to decide what tools to use to capture these packets, and we will need to be able to compute the higher-level metrics from the raw traces that we collect.
+
+\item[Data about distribution of web requests: Chaitu, Aisha] \hfill \\
+
+ We need to cause the client program to make requests according to some distribution that is representative of what real webservers see; for example, what are the sizes of objects fetched, what is the time between objects being fetched, and what is the ratio of data downloaded from the server vs. data uploaded to it. As much as possible, we should use distributions gathered from real systems, so we should try to find studies, traces, datasets, etc.
+
+\item[Data about distribution of client network performance: Junguk, Makito] \hfill \\
+
+ We need to model the network conditions between the clients and the server: what is their round-trip latency to the server, what bandwidth do they have available, what packet loss rates do they see, etc. As with the distribution of requests, we should try to use distributions gathered from real networks, and should look for studies, etc. giving us distributions to use for these values.
+
+\end{description}
\1 For next time
 \2 Start filling out your section in cs6963lab1 repo
 \2 Be careful to only modify parts of the .tex file for your section
 \3 Unless you want to suggest a broader change
 \2 Fork it, give your partner access, send me a merge request before
 the start of class Thursday
 \2 Check in any notes you create, reference papers
 \2 You are empowered to make decisions
 \2 Goal is to describe in sufficient detail that people can start
 implementing
 \2 We will try to finish up our plan by deciding what experiments to run
 and how to present results on Thursday
 \2 Need next two paper volunteers, let's get them out before spring
 break
+ \2 Read Chapter 14, linear regression
\end{outline}
+\newpage
+
+\includepdf[pages={1}]{boarddrawing.pdf}
+
\end{document}
diff --git a/lectures/lecture15/lecturenotes.tex b/lectures/lecture15/lecturenotes.tex
index 305f01d7051f782257f6b292705a8d3e550989ce..2382cd8a6e2d42373f4183802be9d30be023f486 100644
--- a/lectures/lecture15/lecturenotes.tex
+++ b/lectures/lecture15/lecturenotes.tex
@@ -10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#15}
+\title{CS6963 Lecture \#1}
\author{Robert Ricci}
\date{March 6, 2014}
+\date{March 4, 2014}
\begin{document}
@@ -20,40 +20,100 @@
\begin{outline}
\1 For today: finish planning the lab
+\1 Today: How well does your data fit a line?
+ \2 More complicated regressions exist, of course, but we'll stick with this one for now
+ \2 Eyeballing is just not rigorous enough
\1 Executive decisions I made (can discuss, though!)
 \2 Keep one distribution for client behavior
 \2 One distance per experiment?
 \2 Use Linux TC for traffic shaping
 \2 100 Mbit server NIC
 \2 Draw the topology
+\1 Basic model: $y_i = b_0 + b_1x_i + e_i$
+ \2 $y_i$ is the prediction
+ \2 $b_0$ is the yintercept
+ \2 $b_1$ is the slope
+ \2 $x_i$ is the predictor
+ \2 $e_i$ is the error
+ \2 \textit{Which of these are random variables?}
+ \3 A: All but $x_i$; the $b$s are estimated from random variables, and $e$ is a difference of random variables
+ \3 So, we can compute statistics on them
\1 Major evaluations
 \2 cf. questions from Lecture 13
 \2 Calibrate how many runs to do
 \2 How to present data: tables, graphs, etc.
+\1 Two criteria for getting $b$s
+ \2 Zero total error
+ \2 Minimize SSE (sum of squared errors)
+ \2 Example of why one is not enough: two points, infinite lines with zero total error
+ \2 Squared errors always positive, so this criterion alone could overshoot
+ or undershoot
\1 Interfaces between the pieces
 \2 Get distributions of session sizes
 \2 Set client conditions
+\1 Deriving $b_0$ is easy
+ \2 Solve for $e_i$: $e_i = y_i - (b_0 + b_1 x_i)$
+ \2 Take the mean over all $i$: $\overline{e} = \overline{y} - b_0 - b_1 \overline{x}$
+ \2 Set the mean error to 0 to get $b_0 = \overline{y} - b_1 \overline{x}$
+ \2 Now we just need $b_1$
\1 Other grungy stuff
 \2 Time synchronization
 \2 Clients into traffic shaping pipes
 \2 Calculating stats from packet streams
+\1 Deriving $b_1$ is harder
+ \2 SSE = sum of errors squared over all $i$
+ \2 We want a minimum value for this
+ \2 It's a function with one local minimum
+ \2 So we can differentiate and look for zero
+ \2 $s_y^2 - 2b_1 s_{xy} + b_1^2 s_x^2$, then take the derivative
+ \2 $s_{xy}$ is the covariance of $x$ and $y$ (see p. 181)
+ \2 In the end, gives us $b_1 = \frac{s_{xy}}{s_x^2}$
+ \3 Covariance of $x$ and $y$ divided by variance of $x$
+ \3 $\frac{\sum{xy} - n \overline{x} \overline{y}}{\sum{x^2} - n(\overline{x})^2}$
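The closed forms above can be sanity-checked numerically; this is an illustrative sketch of my own (not course code) computing $b_1 = \frac{\sum xy - n\overline{x}\,\overline{y}}{\sum x^2 - n\overline{x}^2}$ and $b_0 = \overline{y} - b_1\overline{x}$:

```python
def fit_line(xs, ys):
    """Least-squares slope b1 and intercept b0 from the closed forms:
    b1 = (sum xy - n*xbar*ybar) / (sum x^2 - n*xbar^2), b0 = ybar - b1*xbar."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum(x * y for x, y in zip(xs, ys)) - n * xbar * ybar
    den = sum(x * x for x in xs) - n * xbar * xbar
    b1 = num / den
    b0 = ybar - b1 * xbar
    return b0, b1

# Data lying exactly on y = 2x + 1 should recover b1 = 2, b0 = 1
b0, b1 = fit_line([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
```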
+
+\1 SS*
+ \2 SSE = Sum of squared errors
+ \2 SST = total sum of squares (TSS): difference from mean
+ \2 SS0 = $n\overline{y}^2$: square $\overline{y}$, $n$ times
+ \2 SSY = sum of the squares of all $y$, so SST = SSY - SS0
+ \2 SSR = variation explained by the regression: SST - SSE
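A small sketch (mine, not from the notes) of how $R^2$ falls out of these sums, using SST = SSY - SS0 and SSR = SST - SSE:

```python
def r_squared(xs, ys, b0, b1):
    """R^2 = SSR / SST for a fitted line y = b0 + b1*x."""
    n = len(ys)
    ybar = sum(ys) / n
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum(y * y for y in ys) - n * ybar * ybar  # SSY - SS0
    return (sst - sse) / sst  # SSR / SST
```

A perfect fit gives 1.0; a line that explains nothing beyond the mean gives 0.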
\1 Next step assignments
 \2 Continue to divide up by same areas?
 \2 Estimate of the amount of work to do
+\1 Point of above: we can talk about two sources that explain variance: sum of
+ squared difference from mean, and sum of errors
+ \2 $R^2 = \frac{SSR}{SST}$
+ \2 The ratio is the amount that was explained by the regression; close to 1 is good (1 is the max possible)
+ \2 If the regression sucks, SSR will be close to 0
+
+\1 Remember, our error terms and $b$s are random variables
+ \2 We can calculate stddev, etc. on them
+ \2 Variance is $s_e^2 = \frac{SSE}{n-2}$; this is MSE, the mean squared error
+ \2 Confidence intervals, too
+ \2 \textit{What do confidence intervals tell us in this case?}
+ \3 A: Our confidence in how close to the true slope our estimate is
+ \3 For example: How sure are we that two slopes are actually different
+ \2 \textit{When would we want to show that the confidence interval for $b_1$ includes zero?}
+
+\1 Confidence intervals for predictions
+ \2 Confidence intervals tightest near middle of sample
+ \2 If we go far out, our confidence is low, which makes intuitive sense
+ \2 $s_e \big(\frac{1}{m} + \frac{1}{n} + \frac{(x_p - \overline{x})^2}{\sum{x^2} - n \overline{x}^2}\big)^\frac{1}{2}$
+ \2 $s_e$ is the stddev of the error
+ \2 $m$ is how many predictions we are making
+ \2 $p$ is value at which we are predicting ($x$)
+ \2 $x_p  \overline{x}$ is capturing difference from center of sample
+ \2 \textit{Why is it smaller for larger $m$?}
+ \3 Accounts for variance, assumption of normal distribution
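The prediction-interval formula is easy to sketch in code (illustrative only; multiply the result by a $t$-quantile to get an actual interval):

```python
import math

def prediction_stddev(s_e, m, x_p, xs):
    """Std dev for the mean of m future observations at x = x_p:
    s_e * (1/m + 1/n + (x_p - xbar)^2 / (sum x^2 - n*xbar^2)) ** 0.5"""
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum(x * x for x in xs) - n * xbar * xbar
    return s_e * math.sqrt(1 / m + 1 / n + (x_p - xbar) ** 2 / sxx)

xs = [1.0, 2.0, 3.0, 4.0]  # hypothetical sample of predictor values
```

It shrinks as $m$ grows and widens as $x_p$ moves away from $\overline{x}$, matching the two points above.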
+
+\1 Residuals
+ \2 AKA error values
+ \2 We can expect several things from them if our assumptions about regressions are correct
+ \2 They will not show trends: \textit{why would this be a problem}
+ \3 Tells us that an assumption has been violated
+ \3 If not randomly distributed for different $x$, tells us there is a systematic error at high or low values; error and predictor are not independent
+ \2 QQ plot of error distribution vs. normal distribution
+ \2 Want the spread of stddev to be constant across range
\1 For next time
 \2 I won't be here
 \2 Guest lectures by Xing Lin and Weibin Sun
 \2 Papers posted
 \2 Form not required
 \2 Do think actively about questions as you read the papers
 \2 You are encouraged to suggest ways to improve the evaluations
+ \2 Start filling out your section in cs6963lab1 repo
+ \2 Be careful to only modify parts of the .tex file for your section
+ \3 Unless you want to suggest a broader change
+ \2 Fork it, give your partner access, send me a merge request before
+ the start of class Thursday
+ \2 Check in any notes you create, reference papers
+ \2 You are empowered to make decisions
+ \2 Goal is to describe in sufficient detail that people can start
+ implementing
+ \2 We will try to finish up our plan by deciding what experiments to run
+ and how to present results on Thursday
+ \2 Need next two paper volunteers, let's get them out before spring
+ break
\end{outline}
diff --git a/lectures/lecture16/lecturenotes.tex b/lectures/lecture16/lecturenotes.tex
index 0e84818eb66325ccf3e9c46372c2074770339cf4..305f01d7051f782257f6b292705a8d3e550989ce 100644
--- a/lectures/lecture16/lecturenotes.tex
+++ b/lectures/lecture16/lecturenotes.tex
@@ -10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#16}
+\title{CS6963 Lecture \#15}
\author{Robert Ricci}
\date{March 25, 2014}
+\date{March 6, 2014}
\begin{document}
@@ -20,121 +20,40 @@
\begin{outline}
\1 From last time
 \2 Thanks to Junguk and Makito for the scripts!
 \2 Quick status: client, server, network conditions, client request
 sizes, analysis

\1 Today: Talking about different prob distributions
 \2 You may run into the need to generate data in these distributions,
 or to recognize them in data that you get
 \2 MLE: Maximum likelihood estimator: estimate parameters, each distrib
 has its own
 \2 Don't memorize the formulas, just be familiar with the concepts so that
 you can look them up when needed

\1 Discrete distributions

\1 Bernoulli
 \2 Just 1 and 0
 \2 \emph{Why discrete?}
 \2 Probability of a 1 is $p$
 \2 Mean is $p$
 \2 Variance $p(1-p)$, at its lowest when $p$ is 0 or 1
 \2 \emph{Examples of things modeled by Bernoulli distribs?}

\1 Binomial
 \2 Number of successes ($x$) in a sequence of $n$ Bernoulli trials
 \2 \emph{Why discrete?}
 \2 So it has both $p$ and $n$ as params
 \2 Mean: $np$
 \2 Var: $n$ times var of Bernoulli
 \2 \emph{Examples of things modeled by it?}
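As an illustrative sketch (mine, not from the notes), a Binomial($n$, $p$) draw is just the number of successes in $n$ Bernoulli($p$) trials, so the sample mean should land near $np$:

```python
import random

def binomial_draw(n, p, rng):
    """One Binomial(n, p) draw: count successes in n Bernoulli(p) trials."""
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(7)   # fixed seed so the run is repeatable
n, p = 20, 0.3
mean = sum(binomial_draw(n, p, rng) for _ in range(20_000)) / 20_000
# mean should be close to n*p = 6
```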

\1 Geometric
 \2 Number of trials up to and including first success
 \2 Param is just $p$
 \2 Mean is $1/p$
 \2 Remember, only for independent events!
 \2 \emph{Examples?}
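A quick simulation sketch (not course code) confirms the mean of $1/p$: count independent Bernoulli trials up to and including the first success.

```python
import random

def geometric_trial(p, rng):
    """Trials up to and including the first success, with success prob p."""
    count = 1
    while rng.random() >= p:
        count += 1
    return count

rng = random.Random(0)   # fixed seed so the run is repeatable
p = 0.25
mean = sum(geometric_trial(p, rng) for _ in range(100_000)) / 100_000
# mean should be close to 1/p = 4
```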

\1 Negative binomial
 \2 How many successes before $r$ failures
 \2 Can invert success of course
 \2 Now you have $p$ and $r$ as parameters
 \2 Mean: $\frac{pr}{1-p}$
 \2 \emph{What might you model with it?}

\1 Poisson
 \2 ``The probability of a given number of events occurring in a fixed
 interval of time and/or space if these events occur with a known
 average rate and independently of the time since the last event''
 \2 Produces a number of arrivals in a given time
 \2 Particularly good if the sources are independent
 \2 Parameter is mean ($\lambda$)
 \2 Very often used for arrivals: eg. arrival of packets at a queue or
 requests at a server
 \2 Can be used over particular intervals of time; eg. daytime, to keep
 the iid assumption
 \2 \emph{Examples?}
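One standard way to generate Poisson counts (Knuth's method, not something the notes prescribe) is to multiply uniforms until the product drops below $e^{-\lambda}$; an illustrative sketch:

```python
import math
import random

def poisson_arrivals(lam, rng):
    """One Poisson(lam) draw via Knuth's method."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod < limit:
            return k
        k += 1

rng = random.Random(1)   # fixed seed so the run is repeatable
lam = 3.0
draws = [poisson_arrivals(lam, rng) for _ in range(50_000)]
mean = sum(draws) / len(draws)
# mean should be close to lam = 3
```

Note this method is only efficient for small $\lambda$; libraries use faster algorithms for large rates.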

\1 Continuous distributions

\1 Uniform: All possibilities equally likely
 \2 There is a discrete version of course too
 \2 Params: $a$ to $b$
 \2 Mean: $\frac{a+b}{2}$
 \2 Usually generated, not measured

\1 Exponential
 \2 Models length of time between arrivals (compare to Poisson)
 \2 Parameter $\lambda$: inverse of the mean
 \3 Sometimes called rate, eg. time between arrivals
 \2 Memoryless: eg. time between arrivals
 \3 No other continuous distribution has this property
 \3 This property makes analysis simple
 \3 But you have to be sure it's true!
 \2 \emph{Examples?}
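Exponential interarrival times are easy to generate by inverting the CDF: if $U \sim \mathrm{Uniform}(0,1)$ then $-\ln(1-U)/\lambda$ is Exponential($\lambda$). An illustrative sketch (mine, not from the notes):

```python
import math
import random

def exp_interarrival(lam, rng):
    """One Exponential(lam) draw via the inverse-CDF method."""
    # 1 - rng.random() is in (0, 1], so the log is always defined
    return -math.log(1.0 - rng.random()) / lam

rng = random.Random(42)   # fixed seed so the run is repeatable
lam = 2.0
mean = sum(exp_interarrival(lam, rng) for _ in range(100_000)) / 100_000
# mean should be close to 1/lam = 0.5
```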

\1 Tails: Can be on both sides
 \2 Heavytailed: Not exponentially bounded
 \2 Fattailed: usually in reference to normal
 \2 Longtailed: usually in reference to exponential
 \2 Means ``unlikely'' things are actually more common than one might expect
 \2 Long tail means ``light'' somewhere else

\1 Normal
 \2 We've talked plenty about, remember that sum of iid variables tends
 towards normal

\1 Lognormal
 \2 Logarithms turn multiplication into addition: $\log xy = \log x + \log y$
 \2 So, lognormal is like normal, but for products of iid variables
 \2 Useful for things that accumulate by multiplication, for example errors
 \2 \emph{Examples?}

\1 Pareto
 \2 Produces IID interarrival times
 \2 Discrete equivalent is Zipf
 \2 Power law: a few account for the largest portion; eg. ``the 99\%''
 \2 Selfsimilar: the same at different scales (think fractal)
 \2 ``Bursty on many or all time scales''
 \2 Values correlated with future incidents
 \2 Compare to other distributions\ldots eg Poisson
 \2 Can be constructed with heavy-tailed ON/OFF sources
 \2 Has either no mean or infinite variance

\1 Weibull
 \2 Good for modeling mean time to failure
 \2 Parameters are scale ($\lambda$) and shape ($k$)
 \2 ``A value of k < 1 indicates that the failure rate decreases over time''
 \2 ``A value of k = 1 indicates that the failure rate is constant over time.''
 \2 ``A value of k > 1 indicates that the failure rate increases with time.''
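The three $k$ regimes can be checked from the Weibull hazard (instantaneous failure rate) $h(t) = (k/\lambda)(t/\lambda)^{k-1}$, a standard fact rather than something from the notes; a minimal sketch:

```python
def weibull_hazard(t, lam, k):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1)
    for scale lam > 0 and shape k > 0, at time t > 0."""
    return (k / lam) * (t / lam) ** (k - 1)

# k < 1: failure rate falls over time; k = 1: constant; k > 1: rises
```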

\1 For next time:
 \2 Read papers for Analysis 3
 \2 Posted one reading for next Thursday
 \2 Lab due a week from today
+\1 For today: finish planning the lab
+
+\1 Executive decisions I made (can discuss, though!)
+ \2 Keep one distribution for client behavior
+ \2 One distance per experiment?
+ \2 Use Linux TC for traffic shaping
+ \2 100 Mbit server NIC
+ \2 Draw the topology
+
+\1 Major evaluations
+ \2 cf. questions from Lecture 13
+ \2 Calibrate how many runs to do
+ \2 How to present data: tables, graphs, etc.
+
+\1 Interfaces between the pieces
+ \2 Get distributions of session sizes
+ \2 Set client conditions
+
+\1 Other grungy stuff
+ \2 Time synchronization
+ \2 Clients into traffic shaping pipes
+ \2 Calculating stats from packet streams
+
+\1 Next step assignments
+ \2 Continue to divide up by same areas?
+ \2 Estimate of the amount of work to do
+
+\1 For next time
+ \2 I won't be here
+ \2 Guest lectures by Xing Lin and Weibin Sun
+ \2 Papers posted
+ \2 Form not required
+ \2 Do think actively about questions as you read the papers
+ \2 You are encouraged to suggest ways to improve the evaluations
\end{outline}
diff --git a/lectures/lecture16/images/Binomial_cdf.svg b/lectures/lecture17/images/Binomial_cdf.svg
similarity index 100%
rename from lectures/lecture16/images/Binomial_cdf.svg
rename to lectures/lecture17/images/Binomial_cdf.svg
diff --git a/lectures/lecture16/images/Binomial_pmf.svg b/lectures/lecture17/images/Binomial_pmf.svg
similarity index 100%
rename from lectures/lecture16/images/Binomial_pmf.svg
rename to lectures/lecture17/images/Binomial_pmf.svg
diff --git a/lectures/lecture16/images/Exponential_cdf.svg b/lectures/lecture17/images/Exponential_cdf.svg
similarity index 100%
rename from lectures/lecture16/images/Exponential_cdf.svg
rename to lectures/lecture17/images/Exponential_cdf.svg
diff --git a/lectures/lecture16/images/Exponential_pdf.svg b/lectures/lecture17/images/Exponential_pdf.svg
similarity index 100%
rename from lectures/lecture16/images/Exponential_pdf.svg
rename to lectures/lecture17/images/Exponential_pdf.svg
diff --git a/lectures/lecture16/images/Geometric_cdf.svg b/lectures/lecture17/images/Geometric_cdf.svg
similarity index 100%
rename from lectures/lecture16/images/Geometric_cdf.svg
rename to lectures/lecture17/images/Geometric_cdf.svg
diff --git a/lectures/lecture16/images/Geometric_pmf.svg b/lectures/lecture17/images/Geometric_pmf.svg
similarity index 100%
rename from lectures/lecture16/images/Geometric_pmf.svg
rename to lectures/lecture17/images/Geometric_pmf.svg
diff --git a/lectures/lecture16/images/Lognorm_CDF.svg b/lectures/lecture17/images/Lognorm_CDF.svg
similarity index 100%
rename from lectures/lecture16/images/Lognorm_CDF.svg
rename to lectures/lecture17/images/Lognorm_CDF.svg
diff git a/lectures/lecture16/images/Lognorm_PDF.svg b/lectures/lecture17/images/Lognorm_PDF.svg
similarity index 100%
rename from lectures/lecture16/images/Lognorm_PDF.svg
rename to lectures/lecture17/images/Lognorm_PDF.svg
diff git a/lectures/lecture16/images/Negbinomial.gif b/lectures/lecture17/images/Negbinomial.gif
similarity index 100%
rename from lectures/lecture16/images/Negbinomial.gif
rename to lectures/lecture17/images/Negbinomial.gif
diff git a/lectures/lecture16/images/Normal_CDF.svg b/lectures/lecture17/images/Normal_CDF.svg
similarity index 100%
rename from lectures/lecture16/images/Normal_CDF.svg
rename to lectures/lecture17/images/Normal_CDF.svg
diff git a/lectures/lecture16/images/Normal_PDF.svg b/lectures/lecture17/images/Normal_PDF.svg
similarity index 100%
rename from lectures/lecture16/images/Normal_PDF.svg
rename to lectures/lecture17/images/Normal_PDF.svg
diff git a/lectures/lecture16/images/Pareto_CDF.svg b/lectures/lecture17/images/Pareto_CDF.svg
similarity index 100%
rename from lectures/lecture16/images/Pareto_CDF.svg
rename to lectures/lecture17/images/Pareto_CDF.svg
diff git a/lectures/lecture16/images/Pareto_PDF.svg b/lectures/lecture17/images/Pareto_PDF.svg
similarity index 100%
rename from lectures/lecture16/images/Pareto_PDF.svg
rename to lectures/lecture17/images/Pareto_PDF.svg
diff git a/lectures/lecture16/images/Poisson_cdf.svg b/lectures/lecture17/images/Poisson_cdf.svg
similarity index 100%
rename from lectures/lecture16/images/Poisson_cdf.svg
rename to lectures/lecture17/images/Poisson_cdf.svg
diff git a/lectures/lecture16/images/Poisson_pmf.svg b/lectures/lecture17/images/Poisson_pmf.svg
similarity index 100%
rename from lectures/lecture16/images/Poisson_pmf.svg
rename to lectures/lecture17/images/Poisson_pmf.svg
diff git a/lectures/lecture16/images/Weibull_CDF.svg b/lectures/lecture17/images/Weibull_CDF.svg
similarity index 100%
rename from lectures/lecture16/images/Weibull_CDF.svg
rename to lectures/lecture17/images/Weibull_CDF.svg
diff git a/lectures/lecture16/images/Weibull_PDF.svg b/lectures/lecture17/images/Weibull_PDF.svg
similarity index 100%
rename from lectures/lecture16/images/Weibull_PDF.svg
rename to lectures/lecture17/images/Weibull_PDF.svg
diff git a/lectures/lecture17/lecturenotes.tex b/lectures/lecture17/lecturenotes.tex
index 68e9558b4ac03541856d58e032a6146567589203..0e84818eb66325ccf3e9c46372c2074770339cf4 100644
 a/lectures/lecture17/lecturenotes.tex
+++ b/lectures/lecture17/lecturenotes.tex
@@ 10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#17}
+\title{CS6963 Lecture \#16}
\author{Robert Ricci}
\date{April 8, 2014}
+\date{March 25, 2014}
\begin{document}
@@ 20,55 +20,121 @@
\begin{outline}
\1 Postmortem for Lab 1
 \2 Was our plan sufficient?
 \2 What could/should we have done differently?

\1 Grading Lab 1
 \2 Everyone will try to reproduce someone else's result!
 \2 Assignments to be passed out after class
 \2 Due next Tuesday

\1 Giving talks: specifically, conference-type talks

\1 Hold the audience's interest
 \2 Tell a story
 \2 Assume they are looking for an excuse to get back to their email
 \2 Put in the effort: multiply the audience size by the length of your
 talk to think of how much attention you are consuming

\1 Slides
 \2 Your slides are not your notes
 \3 They are visual aids to punctuate what you're saying
 \2 If you and the slides say the same thing, one is not necessary
 \2 You are competing with your slides for audience attention
 \3 Don't put up complicated stuff all at once; use progressive reveal
 \3 Use bullet lists as little as possible
 \2 Slides are not paper, you don't have to do black on white
 \2 Use color to convey information, but remember
 \3 Conference projectors have the worst color accuracy in the universe
 \3 Colorblindness, esp. red/green colorblindness (8\% of
 European-descended men)
 \2 If you're not comfortable with humor, just put the joke on the slide
 \2 Animations should only reduce confusion or convey information

\1 Preparation
 \2 Do practice talks, and consider the audience
 \2 Listen to feedback, don't argue with it
 \2 Do two practices after the final change you make to your slides
 \2 It's important to be comfortable (physically as well as mentally)
 \2 Practice the key words that you might have trouble pronouncing

\1 Presentation
 \2 Never look at your slides
 \2 Never look at your slides
 \2 Don't use a laser pointer; use animated objects on the slides
 \2 Talk to people, not the room
 \2 Use a clicker

\1 For next time
 \2 Two papers to read, evals to do
 \2 Work on reproducing!
+\1 From last time
+ \2 Thanks to Junguk and Makito for the scripts!
+ \2 Quick status: client, server, network conditions, client request
+ sizes, analysis
+
+\1 Today: Talking about different probability distributions
+ \2 You may run into the need to generate data in these distributions,
+ or to recognize them in data that you get
+ \2 MLE: Maximum likelihood estimator: estimates a distribution's parameters
+ from data; each distribution has its own
+ \2 Don't memorize the formulas, just be familiar with the concepts so that
+ you can look them up when needed
+
+\1 Discrete distributions
+
+\1 Bernoulli
+ \2 Just 1 and 0
+ \2 \emph{Why discrete?}
+ \2 Probability of a 1 is $p$
+ \2 Mean is $p$
+ \2 Variance is $p(1-p)$, at its lowest when $p$ is 0 or 1
+ \2 \emph{Examples of things modeled by Bernoulli distributions?}
+
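The Bernoulli bullets above can be sanity-checked by simulation. A minimal sketch using only the standard library; the value $p = 0.3$ is an arbitrary example, not anything from the lecture:

```python
import random

random.seed(42)

p = 0.3  # probability of a 1 (example value)
trials = [1 if random.random() < p else 0 for _ in range(100_000)]

sample_mean = sum(trials) / len(trials)        # should land near p
sample_var = sample_mean * (1 - sample_mean)   # estimate of p(1-p)
```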
+\1 Binomial
+ \2 Number of successes ($x$) in a sequence of $n$ Bernoulli trials
+ \2 \emph{Why discrete?}
+ \2 So it has both $p$ and $n$ as params
+ \2 Mean: $np$
+ \2 Variance: $np(1-p)$, i.e.\ $n$ times the variance of a Bernoulli
+ \2 \emph{Examples of things modeled by it?}
+
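A binomial sample is literally a sum of $n$ Bernoulli trials, so the mean and variance formulas can be checked directly. A sketch with example parameters $n = 20$, $p = 0.5$:

```python
import random
import statistics

random.seed(1)

n, p = 20, 0.5  # example parameters
samples = [sum(random.random() < p for _ in range(n)) for _ in range(50_000)]

mean = statistics.mean(samples)      # should land near n*p = 10
var = statistics.variance(samples)   # should land near n*p*(1-p) = 5
```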
+\1 Geometric
+ \2 Number of trials up to and including first success
+ \2 Param is just $p$
+ \2 Mean is $1/p$
+ \2 Remember, only for independent events!
+ \2 \emph{Examples?}
+
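The geometric definition translates directly into a loop: run independent trials until the first success and count them. A sketch; $p = 0.25$ is an example value:

```python
import random
import statistics

random.seed(7)

p = 0.25  # example success probability

def geometric(p):
    """Count independent Bernoulli trials up to and including the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

samples = [geometric(p) for _ in range(50_000)]
mean = statistics.mean(samples)  # should land near 1/p = 4
```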
+\1 Negative binomial
+ \2 How many successes before $r$ failures
+ \2 Can invert the roles of success and failure, of course
+ \2 Now you have $p$ and $r$ as parameters
+ \2 Mean: $\frac{pr}{1-p}$
+ \2 \emph{What might you model with it?}
+
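The negative binomial can likewise be simulated straight from its definition: count successes until the $r$-th failure. A sketch; $p = 0.4$ and $r = 5$ are example values:

```python
import random
import statistics

random.seed(3)

p, r = 0.4, 5  # example success probability and failure count

def neg_binomial(p, r):
    """Count successes seen before the r-th failure."""
    successes = failures = 0
    while failures < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return successes

samples = [neg_binomial(p, r) for _ in range(50_000)]
mean = statistics.mean(samples)  # should land near p*r/(1-p) = 10/3
```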
+\1 Poisson
+ \2 ``The probability of a given number of events occurring in a fixed
+ interval of time and/or space if these events occur with a known
+ average rate and independently of the time since the last event''
+ \2 Produces a number of arrivals in a given time
+ \2 Particularly good if the sources are independent
+ \2 Parameter is mean ($\lambda$)
+ \2 Very often used for arrivals: e.g.\ arrival of packets at a queue or
+ requests at a server
+ \2 Can be used over particular intervals of time, e.g.\ daytime only, to keep
+ the iid assumption
+ \2 \emph{Examples?}
+
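One classic way to draw a Poisson sample is Knuth's product-of-uniforms method, which effectively counts how many exponential inter-arrival gaps fit into one interval. A sketch; the rate $\lambda = 4$ is an example value:

```python
import math
import random
import statistics

random.seed(9)

lam = 4.0  # example mean number of arrivals per interval

def poisson(lam):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    threshold = math.exp(-lam)
    count, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod < threshold:
            return count
        count += 1

samples = [poisson(lam) for _ in range(50_000)]
mean = statistics.mean(samples)      # for Poisson, mean is lambda
var = statistics.variance(samples)   # ... and so is the variance
```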
+\1 Continuous distributions
+
+\1 Uniform: All possibilities equally likely
+ \2 There is a discrete version of course too
+ \2 Params: $a$ to $b$
+ \2 Mean: $\frac{a+b}{2}$
+ \2 Usually generated, not measured
+
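The uniform mean formula $\frac{a+b}{2}$ is easy to confirm by generation (which, as noted, is how uniforms usually show up). A sketch with an example interval:

```python
import random
import statistics

random.seed(2)

a, b = 3.0, 9.0  # example interval endpoints
samples = [random.uniform(a, b) for _ in range(50_000)]
mean = statistics.mean(samples)  # should land near (a+b)/2 = 6
```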
+\1 Exponential
+ \2 Models length of time between arrivals (compare to Poisson)
+ \2 Parameter $\lambda$: the inverse of the mean
+ \3 Sometimes called the rate, e.g.\ the arrival rate
+ \2 Memoryless: the time already waited tells you nothing about the remaining wait
+ \3 No other continuous distribution has this property
+ \3 This property makes analysis simple
+ \3 But you have to be sure it's true!
+ \2 \emph{Examples?}
+
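The exponential also gives the simplest concrete case of the MLE idea from earlier: the maximum-likelihood estimate of the rate is just one over the sample mean. A sketch; the rate $\lambda = 2$ is an example value:

```python
import random
import statistics

random.seed(5)

lam = 2.0  # example rate parameter
samples = [random.expovariate(lam) for _ in range(50_000)]

mean = statistics.mean(samples)  # should land near 1/lam = 0.5
lam_mle = 1 / mean               # MLE of the rate recovers lam
```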
+\1 Tails: Can be on both sides
+ \2 Heavy-tailed: not exponentially bounded
+ \2 Fat-tailed: usually in reference to the normal
+ \2 Long-tailed: usually in reference to the exponential
+ \2 Means ``unlikely'' things are actually more common than one might expect
+ \2 A long tail means the distribution must be ``light'' somewhere else
+
+\1 Normal
+ \2 We've talked about this plenty already; remember that the sum of iid
+ variables tends towards normal
+
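The sum-of-iid point can be seen with even a modest number of terms: a sum of 12 uniforms already behaves very much like a normal. A sketch (the choice of 12 terms is just a convenient example, since it makes the variance exactly 1):

```python
import random
import statistics

random.seed(8)

# Sum of 12 iid Uniform(0,1): mean 6, variance 12 * (1/12) = 1
samples = [sum(random.random() for _ in range(12)) for _ in range(50_000)]

mean = statistics.mean(samples)
var = statistics.variance(samples)
# Rough normality check: about 68% of samples within one stdev of the mean
within_one_sd = sum(abs(x - mean) <= var ** 0.5 for x in samples) / len(samples)
```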
+\1 Lognormal
+ \2 Logarithms turn multiplication into addition: $\log xy = \log x + \log y$
+ \2 So, lognormal is like normal, but for products of iid variables
+ \2 Useful for things that accumulate by multiplication, for example errors
+ \2 \emph{Examples?}
+
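The products-become-sums idea can be demonstrated by multiplying many iid positive factors and looking at the logs, which should come out (approximately) normal. A sketch; the factor range and count are arbitrary example values:

```python
import math
import random
import statistics

random.seed(11)

def product_sample(n_factors=50):
    """Product of iid positive factors; its log is a sum of iid terms (CLT)."""
    prod = 1.0
    for _ in range(n_factors):
        prod *= random.uniform(0.5, 1.5)
    return prod

logs = [math.log(product_sample()) for _ in range(20_000)]
# If the product is lognormal, the logs are normal: roughly symmetric,
# so the mean and median of the logs should nearly coincide
diff = statistics.mean(logs) - statistics.median(logs)
```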
+\1 Pareto
+ \2 Produces IID interarrival times
+ \2 Discrete equivalent is Zipf
+ \2 Power law: a few account for the largest portion, cf.\ ``the 99\%''
+ \2 Self-similar: the same at different scales (think fractal)
+ \2 ``Bursty on many or all time scales''
+ \2 Values correlated with future incidents
+ \2 Compare to other distributions\ldots e.g.\ Poisson
+ \2 Can be constructed with heavy-tailed ON/OFF sources
+ \2 Depending on the shape parameter, has an infinite mean ($\alpha \le 1$)
+ or infinite variance ($\alpha \le 2$)
+
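The ``few account for the largest portion'' behavior is easy to see by generation. A sketch; the shape $\alpha = 1.5$ is an example value chosen so the mean is finite but the variance is not:

```python
import random

random.seed(13)

alpha = 1.5  # example shape: finite mean, infinite variance
samples = sorted((random.paretovariate(alpha) for _ in range(50_000)),
                 reverse=True)

# Heavy tail: the top 1% of samples carries an outsized share of the total
top_share = sum(samples[:500]) / sum(samples)
```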
+\1 Weibull
+ \2 Good for modeling time to failure
+ \2 Parameters are scale ($\lambda$) and shape ($k$)
+ \2 ``A value of $k < 1$ indicates that the failure rate decreases over time''
+ \2 ``A value of $k = 1$ indicates that the failure rate is constant over time''
+ \2 ``A value of $k > 1$ indicates that the failure rate increases with time''
+
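The Weibull mean has a closed form, $\lambda\,\Gamma(1 + 1/k)$, which a quick simulation can confirm. A sketch; the scale and shape values are arbitrary examples:

```python
import math
import random
import statistics

random.seed(17)

scale, shape = 1.0, 1.5  # example lambda and k
samples = [random.weibullvariate(scale, shape) for _ in range(50_000)]

# Closed-form mean of a Weibull: scale * Gamma(1 + 1/shape)
expected_mean = scale * math.gamma(1 + 1 / shape)
mean = statistics.mean(samples)  # should land near expected_mean
```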
+\1 For next time:
+ \2 Read papers for Analysis 3
+ \2 Posted one reading for next Thursday
+ \2 Lab due a week from today
\end{outline}
diff git a/lectures/lecture18/lecturenotes.tex b/lectures/lecture18/lecturenotes.tex
index a9f092c38d3e12d9e1d9afdeaa39d0c1a1e76281..68e9558b4ac03541856d58e032a6146567589203 100644
 a/lectures/lecture18/lecturenotes.tex
+++ b/lectures/lecture18/lecturenotes.tex
@@ 10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#18}
+\title{CS6963 Lecture \#17}
\author{Robert Ricci}
\date{April 15, 2014}
+\date{April 8, 2014}
\begin{document}
@@ 20,43 +20,55 @@
\begin{outline}
\1 Some very embarrassing mistakes
 \2 Numbers that cannot possibly be right
 \2 Failing to explain anything, just presenting graphs
 \2 Strange oddities that were not explored at all
 \2 Unreadable graphs
 \2 Not answering the questions
 \2 Logical flow from experiments to question answers to a conclusion
 \2 Generally incurious about the results

\1 Not using the tools from the class
 \2 Picking number of runs
 \2 Telling me which index of central tendency
 \2 Some measure of variance
 \2 Using statistical tests on means to prove difference or lack thereof

\1 Why did we get different numbers?
 \2 How clients were run  back to back or at the same time?
 \2 Did we saturate the links?
 \2 TC: All clients through one pipe?

\1 Experience repeating someone else's experiment
 \2 Who became more convinced of their own results?
 \2 Less?
 \2 What did it make you wish about how you did your own?
 \2 Who had to talk to the "author"?
 \2 Who had to have the author make changes?
 \2 Did you actually manage to get similar results to the "author"?
 \2 How well did the tools work  GENI and our own tools?

\1 Why we are going to do it again:
 \2 I want to have confidence in your evaluations
 \2 I want you to actually apply tools from the class

\1 Due date: the 30th? (last day of finals week)
 \2 Willing to make deals with the defending MS students

\1 For next time (x2): what should we do review of?
+\1 Postmortem for Lab 1
+ \2 Was our plan sufficient?
+ \2 What could/should we have done differently?
+
+\1 Grading Lab 1
+ \2 Everyone will try to reproduce someone else's result!
+ \2 Assignments to be passed out after class
+ \2 Due next Tuesday
+
+\1 Giving talks: specifically, conference-type talks
+
+\1 Hold the audience's interest
+ \2 Tell a story
+ \2 Assume they are looking for an excuse to get back to their email
+ \2 Put in the effort: multiply the audience size by the length of your
+ talk to think of how much attention you are consuming
+
+\1 Slides
+ \2 Your slides are not your notes
+ \3 They are visual aids to punctuate what you're saying
+ \2 If you and the slides say the same thing, one is not necessary
+ \2 You are competing with your slides for audience attention
+ \3 Don't put up complicated stuff all at once; use progressive reveal
+ \3 Use bullet lists as little as possible
+ \2 Slides are not paper, you don't have to do black on white
+ \2 Use color to convey information, but remember
+ \3 Conference projectors have the worst color accuracy in the universe
+ \3 Colorblindness, esp. red/green colorblindness (8\% of
+ European-descended men)
+ \2 If you're not comfortable with humor, just put the joke on the slide
+ \2 Animations should only reduce confusion or convey information
+
+\1 Preparation
+ \2 Do practice talks, and consider the audience
+ \2 Listen to feedback, don't argue with it
+ \2 Do two practices after the final change you make to your slides
+ \2 It's important to be comfortable (physically as well as mentally)
+ \2 Practice the key words that you might have trouble pronouncing
+
+\1 Presentation
+ \2 Never look at your slides
+ \2 Never look at your slides
+ \2 Don't use a laser pointer; use animated objects on the slides
+ \2 Talk to people, not the room
+ \2 Use a clicker
+
+\1 For next time
+ \2 Two papers to read, evals to do
+ \2 Work on reproducing!
\end{outline}
diff git a/lectures/lecture19/lecturenotes.tex b/lectures/lecture19/lecturenotes.tex
index 97862183478dd4dfbc6c99693b415f8d2df8598d..a9f092c38d3e12d9e1d9afdeaa39d0c1a1e76281 100644
 a/lectures/lecture19/lecturenotes.tex
+++ b/lectures/lecture19/lecturenotes.tex
@@ 10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#19}
+\title{CS6963 Lecture \#18}
\author{Robert Ricci}
\date{March 17, 2014}
+\date{April 15, 2014}
\begin{document}
@@ 20,25 +20,43 @@
\begin{outline}
\1 Figuring out what the factors are that caused us to get different results. 2d grid:
 \2 Results: DM, Reno w/o, Reno w/, Cubic (SACK doesn't matter)
 \2 Methods: Runs (Few/lots), method of launching clients (threads, clientside, serverside), controlling time (number of runs, size of fetch)
 \2 Goal here is not to declare someone right and wrong, goal is to understand under what conditions we get different results

\1 Review:
 \2 One run
 \2 Three runs, five runs
 \2 Turn on display of variance
 \2 Turn on display of mean CI
 \2 Pick the number of runs
 \2 Approximate test

\1 First half of a good report
 \2 Intro clearly states goals; even though they are in the plan, it helps focus the report
 \2 Followed procedure to pick the number of runs  but did he actually recalc confidence for all of them
 \2 TODO: Figures 1 and 2
 \2 Regional network conditions  one table, easy to compare,
 \2 Discussion as we go
+\1 Some very embarrassing mistakes
+ \2 Numbers that cannot possibly be right
+ \2 Failing to explain anything, just presenting graphs
+ \2 Strange oddities that were not explored at all
+ \2 Unreadable graphs
+ \2 Not answering the questions
+ \2 Logical flow from experiments to question answers to a conclusion
+ \2 Generally incurious about the results
+
+\1 Not using the tools from the class
+ \2 Picking number of runs
+ \2 Telling me which index of central tendency
+ \2 Some measure of variance
+ \2 Using statistical tests on means to prove difference or lack thereof
+
+\1 Why did we get different numbers?
+ \2 How clients were run  back to back or at the same time?
+ \2 Did we saturate the links?
+ \2 TC: All clients through one pipe?
+
+\1 Experience repeating someone else's experiment
+ \2 Who became more convinced of their own results?
+ \2 Less?
+ \2 What did it make you wish about how you did your own?
+ \2 Who had to talk to the "author"?
+ \2 Who had to have the author make changes?
+ \2 Did you actually manage to get similar results to the "author"?
+ \2 How well did the tools work  GENI and our own tools?
+
+\1 Why we are going to do it again:
+ \2 I want to have confidence in your evaluations
+ \2 I want you to actually apply tools from the class
+
+\1 Due date: the 30th? (last day of finals week)
+ \2 Willing to make deals with the defending MS students
+
+\1 For next time (x2): what should we do review of?
\end{outline}
diff git a/lectures/lecture20/lecturenotes.tex b/lectures/lecture20/lecturenotes.tex
index 3c29b5be052f1c646b712737f3ad882270e55fc6..97862183478dd4dfbc6c99693b415f8d2df8598d 100644
 a/lectures/lecture20/lecturenotes.tex
+++ b/lectures/lecture20/lecturenotes.tex
@@ 10,9 +10,9 @@
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}
\title{CS6963 Lecture \#20}
+\title{CS6963 Lecture \#19}
\author{Robert Ricci}
\date{April 22, 2014}
+\date{March 17, 2014}
\begin{document}
@@ 20,73 +20,25 @@
\begin{outline}
\1 Review of topics covered
 \2 Overall point of systems evaluation
 \3 Providing data necessary to make a decision
 \3 Presenting it in a convincing manner
 \2 Understand what you are measuring
 \3 Understand what makes convincing evidence
 \3 Understand what you need to measure and how to measure it
 \2 Understand the system under test
 \3 Where are the boundaries?
 \3 What is inside those boundaries that you are actually measuring?
 \3 How do the SUT boundaries relate to a deployment environment?
 \2 Evaluation should be a part of the research process
 \3 Convince yourself with data, not just bias
 \3 Often need preliminary evaluations
 \3 Understand the scope and limitations of your work
 \3 The more evaluation you do along the way, the less biased you are
 likely to be
 \2 Recognize the strengths and weaknesses in evaluations that you read
 \3 Think actively about what you need to see to be convinced
 \3 Look for biases or basic mistakes
 \2 Common mistakes in systems evaluation
 \3 No goals or biased goals
 \3 Ignoring significant factors
 \3 Analysis without understanding the problem
 \3 No sensitivity analysis
 \3 Ignoring variability
 \2 Use the statistical tools available to you
 \3 For selecting the number of runs
 \3 For understanding the confidence in your results
 \3 For showing difference or sameness in results
 \3 The question is ``have you found something real?''
 \2 Use the tools available to you
 \3 Reproducibility is a big dealnot just for others, but for
 yourself too
 \3 Assuming your environment is fragile forces you to build more
 reproducible research
 \3 The closer you can get to ``experiment as function call'', the
 happier you will be
 \3 Keep track of everything

\1 Go through example of a nice report
 \2 By no means perfect, but better than most
 \2 Picked a reasonable number of experiments
 \2 Good discussion throughout
 \2 Directly addresses questions
 \2 Final decision is clear
 \2 But missing:
 \3 Saturating link
 \3 More thorough analysis for fig 3
 \3 Proof of difference or lack thereof
 \3 Fishy low variances

\1 You have until the 30th (midnight) to submit Lab 3
 \2 Ask questions now!
 \2 Remember that you will probably have to spend more time running experiments

\1 Wrap up, suggestions for next year
 \2 What stood out the most?
 \2 What would you do differently?
 \2 What did we do too much of?
 \2 What did we not do enough of?
 \2 How was the balance between the book, papers, and labs?
 \2 What do you see yourself potentially using?
 \2 Not using ever?
 \2 How about the book?
 \2 Submission system

+\1 Figuring out what the factors are that caused us to get different results. 2d grid:
+ \2 Results: DM, Reno w/o, Reno w/, Cubic (SACK doesn't matter)
+ \2 Methods: Runs (Few/lots), method of launching clients (threads, clientside, serverside), controlling time (number of runs, size of fetch)
+ \2 Goal here is not to declare someone right and wrong, goal is to understand under what conditions we get different results
+
+\1 Review:
+ \2 One run
+ \2 Three runs, five runs
+ \2 Turn on display of variance
+ \2 Turn on display of mean CI
+ \2 Pick the number of runs
+ \2 Approximate test
+
+\1 First half of a good report
+ \2 Intro clearly states goals; even though they are in the plan, it helps focus the report
+ \2 Followed procedure to pick the number of runs  but did he actually recalc confidence for all of them
+ \2 TODO: Figures 1 and 2
+ \2 Regional network conditions  one table, easy to compare,
+ \2 Discussion as we go
\end{outline}
diff git a/lectures/lecture12/Makefile b/lectures/lecture21/Makefile
similarity index 100%
rename from lectures/lecture12/Makefile
rename to lectures/lecture21/Makefile
diff git a/lectures/lecture21/lecturenotes.tex b/lectures/lecture21/lecturenotes.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3c29b5be052f1c646b712737f3ad882270e55fc6
 /dev/null
+++ b/lectures/lecture21/lecturenotes.tex
@@ 0,0 +1,93 @@
+\documentclass[12pt]{article}
+
+\usepackage[nomath]{fontspec}
+\usepackage{sectsty}
+\usepackage[margin=1.25in]{geometry}
+\usepackage{outlines}
+
+\setmainfont[Numbers=OldStyle,Ligatures=TeX]{Equity Text A}
+\setmonofont{Inconsolata}
+\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
+\allsectionsfont{\titlefont}
+
+\title{CS6963 Lecture \#20}
+\author{Robert Ricci}
+\date{April 22, 2014}
+
+\begin{document}
+
+\maketitle
+
+\begin{outline}
+
+\1 Review of topics covered
+ \2 Overall point of systems evaluation
+ \3 Providing data necessary to make a decision
+ \3 Presenting it in a convincing manner
+ \2 Understand what you are measuring
+ \3 Understand what makes convincing evidence
+ \3 Understand what you need to measure and how to measure it
+ \2 Understand the system under test
+ \3 Where are the boundaries?
+ \3 What is inside those boundaries that you are actually measuring?
+ \3 How do the SUT boundaries relate to a deployment environment?
+ \2 Evaluation should be a part of the research process
+ \3 Convince yourself with data, not just bias
+ \3 Often need preliminary evaluations
+ \3 Understand the scope and limitations of your work
+ \3 The more evaluation you do along the way, the less biased you are
+ likely to be
+ \2 Recognize the strengths and weaknesses in evaluations that you read
+ \3 Think actively about what you need to see to be convinced
+ \3 Look for biases or basic mistakes
+ \2 Common mistakes in systems evaluation
+ \3 No goals or biased goals
+ \3 Ignoring significant factors
+ \3 Analysis without understanding the problem
+ \3 No sensitivity analysis
+ \3 Ignoring variability
+ \2 Use the statistical tools available to you
+ \3 For selecting the number of runs
+ \3 For understanding the confidence in your results
+ \3 For showing difference or sameness in results
+ \3 The question is ``have you found something real?''
+ \2 Use the tools available to you
+ \3 Reproducibility is a big dealnot just for others, but for
+ yourself too
+ \3 Assuming your environment is fragile forces you to build more
+ reproducible research
+ \3 The closer you can get to ``experiment as function call'', the
+ happier you will be
+ \3 Keep track of everything
+
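The statistical-tools bullets (picking the number of runs, confidence in results) can be made concrete with a tiny confidence-interval computation. A sketch: the measurements are hypothetical, and for only six runs a Student-t quantile would be more appropriate than the normal quantile used here for simplicity:

```python
import statistics

# Hypothetical response-time measurements from repeated runs
runs = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]

mean = statistics.mean(runs)
sem = statistics.stdev(runs) / len(runs) ** 0.5   # standard error of the mean

# Two-sided 95% interval using the normal quantile (about 1.96)
z = statistics.NormalDist().inv_cdf(0.975)
ci_low, ci_high = mean - z * sem, mean + z * sem
```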
+\1 Go through example of a nice report
+ \2 By no means perfect, but better than most
+ \2 Picked a reasonable number of experiments
+ \2 Good discussion throughout
+ \2 Directly addresses questions
+ \2 Final decision is clear
+ \2 But missing:
+ \3 Saturating link
+ \3 More thorough analysis for fig 3
+ \3 Proof of difference or lack thereof
+ \3 Fishy low variances
+
+\1 You have until the 30th (midnight) to submit Lab 3
+ \2 Ask questions now!
+ \2 Remember that you will probably have to spend more time running experiments
+
+\1 Wrap up, suggestions for next year
+ \2 What stood out the most?
+ \2 What would you do differently?
+ \2 What did we do too much of?
+ \2 What did we not do enough of?
+ \2 How was the balance between the book, papers, and labs?
+ \2 What do you see yourself potentially using?
+ \2 Not using ever?
+ \2 How about the book?
+ \2 Submission system
+
+
+\end{outline}
+
+\end{document}