\documentclass[12pt]{article}

\usepackage[no-math]{fontspec}
\usepackage{sectsty}
\usepackage[margin=1.25in]{geometry}
\usepackage{outlines}

\setmainfont[Numbers=OldStyle,Ligatures=TeX]{Equity Text A}
\setmonofont{Inconsolata}
\newfontfamily\titlefont[Numbers=OldStyle,Ligatures=TeX]{Equity Caps A}
\allsectionsfont{\titlefont}

\title{CS6963 Lecture \#20}
\author{Robert Ricci}
\date{April 22, 2014}

\begin{document}

\maketitle

\begin{outline}

\1 Review of topics covered
    \2 Overall point of systems evaluation
        \3 Providing data necessary to make a decision
        \3 Presenting it in a convincing manner
    \2 Understand what you are measuring
        \3 Understand what makes convincing evidence
        \3 Understand what you need to measure and how to measure it
    \2 Understand the system under test
        \3 Where are the boundaries?
        \3 What is inside those boundaries that you are actually measuring?
        \3 How do the SUT boundaries relate to a deployment environment?
    \2 Evaluation should be a part of the research process
        \3 Convince yourself with data, not just bias
        \3 Often need preliminary evaluations
        \3 Understand the scope and limitations of your work
        \3 The more evaluation you do along the way, the less biased you are
            likely to be
    \2 Recognize the strengths and weaknesses in evaluations that you read
        \3 Think actively about what you need to see to be convinced
        \3 Look for biases or basic mistakes
    \2 Common mistakes in systems evaluation
        \3 No goals or biased goals
        \3 Ignoring significant factors
        \3 Analysis without understanding the problem
        \3 No sensitivity analysis
        \3 Ignoring variability
    \2 Use the statistical tools available to you
        \3 For selecting the number of runs
        \3 For understanding the confidence in your results
        \3 For showing difference or sameness in results
        \3 The question is ``have you found something real?''
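A minimal sketch of the "have you found something real?" question, using only the Python standard library. The data and the helper \texttt{confidence\_interval} are hypothetical; the t critical value is hard-coded for 10 runs and would need adjusting for a different run count.

```python
import math
import statistics

def confidence_interval(samples, t_crit=2.262):
    """95% CI for the mean, assuming roughly normal measurement error.

    t_crit defaults to the two-sided 95% Student's t value for 9
    degrees of freedom (10 runs); change it for other run counts.
    """
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = t_crit * statistics.stdev(samples) / math.sqrt(n)
    return mean - half_width, mean + half_width

# Made-up latencies (ms) from 10 runs each of two systems.
a = [101, 99, 103, 98, 100, 102, 97, 104, 99, 101]
b = [110, 108, 112, 107, 111, 109, 106, 113, 108, 110]

lo_a, hi_a = confidence_interval(a)
lo_b, hi_b = confidence_interval(b)

# Non-overlapping intervals are evidence of a real difference.
print(hi_a < lo_b)
```

If the intervals overlap, more runs (or a proper two-sample test) are needed before claiming a difference or sameness.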
    \2 Use the tools available to you
        \3 Reproducibility is a big deal---not just for others, but for
            yourself too
        \3 Assuming your environment is fragile forces you to build more
            reproducible research
        \3 The closer you can get to ``experiment as function call'', the 
            happier you will be
        \3 Keep track of everything 
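One way to read "experiment as function call": the whole factor sweep is a single function that records every result alongside the parameters and a timestamp that produced it. Everything here (\texttt{run\_experiment}, the factor names, the CSV layout) is a hypothetical sketch, not a prescribed workflow.

```python
import csv
import itertools
from datetime import datetime, timezone

def run_experiment(workload, threads, trial):
    """Hypothetical stand-in for the real measurement; returns a metric."""
    return len(workload) * threads + trial  # placeholder result

def sweep(outfile="results.csv"):
    """Run the full factor sweep and log each result with its parameters,
    so the entire evaluation is reproducible from one call."""
    factors = {
        "workload": ["read", "write"],
        "threads": [1, 4],
        "trial": range(3),
    }
    with open(outfile, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["timestamp", "workload", "threads", "trial", "metric"])
        for wl, th, tr in itertools.product(*factors.values()):
            metric = run_experiment(wl, th, tr)
            w.writerow([datetime.now(timezone.utc).isoformat(),
                        wl, th, tr, metric])

sweep()
```

Because every row carries its own parameters, rerunning \texttt{sweep()} after an environment change regenerates the complete dataset, which is the point of assuming your environment is fragile.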

\1 Go through example of a nice report
    \2 By no means perfect, but better than most
    \2 Picked a reasonable number of experiments
    \2 Good discussion throughout
    \2 Directly addresses questions
    \2 Final decision is clear
    \2 But missing:
        \3 Saturating link
        \3 More thorough analysis for fig 3
        \3 Proof of difference or lack thereof
        \3 Fishy low variances

\1 You have until the 30th (midnight) to submit Lab 3
    \2 Ask questions now!
    \2 Remember that you will probably have to spend more time running experiments

\1 Wrap up, suggestions for next year
    \2 What stood out the most?
    \2 What would you do differently?
    \2 What did we do too much of?
    \2 What did we not do enough of?
    \2 How was the balance between the book, papers, and labs?
    \2 What do you see yourself potentially using?
    \2 Not using ever?
    \2 How about the book?
    \2 Submission system


\end{outline}

\end{document}