Commit 33565408 authored by Robert Ricci

Add paper eval template

parent 058dc2fe
## Rule to build a .pdf from a .tex of the same name; the automatic variable $<
## refers to the name of the first dependency
%.pdf: %.tex
	pdflatex $<
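## Example (hypothetical filename): if a file named paper.tex exists, running
## 'make paper.pdf' matches this pattern rule and executes 'pdflatex paper.tex'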
## Find all .tex files, and assume each will be built into a separate PDF; by
## adding a dependency on foo.pdf, we cause the rule above to be triggered to
## turn foo.tex into foo.pdf
## The 'all' target is built by default if you just run 'make'
all: $(patsubst %.tex, %.pdf, $(wildcard *.tex))
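## Example (hypothetical filenames): if the directory contains eval.tex and
## notes.tex, $(wildcard *.tex) expands to 'eval.tex notes.tex', so 'all'
## depends on 'eval.pdf notes.pdf' and both PDFs are built by default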
\documentclass{article}
\title{Paper Evaluation: \\ FIRST AUTHOR LASTNAME, VENUE, YEAR}
\author{YOUR NAME HERE}
\date{\today}
%% Simple macro to make it easy to identify the questions in the evaluation
%% form. To make the questions vanish, comment out the first line and uncomment
%% the second.
\newcommand{\question}[1]{\textit{#1}}
%\newcommand{\question}[1]{}
\begin{document}
\maketitle
\section{Paper Summary}
\question{In your own words, summarize the paper in a single paragraph. Make sure
to address, as succinctly as possible: what problem is the paper solving,
what are the major techniques it employs, and what are its conclusions?}
\section{Paper Goals}
\question{Does the paper have a set of clear goals stated in the abstract or
introduction? If so, what are they? State in your own words in two
sentences or less. If not, help the authors out: state what you think the
goals are. Be as brief as possible.}
\section{Questions to Answer}
\question{List the questions you think the paper needs to answer in order
to persuasively argue that it has accomplished its goals. Which of
these questions can be answered by evaluation? For those that can, what
type of evaluation would you find most convincing? What factors would
you expect to see investigated, and over what ranges (order of magnitude)?
What metrics would you expect to see used? For questions that cannot
be answered with a typical evaluation, what kinds of arguments would you
find persuasive?}
\section{System Under Test}
\question{What did the authors choose as the boundaries for the System Under
Test? Did they define them clearly? If not, try defining them yourself,
either based on clues in the paper or based on what you think the
boundaries should be.}
\section{Parameters and Factors}
\question{What (system and/or workload) parameters do the authors list? Did
they include any that were not necessary, or leave out any you think
to be important? Which of the parameters did they vary as factors,
and do you think there are any unnecessary or left-out factors?}
\section{Major Evaluations}
\question{List the major evaluations that the paper includes. For each,
briefly list:
\begin{itemize}
\item Evaluation method used (e.g., simulation, analytical modeling,
live measurement, emulation)
\item Metrics used
\item Factor(s) varied
\item Workload used
\item Conclusion / outcome of the evaluation
\end{itemize}
}
\section{Questions Revisited}
\question{Looking back at the list of questions, which were answered to your
satisfaction, and which were not?}
\section{Goals Revisited}
\question{Overall, are you satisfied that the paper provides sufficient
evidence that it meets its claimed goals? If not, what else would be
required?}
\section{Common Mistakes}
\question{Does the paper make any of the common mistakes listed in Chapter~2
of the book? How could the authors have avoided any of the mistakes that
they made?}
\end{document}