Testbed Testing Suite
---------------------

This directory contains the testing framework for the Testbed.

Running
-------

To run the tests, follow these steps:

1. Create a directory to store your test files and logs.
	mkdir ~/tbtest

2. Make that directory your current working directory.
	cd ~/tbtest

3. Run tbtest with the appropriate arguments.  For frontend tests this
would be:
	~/testbed/testsuite/tbtest run tbdb frontend

4. Sit back and wait.

The test directory holds the following files:
	tbobj/			A configured/built object tree.
	install/		An installed tree.
	tests/			Holds individual test files.
	tests/<test>/
		test.log	Main log of everything.
		nsfile.ns	The NS file used.
		db.txt ***	The state of the database.
		*		Any files created by tbsetup stuff.
	test.log		Master log file.  Output of tbtest.
	configure.log		Log of 'configure'
	build.log		Log of 'gmake'
	install.log		Log of 'gmake boss-install'
	postinstall.log		Log of 'gmake post-install'
	createdb.log #		Log of creating test DB.
	fill.log #		Log of filling test DB.
	clear.log *		Log of clearing experiments from test DB.
	unavailable.log **	Log of marking nodes as unavailable.
	clean.txt		Dump of clean test DB.
	dbdump.txt		Dump of real database.
	defs			Defs file used for configure.
	state			Used internally for proper cleanup.
	
* - Appears only in frontend (default) mode.
** - Appears only in full mode.
*** - Only exists for failed tests.
# - Empty except under error conditions.

Note: cleanup.log, unavailable.log, and free.log appear only in full
mode.  clear.log appears only in frontend mode.

Advanced: To run all tests with a special DB setup you can do 'tbtest
init' and then replace 'clean.txt' with a dump of the DB of your
choice.  'clean.txt' is loaded into the test DB before every test
run.  Be very careful to maintain the reserved table when doing this
in full mode.
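
A minimal sketch of that workflow for a frontend run (the mysqldump
invocation and source database name are illustrative; use whatever
produces a dump of the DB you want):

	~/testbed/testsuite/tbtest init tbdb
	mysqldump my_custom_db > clean.txt
	~/testbed/testsuite/tbtest test frontend
	~/testbed/testsuite/tbtest finish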

Adding Tests
------------

There are two ways to add a test.  

The simple way:

The script 'mktest' can be used to automatically generate tests that
use the following testing approach:
	Run tbprerun and check for exit code 0.
	Run tbswapin and check for exit code 0.
	Run tbswapout and check for exit code 0.
	Run tbswapin and check for exit code 0.
	Run tbswapout and check for exit code 0.
	Run tbend and check for exit code 0.

That is, tests that are expected to pass and for which checking exit
codes is sufficient.

To make such a test run:
	mktest <mode> <testname> <nsfile>

Where <mode> is frontend or full.  You will be prompted to enter a
description of the test.

Alternatively, you can run mktest without arguments and it will
prompt for all information.
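
For example, a frontend test could be created like this (the test name
and NS file are placeholders; mktest must be run from
testbed/testsuite, as noted in the Program Reference below):

	cd ~/testbed/testsuite
	./mktest frontend mytest ~/nsfiles/mytest.ns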


The advanced way:

The testing framework supports a far wider range of tests than those
described above.  A test can have arbitrary DB state, run any sequence
of commands, check for fail cases, and inspect database state for
correctness.  To create such tests read the "Test Format" section
below.

Using the Test Tree
-------------------

It is often useful to be able to do manual testing using the test
tree.  This can easily be done.  By running 
	tbtest -leavedb init tbdb
you will set up the test tree and test database, leaving the DB
intact.  You can now run all your commands from the install tree to do
manual testing just as the tests do.

The experiment 'test' under project 'testbed' exists in the test DB
and should be used for all testing.

The following command, when run from the test directory, will set up
your path correctly:

setenv PATH "/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/site/bin:.:`pwd`/install/bin:`pwd`/install/sbin"
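
With the path set you can then, for example, run the tbsetup programs
by hand against that experiment (a sketch; the NS file name is a
placeholder, and the argument conventions follow tb_prerun/tb_run in
the Test API below):

	tbprerun testbed test myfile.ns
	tbswapin testbed test
	tbswapout testbed test
	tbend testbed test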

Program Reference 
-----------------

mktest
------

mktest is a tool to create basic tests from an NS file.  Tests created
with mktest only check exit codes; they do not check database or
node status.

Syntax:

mktest <mode> <name> <nsfile> [<description>]
mktest

<mode> is one of "frontend" or "full".  If a description is not
specified on the command line, an editor will be opened for the
user to enter one.

When run with no arguments, mktest prompts the user for all
information.

mktest MUST be run in testbed/testsuite.


tbtest
------

tbtest runs all the tests.  It is designed to do an entire testing run
from start to finish, leaving a large collection of log files in its
wake.


Syntax: tbtest [-leavedb] [-path <path>] [-frontend] [-full]
               [-exitonfail] [-session <sessionid>] [-flest] <mode>
Options:
  -leavedb - Do not drop the test database on exit.
  -path <path> - Path to directory to store test files.
  -frontend - Run in frontend mode.
  -full - Run in full mode.
  -exitonfail - Exit on any failure.
  -session <sessionid> - Specify alternate session ID.
  -flest - Generate flest.log for use with flest.
Mode:
  run <db> <testlist> [<pid> <eid> <num>]
  init <db> [<pid> <eid> <num>]
  test <testlist>
  single <testlist> <tests>
  finish
tbtest [-leavedb] [-path <path>] <db> <testlist>

Notes:

The arguments in [] for the modes are necessary with -full and should
not be present otherwise.

You need only pass -full to 'init'; the mode is then remembered for
subsequent commands.

Generally tbtest is invoked in 'run' mode, which does everything.  For
a more interactive approach you can invoke 'tbtest init ...' and then
any number of 'tbtest test' and 'tbtest single' commands to run tests.
When finished, run 'tbtest finish'.
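
For example, an interactive frontend session might look like this (the
test name given to 'single' is a placeholder):

	tbtest init tbdb
	tbtest test frontend
	tbtest single frontend mytest
	tbtest finish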

Everything is keyed to your username.  There currently is no way to
have multiple test runs in progress under the same username without
collision.

-leavedb is really only useful when running in single mode with only
one test.  Since the database is reset at the beginning of every test,
using this option with multiple tests will leave you with only the DB
state of the last one.  You can recreate the database state of any test
by creating a new database and then loading tests/<test>/db.txt into it.
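
A sketch of that recreation step, assuming the testbed database is
MySQL (an assumption here) and using a throwaway database name:

	mysqladmin create scratchdb
	mysql scratchdb < tests/<test>/db.txt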

If -path is not present the current working directory is used.

In full mode you must provide a <pid>, <eid>, and the number of nodes
to reserve.  tbtest will attempt to reserve that many nodes from the
testbed under the given experiment and then use those nodes to run the
tests.
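
For example, a full-mode run that reserves five nodes might look like
this (the project and experiment names and the node count are
placeholders):

	~/testbed/testsuite/tbtest -full run tbdb <testlist> myproj myexpt 5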

The 'single' mode can take a list of tests.

-exitonfail is best used in conjunction with -leavedb.  It causes
tbtest to exit on the first test failure, thus no later tests will
corrupt the DB.  The use of this option is a matter of taste, as you
can always use tests/<test>/db.txt to restore the database state.

-session specifies an alternate session id.  The test DB will be named
tbdb_<session>.  Usually the session ID is taken from the username.
Unfortunately the session ID must be globally unique.

dbdump
------

dbdump is a utility for constructing the results structure used in
tb_compare.

	dbdump <database> <query>

The output will be Perl code constructing an array, @result, which
matches the result of <query> on DB <database>.  This should be copied
into the test script and the appropriate tb_compare command inserted
afterwards:

	tb_compare("<query>",\@result);

Notes: 

Make sure you change " to \" in <query> for the tb_compare statement.

This makes an exact copy.  As such it is inappropriate for information
that can vary from experiment to experiment, such as the
virtual<->physical mapping.
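
As a sketch of the workflow (the table, column, and output formatting
below are illustrative; the generated code may look slightly different):

	dbdump tbdb_<session> "select vname from virt_nodes where pid='testbed' and eid='test'"

might print something like:

	@result = (["nodeA"],
		   ["nodeB"]);

Paste that into the test script and follow it with:

	tb_compare("select vname from virt_nodes where pid='testbed' and eid='test'",\@result);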


Test Format
-----------

A test is a directory in testsuite/tests which contains the
following files:
	nsfile.ns - NS file.
	dbstate - Any commands to set up the DB.
	test - Test file.
	info - Info file.

dbstate 

This is just a list of SQL commands that are applied to the DB state
before the test is run.
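
For example, a dbstate file might contain a line like this (the table,
column, and values are purely illustrative):

	UPDATE nodes SET def_boot_osid='FOO-STD' WHERE node_id='pc1';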

info

Just a description of the test.

test 

This is a Perl script that actually runs the test.  Generally it looks
something like:

tb_prerun("tbprerun",0);
tb_compare("<SQL query>",<results>);
tb_run("tbswapin -test",0);
tb_compare("<SQL query>",<results>);

See "Test API" below for a list of available subroutines and
variables.


Test API
--------

Routines:

tb_prerun(<cmd>,<exitcode>)

This runs "<cmd> pid eid nsfile", and compares the exitcode.  The test
fails and exit if the exit codes do not equal.

tb_run(<cmd>,<exitcode>)

This runs "<cmd> pid eid", and compares the exitcode.  The test fails
and exit if the exit codes do not equal.

tb_compare(<query>,<results>)

This executes the SQL <query> and then fetches the results.  <results>
is a list of list references.  If the results match exactly then the
test continues; otherwise the results are displayed and the test fails.

tb_fail(<msg>)

Explicitly fail the test with <msg>.
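
For example, a test that has computed some value of its own can bail
out explicitly ($count here is a hypothetical variable set earlier in
the test):

	tb_fail("expected 2 nodes, got $count") if ($count != 2);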


Variables (changing these values will have no effect besides
potentially messing up your own test):

$pid, $eid - PID and EID of test experiment.
$test - Test name.
$dir - Directory containing test files.
$dbh - Handle to DB connection.
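
For example, if $dbh is a standard Perl DBI handle (an assumption; the
actual interface may differ), a test could query the DB directly:

	# Hypothetical direct query using the test's DB handle.
	my ($n) = $dbh->selectrow_array(
	    "select count(*) from virt_nodes where pid='$pid' and eid='$eid'");
	tb_fail("no virtual nodes found") if (!$n);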

Other notes:

The current directory should not be changed.  It will be the directory
containing .top, .ptop, and .log files.