#####
##### Setting up the Utah Network Testbed software on a boss node
##### Tested on FreeBSD 4.9
#####

##### Important notes

In order to be able to help you debug any problems you run into or answer
certain questions, we'll need to have accounts, preferably with root access if
allowed by your institution's AUP, on your boss and ops nodes, and we'll need
to be able to access the webserver on boss.
This is crucial during testbed installation and bringup; after that it's not
so important, except when you are upgrading to a new version of our software.

Supported environment:
This software does make some assumptions about the environment in which it is
run. Some of the most basic ones are listed below. In general, we don't have
the resources to adapt it to every possible environment. So, you will need to
either work out a way to match the environment outlined below, or be willing to
invest some work in adapting the software.

(1) You will need at least two network interfaces on each node - one for the
control network, and one for the experimental network. The experimental
network must be one on which we can create VLANs via SNMP. Currently, we
support Cisco 6500 and 4000 series switches (though not all switches in these
lines have been tested). The control net must have full multicast support,
including IGMP snooping. Nodes' control network interfaces must support PXE.

(2) We highly, highly recommend that boss, ops, and all the nodes be in
publicly routed IP space. If this is not possible, then boss and ops should be
given two interfaces: One in the nodes' control network, and one in public IP
space. If you must use private IP space for the nodes' control net, we suggest
using the 192.168/16 subnet, which leaves the larger 10/8 subnet available for
the experimental network.

(3) If you have a firewall, you will need to be able to get certain standard
ports through to boss and ops, such as the ports for http, https, ssh, named
(domain), and smtp. Any other strange network setup (such as NAT) between the
boss/ops and the outside world will cause really big headaches.

(4) The whole testbed should be in a domain or subdomain for which boss can be
the name server.

(5) The nodes must be able to reach boss with DHCP requests on the control
network - this means either being in the same broadcast domain (i.e., LAN),
or, if there is a router in between, the router must be capable of forwarding
DHCP/BOOTP packets. Since the nodes will DHCP from boss, it is important that
there not be another DHCP server (e.g., one for another part of your lab)
answering their requests.

##### Other docs:
Useful summary info and diagrams can be found in "build-operate.ppt" and
"security-all.ppt" in http://www.cs.utah.edu/flux/testbed-docs/internals/

##### Step -1 - Set up "ops"

Follow the instructions in the setup-ops.txt file before the ones in this file!

##### Step 0 - OS installation and setup

Install FreeBSD on the machine you'll be using for your boss node, using the
standard FreeBSD installation process.  When asked by the installer, it's best
to choose the 'Developer' distribution set - this gets you full sources. The
'X-Developer' distribution set would be fine too, if you want to be able to
run X clients from the boss node.  When it asks if you want to install the
ports collection, answer yes.  You don't need to worry about which packages to
install (of course, grab your favorite editors, etc.) - the ones our software
needs will be installed automatically later.  You will, however, have to make
sure that you create a partition large enough to hold /usr/testbed - in
addition to the testbed software, this is where many disk images will get
stored. The /var partition will need to be large enough to hold the database -
100MB extra for the database should be sufficient. Also, since we'll be
building and installing a lot of software from the ports tree, you'll want to
make sure that /usr is at least 2 GB.
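
As a rough illustration (the sizes here are just one reasonable guess, not
requirements beyond those mentioned above), a disk layout might look like:

        /               256MB
        swap            2 x RAM
        /var            1GB    (includes the extra ~100MB for the database)
        /usr            4GB    (at least 2GB, for the ports builds)
        /usr/testbed    rest of the disk (testbed software plus disk images)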
If you want, you can go ahead and create an account for yourself on boss. For
now, just stick the home directory somewhere local, and move it to /users/ once
you've got it mounted from ops (the boss-install script will set this up). In
general, it's probably simpler to just use 'root' for now.
We occasionally run into problems with certain FreeBSD ports. Also, you're going
to want the latest security updates. So, you should at the very least bring
your ports collection up to date using:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/cvsup.html

It is also a good idea to bring your base system up to date with the -STABLE
branch.  Instructions for doing this can be found at:
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/cutting-edge.html
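
For example (a sketch only - the handbook pages above are the real reference),
with the net/cvsup-without-gui port installed, updating the ports tree
typically looks like this, substituting a CVSup mirror near you:

cp /usr/share/examples/cvsup/ports-supfile /root/ports-supfile
# edit /root/ports-supfile, setting: *default host=cvsup.FreeBSD.org
cvsup -g -L 2 /root/ports-supfile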

##### Step 1 - Create a defs file

The defs file will describe some of your setup, such as the hostnames of your
boss and ops nodes, and email addresses that certain types of mail will be sent
to.
Use the 'defs-example' file in the root of our source distribution as a
template. It contains comments explaining the important variables to set.
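
For illustration, the variables you'll be setting look something like the
following (these names appear elsewhere in this document, but the values here
are invented - defs-example is the authoritative reference). You'll also set
email addresses for the various types of testbed mail:

        OURDOMAIN=example.emulab.net
        BOSSNODE=boss.example.emulab.net
        USERNODE=ops.example.emulab.net
        FSNODE=ops.example.emulab.net
        WWWHOST=www.example.emulab.net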

##### Step 2 - Unpacking and running configure

This works the same as it did on ops:
cd ~/tbobj
~/testbed/configure --with-TBDEFS=/users/ricci/testbed/defs-ricci

##### Step 3 - Running the boss installation script
Again, this works the same as it did on ops, except that you run
install/boss-install in the object tree, instead of ops-install.

Partway through, this script will bail out and prompt you to install some
ports. This can take a long time (hours), and you'll want to be able to see
what's going on, right?

So, just cd to /usr/ports/misc/emulab-boss/ and run (as root) a 'make install'.
When you're done, re-run the boss-install script.

Like the ops-install script, boss-install sets up passwordless sudo for anyone
in the wheel group.

There is one bootstrapping problem we have that needs to be worked around - we
put fully-qualified names for the ops/users node into /etc/fstab on boss. But,
if you're running the nameserver for this domain on boss, those names won't be
resolvable yet. Since we don't yet have a way to auto-generate DNS
configuration files, the suggested workaround is to add addresses for the
FSNODE and USERNODE that you specified in your defs file (which may be the
same node) to /etc/hosts on boss. Remember to remove them once you really have
DNS set up.
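
For example, if ops acts as both USERNODE and FSNODE and has the (invented)
address 155.101.128.2, the temporary entry in /etc/hosts on boss would look
like:

        155.101.128.2   ops.example.emulab.net ops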

##### Step 4 - Installing from source

In your object directory, do a 'gmake && gmake boss-install'. Then, as root, do
a 'gmake post-install'. The post-install target needs to run as root, so that
it can make certain scripts setuid, etc.
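
Putting that together, the whole sequence looks something like this (assuming
the ~/tbobj object directory from Step 2):

cd ~/tbobj
gmake && gmake boss-install
su root -c 'gmake post-install'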

##### Step 5 - Setting up root ssh from boss to ops
The boss node needs to be able to ssh in, as root, to the ops node. To set this
up, copy root's public identity from boss (created by the boss-install script)
to ops's authorized_keys file:
scp /root/.ssh/identity.pub ops:/root/.ssh/authorized_keys
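
Afterwards, verify that it works - run as root on boss, this should succeed
without prompting for a password:

ssh ops uname -a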
##### Step 6 - Setting up named

The testbed software manipulates DNS zone files for two reasons. First, it
adds your nodes to them so that you don't have to. Second, it creates CNAMEs
for all the nodes in every experiment, so that you can use, for example,
'nodeA.myexp.myproj.emulab.net' to refer to your node regardless of which
physical node it got mapped to.

The named_setup script does this by generating zone files - in general, it
concatenates a '.head' file, written by you, with its own generated entries.
The main zone file is /etc/namedb/OURDOMAIN.db, where OURDOMAIN is from your
defs file. (OURDOMAIN, unless explicitly specified, is taken to be the domain
portion of BOSSNODE.) We also generate reverse zone files (for inverse
lookups, i.e., turning IP addresses back into names) in
/etc/namedb/reverse/SUBNET.db, where SUBNET is the class-C subnet in which
the addresses reside (e.g., 10.0.0.db).

You'll need to create these .head files yourself. The easiest way to do this
is to start with the examples we've provided in this directory:
example.emulab.net.db.head  - the forward zone file
example-155.101.128.db.head - a reverse zone file
If you have more than one class-C subnet for your testbed, you'll need a copy
of the reverse zone file for each one. Follow the examples in these .head
files, making sure to get boss, ops, and any 'infrastructure' equipment (such
as routers and switches) into the zone files.  These zone files do not need to
include the nodes - the nodes will be added to them automatically.

Now edit /etc/namedb/named.conf, and add an entry like this for the forward
zone:
        zone "example.emulab.net" in {
            type master;
            file "example.emulab.net.db";
        };

And one of these for each reverse subnet:
        zone "128.101.155.in-addr.arpa" in {
            type master;
            file "reverse/155.101.128.db";
        };

Once you think you've got things set up, run /usr/testbed/sbin/named_setup,
and make sure that it doesn't give you any error messages.
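
Assuming named is running, a quick sanity check is to query it directly for
names from your zone files (using the example names from above):

nslookup boss.example.emulab.net localhost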

##### If you are using unroutable private IP addresses for part of the
      testbed:

In order to handle this situation, we'll need to use a feature of bind called
`views` so that the private addresses are only exposed to clients within the
testbed. See the bind documentation for more details on this feature. Note
that you'll want to make sure that loopback queries from boss itself see the
internal view - you want boss to resolve its own hostname to its private
address, not its public one.

In order to use multiple views, we generate multiple zone files.  In addition
to OURDOMAIN.db, which will be used for the 'external' view, we create
OURDOMAIN.internal.db for use with the 'internal' view. So, you'll also need
to create OURDOMAIN.internal.db.head .  When we generate the zone files, only
publicly-routable addresses are put into OURDOMAIN.db .
OURDOMAIN.internal.db contains all addresses, both public and private.  So,
basically, you'll want to put the public addresses for boss, ops, etc.  into
OURDOMAIN.db.head, and their private addresses into
OURDOMAIN.internal.db.head . 
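
As a rough sketch (assuming a version of named recent enough to support
views, and the 192.168/16 private control net suggested earlier), the
named.conf structure looks something like this - see the bind documentation
for the real details:

        view "internal" {
            match-clients { 192.168.0.0/16; 127.0.0.0/8; };
            zone "example.emulab.net" in {
                type master;
                file "example.emulab.net.internal.db";
            };
        };
        view "external" {
            match-clients { any; };
            zone "example.emulab.net" in {
                type master;
                file "example.emulab.net.db";
            };
        };

Note that once you use views, all of your zones (including the reverse zones)
must be declared inside a view.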

##### Step 7 - Other miscellaneous things to set up

There are a few things we haven't been able to completely automate just yet,
though we hope to soon.

hosts file - It's a good idea to put ops' name/IP address in /etc/hosts - this
helps out NFS mounts, which are typically done before the nameserver is started,
and is generally helpful if things go wrong with the nameserver.

SSL certificates - Our apache config file expects to find SSL certificates in:
/usr/local/etc/apache/ssl.crt/www.<sitename>.crt and
/usr/local/etc/apache/ssl.key/www.<sitename>.key
(where <sitename> is OURDOMAIN from the configure defs file, which defaults to
boss's domain name.)
Generate a passwordless certificate (up to you if you want to get a 'real' one
from Verisign, etc., or sign your own), and place the files from it in the
above locations. An easy way to generate a temporary self-signed certificate
is to run:
make all certificate
... in /usr/ports/www/apache13-modssl . Make sure that you give the same value
for the 'Common name' that you put in WWWHOST in your defs file, and answer
'N' to the 'Encrypt the private key now?' question. You can just take the
defaults on the rest of the questions. This script creates the files:
work/apache_<version>/conf/ssl.key/server.key
work/apache_<version>/conf/ssl.crt/server.crt
... which you can move into the locations mentioned above.
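
Alternatively, you can generate an equivalent self-signed certificate
directly with openssl, again giving WWWHOST as the Common name when prompted
and renaming the output files to the paths above:

openssl req -new -x509 -nodes -days 365 \
    -keyout server.key -out server.crt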
tftpboot - There are a few bootloaders, mini-kernels, and MFSes that are used
to boot, reload, etc. testbed machines, which live in /tftpboot . For the time
being, these are not distributed with our source, and require some site
customizations, so ask Utah for the boot loaders, etc.

disk images - You'll also, of course, need disk images to go on your nodes.
Right now, we have no automatic way of generating these, so you'll have to ask
Utah for some.

locate database - It can be useful to update the 'locate' database to help you
find files as you're learning the system. This normally happens nightly, but
you can force it to happen now by running 'locate.updatedb' as root. This will
take several minutes. You can then find foo.conf by running 'locate foo.conf'.

##### Step 8 - Reboot boss

Okay, go ahead and reboot boss now, and make sure it comes up okay.

##### Step 9 - Filling the database
See the file setup-db.txt in this directory for instructions on getting the
proper information about your site and nodes into the database.