#####
##### Setting up the Utah Network Testbed software on a boss node
##### Tested on FreeBSD 4.9
#####

##### Important notes

In order to be able to help you debug any problems you run into or answer
certain questions, we'll need to have accounts, preferably with root access if
allowed by your institution's AUP, on your boss and ops nodes, and will need to
be able to access the webserver on boss.
This is crucial during testbed installation and bringup; after that it's not
so important, except when you are upgrading to a new version of our software.

Supported environment:
This software does make some assumptions about the environment in which it is
run. Some of the most basic ones are listed below. In general, we don't have
the resources to adapt it to every possible environment. So, you will need to
either work out a way to match the environment outlined below, or be willing to
invest some work in adapting the software.

(1) You will need at least two network interfaces on each node - one for the
control network, and one for the experimental network. The experimental network
needs to be one on which we can make VLANs with SNMP. Currently, we support
Cisco 6500 and 4000 series switches (though not all switches in these lines
have been tested). The control net must have full multicast support, including
IGMP snooping. Nodes' control network interfaces must support PXE.

(2) We highly, highly recommend that boss, ops, and all the nodes be in
publicly routed IP space. If this is not possible, then boss and ops should be
given two interfaces: One in the nodes' control network, and one in public IP
space. If you must use private IP space for the nodes' control net, we suggest
using the 192.168/16 subnet, which leaves the larger 10/8 subnet available for
the experimental network.

(3) If you have a firewall, you will need to be able to get certain standard
ports through to boss and ops, such as the ports for http, https, ssh, named
(domain), and smtp. Any other strange network setup (such as NAT) between the
boss/ops and the outside world will cause really big headaches.

(4) The whole testbed should be in a domain or subdomain for which boss can be
the name server.

(5) The nodes must be able to reach boss with DHCP requests on the control
network - this means either being in the same broadcast domain (ie. LAN), or,
if there is a router in between, the router must be capable of forwarding
DHCP/BOOTP packets. Since the nodes will DHCP from boss, it is important that
there not be another DHCP server (ie. one for another part of your lab)
answering their requests.

(6) Boss and ops should have their own local disk space - in particular, the
/usr/testbed directory cannot be shared between them. It may be possible to
use an external machine (other than ops) as a fileserver - talk to Utah about
this if you'd like to try.

##### Other docs:
Useful summary info and diagrams can be found in "build-operate.ppt" and
"security-all.ppt" in http://www.cs.utah.edu/flux/testbed-docs/internals/

##### Step -1 - Set up "ops"

Follow the instructions in the setup-ops.txt file before the ones in this file!

##### Step 0 - OS installation and setup

Install FreeBSD on the machine you'll be using for your boss node, using the
standard FreeBSD installation process.  When asked by the installer, it's best
to choose the 'Developer' distribution set - this gets you full sources.  When
it asks if you want to install the ports collection, answer no.

You will, however, have to make sure that you create a partition large
enough to hold /usr/testbed - in addition to the testbed software, this is
where many disk images will get stored. The /var partition will need to be
large enough to hold the database - 100MB extra for the database should be
sufficient. Also, since we'll be installing a lot of packages, you'll want
to make sure that /usr is at least 8GB and has at least a million inodes.
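
For concreteness, one possible layout might look like this (a sketch only -
the exact sizes are assumptions based on the guidelines above, so adjust
them to your disk and the number of images you expect to keep):

	/		256MB
	swap		2x RAM
	/var		4GB	# holds the database
	/usr		10GB	# newfs with '-i 8192' gets over a million inodes
	/usr/testbed	rest	# testbed software and disk images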

If you want, you can go ahead and create an account for yourself on boss. For
now, just stick the home directory somewhere local, and move it to /users/ once
you've got it mounted from ops (the boss-install script will set this up). In
general, it's probably simpler to just use 'root' for now.

##### Step 1 - Installing packages

Again, almost the same as on ops. Download the same tarball, and follow
the same pkg_add procedure, except this time, you're going to install
the emulab-boss-1.8 package instead of emulab-ops.

Also grab a copy of our approved ports tree and install it, the same as
described in setup-ops.txt.
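
For reference, the procedure looks something like this (a sketch; the
package directory below is a placeholder for wherever you unpacked the
tarball, and the exact package filename may differ):

	boss> setenv PKG_PATH /path/to/unpacked/packages
	boss> pkg_add emulab-boss-1.8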

##### Step 2 - Unpacking source and creating a defs file

Unpack the source tarball somewhere with at least a few dozen MB free.
root's home directory is probably best for this.

Now, you'll need to create a 'defs file', which is used by the configure
script to describe your environment, such as the hostnames of your boss and
ops nodes, and the email addresses that certain types of mail will be sent to.

Use the 'defs-example' file in the root of our source distribution as a
template. It contains comments explaining the important variables to set.
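
To give a flavor, a defs file is just a list of variable assignments, along
these lines (a sketch; BOSSNODE, OURDOMAIN, and TESTBED_NETWORK are referenced
later in this file, while the other names and all of the values shown are
placeholders - defs-example is the authoritative reference):

	BOSSNODE=boss.example.edu
	USERNODE=ops.example.edu
	OURDOMAIN=example.edu
	TESTBED_NETWORK=192.168.1.0
	TBOPSEMAIL=testbed-ops@example.edu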

##### Step 3 - Configuring an object tree

This works the same as it did on ops:

	cd ~/tbobj
	~/testbed/configure --with-TBDEFS=/users/ricci/testbed/defs-ricci

##### Step 4 - Running the boss installation script

Again, this works the same as it did on ops, except that you run
install/boss-install in the object tree, instead of ops-install.
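
In other words, something like this (assuming the ~/tbobj object tree from
Step 3; run it as root):

	boss> cd ~/tbobj
	boss> ./install/boss-install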

Like the ops-install script, boss-install sets up passwordless sudo for anyone
in the wheel group.

##### Step 5 - Installing from source

In your object directory, do a 'gmake && gmake boss-install'. Then, as root, do
a 'gmake post-install'. The post-install target needs to run as root, so that
it can make certain scripts setuid, etc.
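
Concretely, the sequence is (using the ~/tbobj object tree from Step 3):

	boss> cd ~/tbobj
	boss> gmake && gmake boss-install
	boss> su root -c 'gmake post-install'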

##### Step 6 - Setting up root ssh from boss to ops

This step is now done as part of boss-install/ops-install. To confirm
this, make sure this works:

	boss> sudo ssh ops ls /

If this *FAILS*, you will need to do this by hand, typing a password:

	scp /root/.ssh/identity.pub ops:/root/.ssh/authorized_keys

##### Step 7 - Setting up named

The testbed software manipulates DNS zone files for two reasons. First, it
adds your nodes to them so that you don't have to. Second, it creates CNAMEs
for all the nodes in every experiment (so that you can use, for example,
'nodeA.myexp.myproj.emulab.net' to refer to your node regardless of which
physical node it got mapped to).

The named_setup script does this by generating zone files - in general, it
concatenates a '.head' file, written by you, with its own generated entries.
The main zone file is /etc/namedb/OURDOMAIN.db, where OURDOMAIN is from your
defs file. (OURDOMAIN, unless explicitly specified, is taken to be the domain
portion of BOSSNODE.) We also generate reverse zone files (for inverse
lookups, ie. turning IP addresses back into names) in
/etc/named/reverse/SUBNET.db, where SUBNET is the class-C subnet in which
the addresses reside (ie. 10.0.0.db). This value is defined in the defs
file created above, as TESTBED_NETWORK.

boss-install makes a reasonable attempt to create a set of named config
files for you, placing them in /etc/named. If your testbed consists of
a single class-C network, then these files will most likely be correct,
although you should still look them over. Check these files to make sure:

	/etc/named/OURDOMAIN.db.head
	/etc/named/reverse/SUBNET.db.head
	/etc/named/named.conf

If you have more than one class-C subnet for your testbed, you'll need a
copy of the reverse zone file for each one. You want to put boss, ops, and
any 'infrastructure' equipment (such as routers and switches) into the zone
files.  These zone files do not need to include the nodes - the nodes will
be added to them automatically. Be sure to edit /etc/named/named.conf if
you add any reverse map files (follow the format for the existing entry).
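
For reference, a reverse zone entry in named.conf looks roughly like this
(a sketch - the 192.168.1 subnet is a made-up example; copy the format of
the entry that boss-install already generated):

	zone "1.168.192.in-addr.arpa" in {
		type master;
		file "reverse/192.168.1.db";
	};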

Once you think you've got things set up, run /usr/testbed/sbin/named_setup,
and make sure that it doesn't give you any error messages. It will generate
the following files:

	/etc/namedb/OURDOMAIN.db
	/etc/namedb/reverse/SUBNET.db

##### If you are using unroutable private IP addresses for part of the
      testbed:

In order to handle this situation, we'll need to use a feature of bind called
'views' so that the private addresses are only exposed to clients within the
testbed. See the bind documentation for more details on this feature. Note
that you'll want to make sure that loopback queries from boss itself see the
internal view - you want boss to resolve its own hostname to its private
address, not its public one.

In order to use multiple views, we generate multiple zone files.  In addition
to OURDOMAIN.db, which will be used for the 'external' view, we create
OURDOMAIN.internal.db for use with the 'internal' view. So, you'll also need
to create OURDOMAIN.internal.db.head .  When we generate the zone files, only
publicly-routable addresses are put into OURDOMAIN.db .
OURDOMAIN.internal.db contains all addresses, both public and private.  So,
basically, you'll want to put the public addresses for boss, ops, etc.  into
OURDOMAIN.db.head, and their private addresses into
OURDOMAIN.internal.db.head . 
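
A skeletal named.conf using views might look like this (a sketch only; the
match-clients ranges and the literal OURDOMAIN names are placeholders for
your actual domain and private subnet):

	view "internal" {
		// boss, ops, and the nodes: sees private addresses too
		match-clients { 127.0.0.0/8; 192.168.0.0/16; };
		zone "OURDOMAIN" {
			type master;
			file "OURDOMAIN.internal.db";
		};
	};
	view "external" {
		// everyone else: sees only publicly-routable addresses
		match-clients { any; };
		zone "OURDOMAIN" {
			type master;
			file "OURDOMAIN.db";
		};
	};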

##### Step 8 - Other miscellaneous things to set up

There are a few things we haven't been able to completely automate just yet,
though we hope to soon. 

hosts file - You want to put boss/ops name/IP addresses in /etc/hosts on
both boss and ops to avoid boottime circular dependencies (caused by the
NFS cross mounts). This is done for you in ops-install and boss-install,
but you might want to confirm it was done properly. If you change the IP
addresses of boss/ops later, you will want to be sure to update /etc/hosts
on both machines.
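
The entries are just standard hosts-file lines, something like this (the
names and addresses here are made-up placeholders):

	192.168.1.253	boss.example.edu boss
	192.168.1.254	ops.example.edu ops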

SSL certificates - Our apache config file expects to find SSL certificates
in:

	/usr/local/etc/apache/ssl.crt/www.<sitename>.crt
	/usr/local/etc/apache/ssl.key/www.<sitename>.key

(where <sitename> is OURDOMAIN from the configure defs file, which defaults
to boss's domain name).

boss-install already generated a temporary no-passphrase certificate for
you and placed it in the locations specified above. However, we recommend
that you get a "real" certificate from Verisign (or one of the others).
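
If you do, generating a key and a certificate signing request to send to
the CA typically looks like this (a sketch; follow your CA's instructions
for the exact openssl invocation):

	boss> openssl genrsa -out www.<sitename>.key 2048
	boss> openssl req -new -key www.<sitename>.key -out www.<sitename>.csr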

DHCPD - boss-install generated a dhcpd.conf.template and installed it in
/usr/local/etc (the template is derived from information you provided in
the defs file). It then generated an actual dhcpd.conf file and started up
dhcpd for you. Do not edit the dhcpd.conf file directly! If you need to
make changes, change the template instead and then run:

	/usr/testbed/sbin/dhcpd_makeconf -i -r

tftpboot - There are a few bootloaders, mini-kernels, and MFSes that are used
to boot, reload, etc. the testbed machines; these live in /tftpboot . For the
time being, they are not distributed with our source and require some site
customizations, so ask Utah for the boot loaders, etc.

disk images - You'll also, of course, need disk images to go on your nodes.
Right now, we have no automatic way of generating these, so you'll have to ask
Utah for some.

locate database - It can be useful to update the 'locate' database to help you
find files as you're learning the system. This normally happens nightly, but
you can force it to happen now by running 'locate.updatedb' as root. This will
take several minutes. You can then find foo.conf by running 'locate foo.conf'.

##### Step 9 - Reboot boss

Okay, go ahead and reboot boss now, and make sure it comes up okay.

##### Step 10 - Filling the database

See the file setup-db.txt in this directory for instructions on getting the
proper information about your site and nodes into the database.