This software makes some assumptions about the environment in which it is run; some of the most basic are listed below. We don't have the resources to adapt it to every possible environment, so you will need either to find a way to match the environment outlined below, or to invest substantial work in adapting the software.
* Emulab assumes a minimum of two dedicated server machines, known hereafter as `boss` and `ops`, for Emulab setup. In the past we have supported use of a separate, dedicated filesystem server, `fs`, but we now encourage use of the `ops` server to host the Emulab shared filesystems, and this document assumes that configuration. People have set up the Emulab servers as VMs under VMware and Xen (4.9 or 4.11) as well. Any VM hosting environment in which FreeBSD runs as a guest should work.
* For `boss` and `ops` server (or VM) resources, we recommend that each machine have at least 4 cores and 16GB of RAM; more is better. The amount of disk space required depends on how many users you plan to support, but certainly no less than 1TB on each. It is better to have a separate boot disk on which you can install the OS (FreeBSD), and then one or more additional disks for the Emulab bits. Redundancy for the latter can be achieved either by using a hardware RAID configuration on the extra disks or by using ZFS to configure the disks within FreeBSD.
* You will need at least two network interfaces on each experimental node: one for the control network and one for the experimental network. It is advisable to have a third dedicated management port supporting IPMI access for node control. The experimental network needs to be one on which you have control of the switches and can make VLANs with SNMP using the Emulab `snmpit` tool. Currently, `snmpit` supports:
  * Dell 'S' and 'Z' series switches running OS9 and OS10. We use these extensively, including S3048, S3148, S4048, S5248, Z9100, Z9264, and Z9500. We do not support the 'N' series switches.
  * HP Procurve. The 5400zl line has seen heavy production use on two Emulab sites. Other models may be easy to support, but this has not been tested.
  * Legacy Cisco 6500/6500-E and 4000 series switches (though not all switches in these lines have been tested; the 6513, 6509, 6506, 4006, and 4506 are known to work, running either CatOS or IOS). Cisco 3750s probably work but have not been tested. The 2950, 2960, and 2980 switches are known to work, although they are limited to a small number (64) of VLANs. (In general, it is the supervisor module, rather than the chassis, that matters, and Emulab supports all supervisor modules for the 6500s that we know about.)
  * Some Arista switches.
  * Some Foundry switches.
  * NETSCOUT nGenius 3900 series layer-1 switches.
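If you are unsure whether a candidate experimental switch is reachable over SNMP at all, a quick sanity check with the net-snmp tools looks like the following (the switch address and community string here are placeholders; substitute your own):

```shell
# Walk the switch's SNMPv2-MIB "system" group over SNMPv2c.
# 198.51.100.10 and "public" are hypothetical; use your switch's
# address and read community.
snmpwalk -v 2c -c public 198.51.100.10 system
```

If this returns the switch's sysDescr and related objects, basic SNMP access works; note that `snmpit` additionally needs a read-write community in order to create and modify VLANs.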
* The control network must have full IPv4 multicast support, including IGMP snooping. Nodes' control network interfaces must support PXE booting. It is not required that control net switches be supported by the `snmpit` tool, but it is preferred so that control net monitoring can be done.
* We highly, highly recommend that `boss`, `ops`, and all the nodes be in publicly-routed IP space. If this is not possible, then `boss` and `ops` should be given two interfaces: One in the nodes' control network, and one in public IP space. If you must use private IP space for the nodes' control network, we suggest using the 192.168/16 subnet, which leaves the larger 10/8 subnet available for the experimental network. The defs-example-privatecnet file shows an example configuration like this.
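As a sketch of the dual-interface arrangement, the FreeBSD `rc.conf` on `boss` might contain entries like these (interface names and addresses are hypothetical, chosen only for illustration):

```
# /etc/rc.conf fragment on boss (hypothetical devices/addresses)
ifconfig_em0="inet 198.51.100.2 netmask 255.255.255.0"   # public side
ifconfig_em1="inet 192.168.1.1 netmask 255.255.255.0"    # nodes' private control net
```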
* If you have a firewall, you will need to be able to get certain standard ports through to `boss` and `ops`, such as the ports for http, https, ssh, named (domain), and smtp. Any other strange network setup (such as NAT) between the `boss` or `ops` servers and the outside world will cause really big headaches.
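For illustration only (addresses are placeholders, and your firewall's rule language may differ), a pf-style rule fragment admitting the standard services might look like:

```
# pf.conf fragment on the site firewall (hypothetical addresses)
boss = "198.51.100.2"
ops  = "198.51.100.3"
pass in proto tcp to $boss port { http, https, ssh, domain, smtp }
pass in proto udp to $boss port domain
pass in proto tcp to $ops  port { http, https, ssh, smtp }
```

Note that DNS (`domain`) needs both TCP and UDP through to `boss`, since `boss` runs the name server.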
* The whole testbed should be in a domain or subdomain for which `boss` can be the name server.
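Delegation happens in the parent zone. A hypothetical fragment (names and addresses invented for illustration) delegating `emulab.example.org` to `boss` would look like:

```
; parent zone (example.org) fragment -- hypothetical names/addresses
emulab.example.org.      IN  NS  boss.emulab.example.org.
boss.emulab.example.org. IN  A   198.51.100.2   ; glue record
```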
* The testbed nodes must be able to reach `boss` with DHCP requests on the control network - this means either being in the same broadcast domain (i.e. LAN), or, if there is a router in between, the router must be capable of forwarding DHCP/BOOTP packets. Since the nodes will DHCP from `boss`, it is important that there not be another DHCP server (i.e. one for another part of your lab) to answer their requests.
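For reference, the kind of ISC `dhcpd.conf` subnet stanza involved looks roughly like this (all addresses and the boot filename are placeholders; the Emulab installation generates the real file on `boss` for you):

```
# hypothetical dhcpd.conf stanza for the nodes' control net
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.254;
  next-server 192.168.1.1;       # boss, which serves boot files via TFTP
  filename "/tftpboot/pxeboot";  # placeholder PXE boot loader path
  range 192.168.1.100 192.168.1.200;
}
```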
* It is highly advisable that all nodes have remote power cycling capability. This can be provided either through PDUs with per-outlet cycling capability or via IPMI on the nodes themselves. For the former, Emulab supports control of a number of APC, Raritan, and other PDUs. For the latter, we support IPMI 2.0 which is present on Dell, HP, Supermicro, and Cisco servers among others.
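For IPMI-capable nodes, power control amounts to commands of the following shape (the BMC address and credentials are placeholders; Emulab performs equivalent operations automatically once nodes are configured):

```shell
# Query, then cycle, a node's power via its BMC over the LAN.
ipmitool -I lanplus -H 192.168.2.101 -U admin -P secret chassis power status
ipmitool -I lanplus -H 192.168.2.101 -U admin -P secret chassis power cycle
```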
* It is desirable for all nodes to provide remote access to their console. In the past this was handled via serial console RS232 connections to OS-supported serial port multiplexer devices. Currently we only actively support SOL (Serial over LAN) as exposed via IPMI.
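An interactive SOL console session against a node's BMC looks like this (address and credentials are again placeholders):

```shell
# Attach to a node's serial-over-LAN console; type "~." to detach.
ipmitool -I lanplus -H 192.168.2.101 -U admin -P secret sol activate
```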
* When using IPMI on nodes, this management network should be isolated from the control and experimental networks. Typically this is done by using a fabric of cheaper unmanaged 100/1000Mb switches. Management network switches need not be supported by `snmpit`. The `boss` node should have a network interface on this management network for performing operations on nodes and experiment switches.
* The `boss` node should have its own local disk space, for various reasons:
  * For logistical reasons, /usr/testbed cannot be shared between `boss` and `ops` (each installs a different subset of the Emulab software).
  * For security reasons, /usr/testbed/images, which is the home of the "trusted" default disk images, should not be hosted on `ops`, since that server is potentially more vulnerable.
  * Similarly, home directories for "real" (admin) users on `boss` should not be shared with, or hosted from, `ops`. See doc/shellonboss.txt for details.
If you have any questions about these requirements, please contact the [emulab-admins list](http://groups.google.com/group/emulab-admins) before proceeding.