## Install FreeBSD

**Note:** Before proceeding, ensure that the machine you will use as your ops server is properly connected to your network and that all of the required software is available either through the network or some other media.

Boot the FreeBSD 12.3 installation CD or memory stick on the machine you will use as your `ops` server. You will want to do an "Install" and then select your keymap and hostname. When it asks about optional components, select only "lib32" and "src"; you do not want "ports" since you will be loading pre-built packages from Emulab.

Next it will ask you about disks and disk partitioning. There are a variety of ways to partition disk space depending on how many disks the server has and whether you might want to enforce disk space quotas. For now we just choose the first disk, probably `/dev/ada0`, and create a single partition in which to install the base system; other partitions will be created as needed later.

**Note:** We do not currently use ZFS for the base system install. It should work fine, we just have not tested it. We do prefer ZFS for the Emulab-related filesystems as described below.

From the menu choose "Auto (UFS)", and then "Entire Disk", confirm erasing the disk and then choose either MBR or GPT. If your disk is over 2TB and the server BIOS supports UEFI boot, select GPT. Otherwise it is probably easiest to stick with MBR and a "legacy boot" BIOS setting.

Next it will put you in the partition editor and want you to review the partitioning. If you are dedicating the entire disk to the OS, then just select "Finish". If you are using a single disk for the OS and the Emulab install, then you will need to shrink the FreeBSD partition to leave room for Emulab. Unfortunately, doing this requires you to "Delete" the existing "freebsd-ufs" and "freebsd-swap" partitions and "Create" new ones. They should be created under the "BSD" partition (probably ada0s1). The "freebsd-ufs" partition should be 50-100GB and should have a mountpoint of "/". The swap partition you can make 16GB or smaller. Once you have recreated these partitions select "Finish".

Do **not** create any user accounts yet, and just log in as root for the time being.

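If you prefer to lay the disk out from a shell instead of the partition editor, the rough `gpart` equivalent of the MBR layout described above looks like this. This is a sketch only: the disk name `ada0` and the sizes are assumptions, and these commands destroy any existing data on the disk.

```
# Rough manual equivalent of the installer's MBR layout.
# Assumes the first disk is ada0; adjust names and sizes to your hardware.
gpart create -s mbr ada0                   # MBR partitioning scheme
gpart add -t freebsd ada0                  # creates slice ada0s1
gpart create -s bsd ada0s1                 # BSD label inside the slice
gpart add -t freebsd-ufs -s 100G ada0s1    # root filesystem
gpart add -t freebsd-swap -s 16G ada0s1    # swap
```

For a GPT layout you would instead use `gpart create -s gpt` and add `freebsd-boot` (or `efi`), `freebsd-ufs`, and `freebsd-swap` partitions directly on the disk.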
Installation will complete and it will ask if you want to reboot the machine. Do so, and after rebooting, log in as root.

Run `freebsd-update` to pick up security patches:

```
env PAGER=cat freebsd-update fetch
freebsd-update install
```

## Create Emulab Partitions

Now you need to create the partitions and filesystems for the Emulab-required directories:

* **/groups/** Space for files shared by the sub-groups of projects. Subgroups allow for private storage within a project and are primarily used for "group projects" in classes. If you plan to use this mechanism, you should again allocate from 10GB to 100s of GB. If not, maybe 1-10GB. [100GB]

Because of the vague storage requirements, `/users`, `/proj`, and `/groups` are often just subdirectories on the same filesystem (UFS) or part of the same storage pool (ZFS). This allows you to avoid hard decisions about how much space to allocate to each. Just put all remaining space in one filesystem and use it for all three.

In fact, the only reasons you might want to make them separate filesystems would be to prevent one of the hierarchies from filling up the entire disk or if you want to enforce distinct quotas for a user in the `users` and `proj` filesystems. We recommend that you [use ZFS](#using-zfs) for these cases.

**Note:** Since `/share` is exported read-only, FreeBSD requires that it be on a separate filesystem from anything that is exported read-write. So while `/users`, `/proj`, and `/groups` can be on the same filesystem, `/share` cannot.

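As an illustration of that restriction, the corresponding `/etc/exports` entries might look like the following. The client host name is a placeholder, and all directories on a single line must live on the same filesystem:

```
/users /proj /groups -maproot=root pc1.example.net
/share -ro -maproot=root pc1.example.net
```

If `/share` were on the same filesystem as the read-write exports, `mountd` would reject the conflicting `-ro` line.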
#### Using UFS

The traditional UFS filesystem is best used with a single redundant underlying volume, either a hardware RAID provided volume, or a virtual disk provided by the VM host. Otherwise you are taking your chances with a disk failure. While you can use FreeBSD's `gmirror` or `graid` to implement RAID or interact with a software RAID controller, it is recommended that you [use ZFS instead](#using-zfs) if you need to build a multi-disk redundant configuration.

Here are two examples of configuring partitions and filesystems on a single disk, either for a system with a large single disk shared by both the OS and Emulab, or with a second disk dedicated to Emulab bits.

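For the second-disk case, a hypothetical sequence might put everything in one filesystem mounted at `/z` and symlink the Emulab directories into it. The disk name `da1` and the `emulab` label are assumptions; these commands destroy any existing data on that disk.

```
# Sketch: dedicate a second disk (da1) to Emulab storage.
gpart create -s gpt da1
gpart add -t freebsd-ufs -l emulab da1     # labeled /dev/gpt/emulab
newfs -U /dev/gpt/emulab                   # UFS with soft updates
mkdir /z
echo '/dev/gpt/emulab /z ufs rw 2 2' >> /etc/fstab
mount /z
mkdir /z/users /z/proj /z/groups
ln -s /z/users /users
ln -s /z/proj /proj
ln -s /z/groups /groups
```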
#### Using ZFS

If you have multiple physical disks available for Emulab storage and they are not combined using hardware RAID, then you can use ZFS to create RAID1, RAID10, RAID5, RAID50, RAID6 or RAID60 style configurations. Using ZFS permits other nice features as well, including easy creation and resizing of filesystems and the ability to create per-filesystem snapshots for backup purposes.

In the simplest form, ZFS can be a drop-in replacement for UFS by creating single filesystems for the various Emulab file hierarchies; e.g., using a ZFS filesystem for `/users` instead of a UFS filesystem. It should be noted however that Emulab software does not currently have the ability to manage UFS-style per-user quotas within single ZFS filesystems, so it cannot currently be used in this way for `/users`, `/proj`, or `/groups` if quotas are desired. It does however work fine if you are not using quotas and it works fine for the `/usr/testbed` and `/share` hierarchies.

However, the primary use of ZFS in Emulab is to implement individual per-user and per-project filesystems. This enables disk quotas trivially by limiting the amount of storage each filesystem has, and works regardless of which user is writing to the filesystem. This avoids one of the biggest problems with UFS quotas: the impracticality of enforcing a quota on the "root" user, which allows, for example, a user running as root to fill up the entire `/users` filesystem.
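A minimal sketch of that model, assuming two spare disks `da1` and `da2` and a hypothetical user `alice`: a mirrored pool, a parent filesystem for `/users`, and one quota-limited child filesystem per user.

```
# Sketch only: disk names, pool name, and user are placeholders.
zpool create z mirror da1 da2               # RAID1-style pool, mounted at /z
zfs create -o mountpoint=/users z/users     # parent for per-user filesystems
zfs create -o quota=10G z/users/alice       # alice's home, capped at 10GB
```

Children inherit the parent's mountpoint, so `z/users/alice` appears at `/users/alice`; the `quota` property caps it no matter which user (including root) is writing.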