If you have multiple physical disks available for Emulab storage and they are not combined using hardware RAID, then you can use ZFS to create RAID1, RAID10, RAID5, RAID50, RAID6 or RAID60 style configurations. Using ZFS permits other nice features as well, including easy creation and resizing of filesystems and the ability to create per-filesystem snapshots for backup purposes.
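
As a sketch of what the parity-based styles look like (device names here are assumptions; zpool creation is covered in more detail below), RAID5-style pools use the `raidz` vdev type and RAID6-style pools use `raidz2`:

```
# RAID5-style pool (single parity) across four hypothetical disks
zpool create -m none z raidz /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4

# RAID6-style pool (double parity) across the same disks:
# zpool create -m none z raidz2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
```

Striping two such vdevs in one pool (listing `raidz` twice, each with its own disks) gives the RAID50/RAID60 equivalents.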

In the simplest form, ZFS can be a drop-in replacement for UFS, with a single ZFS filesystem created for each of the various Emulab file hierarchies; e.g., a ZFS filesystem for `/users` instead of a UFS filesystem. Note, however, that the Emulab software does not currently have the ability to manage UFS-style per-user quotas within a single ZFS filesystem, so ZFS cannot be used in this way for `/users`, `/proj`, or `/groups` when such quotas are desired. It works fine if you are not using quotas, and for the `/usr/testbed` and `/share` hierarchies.

However, the primary use of ZFS in Emulab is to implement individual per-user and per-project filesystems. This makes disk quotas trivial: each filesystem is simply limited in the amount of storage it can use, and the limit holds regardless of which user is writing to it. This avoids one of the biggest problems with UFS quotas: the impracticality of enforcing a quota on the "root" user, which allows, for example, a user running as root to fill up the entire `/users` filesystem.
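
As a sketch of what this looks like at the ZFS level (the user name is hypothetical, and in practice the Emulab software creates these filesystems for you):

```
# Per-user filesystem under the z/users parent, capped at 1GB;
# the cap applies no matter who is writing, including root
zfs create -o quota=1G z/users/alice

# The cap can be raised later without any remount
zfs set quota=2G z/users/alice
```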

**Note:** ZFS can still be used in a configuration with only a single extra disk, built on top of a HW RAID volume for example. You still gain all the advantages of the ZFS filesystem except for the redundancy it would otherwise provide. Be aware, however, that "stacking" volume managers in this way can mask errors and lead to poor performance.

To use ZFS for your install, you first create a zpool. For example, if you have two extra disks, `/dev/ada1` and `/dev/ada2`, you can create a mirrored zpool with:

```
zpool create -m none z mirror /dev/ada1 /dev/ada2
```

or a RAID10 striped mirror:

```
zpool create -m none z mirror /dev/ada1 /dev/ada2 mirror /dev/ada3 /dev/ada4
```

To create a single-disk zpool you would:

```
zpool create -m none z /dev/ada1
```

Once the zpool is created, you create filesystems for the various hierarchies. For example, for a 1TB zpool `z`:

```
# 50G for /usr/testbed
zfs create -o mountpoint=/usr/testbed -o quota=50G z/testbed
...
zfs create -o mountpoint=/groups -o quota=100G -o setuid=off z/groups
```

A couple of important points about these newly created filesystems. First, note that we appear to be creating a single filesystem for `/users`, etc., but these filesystems will actually be the parents of the per-user and per-project filesystems created underneath them. The parent filesystems serve as a source of inherited attributes, such as the mount point and the disabling of setuid files, so those attributes don't have to be specified for every child filesystem. Second, we are setting a quota on each parent filesystem. This limits the total disk space that can be used by all of its children and thus prevents, e.g., the `/users` filesystems from filling up all the space in the zpool. These quotas can be changed dynamically as needed, or removed entirely if you are not worried about how space is distributed among the file hierarchies.
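
A brief illustration of the inheritance and dynamic quotas described above, assuming the parent filesystems created earlier (the child filesystem name is hypothetical):

```
# The child inherits mountpoint and setuid=off from z/users,
# so it appears at /users/alice with setuid disabled
zfs create z/users/alice
zfs get -o property,value,source mountpoint,setuid z/users/alice

# The parent quota can be adjusted on the fly, or removed with "none"
zfs set quota=600G z/users
```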

The Emulab software is responsible for creating, and setting quotas (if any) on, individual user or project filesystems. Quota limits are specified in the definitions file as described in the next section.

## Updating the Definitions file

If you have combined any of `/users`, `/proj`, or `/groups` into one UFS filesystem, you will need to update your definitions file to reflect that. The `FSDIR_` variables should reflect the actual "real path" to the corresponding directory. For example, if you combined all three on the `/z` filesystem as in the above examples, you would need to set:

```
FSDIR_GROUPS=/z/groups
FSDIR_PROJ=/z/proj
FSDIR_USERS=/z/users
```

For separate UFS filesystems, or for ZFS filesystems, the default values are correct.

If you have enabled quotas on one or more of those UFS directories, then you need to set the `FS_WITH_QUOTA` variable to reflect the filesystem(s) that are affected:

```
# If /users, /proj, and /groups are all on one filesystem (/z)
FS_WITH_QUOTA="/z"

# If they are all distinct filesystems
FS_WITH_QUOTA="/users /proj /groups"
```

When using ZFS, you do not need to change this variable.

If you are using ZFS to manage the filesystems, you will need:

```
WITHZFS=1
ZFS_ROOT=z
ZFS_NOEXPORT=1
```

where `ZFS_ROOT` should be set to the name of the zpool you created in [using ZFS](#using-zfs). If you want to enforce quotas with ZFS, add:

```
ZFS_QUOTA_USER="1G"
ZFS_QUOTA_PROJECT="100G"
ZFS_QUOTA_GROUP="5G"
```

using your desired values. Recall that these limits are per *individual* user, project, or group and *not* the sum for the entire hierarchy of users, projects, or groups.
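
If you are unsure which pool name to use for `ZFS_ROOT`, `zpool list` shows the pools known to the system:

```
# List pool names, sizes, and health status
zpool list -o name,size,health
```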

* [Prev](install/Creating the Definitions File)
* [Next](install/Installing Emulab on ops)