Implement an "on server" strategy for copying persistent datasets.
This is implemented as a variant of createdataset. If you do:

    createdataset -F pid/old pid/new

it will create a new dataset, initializing it with the contents of old. The new dataset will of course have the same size, type, and filesystem type (if any). Right now, old and new both have to be in the same project, and new gets placed in the same pool on the same server (i.e., this is a local "zfs send | zfs recv" pipeline).

Implementing copy as a variant of create will hopefully make it easy for Leigh in the portal interface, since he doesn't have to treat it any differently than a normal create: fire it off in the background and wait until the lease state becomes "valid".

Since a copy could take hours or even days, there are plenty of opportunities for failure that I have not considered much yet, e.g., the storage server or boss rebooting in the middle. These things could happen already, but we have just made the window of opportunity much larger.

Anyway, this mechanism can serve as the basis for creating persistent datasets from clones or other ephemeral datasets.
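For reference, the server-side copy boils down to something like the following sketch. The pool name (tank), dataset path layout, and snapshot naming are assumptions for illustration, not the actual code; the real implementation lives in the createdataset path and handles cleanup and error cases.

```shell
# Hypothetical sketch of the local copy pipeline, assuming datasets
# live at <pool>/<pid>/<name>. Names here are illustrative only.
old=tank/myproj/old
new=tank/myproj/new

# Snapshot the source so we send a consistent point-in-time image.
zfs snapshot "${old}@copy"

# Local send/recv: new inherits the contents (and thus size and
# filesystem type) of old at the moment of the snapshot.
zfs send "${old}@copy" | zfs receive "$new"

# Drop the transient snapshots on both source and destination.
zfs destroy "${old}@copy"
zfs destroy "${new}@copy"
```

A failure mid-pipeline (e.g., a reboot) would leave a partially received dataset and a stray snapshot behind, which is exactly the kind of window the paragraph above is worried about.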