Address recursive LVM problems caused by local blockstores on a Xen vnode.
As I discovered back in the A3 days, setting up an LVM PV/VG inside an LVM LV can lead to confusion: the "inner" PV/VG/LV instances are visible to the "outer" LVM setup. As if this did not make things exciting enough, if you have two or more vnodes each with a local blockstore ("nonsysvol" or "any" placement), each of them creates a VG called "emulab" that bleeds through to the outside. Now the outer LVM thinks it has multiple VGs called "emulab", and it really doesn't like that.
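A minimal sketch of the bleed-through; the outer VG name "xen" and the LV name "pcvm1-1.disk1" are made-up placeholders, not what the code actually uses:

    lvcreate -n pcvm1-1.disk1 -L 10g xen      # outer LV: the vnode's extra disk
    pvcreate /dev/xen/pcvm1-1.disk1           # roughly what the blockstore setup
    vgcreate emulab /dev/xen/pcvm1-1.disk1    #   does from inside the vnode
    vgs                                       # after a pvscan/vgscan on the host,
                                              #   "emulab" now shows up next to "xen";
                                              #   a second vnode adds another "emulab"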
But wait, there's more! As an added bonus, vnode teardown can hang or blow up if the inner LVM state is still there at termination time. This is because the outer LV that serves as the (second) disk for the vnode cannot be removed: LVM now considers it a PV belonging to the inner VG!
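To illustrate, with the same hypothetical names: once the host has scanned (and possibly auto-activated) the inner "emulab" VG, the vnode's disk LV is claimed as a PV, so removing it refuses or wedges:

    lvremove -f xen/pcvm1-1.disk1    # fails or hangs while the inner VG still exists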
This likely only happens as the result of a pvscan or vgscan on the part of the vnode host; otherwise the outer LVM would never become aware of the inner LVM. These scans are certainly done at reboot time, but I think they may also be triggered when a new LV is created.
One Very Important Fix will be to give the VGs used by the blockstore subsystem unique names rather than calling them all "emulab". This will avoid total chaos on the outside. Tearing down the inner blockstore setup is more problematic: we have no notion of a "final shutdown" for a node, so I cannot just willy-nilly destroy blockstores when a node shuts down. This will probably have to be done on the outside, when we fail to tear down a vnode's LVs.
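A rough sketch of both pieces, again with hypothetical names and devices (the blockstore disk is assumed to show up as /dev/xvdb inside the vnode, and VNODEID stands in for however we identify the vnode):

    # 1) Inside the vnode: derive a unique inner VG name from the vnode ID
    #    instead of always using "emulab".
    vgcreate "emulab-${VNODEID}" /dev/xvdb

    # 2) On the vnode host: if removing the vnode's disk LV fails, look for a
    #    nested VG on it, tear that down first, then retry.
    DISK=/dev/xen/pcvm1-1.disk1
    if ! lvremove -f xen/pcvm1-1.disk1; then
        VG=$(pvs --noheadings -o vg_name "$DISK" | tr -d ' ')
        if [ -n "$VG" ]; then
            vgchange -an "$VG"       # deactivate any inner LVs holding the disk open
            vgremove -f "$VG"        # remove the nested VG
            pvremove -ff "$DISK"     # wipe the PV label off the outer LV
        fi
        lvremove -f xen/pcvm1-1.disk1
    fi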