Performance improvements to the vnode startup path.
The biggest improvement happened on day one, when I took out the 20-second sleep between vnode starts in bootvnodes. That appears to have been an artifact of an older time and an older Xen. Or, someone smarter than me saw the potential of getting bogged down for, oh say, three weeks trying to micro-optimize the process and instead just went for the conservative fix!

Following day one, the ensuing couple of weeks was a long, strange trip to find the maximum number of simultaneous vnode creations that could be done without failure. In that time I tried a lot of things, generated a lot of graphs, and produced and tweaked a lot of new constants, and in the end wound up with the same two magic numbers (3 and 5) that were in the original code! To distinguish myself, I added a third magic number (1, the loneliest of them all).

All I can say is that now the choice of 3 or 5 (or 1) is based on more solid evidence than before. Previously it was 5 if you had a thin-provisioning LVM, 3 otherwise. Now it is based more directly on host resources, as described in a long comment in the code, the important part of which is:

#
# if (dom0 physical RAM < 1GB) MAX = 1;
# if (any swap activity) MAX = 1;
#
# This captures pc3000s/other old machines and overloaded (RAM) machines.
#
# if (# physical CPUs <= 2) MAX = 3;
# if (# physical spindles == 1) MAX = 3;
# if (dom0 physical RAM <= 2GB) MAX = 3;
#
# This captures d710s, Apt r320, and Cloudlab m510s. We may need to
# reconsider the latter since its single drive is an NVMe device.
# But first we have to get Xen working with them (UEFI issues)...
#
# else MAX = 5;

In my defense, I did fix some bugs and stuff too (and did I mention the cool graphs?). See comments in the code and gitlab emulab/emulab-devel issue #148.
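For illustration only, here is a minimal sketch of that heuristic as a standalone Perl function. It is not the actual bootvnodes code; the four arguments are hypothetical stand-ins for however the real script probes dom0 resources.

    use strict;
    use warnings;

    #
    # Sketch of the MAX (simultaneous vnode creations) selection above.
    # $dom0ramMB, $swapactive, $ncpus, and $nspindles are assumed inputs,
    # not the real probing code.
    #
    sub choose_max_concurrent($$$$)
    {
        my ($dom0ramMB, $swapactive, $ncpus, $nspindles) = @_;

        #
        # Old or RAM-overloaded machines: serialize vnode creation.
        #
        return 1
            if ($dom0ramMB < 1024 || $swapactive);

        #
        # Modest machines: few CPUs, a single spindle, or little RAM.
        #
        return 3
            if ($ncpus <= 2 || $nspindles == 1 || $dom0ramMB <= 2048);

        #
        # Everything else gets the full concurrency.
        #
        return 5;
    }

    # Example: a dual-socket host with one spindle and 12GB of RAM => 3
    print choose_max_concurrent(12288, 0, 2, 1), "\n";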