Mike Hibler authored
tb-set-node-plab-role $plc plc to make it the PLC node. Then any number of other nodes are declared as: tb-set-node-plab-role $plab1 node to make them inner plab nodes. Unlike elabinelab, there is no magic "tb-plab-in-elab" command which implies the topology, you put all the plab nodes in a LAN or whatever yourself. This may or may not be a good idea. Anyway, these NS commands set DB state in virt_nodes and reserved much like elabinelab. During swapin, the dhcpd.conf file is rewritten so that inner plab nodes have their "filename" set to "pxelinux.0" and their "next-server" set to the designated PLC node. The PLC node will then be loaded/booted before anything is done to the inner-plab nodes. After it comes up, the inner plab nodes are rebooted and declared as up. There is a new tmcd command "eplabconfig" (suggestions for a new name welcom!), which returns info like: NAME=plc ROLE=plc IP=155.98.36.3 MAC=00d0b713f57d NAME=plab1 ROLE=node IP=155.98.36.10 MAC=0002b3877a4f NAME=plab2 ROLE=node IP=155.98.36.34 MAC=00d0b7141057 to just the PLC node (returns nothing to any other node). The implications of this setup are: * The PLC node must act as a TFTP server as we have discussed in the past. The TMCC info above is hopefully enough to configure pxelinux, if not we can change it. * The PLC node is responsible for loading the disks of inner plab nodes. This is implied by the setup, where we change the dhcpd.conf file before doing anything to the inner nodes. Thus, once the inner nodes are rebooted, they will be talking pxelinux with PLC, and not to boss. This step is dubious, as we could no doubt load the disks faster than whatever plab uses can. But it simplified the setup (and is more realistic!). The alternative, which is something that might be useful anyway, is to introduce a "state" after which nodes have been reloaded but before they are rebooted. 
With that, we can reload the plab nodes and then change the dhcpd.conf file so when they reboot they start talking to the PLC.
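To illustrate how the PLC node might consume the "eplabconfig" output above, here is a minimal sketch in Python. It assumes only the key=value line format shown in this message; the idea of naming per-node pxelinux.cfg files after the MAC (the "01-xx-xx-..." convention) is standard pxelinux behavior, but how the PLC actually configures TFTP is up to the PLC-side software, not specified here.

```python
def parse_eplabconfig(output):
    """Parse 'NAME=... ROLE=... IP=... MAC=...' lines into a list of dicts."""
    nodes = []
    for line in output.strip().splitlines():
        # Each line is whitespace-separated KEY=VALUE pairs.
        entry = dict(pair.split("=", 1) for pair in line.split())
        nodes.append(entry)
    return nodes


# Sample output as returned to the PLC node (taken from this message).
sample = """\
NAME=plc ROLE=plc IP=155.98.36.3 MAC=00d0b713f57d
NAME=plab1 ROLE=node IP=155.98.36.10 MAC=0002b3877a4f
NAME=plab2 ROLE=node IP=155.98.36.34 MAC=00d0b7141057
"""

if __name__ == "__main__":
    nodes = parse_eplabconfig(sample)
    # A PLC acting as TFTP server could create one pxelinux.cfg entry per
    # inner node, using the MAC-based filename pxelinux searches for.
    for n in nodes:
        if n["ROLE"] == "node":
            mac_name = "01-" + "-".join(
                n["MAC"][i:i + 2] for i in range(0, 12, 2))
            print("pxelinux.cfg/" + mac_name)
```

Running this prints one pxelinux.cfg filename per inner plab node, e.g. `pxelinux.cfg/01-00-02-b3-87-7a-4f` for plab1.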
9512772e