- 19 Jan, 2018 1 commit
-
David Johnson authored
Somebody left something somewhere...
-
- 29 Nov, 2017 1 commit
-
David Johnson authored
Now we've found a libc (musl) whose res_init() function just returns 0 :). res_init() is the final hope in tmcc's getbossnode() function. Obviously this is a crappy workaround, but it is better than adding a final gasp like parsing /etc/resolv.conf manually.
-
- 17 Nov, 2017 1 commit
-
David Johnson authored
-
- 27 Oct, 2017 4 commits
-
David Johnson authored
I'm not sure if all containers will want stdin opened or attached on console (i.e. for the master process), but for now we'll just turn it on.
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
-
- 26 Oct, 2017 1 commit
-
David Johnson authored
This means that users can still use our ssh URLs to reach containers that don't run sshd. We run a private sshd with per-container Ports to listen on and Match blocks containing ForceCommand directives that run docker exec $vnode_id <shell>; the user can configure which shell.
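A minimal sshd_config sketch of this arrangement (the port, vnode id, and shell here are hypothetical; the real config is generated per-container by the clientside code):

```
# Private sshd: one listening port per container
Port 2201

# For connections arriving on that port, ignore whatever command the
# user asked for and drop them into the container's shell instead
Match LocalPort 2201
    ForceCommand docker exec -it pcvm1-1 /bin/sh
```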
-
- 24 Oct, 2017 1 commit
-
David Johnson authored
We do this similarly to Xen. There's a new script (container2pty.py) that attaches to the Docker container via the docker daemon and exports its stdio as a pty. Then we run capture on a symlink to that pty. New options to capture tell it to keep retrying to open the pty up to maxretries times (we invoke it with infinitely many retries), and to not prepend /dev to the device string.
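A minimal sketch of the pty-export idea, assuming a local child process in place of the real docker-daemon attach that container2pty.py performs: the child's stdio goes to the slave side of a pty, and its console output is read back from the master side (which is what capture would open, via a symlink).

```python
import os
import pty
import subprocess

# Create a pty pair; the child gets the slave end as its stdio,
# the parent reads the "console" from the master end.
master, slave = pty.openpty()
proc = subprocess.Popen(["echo", "hello from the container"],
                        stdin=slave, stdout=slave, stderr=slave,
                        close_fds=True)
os.close(slave)          # only the child holds the slave side now
proc.wait()
output = os.read(master, 1024).decode()
print(output.strip())
```

The real script keeps the master side open indefinitely and republishes it for capture; the retry options on capture exist because this pty only appears once the attach succeeds.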
-
- 05 Oct, 2017 3 commits
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
(Of course, there is no standard Perl Docker Engine API, so I wrote my own thin Perl library, basically error handling and JSON marshalling goo. I had to deal with LWP speaking HTTP over a UNIX socket to Docker, which is wacky; fortunately there was a module for that, but I can confirm that LWP is a pain when you get off the beaten track. Anyway, I added a little client that is useful for debugging.)

Also, I refactored things a bit so there is now an emulabize-image script that can be run to test emulabization of Docker images separately from vnode creation. Also, there is now a Docker-specific version of the prepare script; it actually gets run during image commit as an ONBUILD instruction. Finally, we now always copy the image's master passwd files into /etc/emulab instead of using those installed by client-install. Those just don't apply at all here, since we have no idea what the base image is; the uids/gids in the clientside dirs could be a complete mismatch for the image.
-
- 24 Sep, 2017 1 commit
-
David Johnson authored
-
- 03 May, 2017 4 commits
-
David Johnson authored
This really seems to be a worse problem in the community edition; hmm.
-
David Johnson authored
("mysystemd"?? Sigh.)
-
David Johnson authored
-
David Johnson authored
-
- 02 May, 2017 2 commits
-
David Johnson authored
This fixes up, improves, and enables by default the use of LVM as infrastructure. Also removes some old LVM code that had been copied from libvnode_xen. The Docker storage backend can still be aufs; in that case the aufs data just gets stored in one massive LV. Or the storage backend can be devicemapper with LVM thin provisioning (the new default). See clientside/tmcc/linux/docker/README.md in this commit for more detail, especially on LV sizing.
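For illustration only (not the config this code generates), a devicemapper-on-LVM-thin-pool backend is typically selected via /etc/docker/daemon.json along these lines; the thin-pool device name here is hypothetical:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool",
    "dm.use_deferred_removal=true",
    "dm.use_deferred_deletion=true"
  ]
}
```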
-
David Johnson authored
(Or we'll use whichever of these is installed.) Basically, the CE has been patched to be much more functional than the current Ubuntu release (i.e., it interferes less with bridges and actually makes a Docker restart possible!).
-
- 01 May, 2017 5 commits
-
David Johnson authored
-
David Johnson authored
-
David Johnson authored
(This can happen if we removed the Docker network first, and that also removed the bridge. More recent Docker libnetworks are patched to remember whether they created the bridge and only remove it in that case; but not all versions in the wild are patched yet.)
-
David Johnson authored
-
David Johnson authored
Our customized runit supports a systemd-like mode that allows init to exit inside containers (and to be signaled to halt from outside, via SIGRTMIN+3). This is critically important for Docker containers, since Docker will wait ~10 seconds before sending a final kill on 'docker restart'. We were already building runit for rpm-based distros, since it's not packaged by default for them, so this just takes it a little further. Now container death is nearly instantaneous. (Also fix a couple of stray runit config bugs in stages 1, 2, and 3.)
-
- 28 Apr, 2017 1 commit
-
David Johnson authored
Had to rewrite the Docker control net init code for the case of !$USE_MACVLAN_CNET (i.e., use a bridge for the control net) and !$ISREMOTENODE. To sum up, there was yet another case, which I hadn't thought would be a problem, where Docker cannot handle our setup of the bridge (where we set the real control net IP as the primary IP, and the virt control net IP as an alias). See comments in the commit for more detail. Hopefully this is the last case.
-
- 27 Apr, 2017 3 commits
-
David Johnson authored
Have to consider the case where Docker recreates the control net bridge and assigns the virtual control net IP to it, but doesn't add the control net device to the bridge (it wouldn't know how to do that anyway). So handle that, and some other weird racy things (e.g., ip addr flush seems to screw up a subsequent ip addr replace; just sleep(1), ugh).
-
David Johnson authored
-
David Johnson authored
(And fix it up for Docker...)
-
- 24 Apr, 2017 1 commit
-
David Johnson authored
See clientside/tmcc/linux/docker/README.md for design notes. See clientside/tmcc/linux/docker/dockerfiles/README.md for a description of how we automatically Emulabize existing Docker images. Also, this mostly fits within the existing vnodesetup path, but I did modify mkvnode.pl to allow the libvnode backend to provide a vnodePoll wait loop instead of the builtin vnodeState loop.
-