- Sep 02, 2014
Leigh B Stoller authored
shared hosts. This is required for OpenVZ but not for XEN, so this does mean that when using ZFS, shared OpenVZ hosts are not supported. Not likely to be a problem.
- Aug 20, 2014
Mike Hibler authored
If the libevent package is installed, there will be a /usr/local/include/event.h, which conflicts with our event.h in the clientside/lib/event directory.
- Aug 14, 2014
Leigh B Stoller authored
- Aug 07, 2014
Leigh B Stoller authored
Mike Hibler authored
It was returning info for blockstores of the same name in different experiments...oops!
- Jul 25, 2014
Leigh B Stoller authored
IMAGEPROVENANCE is on, since that means the site has also upgraded their MFSs and XEN client side.
- Jul 17, 2014
Mike Hibler authored
From the comment:

    * "BOOTPART=" confuses the old rc.frisbee argument parsing
    * which looks for "PART=" with the RE ".*PART=" which will
    * match BOOTPART= instead. Thus an old script loading a
    * whole disk image (PART=0) winds up trying to load it in
    * partition 2 (BOOTPART=2). So we can pick one of two
    * versions, the one in effect when rc.frisbee changed its
    * argument parsing (v30, circa 6/28/2010) or the version
    * in effect when BOOTPART was added (v36, circa 6/13/2013).
    * We choose the latter.
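The RE problem the comment describes can be reproduced in a few lines. This is only a sketch: the argument string below is hypothetical, modeled on the description, not the real rc.frisbee input format.

```python
import re

# Hypothetical rc.frisbee-style argument string (not the real format):
args = "ADDR=235.252.1.187 PART=0 BOOTPART=2"

# Old parsing: the greedy ".*PART=" backtracks to the LAST "PART=",
# which is the tail of "BOOTPART=", so PART=0 is read as partition 2.
old = re.search(r".*PART=(\d+)", args)
print(old.group(1))   # -> "2" (wrong)

# Requiring "PART=" to stand alone (start of string or after a space)
# avoids matching the tail of "BOOTPART=":
new = re.search(r"(?:^|\s)PART=(\d+)", args)
print(new.group(1))   # -> "0" (correct)
```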
- Jul 12, 2014
Mike Hibler authored
This prevents a line with a single "." from meaning EOF to sendmail. How arcane! I discovered this when I ran a create_image and I didn't get the complete log mailed to me. This is because create_image did a frisbee download of an image with a single chunk, which of course printed out:

    Using Multicast 235.252.1.187
    Joined the team after 0 sec. ID is 1586355915.
    File is 1 chunks (963200 bytes)
    .

Fortunately, "arcane" is my middle name, so it didn't take me long to find this...
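A lone "." is the SMTP end-of-data marker, so piping raw text into sendmail truncates the message there. The commit does not say which fix was used; the usual options are invoking sendmail with `-i`/`-oi` (ignore lone dots) or dot-stuffing the body per RFC 5321 §4.5.2. A minimal sketch of the latter:

```python
def dot_stuff(body: str) -> str:
    # SMTP dot-stuffing: prefix an extra "." to any line that begins
    # with ".", so a bare "." in the body is not taken as end-of-data.
    return "\n".join("." + ln if ln.startswith(".") else ln
                     for ln in body.split("\n"))

log = "File is 1 chunks (963200 bytes)\n.\nDone."
print(dot_stuff(log))
# File is 1 chunks (963200 bytes)
# ..
# Done.
```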
- Jul 02, 2014
Leigh B Stoller authored
- Jul 01, 2014
Leigh B Stoller authored
- May 30, 2014
Leigh B Stoller authored
of gre so we can tell on the client to reduce the MTU. MTU is a pain.
- May 07, 2014
Mike Hibler authored
New loadinfo returns:

IMAGELOW, IMAGEHIGH: range of sectors covered by the image. This is NOT the same as what imageinfo or imagedump will show. For partition images, these low and high values are adjusted for the MBR offset of the partition in question. So when loading a Linux image, expect values like 6G and 12G. The intent here (not yet realized) is that these values will be used to construct an MBR/GPT on the fly, rather than using hardcoded magic MBR versions. You can get the uncompressed size of the image with (high - low + 1).

IMAGESSIZE: the units of the low/high values. Always 512 right now, may be 4096 someday.

IMAGERELOC: non-zero if the image can be placed at an offset other than IMAGELOW (i.e., it can be relocated). This may or may not prove useful for dynamic MBR construction... we will see.

Probably didn't need to bump the version here, but I am playing it safe.
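A sketch of how a client might consume these fields. The values are made up to match the "6G and 12G" example (expressed in 512-byte sectors); they are not real loadinfo output.

```python
# Hypothetical loadinfo reply for a partition image whose partition
# starts 6 GiB into the disk (illustrative values only).
loadinfo = {
    "IMAGELOW":  12582912,   # first sector covered (6 GiB / 512)
    "IMAGEHIGH": 25165823,   # last sector covered (just below 12 GiB)
    "IMAGESSIZE": 512,       # units of low/high, in bytes
    "IMAGERELOC": 0,         # image cannot be relocated
}

# Uncompressed size: (high - low + 1) sectors, times the sector size.
sectors = loadinfo["IMAGEHIGH"] - loadinfo["IMAGELOW"] + 1
size = sectors * loadinfo["IMAGESSIZE"]
print(size)   # 6442450944 bytes, i.e. 6 GiB
```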
- Apr 07, 2014
Gary Wong authored
I hear people are complaining about getting csh, and it seems reasonable that most GENI folk are more likely to be surprised/confused by tcsh than bash.
- Apr 03, 2014
Gary Wong authored
The old limit (2K) was big enough that essentially any hand-written rspec would work fine, but also small enough that pretty much any manifest for a Flack-generated request rspec would fail.
Leigh B Stoller authored
This allows dom0 to set the password of the guest at creation time, so that if something goes wrong, we can get in on the console. This also fixes an error where, on a shared node, we were returning the password hash of the physical host; return a per-node hash instead. Also abstract out the various places we read from /dev/urandom.
- Mar 25, 2014
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
This differs from the current firewall support, which assumes a single firewall for an entire experiment, hosted on a dedicated physical node. At some point, it would be better to host the dedicated firewall inside a XEN container, but that is a project for another day (year).

Instead, I added two sets of firewall rules to the default_firewall_rules table, one for dom0 and another for domU. These follow the current style setup of open, basic, closed, while elabinelab is ignored since it does not make sense for this yet. These two rule sets are independent: the dom0 rules can be applied to the physical host, and the domU rules can be applied to specific containers.

My goal is that all shared nodes will get the dom0 closed rules (ssh from local boss only) to avoid the ssh attacks that all of the racks are seeing. DomU rules can be applied on a per-container (node) basis. As mentioned above, this is quite different, and needed minor additions to the virt_nodes table to allow it.
- Mar 10, 2014
Mike Hibler authored
We have had the mechanism implemented in the client for some time, available at the site level or, in special cases, at the node level. New NS command:

    tb-set-nonfs 1

will ensure that no nodes in the experiment attempt to mount shared filesystems from ops (aka "fs"). In this case, a minimal homedir is created on each node with basic dotfiles and your .ssh keys. There will also be empty /proj, /share, etc. directories created. One additional mechanism we have now is that we do not export filesystems from ops to those nodes. Previously, it was all client-side and you could mount the shared FSes if you wanted to. By prohibiting the export of these filesystems, the mechanism is more suitable for "security" experiments.
- Jan 22, 2014
Mike Hibler authored
For persistent blockstores, is based on the value of the "readonly" virt_blockstore_attributes attribute if it exists. The RO attribute is set by libvtop when an attempt is made to use a lease that is in the 'grace' state.
- Jan 10, 2014
- Jan 08, 2014
Leigh B Stoller authored
or per vhost), we return the mac of the xen host since the host is acting as a router for the containers on that vhost.
- Dec 18, 2013
Leigh B Stoller authored
- Dec 16, 2013
Leigh B Stoller authored
info from the capture process we now start for each VM. Capture cannot do this directly, since experimental nodes cannot talk to the capserver.
- Dec 11, 2013
Mike Hibler authored
This is a bit hacky as noted in the comment:

    * XXX we only put out the PERSIST flag if it is set.
    * Since the client-side is stupid-picky about unknown
    * attributes, this will cause an older client to fail
    * when the attribute is passed. Believe it or not,
    * that is a good thing! This will cause an older
    * client to fail if presented with a persistent
    * blockstore. If it did not fail, the client would
    * proceed to unconditionally create a filesystem on
    * the blockstore, wiping out what was previously
    * there.
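The logic amounts to emitting the attribute only when it is set. A sketch with hypothetical field names (VOLNAME/VOLSIZE are the attribute names mentioned elsewhere in this log; the dict keys are made up):

```python
def blockstore_attrs(bs: dict) -> str:
    # Emit PERSIST only when set, so an old client (which rejects
    # unknown attributes) fails fast rather than unconditionally
    # creating a filesystem over a persistent blockstore.
    attrs = "VOLNAME={volname} VOLSIZE={size}".format(**bs)
    if bs.get("persist"):
        attrs += " PERSIST=1"
    return attrs

print(blockstore_attrs({"volname": "bs0", "size": 1024}))
print(blockstore_attrs({"volname": "bs1", "size": 2048, "persist": 1}))
```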
- Nov 22, 2013
Mike Hibler authored
A typo that Dave discovered. He is now the new linktest expert.
- Nov 07, 2013
Gary Wong authored
These are changes to the spec since originally discussed at GEC 17.
- Sep 09, 2013
Gary Wong authored
Don't quote results in a couple of places where quotes are not necessary. Omit results from subcommands which report an error.
Gary Wong authored
Leigh B Stoller authored
- Aug 28, 2013
Leigh B Stoller authored
the same field in the users table, so that clients know to update users.
- Aug 27, 2013
Leigh B Stoller authored
are the details, so they are recorded someplace.

The Racks do not have a real 172 router for the "jail" network. This is a mild pain, and one possibility would be to make the router be the physical node, so that each set of VMs is using its own router, thus spreading the load. Well, that does not work because we use bridge mode on the physical host, and so the packets leave the node before they have a chance to go through the routing code. Yes, iptables does have something called a brouter via ebtables, but I could not make that work after a lot of trying and tearing my hair out.

So the next not-so-best thing is to make the control node be the router by sticking an alias on xenbr0 for 172.16.0.1. Fine, that works, although performance could suffer. But what about NFS traffic to ops? It would be really silly to send that through the routing code on the control node, just to have it end up bridged into the ops VM. So I figured I would optimize that by changing domounts to return mounts that reference ops' address on the jail network. And in fact this worked fine, but only for shared nodes. It failed for exclusive VMs! In that case, we add a SNAT rule on the physical host that changes the source IP to be that of the physical host, so that users cannot spoof a VM on a shared node and mount an NFS filesystem they should not have access to. In fact, it failed for UDP mounts but not for TCP mounts. When I looked at the traffic with tcpdump, it appeared that return TCP traffic from ops was using its jail IP, but return UDP traffic was using the public IP. This confuses SNAT, and so the packets never get back into the VM.

So, this change basically looks at the sharing mode of the node: if it is shared we use the jail IP in the mounts, and if it is exclusive we use the public IP (and thus that traffic gets routed through the control node). This sucks, but I am worn down on this.
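The final policy reduces to a one-line decision. In this sketch the addresses are placeholders (172.16.0.1 is the xenbr0 alias from the text; the ops addresses are made up):

```python
OPS_JAIL_IP = "172.16.0.2"      # assumed jail-network address of ops
OPS_PUBLIC_IP = "203.0.113.10"  # placeholder public address of ops

def nfs_mount_ip(sharing_mode: str) -> str:
    # Shared vhosts mount ops via its jail-net address directly;
    # exclusive VMs must use the public address (routed through the
    # control node), because the SNAT rule on an exclusive host is
    # confused by return UDP traffic arriving from the public IP.
    return OPS_JAIL_IP if sharing_mode == "shared" else OPS_PUBLIC_IP

print(nfs_mount_ip("shared"))     # 172.16.0.2
print(nfs_mount_ip("exclusive"))  # 203.0.113.10
```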
- Aug 16, 2013
Mike Hibler authored
Return VOLNAME instead of BSID and VOLSIZE instead of SIZE. This is what the client-side parser expects, based on what I was told previously. Didn't notice until now because we had no local disk state in the DB.
- Aug 15, 2013
Gary Wong authored
This allows nodes in GENI slices to retrieve information about their sliver and slice via tmcc (or equivalent client-side support). The set of queries available and their names were agreed upon in GEC 17 sessions and subsequent discussions.
- Aug 13, 2013
Leigh B Stoller authored
- Jul 23, 2013
Leigh B Stoller authored
Disabled by default, enabled on Utah Emulab for testing.
Srikanth Raju authored
This way I can throw away all the manual mysql handling that I was doing.
Srikanth Raju authored
This was so we won't need a daemon library
Srikanth Raju authored