- 04 Jun, 2017 1 commit
Mike Hibler authored
- 03 Jun, 2017 2 commits
Mike Hibler authored
Mike Hibler authored
- 02 Jun, 2017 15 commits
Leigh B Stoller authored
Mike Hibler authored
Mike Hibler authored
Mike Hibler authored
Mike Hibler authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored
supported).
Leigh B Stoller authored
Leigh B Stoller authored
- 01 Jun, 2017 6 commits
Leigh B Stoller authored
Mike Hibler authored
Mike Hibler authored
Leigh B Stoller authored
Leigh B Stoller authored
this used to work, odd that we even did this.
Leigh B Stoller authored
'Expression #1 of ORDER BY clause is not in SELECT list, references column 'tbdb.m.date_approved' which is not in SELECT list; this is incompatible with DISTINCT'
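This is the stricter MySQL 5.7 rule that a DISTINCT query may not ORDER BY an expression that is missing from its select list. A minimal sketch of the pattern and the usual fix; the table name and columns below are chosen for illustration only (the alias m and date_approved come from the error text, not from the actual query touched in this commit):

    -- Rejected by MySQL 5.7: the ORDER BY column is not in the SELECT list,
    -- which is ambiguous in combination with DISTINCT.
    SELECT DISTINCT m.uid
      FROM group_membership AS m
     ORDER BY m.date_approved;

    -- Accepted: adding the ordered-by column to the SELECT list (or dropping
    -- DISTINCT) resolves the ambiguity.
    SELECT DISTINCT m.uid, m.date_approved
      FROM group_membership AS m
     ORDER BY m.date_approved;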
- 31 May, 2017 5 commits
Mike Hibler authored
(But then I just if-0'ed out the whole planetlab related query).
Leigh B Stoller authored
Mike Hibler authored
partition/partitions/stored/virtual
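That fragment reads like a list of identifiers that collide with words newer MySQL versions treat as reserved words or keywords (PARTITION, STORED, VIRTUAL). If that is what this commit addresses, the usual remedy is to backtick-quote those names wherever they are used as table or column identifiers; the schema in this sketch is assumed, not taken from the commit:

    -- Backtick-quote identifiers that collide with newer reserved words or
    -- keywords; quoting is harmless where it is not strictly required.
    SELECT node_id, `partition`, osid
      FROM `partitions`
     WHERE node_id = 'pc1';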
Leigh B Stoller authored
- 30 May, 2017 11 commits
Mike Hibler authored
Leigh B Stoller authored
Leigh B Stoller authored
error location. This closes issue #292.
Leigh B Stoller authored
Mike Hibler authored
Add a setzfsquotas script to fix up existing quotas, add an update script that invokes it once at boss-install time, and fix accountsetup so it properly sets both quotas going forward.
Mike Hibler authored
Add a defs-* multiplier variable for setting "quota" based on "refquota". For most sites this will just be 1.0; on the mothership, where we use ZFS snapshots for backup, we are going to start with 2.0.
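As a purely illustrative example of the arithmetic: with the mothership factor of 2.0, a dataset with a 100G refquota would get a 200G quota, the extra headroom covering snapshot space (the 100G figure is made up for the example).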
Leigh B Stoller authored
Leigh B Stoller authored
In the beginning, the number and size of experiments were small, so storing the entire slice/sliver status blob as JSON in the web task was fine, even though we had to lock tables to prevent races between the event updates and the local polling. But lately those JSON blobs have grown huge and the lock is bogging things down; we cannot keep up with the number of events coming from all the clusters and get really far behind.

So I have moved the status blobs out of the per-instance web task and into new tables, one per slice and one per node (sliver). This keeps the blobs, and thus the lock time, very small, so now we can keep up with the event stream. If this problem ever becomes big enough again, we can switch the per-sliver table to InnoDB and do row locking instead of table locking, but I do not think that will happen.
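A rough sketch, in SQL, of what the split could look like; the table names, columns, and engine choice below are invented for illustration and are not the actual schema added by this commit:

    -- One small status row per slice instead of one large JSON blob in the
    -- per-instance web task.
    CREATE TABLE apt_instance_slice_status (
        uuid     varchar(40) NOT NULL,   -- instance/slice identifier
        status   varchar(32) NOT NULL,
        modified datetime    NOT NULL,
        PRIMARY KEY (uuid)
    ) ENGINE=MyISAM;

    -- One small row per node (sliver).  Rows are tiny, so table locks are
    -- held only briefly; switching ENGINE to InnoDB later would give
    -- row-level locking if table locking ever becomes a bottleneck again.
    CREATE TABLE apt_instance_sliver_status (
        uuid     varchar(40) NOT NULL,   -- instance/slice identifier
        node_id  varchar(32) NOT NULL,
        status   varchar(32) NOT NULL,
        modified datetime    NOT NULL,
        PRIMARY KEY (uuid, node_id)
    ) ENGINE=MyISAM;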
Leigh B Stoller authored
Leigh B Stoller authored
Leigh B Stoller authored