  09 Apr, 2018 · 1 commit
    • bf7ace3f · Leigh Stoller
      Add another check for deleting the last node at an aggregate.

      Kobus' students keep trying to delete their one node, and I am not even
      sure how they can, since the option is supposed to be disabled (and in
      my checks, it is), but clearly I am missing something. For now, head
      this off earlier and return an error message that perhaps prompts them
      to ask a question.
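
      A minimal sketch of the kind of server-side guard described above; the
      function and its accessors are hypothetical, not the actual Emulab
      code:

        def check_delete_node(aggregate):
            # Hypothetical guard: refuse to delete the last node at an
            # aggregate, even though the UI option should already be
            # disabled.
            if aggregate.node_count() <= 1:
                return ("Refusing to delete the last node at this "
                        "aggregate; terminate the experiment instead.")
            return None  # no error; the deletion may proceed
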
  16 Feb, 2018 · 4 commits
    • 56f6d601 · Leigh Stoller
      A lot of work on the RPC code, among other things.

      I spent a fair amount of time improving error handling along the RPC
      path, as well as making the code more consistent across the various
      files. Also be more consistent in how the web interface invokes the
      backend and gets errors back, specifically for errors generated when
      talking to a remote cluster.
      
      Add checks before every RPC to make sure the cluster is not disabled
      in the database. Also check that we can actually reach the cluster,
      and that the cluster is not offline (NoLogins()) before we try to do
      anything. I might have to relax this a bit, but in general the check
      takes a couple of seconds, a small fraction of what most RPCs take.
      Return precise errors for clusters that are not available to the web
      interface, and show them to the user.
      
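      A rough sketch of that pre-RPC gate; NoLogins() is named in the
      commit, but the cluster object and its other accessors are assumed for
      illustration:

        def pre_rpc_check(cluster):
            # Administratively disabled in the database: fail fast, no RPC.
            if cluster.disabled_in_database():
                return "Cluster %s is disabled" % cluster.name
            # Quick reachability probe (e.g. a GetVersion() call); costs a
            # couple of seconds, small compared to most RPCs.
            if not cluster.reachable():
                return "Cluster %s is unreachable" % cluster.name
            # NoLogins() means the cluster is offline for maintenance.
            if cluster.no_logins():
                return "Cluster %s is offline" % cluster.name
            return None  # safe to issue the RPC
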
      Use webtasks more consistently between the web interface and backend
      scripts. Watch specifically for scripts that exit abnormally (exit
      before setting the exitcode in the webtask), which always means an
      internal failure; do not show those errors to users.
      
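      A one-function sketch of that convention; the webtask accessors here
      are assumed names, not the real interface:

        def exited_abnormally(webtask):
            # A script that exited without ever setting an exitcode in its
            # webtask failed internally; hide the details from the user.
            return webtask.process_exited() and webtask.exitcode() is None
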
      Show just those RPC errors that would make sense to users, and stop
      spewing script output at the user; send it only to tbops via the email
      that is already generated when a backend script fails fatally.
      
      But do not spew email for clusters that are not reachable or are
      offline. Ditto for several other cases that were generating mail to
      tbops instead of just showing the user a meaningful error message.
      
      Stop using ParRun for single-site experiments, which are 99% of
      experiments.
      
      For create_instance, a new "async" mode tells CreateSliver() to return
      before the first mapper run, which typically happens very quickly.
      Then watch for errors, for the manifest via Resolve, or for the slice
      to disappear. I expect this wait to be bounded, so we do not need to
      worry so much about timing it out (which is a problem on very big
      topologies). When we see the manifest, the RedeemTicket() part of
      CreateSliver is done and we are into the StartSliver() phase.
      
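      A sketch of that polling loop; CreateSliver() and Resolve() are the
      calls named in the commit, while the wrapper, the accessors, and the
      return shape are illustrative assumptions:

        import time

        def create_instance_async(aggregate, slice_obj, rspec, poll=5):
            # Async mode: CreateSliver() returns before the first mapper
            # run instead of waiting for it.
            aggregate.CreateSliver(slice_obj, rspec, asynchronous=True)
            while True:
                if slice_obj.gone():             # the slice disappeared
                    return ("failed", "slice is gone")
                result = aggregate.Resolve(slice_obj)
                if result.error:                 # mapper or ticket error
                    return ("failed", result.error)
                if result.manifest:
                    # RedeemTicket() is done; StartSliver() phase begins.
                    return ("starting", result.manifest)
                time.sleep(poll)
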
      For the StartSliver() phase, watch for errors and show them to users;
      previously we mostly lost those errors and just sent the experiment
      into the failed state. I am still working on this.

    • bb67bf1e · Leigh Stoller
      Add routine to check the status of an aggregate; is it enabled, and
      can we connect to it (using GetVersion())?
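
      A hedged sketch of such a status routine; GetVersion() is named in the
      commit, but the surrounding structure and accessors are assumptions:

        def check_aggregate_status(aggregate, timeout=10):
            # Disabled aggregates should not even be probed.
            if not aggregate.enabled():
                return (False, "aggregate is disabled")
            try:
                # GetVersion() is a cheap RPC; if it answers, the aggregate
                # is reachable and responding.
                aggregate.GetVersion(timeout=timeout)
            except Exception as e:
                return (False, "cannot connect: %s" % e)
            return (True, None)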