A lot of work on the RPC code, among other things.
I spent a fair amount of time improving error handling along the RPC path, as well as making the code more consistent across the various files. Also be more consistent in how the web interface invokes the backend and gets errors back, specifically for errors that are generated when talking to a remote cluster.

Add checks before every RPC to make sure the cluster is not disabled in the database. Also check that we can actually reach the cluster, and that the cluster is not offline (NoLogins()), before we try to do anything. I might have to relax this a bit, but in general the check takes a couple of seconds, which is a small fraction of what most RPCs take. Return precise errors for clusters that are not available to the web interface, and show them to the user.

Use webtasks more consistently between the web interface and backend scripts. Watch specifically for scripts that exit abnormally (exit before setting the exitcode in the webtask), which always means an internal failure; do not show those to users. Show only those RPC errors that would make sense to users; stop spewing script output to the user and instead send it just to tbops via the email that is already generated when a backend script fails fatally. But do not send email for clusters that are not reachable or are offline. Ditto for several other cases that were generating mail to tbops instead of just showing the user a meaningful error message.

Stop using ParRun for single-site experiments, which account for 99% of experiments.

For create_instance, add a new "async" mode that tells CreateSliver() to return before the first mapper run, which typically happens very quickly. Then watch for errors, for the manifest (via Resolve), or for the slice to disappear. I expect this wait to be bounded, so we do not need to worry so much about timing it out (which is a problem on very big topologies). When we see the manifest, the RedeemTicket() part of the CreateSliver is done and we are into the StartSliver() phase. For the StartSliver phase, watch for errors and show them to users; previously we mostly lost those errors and just sent the experiment into the failed state. I am still working on this.
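To make the flow above concrete, here is a rough Python sketch of the pre-RPC checks and the async polling loop. All of the names here (check_cluster_usable, cluster.resolve, REDEEM_WAIT_LIMIT, and so on) are made up for illustration; this is not the actual backend code, just the shape of the checks and the wait for the manifest around the CreateSliver()/Resolve()/StartSliver() RPCs.

```python
import time

POLL_INTERVAL = 5          # seconds between Resolve() polls (illustrative)
REDEEM_WAIT_LIMIT = 600    # assumed bound on the RedeemTicket phase (illustrative)


def check_cluster_usable(cluster):
    """Cheap sanity checks before issuing any RPC; a couple of seconds,
    versus the much longer time most RPCs take."""
    if cluster.disabled_in_db():       # cluster marked disabled in the database
        raise RuntimeError(cluster.name + " is disabled")
    if not cluster.reachable():        # can we actually reach the cluster?
        raise RuntimeError(cluster.name + " is unreachable")
    if cluster.no_logins():            # stand-in for the NoLogins() check
        raise RuntimeError(cluster.name + " is offline")


def wait_for_manifest(cluster, slice_urn):
    """After an async CreateSliver(), poll Resolve() until the manifest
    shows up, the slice disappears, or an error is reported."""
    waited = 0
    while waited < REDEEM_WAIT_LIMIT:
        sliver = cluster.resolve(slice_urn)   # stand-in for the Resolve() RPC
        if sliver is None:
            # Slice disappeared: CreateSliver failed on the cluster side.
            raise RuntimeError("slice disappeared during RedeemTicket phase")
        if sliver.get("error"):
            # Surface a meaningful error to the user instead of losing it.
            raise RuntimeError("CreateSliver failed: " + sliver["error"])
        if sliver.get("manifest"):
            # Manifest present: the RedeemTicket() part is done and we are
            # now in the StartSliver() phase.
            return sliver["manifest"]
        time.sleep(POLL_INTERVAL)
        waited += POLL_INTERVAL
    raise RuntimeError("timed out waiting for manifest")
```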