Mike Hibler authored
The xentt directory contains a library and tools for controlling replay. Maybe a lot of this should be subsumed into the target API, introducing a new "xen replay" target with assorted new methods that apply only to replay, but I'll worry about that later.

* tt_record and tt_replay are scripts to simplify recording and replay, including snapshotting an LVM, restarting daemons, etc. Before using these you will need to "make install" in this directory. Part of the install is to set up and populate a common area for replay configs and state. /local/sda4/xentt is loaded with a domU kernel, the debug-symbol version of that kernel, a memory-based Linux initrd, and a template xm.conf file. /local/sda4/xentt-state is where the state for any recorded VMs goes. By default, tt_record uses the initrd ramdisk; I have never tried the LVM-volume-with-snapshot mode that it "supports". So to record a domU, do:

      sudo tt_record -c foo

  This will also connect you to the console, so you can see it boot and then log in as root and do something. You can either "halt" at the console, or you can do:

      sudo tt_record -K foo

  to kill it and leave the replay state. To replay it, do:

      sudo tt_replay -p foo

  This will start the replay domain in the paused state. Now you can go off and "xm unpause foo" or, more likely, use a VMI tool to attach to the domain, set probes, etc.

* replay.c contains basic routines for creating, starting, stopping, and destroying replay "sessions" via C. The API is in xentt.h. And yes, the C code uses system() to invoke a perl script (tt_replay), which invokes a python script (xm), which invokes C code to do all the actual work. I sense optimization possibilities...

* logfile.c contains routines to read records from the replay log. It is a simplified version of Anton's ttd-event-info code.

* test_replay.c is a simple test program that exercises the replay functions.
The tools/bts directory is where I am attempting to come up with some useful programmatic interfaces to BTS during replay.

* io.c contains routines for reading a "branch log". These do not occur in nature, but I didn't know that at the time: I thought replay created a separate log of branch records, but it turns out these records are put into the replay log.

* symbol.c is a simple interface to Dave's dwdebug library to print out a "GDB style" symbol+offset resolution of an address. It looks up addresses in one or more symbol files to produce an answer.

* bts_extract is a tool to extract BTS data from the replay log. The idea is that the guts of this will become a function for injecting BTS data into the "trace".

* bts_dump is a test tool to print out branch records, optionally with symbolic address info.

* trace_task is supposed to be the culmination of everything else, only it isn't finished yet. I am currently bogged down in grokking the probe API. The idea is that it will collect a branch trace of a guest over an indicated time frame and dump that to a file (eventually: add it to the trace). "Time frame" here is represented by start and end EIP+brctr values. What is supposed to happen is that a replay session is created, we "run til" the start location (i.e., right now, put a probe at the start EIP and let it run until we hit the probe with the brctr at the right value), put a probe at the end location, turn on BTS, and "run til" that end location. Yes, a lot of this replicates tools that Anton already has, but I need to understand how things work and/or I wasn't aware of Anton's tools.
497f0e7d