I have a theory. The docs say that the performance ui app has a -data argument, which controls printing the scenarios and line graphs, plus a -fingerprints argument that controls whether the "fingerprint graphs" are created. According to the doc, if neither is specified, both are assumed "true" and both are printed. (Which is the way I have been specifying it.)

At first I thought maybe it was just documented wrong, but a brief glance at the code seems to confirm that is still the intent. But I became suspicious, since I also specify -data for the "workspace" (before the -application argument). So now I am wondering if the app somehow "reads" the -data <workspace> argument as one of its own, instead of the launcher reading it for "the workspace". Indeed, I've discovered that no workspace is created! So I'm experimenting with several "debug" methods to investigate whether there is confusion over that double meaning of the -data argument.
(In reply to David Williams from comment #0)
> According to doc, if neither specified, then both are assume "true" and both
> printed. (Which, is the way I have been specifying it).
>
> At first I thought maybe just documented wrong, but brief glance at code
> seems to confirm that is still the intent.

I left out a critical piece of the "documentation": if neither is specified, both are printed, but if only one is specified, then only that one is printed. (And we have been getting the 'data' from scenarios printed.)
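The suspected collision can be illustrated with a small sketch. This is NOT the actual performance ui code; the class name and parsing loop are hypothetical, built only to show how a naive argument scan would treat the launcher's "-data <workspace>" pair as the app's own -data flag, and how the documented "neither specified means both" default behaves:

```java
// Hypothetical sketch (not the actual org.eclipse.test.performance.ui code):
// a naive scan over the full argument list cannot tell the launcher's
// "-data <workspace>" apart from the app's own -data flag.
public class ArgScanSketch {

    // returns { printData, printFingerprints }
    static boolean[] parseFlags(String[] args) {
        boolean printData = false;
        boolean printFingerprints = false;
        for (String arg : args) {
            if ("-data".equals(arg)) printData = true;             // collides with the workspace -data
            if ("-fingerprints".equals(arg)) printFingerprints = true;
        }
        // documented default: if neither is specified, both are assumed true
        if (!printData && !printFingerprints) {
            printData = true;
            printFingerprints = true;
        }
        return new boolean[] { printData, printFingerprints };
    }

    public static void main(String[] args) {
        // simulate a launch where -data was meant for the workspace
        boolean[] flags = parseFlags(new String[] { "-data", "/scratch/ws", "-application", "some.app" });
        System.out.println("data=" + flags[0] + " fingerprints=" + flags[1]);
    }
}
```

With that scan, a workspace-only "-data /scratch/ws" flips printData on while leaving printFingerprints off, which would match the symptom of getting data output but no fingerprints.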
From local debugging, it is definitely not a problem of getting confused with -data <workspace> (whew), but I added some printlns anyway, to get a good record of arguments, and some echos to find out what's going on such that the "workspace" specification fails. Probably a spelling error, no matter how many times I've checked it. :)

For the fingerprint graphs, from the local debugging, it acts as if we simply do not have any scenarios (in the database) that specify "tagAsSummary" or "tagAsGlobalSummary". Perhaps running the "long running performance tests" will find a few? Besides simply looking for some more, it might be helpful to create some small test cases for the sole purpose of testing that functionality.
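The selection criterion described above can be sketched as a simple filter. The class and field names below are made up to mirror the tag names from the comment; this is illustrative only, not the actual ui code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: only scenarios tagged as a summary (or global
// summary) are candidates for the fingerprint graphs. If no scenario in
// the database carries either tag, the candidate list is empty and no
// fingerprint graphs are produced, matching the symptom above.
public class FingerprintFilterSketch {

    static class Scenario {
        final String name;
        final boolean tagAsSummary;
        final boolean tagAsGlobalSummary;

        Scenario(String name, boolean tagAsSummary, boolean tagAsGlobalSummary) {
            this.name = name;
            this.tagAsSummary = tagAsSummary;
            this.tagAsGlobalSummary = tagAsGlobalSummary;
        }
    }

    static List<Scenario> fingerprintCandidates(List<Scenario> all) {
        List<Scenario> candidates = new ArrayList<>();
        for (Scenario s : all) {
            if (s.tagAsSummary || s.tagAsGlobalSummary) {
                candidates.add(s);
            }
        }
        return candidates;
    }
}
```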
It's dawned on me that not all applications create "workspaces", so I think that's a non-issue, and I have removed "and -data workspace not created" from the title.
Just to give status: I've debugged this quite a bit, and it is the case that no "scenario" meets the criteria to be included in fingerprints, because none of them have a "summary" set. And, for the life of me, I can't figure out why not. But I did get one big hint, a first small one in the doc and then a larger one in the code. There are places where, at least according to variable names, it expects to get the buildId, then platformConfig, then VM, in that order, whereas I had been setting "perf.config" equal to platformConfig, then buildId, then VM. It does not seem like anything would work if the code were not "smart enough" to keep that straight, but maybe some parts are, and some parts of the code are not?

The other thing I learned is that if the data is saved in *.dat files (as we do), it will re-use those rather than re-reading the database. (So if some get "written wrong", it does not matter if the database is later fixed, unless those files are deleted.) This is mostly an issue while debugging, but we may want to reconsider "saving" those from one run to the next, and just re-generate them each time (at least until everything is working perfectly).

So, for tonight's N-build and tomorrow's I-build, I have removed the existing database and the saved data files, and changed the order of the parameters on the "perf.config" property. I also added a -print option, which will give a little more output during the run. And if none of that helps, I will turn on "DEBUG" and "LOG" in the program itself, though, from local debugging, I am not sure that is very helpful to me.
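The ordering pitfall can be shown with a minimal sketch. Everything here is an assumption for illustration: the semicolon-separated format, the sample values, and the class name are invented; only the expected field order (buildId, then platformConfig, then VM) comes from the discussion above:

```java
// Sketch of a positional read of a perf.config-style value. If any code
// reads the value positionally as buildId;platformConfig;vm, then
// supplying the parts in a different order silently mislabels the fields
// rather than failing. Format and values are illustrative assumptions.
public class PerfConfigSketch {

    static class PerfConfig {
        final String buildId;
        final String platformConfig;
        final String vm;

        PerfConfig(String raw) {
            String[] parts = raw.split(";");
            // positional read: any other ordering silently swaps the fields
            this.buildId = parts[0];
            this.platformConfig = parts[1];
            this.vm = parts[2];
        }
    }

    public static void main(String[] args) {
        // supplying platformConfig first mislabels it as the build id
        PerfConfig swapped = new PerfConfig("eplnx2;I20141120;sun");
        System.out.println("buildId read as: " + swapped.buildId);
    }
}
```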
After much debugging, and "peeking" into the database, I could see that the correct data was there; it was just not being read correctly. Better to have the error in "org.eclipse.test.performance.ui" than in "org.eclipse.test.performance", but ... The heart of the problem was sad to see: some inappropriate hard coding, and assumptions about the "database name" that amount to hard coding. While I've not fixed those problems, I think I can restore the function by hard coding some current assumptions, and I have opened bug 453958 to track the correct fix.

http://git.eclipse.org/c/platform/eclipse.platform.releng.buildtools.git/commit/?id=be3dea9e201cc0c6a21977fdc212c546c81cddb0

Marking as fixed, though I still need to build and deploy a new version of the bundle.