Bug 455035 - The first "long running" performance tests are not displayed, but ran?
Status: RESOLVED FIXED
Alias: None
Product: Platform
Classification: Eclipse Project
Component: Releng
Version: 4.5
Hardware: PC Linux
Importance: P3 normal
Target Milestone: ---
Assignee: David Williams CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 454921
 
Reported: 2014-12-12 08:57 EST by David Williams CLA
Modified: 2014-12-12 11:55 EST

See Also:


Attachments
log of many unit and performance tests during "collection". (277.64 KB, text/plain)
2014-12-12 08:57 EST, David Williams CLA

Description David Williams CLA 2014-12-12 08:57:58 EST
Created attachment 249386 [details]
log of many unit and performance tests during "collection".

It appears the "long running" performance tests are run, but not displayed on the DL page. 

Will attach log of "collection". 

The log shows a "collection run" for jobs related to 44M performance runs. The log can be searched for the following four lines to find the relevant sections. (The first number is the editor line number, which shows the "order" they were run in.) They were run in the typical (by design) order: 

short tests against baseline
short tests against current build
long running tests against baseline
long running tests against current

2200 inputline: ep44M-perf-lin64-baseline 1 M20141210-0900 4.4.2 9f15502cd721ea8567515caab38b16a863779531 
2331 inputline: ep44M-perf-lin64 1 M20141210-0900 4.4.2 9f15502cd721ea8567515caab38b16a863779531
3337 inputline: ep44MLR-perf-lin64-baseline 1 M20141210-0900 4.4.2 9f15502cd721ea8567515caab38b16a863779531
3466 inputline: ep44MLR-perf-lin64 1 M20141210-0900 4.4.2 9f15502cd721ea8567515caab38b16a863779531

It is only the runs against "current" that invoke performance.ui to produce "results" to display. 

And "by then" the relevant baseline should be available. 

And, in fact, in the log, you can "see" that the performance.ui code is "reading" the data, for example, for "jdt.core": 

 => 68 scenarios data were read from file /shared/eclipse/perfdataDir/org.eclipse.jdt.core.dat

And I think that provides a hint to this problem, and probably others.
Comment 1 David Williams CLA 2014-12-12 09:06:30 EST
The "hint" is related to those ".dat" files. 

The documentation mentions them as a "time saving" mechanism, to save "data" previously retrieved from the database -- presumably in a form that is easier/faster to read than from the database? 

The documentation says their use is optional ... but the way the code is written, their use is required. 

I *think* (just my intuition) these files are not what I assumed they were, and they should in fact be deleted before each analysis, so they are "recreated" with the correct data ... and (again, just my intuition) the code may be written to make a "quick and easy" judgment that "it already has the .dat file, so no need to re-fetch the correct data". 
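
To illustrate what I suspect, a hypothetical sketch of that pattern (NOT the actual performance.ui code; only the /shared/eclipse/perfdataDir location and the per-component ".dat" naming come from the log above, the class and method names are made up): 

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Hypothetical illustration of the suspected shortcut: if a component's .dat
// file already exists in the data directory, its contents are reused as-is
// instead of re-fetching from the performance database, which would explain
// stale or missing results when the file predates the current collection run.
public class DatCacheSketch {

    static final File DATA_DIR = new File("/shared/eclipse/perfdataDir");

    static byte[] loadComponentData(String component) throws IOException {
        File dat = new File(DATA_DIR, component + ".dat");
        if (dat.exists()) {
            // The suspected "quick and easy" judgment: trust whatever is on disk.
            return Files.readAllBytes(dat.toPath());
        }
        byte[] fresh = fetchFromDatabase(component); // stand-in for the real database retrieval
        Files.write(dat.toPath(), fresh);
        return fresh;
    }

    static byte[] fetchFromDatabase(String component) {
        // Placeholder only; the real tooling reads from the performance database.
        return new byte[0];
    }
}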

I'm trying a "blind" test run now, to re-create the Maintenance build performance results after manually deleting those files, to force them to be re-created. 

If that seems to work, I will programmatically delete them before each analysis.
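
Roughly what I have in mind for that cleanup step (a minimal sketch only, assuming the data directory seen in the log above; in the real build this would more likely be a one-line shell or Ant delete in the releng scripts): 

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Delete any cached *.dat files before an analysis run, so performance.ui is
// forced to re-read current data instead of reusing stale cached files.
public class CleanDatFiles {
    public static void main(String[] args) throws IOException {
        Path dataDir = Paths.get("/shared/eclipse/perfdataDir");
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dataDir, "*.dat")) {
            for (Path dat : stream) {
                Files.delete(dat);
                System.out.println("Deleted " + dat);
            }
        }
    }
}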
Comment 2 David Williams CLA 2014-12-12 09:25:27 EST
Initial attempt: 

shows some validity, and promise ... but adds to the confusion. 

a) no fingerprint graphs are printed! 
b) some "summary titles" say "Performance of I20141210-2000 relative to R-4.4-201406061215" (instead of "M build")! 

The latter might be more of a quirk than the former, since "looking into the logs" it really does appear to be using the M build. 

Will continue to investigate today. Probably need to run the "short tests" data analysis first ... I think I was incorrectly assuming it would run all the analysis over again ... will also need to confirm the data is still available on Hudson, etc.
Comment 3 David Williams CLA 2014-12-12 11:55:53 EST
Ok, got the data to display in what appears to be the correct way ... fingerprint graphs are displayed again, and contain "scenarios" from all data sets (long and short). 

I documented some details in bug 455070, which is what I'll use to "wrestle" with the "dataDir" concept and try to determine how it's intended to be used ... and/or to make it truly optional again, since it obviously embeds a lot of assumptions about "how things are done" (perhaps the order, which data is accumulated there, etc.).