Re: [ormf-dev] Testopia
I understand the general wisdom of not denormalising information, as you clearly explain in your message, Achim. In my functional testing I certainly felt the pain of keeping my test scripts (maintained with an automated tool) in sync with the related test cases and procedures (maintained with a text editor and/or a spreadsheet).
However, I would like to share the reasons why I found the ancillary textual and tabular information invaluable:
1) Tracking: the automated test tools I used had no facilities for maintaining traceability or producing statistics; just staring at a large number of test scripts certainly could not and did not give me an overall view of where the testing effort stood and what still needed doing. In other words, a high-level management perspective was totally lacking.
2) Monitoring of state of development: different test scripts were at different stages of their life cycle (designed, implemented, in regression, etc.), and it would have been impossible for me to figure out which test script was in which state of development without some external control documents.
3) Coverage: this is really a subset of the traceability I mentioned above. Since our development was (and still is) use-case driven, all functional tests were based on use case flows and scenarios. Figuring out which use case was completely tested by which suite (especially in the case of included or extending use cases) would have been a nightmare without external matrices to record it.
4) Bug identification: when a script failed in regression, I would go through the test case manually, repeating the steps as defined in the script. Having a textual document that clearly specified the startup, the steps and all conditions made the task of spotting the bug much, much easier.
5) Test script reproducibility: if for any reason a test script got corrupted and I had to reproduce it, having a detailed test case document to use as my guideline was again invaluable.
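For what it is worth, the kind of external coverage matrix I describe in point 3 need not be elaborate. Here is a purely hypothetical sketch (the use-case flows and script names are invented, not from our project) of how such a mapping can be kept in one place and queried for gaps:

```python
# Hypothetical coverage matrix: each use case flow or scenario is mapped
# to the test scripts that exercise it. An empty list flags a gap.
coverage = {
    "UC-01 Login / main flow": ["smoke_login", "regression_login"],
    "UC-01 Login / bad password": ["regression_login"],
    "UC-02 Checkout / main flow": ["smoke_checkout"],
    "UC-02 Checkout / expired card": [],  # no script yet
}

# Report which flows have no test script at all.
untested = [flow for flow, scripts in coverage.items() if not scripts]
print("Untested flows:", untested)
```

Even a table this crude answers the "which use case is tested where?" question at a glance, which staring at the scripts themselves never did for me.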
I understand your point about the different consumers of test cases. However, I think that even when an automated test tool is chosen, as in our case, there are always circumstances in which human interaction or intervention is needed, as I have tried to highlight in the points above. This is why, on my first superficial reading about Testopia, I thought it might have quite a good role to play.
I just thought I would share my findings and my concerns. As I have said many times, my experience of functional testing is limited, and it may well be that GUIDancer solves all the issues I have mentioned. I am looking forward to Achim's strategy document and his expertise, to learn how the issues I have faced and tried to tackle in a simple, time-consuming and perhaps naive way can be solved in a more automated, less error-prone and more elegant fashion.
On 19 Oct 2008, at 15:16, Achim Lörke wrote: