[hyades-dev] Meeting minutes for discussion on proposed extensions to the existing Hyades testing resources (org.eclipse.hyades.tests).
- From: Paul Slauenwhite <paules@xxxxxxxxxx>
- Date: Wed, 20 Oct 2004 13:30:13 -0400
- Delivered-to: firstname.lastname@example.org
Title: Testing Hyades with Hyades - Proposed extensions to the existing
Hyades testing resources.
Date: October 19, 2004 (3 - 4:30 PM EDT).
Moderator: Paul Slauenwhite (paules@xxxxxxxxxx)
Attendees: Paul Slauenwhite, Bruce A Malasky, David J Lofgren, Eugene Chan,
Joseph P Toomey, Marius Slavescu, Poonam Chitale
Agenda:
-Motivation for these extensions.
-General requirements for product testing.
-Existing Hyades testing resources.
-Several proposed organizational extensions.
-Several proposed procedural extensions.
-Future considerations for Hyades Test.
Motivation for these extensions:
-The IBM Toronto Hyades Team currently executes 792 unstructured functional
manual test cases and numerous (1000+) other JUnit test cases for Hyades
plug-ins and functionality.
-Manual test cases are contained within an internal repository with limited
results capturing and tabulation capabilities.
-The internal repository is not portable, not scalable, high-maintenance and
unsuited for capturing multi-platform by multi-release test results.
-JUnit test cases are contained in plug-in source trees and removed during
builds.
-Recognizing that Hyades is an open-source platform for Automated Software
Quality (ASQ) tools, the logical realization was that we should "practice
what we preach" by leveraging Hyades Test to test Hyades plug-ins.
-Migration of existing 792 unstructured functional manual test cases to
structured Hyades manual test cases.
-Functional manual test cases and JUnit test cases are planned to be
amalgamated and persisted in the org.eclipse.hyades.tests project within
CVS.
-During the planning and execution phases of this migration, several
organizational and procedural enhancements to the existing Hyades testing
resources have been identified.
General requirements for product testing:
-An organized, terse and accessible body of knowledge for assisting the
testing effort.
-A crisp, portable, scalable, persisted and user-maintained testing
infrastructure for organizing, deploying and monitoring manual functional
and automated unit test cases.
-Multi-product by multi-platform by multi-release testing and results
capturing.
-Structured manual test case descriptions to eliminate confusion and
ambiguity.
-An infrastructure that encourages automation and invites test case
contributions (e.g. from developers).
-Ease of use for increased efficiency, effectiveness and adoption.
Existing Hyades testing resources:
-org.eclipse.hyades.tests project in CVS.
-Project contains an /org.eclipse.hyades.tests/ directory for
documentation, deployment files and root-level Hyades JUnit and manual test
suites.
-Each plug-in contributes a directory to the project containing the
following subdirectories:
-/deployment/ (Hyades deployments and locations)
-/junit/ (Hyades JUnit test suites)
-/junit_results/ (Hyades JUnit test suite execution results)
-/manual/ (Hyades manual test suites)
-/manual_results/ (Hyades manual test suite execution results)
-/src/ (JUnit test case source code)
Several proposed organizational extensions:
-Provide a root-level /org.eclipse.hyades.use.cases/ directory for
functional test suites for use cases.
-The current organization at the plug-in level does not account for
functional test cases that cross plug-in boundaries.
-Hierarchical test suite definitions (e.g. AllTests.testsuite) for
fine-grained test suite execution and results reporting.
-Reflect the subdirectory organization of the JUnit and manual test suite
directories in the JUnit and manual results directories.
-Internally organize the lowest level JUnit and manual results directories
by platform (e.g. Windows, Linux, AIX, AS/400, HP-UX, Linux 390, OS/390,
Solaris) to persist multi-platform test execution results in CVS.
-Provide a /dependencies/ subdirectory for test suite dependencies such as
common utility classes, required JARs and datapools.
-Utilize a standardized namespace (e.g. plug-in ID) when naming and
identifying test resources.
-Eclipse Java project configuration files (e.g. .project and .classpath)
identifying dependencies and the source trees to compile and run JUnit Java
source code.
-A goal of testing with Hyades should be to automate all JUnit test cases
during plug-in builds (e.g. BVTs). For example, launch the root-level
AllTests.testsuite after a build has completed to determine the pass/failure
rate of that build and report the failing test cases to a centralized
location (e.g. web site).
-Joe mentioned that there will be support in Hyades v3.2 (i2) for launching
Hyades JUnit test cases within ANT.
-Paul mentioned that most of these organizational enhancements were for
distributing the manual test effort among multiple test team members.
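The build-time automation goal discussed above could be wired up today with the standard Apache Ant <junit> and <junitreport> tasks; a minimal sketch follows, assuming hypothetical target, path and directory names (the Hyades-specific ANT launching support mentioned for v3.2 may expose its own task):

```xml
<!-- Sketch only: run the root-level JUnit suites after a build and
     tabulate the pass/failure rate for publishing (e.g. to a web site).
     The "build" target, "bin" classpath and directory names below are
     hypothetical. -->
<target name="run-tests" depends="build">
  <junit printsummary="yes" haltonfailure="no" failureproperty="tests.failed">
    <classpath>
      <pathelement location="bin"/>
    </classpath>
    <formatter type="xml"/>
    <batchtest todir="junit_results">
      <fileset dir="src" includes="**/AllTests.java"/>
    </batchtest>
  </junit>
  <!-- Aggregate the per-suite XML results into one browsable report. -->
  <junitreport todir="junit_results">
    <fileset dir="junit_results" includes="TEST-*.xml"/>
    <report format="frames" todir="junit_results/report"/>
  </junitreport>
</target>
```

The failureproperty set by the <junit> task can then drive a conditional target that reports the failing test cases to a centralized location.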
Several proposed procedural extensions:
-A terse but effective formal test process documentation specifying how
Hyades plug-ins and functionality are to be tested utilizing the Hyades
testing resources.
-A standardized structured manual test case description including context,
dependencies, ordered steps of execution and expected results (see
Appendix).
-Utilize formal software quality metrics in measuring test coverage by
providing documentation on how to execute test suites with the necessary
code coverage profiling and filter set.
-Persist test execution reports (e.g. summary and coverage statistics) at
the highest directory level containing all of the test suite execution
results.
-Committers are responsible for generating and checking-in test execution
reports for their components at predetermined milestones in the test pass.
-Each committer provides a summary test execution report for their
components to be used in a web project accessible from the Hyades site.
-Committers are responsible for checking-in test execution results to CVS
for their components.
-Multi-release test suite execution results organized in CVS by release
branches (e.g. v3_1_0) with HEAD containing the current release.
-There was some confusion around the scope of the proposed structure for a
manual test case. Joe had assumed that this structure would be at the
test suite level; Paul clarified that the structure was intended for the
description field of each test case.
-Joe mentioned that contextual information (e.g. dates, authors,
prerequisites) from the proposed structure of the manual test case
could/should be stored in the Hyades test case model.
-Joe mentioned that the name field in the proposed structure of the manual
test case is redundant information and could be removed. Paul agreed.
-Joe asked whether each result entry was coupled with the
corresponding execution step. Paul answered by suggesting that a test case
could be structured this way or simply have one cumulative result entry.
-Bruce was concerned that checking in all of the execution result files to
CVS would be costly, especially if the entire source tree is extracted at
once.
-Paul offered the solution that only execution result files for a
particular release would be associated with a particular branch in CVS.
This still left the outstanding issue of the HEAD stream in CVS with no
associated release.
-Poonam asked how an execution result defines a test case pass or failure.
Joe answered with an explanation of the test case verdict
as seen in the Hyades manual test client.
-Paul suggested a scenario for generating a cumulative results report by
running a report at one level of the directory structure that iterates over
all the execution results at that level or lower.
-Joe concluded by offering a suggestion on how Hyades-based Rational
tooling handles multiple execution result files that includes a versioning
schema for results and a mechanism in which testers can promote a certain
version of test results for tabulation.
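The cumulative reporting scenario suggested above amounts to collecting the verdicts of every execution result at one directory level or lower and tallying them. A minimal sketch in Java (not the actual Hyades reporting code; the sample verdict values are hypothetical, and reading verdicts out of result files is left out):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of a cumulative results report: tally the verdicts gathered
 * from all execution results at a given directory level or lower.
 * Verdict names follow the discussion (pass/fail, plus the proposed
 * n/a and n/r classifications).
 */
public class ResultsReport {

    /** Tally verdict strings into an ordered verdict -> count summary. */
    public static Map<String, Integer> tally(List<String> verdicts) {
        Map<String, Integer> summary = new LinkedHashMap<>();
        for (String verdict : verdicts) {
            summary.merge(verdict, 1, Integer::sum);
        }
        return summary;
    }

    public static void main(String[] args) {
        // Hypothetical verdicts collected from, e.g., /junit_results/ and below.
        List<String> verdicts = Arrays.asList("pass", "pass", "fail", "n/r");
        System.out.println(tally(verdicts)); // prints {pass=2, fail=1, n/r=1}
    }
}
```

Running such a report at the root of the results tree would give the overall pass/failure rate; running it in a subdirectory would scope the summary to that component.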
Future considerations for Hyades Test:
-A preference pane or extension point (e.g. wizard pane) for users to
provide an organization-specific template for manual test case
descriptions.
-Capture contextual execution information such as tester, platform and
build driver in the test suite execution results.
-Provide additional verdict classifications such as n/a (not applicable)
and n/r (not run).
-Stopping and restarting of test suite executions by providing
functionality to execute test suites with partially completed execution
results.
-Simple summary results reporting for one or more execution result(s).
-More complex statistical analysis on test suite execution results.
-Joe thought a preference pane would be more in line with the Eclipse user
experience for structuring both manual and JUnit test case descriptions and
that contextual information (e.g. dates, authors, prerequisites) from the
proposed structure of the manual test case could/should be stored in the
Hyades test case model.
-Joe did not think that stopping and restarting of test suite executions by
providing functionality to execute test suites with partially completed
execution results was technically feasible based on the current design. An
alternative is to create smaller test suites to ensure that the same tester
will be able to complete the same test suite at one sitting.
-A discussion started on the four verdict classifications; Joe offered
the solution of not extending the verdict classifications and instead
exposing the reason field in the Hyades manual test client, which users
could populate.
-There was a discussion on how to associate a defect number with a failed
test case result and several people offered their ideas. Paul mentioned
that they add a human-readable comment in the result, David mentioned that
they used a defect number marker, and Joe mentioned the work Rational is
doing on linking their testing tools with ClearQuest. The only
alternative given for Hyades was to use the info field in the Hyades manual
test case client to capture the defect for reporting purposes.
-Joe requested that we open Bugzilla enhancements for all of these issues.
-My apologies for the lateness of this meeting time. I realize some people
who were interested could not attend due to different time zones.
-Special thanks to Eugene Chan for transcribing the meeting minutes.
IBM Toronto Lab, Canada
Telephone: (905) 413-3861
Tie Line: 969-3861
Fax: (905) 413-4920