
[cdt-dev] CDT 3.0 Master Test Plan draft


Hi cdt-dev & cdt-test-dev -

At the last conference call, I promised a draft of the CDT 3.0 Master Test Plan; find it attached.  There are a lot of blanks, and its content will depend on the next revision of the Project Plan and on people's commitments.  Nonetheless, if you have feedback, post it to cdt-test-dev (or reply to me).



Cheers,

Brent

Brent Nicolle     Email: bnicolle@xxxxxxxxxx     Tel: (613) 591-7982
Software Quality Developer: Eclipse CDT, Rational Software Architect
IBM Rational Software, 770 Palladium Drive, Kanata, Ontario, Canada.
CDT 3.0 Master Test Plan
  Organization of testing collaboration for CDT 3.0
 

Change History

Date        Revision  Author         Description
2005/02/08  0.1.0     Brent Nicolle  Initial version for CDT 3.0.
2005/02/10  0.2.0     Brent Nicolle  A rough draft.  Awaiting a final Project Plan.
Table of Contents
1. Introduction
2. Motivation and Objectives
3. Target Test Items
4. Test Strategy
5. Quality Criteria
6. Deliverables
7. Resourcing
8. Schedule
9. Risks and Mitigations
10. Processes and Procedures

1. Introduction

1.1 Document Purpose

The purpose of the Master Test Plan is to gather all of the information necessary to plan and control the Open-Source Testing effort for the C/C++ Development Tools (CDT) 3.0 project. It describes the approach to testing the software, and is the top-level plan generated and used by managers to direct the test effort.

This Test Plan has the following objectives:

  • Identifies the motivations stated by the evaluation mission
  • Identifies the targets for testing, including new features and other non-feature testing areas
  • Outlines the testing approach to be used; details are included in the test case documents
  • Identifies the required resources, provides an estimate of the test effort, and lays out a schedule
  • Lists the deliverable elements of the test project
  • Outlines some testing-related processes

1.2 References

See the CDT Project Plan, and probably the Eclipse Project Plan too.

See the CDT Development Plan (TBD) for deliverables that affect the testing schedule.

For details about testing an individual component or aspect of CDT, see its corresponding Test Plan (TBD: list them here).

1.3 Terminology

TBD

2. Motivation and Objectives

2.1 Motivation for Open-Source Testing

CDT is a component used in products ("Productized CDT") of several stakeholders ("Stakeholders").  Stakeholders can choose to test the CDT component in an open-source manner ("Open-Source Testing"), and/or in an internal manner ("Internal Testing").  Internal Testing is "an elephant in the room" that deserves acknowledgment and respect from this test plan.

This test plan necessarily excludes Integration testing of "Productized CDT"; each Stakeholder is responsible for integrating CDT into their own products.

This test plan may also exclude aspects of testing deemed "business confidential" by one or more Stakeholders.  For example, a Stakeholder may withhold evidence from the open-source community that they are testing the CDT on some new platform, until their Productized CDT offering is announced as available for the new platform.

Open-Source Testing allows you to:

  • leverage contributions of testing resources by collaborating with other contributing Stakeholders.
  • reduce product risk by detecting and fixing high-priority bugs early in the development process.  The earlier such bugs are identified, the lower the risk that your Productized CDT will need major revision later to address them.
  • improve product quality by reducing low-priority bugs early.  Low-priority bugs do not typically get addressed in maintenance releases and late-stage private branches, but a great number of such bugs leaves your customers with the impression of poor product quality.  The earlier such bugs are identified, the higher the likelihood they will be addressed, and the higher the quality of your Productized CDT.
  • reduce duplication of development effort caused by fixing many bugs in several private branches, or in private branches and the public branch.
  • reduce duplication of internal bug-tracking and open-source bug-tracking.  Bugs found in your Productized CDT but resulting in a CDT change should have an open-source Bugzilla entry, in addition to your internal bug-tracking entry.  Bugs found by someone else in CDT will be raised in open-source Bugzilla, but may represent a known problem with your Productized CDT, requiring separate tracking and verification in your bug-tracking system.

2.2 Purpose

The Stakeholders' purpose in testing CDT is to produce a high-quality CDT and, from that, a high-quality Productized CDT.

2.3 Testing Summary

New features should be tested before GA.
Some older features should also be tested.

3. Target Test Items

3.1 Features and other components

The table below lists committed and proposed components of CDT requiring testing.  See the Project Plan or Dev Plan for details on each component.  Components are grouped into Themes where appropriate.

Test Component (or Theme)                                   Bugzilla ref   Test plan link   QE owner
----------------------------------------------------------------------------------------------------
Compatibility from CDT 2.x:                                 TBD...         TBD...           TBD...
  Compatibility for Workspaces and Projects
  API Source/Binary/Contract Compatibility for plugins
Scalability to and Performance of large projects:
  Reduce indexing time and cost
  Reduce search time and cost (proposed)
Parser Correctness:
  K&R C Language Support
  Parser/Scanner Environment Discovery
  Parser/Scanner language extensions (proposed)
  C/C++ Language Variants (proposed)
Abstract Syntax Tree/Data Object Model for C++:
  CDT AST/DOM creation (proposed)
User Experience:
  Remote system (target) model definition
  Managed Build System                                      81450
  Debugger Specialization for GDB embedded targets
  Debugger memory view synched with Platform memory view
  Launch configuration UI adjustment
  Template extension point for new project specialties (KDE/Win32/console/shared lib)
  Better support for nested projects and linked resources
Other items:
  Pervasive use of workspace variables
  Enhanced refactoring
  Source based hover help
  Java based C/C++ code formatter
  Improved ISV documentation
Other testing items:
  Build Verification Testing (BVT), or Sanity Testing
  Unit Tests
  Regression Tests

(Bugzilla refs, test plan links, and QE owners are TBD except where noted above.)


3.2 Test Environments

We need to identify:

  • a list of supported platforms
  • a list of supported JREs
  • a list of supported configurations, if applicable (e.g. national language support in Japanese...)

4. Test Strategy

Specific choices of test strategy are dealt with in the Component Test Plans, but here is a generic list.

4.1 Use of builds
Unless otherwise specified in the Component Test Plan, this strategy applies to all Component testing.

Testing is performed only on identifiable CDT builds (not the latest from HEAD); this facilitates reporting and reproducing problems.

Typically, installations should be clean installs of the latest Eclipse 3.1 milestone build, followed by the CDT.

The builds under test should have undergone Sanity Testing (BVT); however, this is not strictly necessary if a particular component is not blocked by a BVT failure.

4.2 Black-box testing

Most testing should be done from an end-user perspective, without access to or knowledge of the source code.  Typically that excludes testing from a PDE workspace.  Typically it also means that only the Eclipse and CDT run-times (not the SDKs) are installed.

4.3 Subjective testing

The "ICED T" model is a useful strategy for reporting and tracking subjective software quality.
For more details, see the white paper, Using The "ICED T" Model to Test Subjective Software Qualities, by Andy Roth.

4.4 Optimize coverage of platform configurations
Given 60 test-cases and 8 platform configurations, running everything everywhere means 480 executions!  It is far more efficient to identify a much smaller subset of test-cases to run on all platform configurations, and to distribute the remaining test-cases across the platform configurations.
For example, run test-cases 1 and 2 (high-priority test-cases) and test-cases 3 and 4 (lower priority, but very platform-dependent in nature) across all eight platform configurations; then run test-cases 5-12 on config 1, 13-20 on config 2, ..., and 53-60 on config 8.  Or better yet, randomize the distribution prior to executing the suite.  That produces a more reasonable total of 88 executions (4 x 8 = 32, plus the remaining 56 test-cases run once each), an 82% savings, with considerable platform configuration coverage and negligible risk to quality.  A sketch of such a distribution follows.
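
Here is a minimal sketch of that distribution scheme in Java.  The class and all of its names are hypothetical illustrations, not part of CDT or its test harness:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Hypothetical helper illustrating the coverage-optimization scheme above. */
    public class TestDistributor {

        /** Maps each config (1..configs) to the list of test-case ids it should run. */
        public static Map<Integer, List<Integer>> distribute(
                List<Integer> runEverywhere, List<Integer> runOnce, int configs) {
            Map<Integer, List<Integer>> plan = new HashMap<Integer, List<Integer>>();
            for (int c = 1; c <= configs; c++) {
                // High-priority and platform-sensitive cases run on every config.
                plan.put(c, new ArrayList<Integer>(runEverywhere));
            }
            // Randomize the remaining cases so successive suite runs cover
            // different case/config pairings, then deal them out round-robin.
            List<Integer> shuffled = new ArrayList<Integer>(runOnce);
            Collections.shuffle(shuffled);
            for (int i = 0; i < shuffled.size(); i++) {
                plan.get(1 + (i % configs)).add(shuffled.get(i));
            }
            return plan;
        }

        public static void main(String[] args) {
            List<Integer> everywhere = Arrays.asList(1, 2, 3, 4);
            List<Integer> once = new ArrayList<Integer>();
            for (int id = 5; id <= 60; id++) once.add(id);
            Map<Integer, List<Integer>> plan = distribute(everywhere, once, 8);
            int total = 0;
            for (List<Integer> cases : plan.values()) total += cases.size();
            System.out.println("Total executions: " + total); // 4*8 + 56 = 88
        }
    }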

5. Quality Criteria

Some suggested quality criteria follow.

5.1 Sanity Criteria
CDT starts up on the prescribed Eclipse platform.
You can create new projects and artifacts, and run basic C/C++ edit/compile/debug workflows.
All JUnit failures have defects associated with them.

5.2 Milestone Criteria

(All Sanity Criteria, plus...)
Open bug count:
JUnit coverage: more than previous milestone.
Performance metrics (faster than previous milestone; see the sketch after this list):

  • how long it takes to import some arbitrarily big project
  • how long it takes to find some declaration, e.g. printf, for a particular code snippet (e.g. one which includes stdio.h, iostream, and windows.h)

A list of "pain points"
A list of new features
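
As referenced above, here is a minimal sketch of how such a performance criterion might be checked mechanically, in the JUnit 3.x style CDT's suites use.  The baseline constant and the importProject() helper are hypothetical placeholders, not CDT API:

    import junit.framework.TestCase;

    public class ImportPerformanceTest extends TestCase {
        // Hypothetical baseline: wall-clock import time recorded at the previous milestone.
        private static final long PREVIOUS_MILESTONE_MILLIS = 120000L;

        public void testImportNoSlowerThanPreviousMilestone() throws Exception {
            long start = System.currentTimeMillis();
            importProject("some-arbitrarily-big-project"); // hypothetical helper
            long elapsed = System.currentTimeMillis() - start;
            assertTrue("import took " + elapsed + " ms; previous milestone: "
                    + PREVIOUS_MILESTONE_MILLIS + " ms",
                    elapsed <= PREVIOUS_MILESTONE_MILLIS);
        }

        // Placeholder: a real test would drive the actual CDT project import here.
        private void importProject(String name) throws Exception {
        }
    }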

6. Deliverables

This section outlines how to report test results.

6.1 Sanity results

A quick email to cdt-test-dev@xxxxxxxxxxx is appropriate to indicate testing is complete, with a recommendation for (or warning against) adopting that build.

6.2 Milestone test results

  • An email to cdt-dev@xxxxxxxxxxx at the start, as a reminder not to check in while milestone testing is underway.
  • An email to cdt-dev@xxxxxxxxxxx to state that testing is complete and that check-ins can resume.
  • An email to cdt-dev and cdt-test-dev to state the milestone conclusions.

6.3 Component test results
TBD.  Is there reason to standardize test reports?  HTML would be nice, but not strictly necessary.

7. Resourcing

7.1 Stakeholders
IBM Rational Software
QNX
who else? step right up, step right up....
7.2 Staffing
The number of people from each Stakeholder, and their names.
Good question.
7.4 Tooling

Test automation tools are used where available and where appropriate.  Test automation is recommended for regression tests and frequently run tests.  See the Component Test Plans for specific details.

Since commercial automation tools are expensive and some contributors have their favourites, no standardization on particular commercial tools across all test contributors is expected.  However, contributors are generally amenable to sharing commercial automation scripts with one another.  This may include version control of the automation scripts.  Non-commercial automation tools continue to be evaluated.

JUnit tests are used for developer-level unit-testing and regression tests.  The JUnit tests are largely written by developers.
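
For illustration, a minimal developer-level JUnit (3.x) test of this style might look as follows.  The method under test is a stand-in, since the real CDT suites exercise the parser, indexer, and so on:

    import junit.framework.TestCase;

    public class ExampleUnitTest extends TestCase {
        public void testBasename() {
            assertEquals("file.c", basename("/src/demo/file.c"));
            assertEquals("file.c", basename("file.c"));
        }

        // Stand-in for real code under test.
        private static String basename(String path) {
            int slash = path.lastIndexOf('/');
            return slash < 0 ? path : path.substring(slash + 1);
        }
    }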

8. Schedule

8.1 Documentation Schedule

CDT 3.0 Deliverable       Plan Date                                   Dependencies
Project Plan              2005/02/10 (draft)                          none
Master Test Plan          2005/02/16                                  Project Plan
<Feature> Requirements    may be contained in <Feature> Test Plan     Project Plan
<Feature> Test Plan                                                   Master Test Plan
<Feature> Test Report     GA                                          <Feature> Test Plan, Feature


8.2 Milestone and Release Candidate Test Criteria

Eclipse 3.1 milestones, and the corresponding CDT milestone candidates, are summarized here:

Expected Eclipse milestone (declaration date and content)        Corresponding CDT milestone candidate (MC), first build date
Eclipse 3.1 M5 (Fri 2005/02/18)                                  CDT 3.0 MC5 (Mon 2005/03/14)
Eclipse 3.1 M6 (Fri 2005/04/01), API complete and API frozen     CDT 3.0 MC6 (Mon 2005/04/11)
Eclipse 3.1 M7 (Fri 2005/05/13), feature complete,               CDT 3.0 MC7 (Mon 2005/05/30)
  development freeze and lock-down
Eclipse 3.1 GA (Fri 2005/07/01)                                  CDT 3.0 RC3 (Mon 2005/07/18)

The CDT milestone criteria are summarized here:

CDT 3.0 MC5 (from Mon 2005/03/14)
  Content:  Eclipse 3.1 M5 integration; stable build reflecting progress
  Criteria: stable build reflecting progress; integrates with Eclipse 3.1 M5

CDT 3.0 MC6 (from Mon 2005/04/11)
  Content:  Eclipse 3.1 M6 integration; APIs are documented
  Criteria: stable build reflecting progress

CDT 3.0 MC7 (from Mon 2005/05/30)
  Content:  Eclipse 3.1 M7 integration; feature complete; APIs frozen; development lock-down (bug-fixes only)
  Criteria: stable build

CDT 3.0 M7 declared (expect Fri 2005/06/04)
  Criteria: external (beta) testing begins

CDT 3.0 RC1 (from Mon 2005/06/13)
  Content:  features complete
  Criteria: features are complete; all major bugs verified

CDT 3.0 RC2 (from Mon 2005/06/27)
  Criteria: zero major bugs; all major bugs verified

CDT 3.0 RC3 (from Mon 2005/07/18)
  Content:  Eclipse 3.1 GA integration; testing reports due
  Criteria: zero major bugs; all major bugs verified

CDT 3.0 GA declared (expect Fri 2005/07/22)
  Criteria: all major bugs verified

9. Risks and Mitigations

This section lists project-wide risks to Open-Source Testing, if applicable.  Each risk is followed by mitigating options:

"X delivered a feature F at the last minute that requires testing."

  • Reject F, or back it out of the stable stream, particularly if it was late and unplanned.
  • Branch the stable stream and allow F into the unstable stream.
  • Request that X test the feature.
  • Delay the release until F is tested.
"X promises to test something, F, upon which my Productized CDT depends.  How do I know they will do a good job?"
  • Make sure that X knows you are a Stakeholder in F.
  • Review X's test plan and results.
  • If necessary, add testing resources to F.
  • Spot-check X's work in testing F.
"X promised to test something, F, upon which I depend, then reneged."
  • Make sure that X knows you are a Stakeholder in F.
  • Commit some testing resources to F.
  • Review X's test plan and monitor X's progress.
  • If all else fails, complete the testing yourself.  Delay the release of CDT or of Productized CDT until the testing is complete.  Lower X's status as a Stakeholder.

10. Processes and Procedures

10.1 Bugzilla

Bugzilla bugs should generally be verified by the submitter.
In some cases, the submitter was also the implementer, and should request that an alternate person verify.
In some cases, the submitter is not available, and a Stakeholder designates an alternate person to verify.
Bugzilla bugs should be closed once the release ships.
10.2 Version control of test plan documents
Let's put them in CVS.
10.3 Build Verification Testing (BVT), or Sanity Testing
See the Sanity Test Plan for the actual set of tests.
Sanity testing needs to be scheduled.
We need to rotate the responsibility of performing BVTs.
We need to decide when and how often they are necessary or useful.
The Sanity Test report should also report on any recent JUnit failures.

10.4 Suggested Feature Testing Process
1. Produce a test plan, containing:
  • Purpose: what requirements you want to test
  • Strategy: how you're going to test it (e.g. black-box), how much coverage, on what platforms, etc.
  • Summary: how many test-cases will need to be written and executed, and their difficulty level
  • Effort Estimation: how long implementation and execution are going to take
2. Review and approve the test plan.
3. Implementation: Write the test-cases as necessary.
4. Execution: Execute the test-cases, monitoring your progress.  Raise defects as necessary.
5. Verify defects as they are resolved.  Re-run previously failed or blocked test-cases.
6. Report the completion rate and failure rate, and list defects and "pain points", to the Stakeholder(s).
7. Repeat the execution cycle (steps 4-6) as necessary.

