Test Plan

Test Levels

Testing for Open MCT includes:

  • Smoke testing: Brief, informal testing to verify that no major issues or regressions are present in the software, or in specific features of the software.
  • Unit testing: Automated verification of the behavior of individual software components.
  • User testing: Testing with a representative user base to verify that the application behaves usably and as specified.
  • Long-duration testing: Testing which takes place over a long period of time to detect issues which are not readily noticeable during shorter test periods.

Smoke Testing

Manual, non-rigorous testing of the software and/or specific features of interest. Verifies that the software runs and that basic functionality is present.

Unit Testing

Unit tests are automated tests which exercise individual software components. Tests are subject to code review along with the actual implementation, to ensure that tests are applicable and useful.

Unit tests should meet test standards as described in the contributing guide.
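
As an illustration of the expected shape of such a test, the sketch below assumes a Jasmine-style framework (describe/it/expect); the formatTimestamp function is a hypothetical stand-in for an individual software component under test and is not part of Open MCT.

```javascript
// Minimal sketch of a unit test, assuming a Jasmine-style framework.
// formatTimestamp is a hypothetical component, defined inline so the
// spec is self-contained.
function formatTimestamp(millis) {
    if (typeof millis !== "number" || isNaN(millis)) {
        throw new Error("Expected a numeric timestamp");
    }
    return new Date(millis).toISOString();
}

describe("formatTimestamp", function () {
    it("formats epoch milliseconds as an ISO 8601 string", function () {
        expect(formatTimestamp(0)).toBe("1970-01-01T00:00:00.000Z");
    });

    it("rejects non-numeric input", function () {
        expect(function () {
            formatTimestamp("not a number");
        }).toThrow();
    });
});
```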

User Testing

User testing is performed at scheduled times involving target users of the software or reasonable representatives, along with members of the development team exercising known use cases. Users test the software directly; the software should be configured as similarly to its planned production configuration as is feasible without introducing other risks (e.g. damage to data in a production instance.)

User testing will focus on the following activities:

  • Verifying issues resolved since the last test session.
  • Checking for regressions in areas related to recent changes.
  • Using major or important features of the software, as determined by the user.
  • General "trying to break things."

During user testing, users will report issues as they are encountered.

Desired outcomes of user testing are:

  • Identified software defects.
  • Areas for usability improvement.
  • Feature requests (particularly missed requirements.)
  • Recorded issue verification.

Long-duration Testing

Long-duration testing occurs over a twenty-four hour period. The software is run in one or more stressing cases representative of expected usage. After twenty-four hours, the software is evaluated for:

  • Performance metrics: Have memory usage or CPU utilization increased during this time period in unexpected or undesirable ways?
  • Subjective usability: Does the software behave in the same way it did at the start of the test? Is it as responsive?

Any defects or unexpected behavior identified during testing should be reported as issues and reviewed for severity.
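
One way to gather the performance metrics above is to sample memory usage at a fixed interval while the stressing case runs, then compare values from the start and end of the twenty-four hour period. The sketch below assumes a Chromium-based browser, where the non-standard performance.memory API is available; the one-minute interval and console logging are illustrative choices only.

```javascript
// Minimal sketch of periodic memory sampling during a long-duration
// test run. Assumes a Chromium-based browser exposing the non-standard
// performance.memory API; the interval and logging are illustrative.
var SAMPLE_INTERVAL_MS = 60 * 1000;

setInterval(function () {
    if (window.performance && window.performance.memory) {
        console.log(JSON.stringify({
            time: new Date().toISOString(),
            usedJSHeapSize: performance.memory.usedJSHeapSize,
            totalJSHeapSize: performance.memory.totalJSHeapSize
        }));
    }
}, SAMPLE_INTERVAL_MS);
```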

Test Performance

Tests are performed at varying frequencies:

  • Per-merge: Performed before any new changes are integrated into the software.
  • Per-sprint: Performed at the end of every sprint.
  • Per-release: Performed at the end of every release.

Per-merge Testing

Before changes are merged, the author of the changes must perform:

  • Smoke testing (both generally, and for areas which interact with the new changes.)
  • Unit testing (as part of the automated build step.)

Changes are not merged until the author has affirmed that both forms of testing have been performed successfully; this is documented by the Author Checklist.

Per-sprint Testing

Before a sprint is closed, the development team must additionally perform:

  • User testing.
  • Long-duration testing.

Issues are reported as a product of both forms of testing.

A sprint is not closed until both categories have been performed on the latest snapshot of the software, and no issues labelled as "blocker" remain open.

Per-release Testing

Per-release testing follows the same procedure as per-sprint testing, except that user testing should cover all test cases, with less focus on changes from the specific sprint or release.

Per-release testing should also include any acceptance testing steps agreed upon with recipients of the software.

A release is not closed until both categories have been performed on the latest snapshot of the software, and no issues labelled as "blocker" or "critical" remain open.