
VAL-050 Functional Testing Guide for Computerised System

Department: Validation/Technical Services                Document no: VAL-050
Prepared by:                Date:                Supersedes:
Checked by:                 Date:                Date Issued:
Approved by:                Date:                Review Date:

Document Owner

Validation Manager

Affected Parties

All Validation, Technical Service, Operations, Quality Assurance, Engineering and Project staff involved in computer validation projects.

Purpose

To guide the nature and extent of functional testing of a computerised system.

To provide the documentary evidence that the system performs as specified.

Scope

This SOP provides guidance on functional testing during the development or change of computerised systems which have GxP impact at a GMP manufacturing site.  What constitutes a change to a computerised system is described in the manufacturing change control procedure.

Definition

Data Integrity: The degree to which a collection of data is complete, consistent, and accurate.
Documented Evidence: The documentation generated during the life cycle of a computerised system that establishes and maintains its validated status.  “Define it, do it, prove it, keep it.”
GxP

The range of good practices, i.e.

·         GMP (Good Manufacturing Practices),

·         GLP (Good Laboratory Practices),

·         GCP (Good Clinical Practices),

·         GDP (Good Distribution Practice).

GAMP: Good Automated Manufacturing Practice, as described in the “GAMP Guide for Validation of Automated Systems” published by ISPE.
GEP: Good Engineering Practice – The application of established engineering methods and standards throughout the project life cycle to deliver appropriate, cost-effective solutions. Refer to GAMP version 4, Section 11.3.  See also SOP VAL-030.
Test Case: Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
Test, Boundary

A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain.

Intended to confirm consistency of the process within the expected range of values.

Test, Branch

Testing technique to satisfy the requirement that for each decision point, each possible branch [outcome] be executed at least once.

Intended to confirm the performance of each option resulting from a decision branch.

Test, Functional

Testing that ignores the internal mechanism or structure of a computerised system or component and focuses on the outputs generated in response to selected inputs and execution conditions.  It evaluates the compliance of a system or component with specified functional requirements and corresponding predicted results.

This is a high-level term that encompasses a number of test types, (also defined in this SOP) which are intended to confirm the actual, operational outcomes of a system.

Test, Integration

A technique involving an orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.

Intended to confirm the correct coordination of the elements of a total system.  Elements would typically be separately verified at a lower level, (e.g. see Test, Unit).

Test, Interface: Testing conducted to evaluate whether systems or components pass data and control correctly to one another.
Test, Module: See Test, Unit.
Test, Regression: Rerunning test cases that a program has previously executed correctly, in order to detect errors spawned by changes or corrections made during software development and maintenance.
Test, Robustness: Testing conducted to evaluate the degree to which a software system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.
Test, Statement: Testing conducted to satisfy the criterion that each statement in a program be executed at least once during program testing.
Test, Stress: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Test, Unit: Testing conducted to verify the implementation of the design for one software component, e.g. a unit or module, or a collection of software components, (also known as module testing).
Time Sharing: A mode of operation that permits two or more users to execute computer programs concurrently on the same computer system by interleaving the execution of their programs.  May be implemented by time slicing, priority-based interrupts, or other scheduling methods.

Related Documents

VAL-030: Equipment Specification and Qualification
VAL-040: Computerised Systems Validation – Overview
VAL-045: Impact Assessment for Computerised Systems
VAL-055: Design Qualification Guidelines
VAL-060: Protecting the Reliability of Electronic GMP Records

EHS Statement

Consideration should be given to requirements for the safe operation of equipment, both in normal use and whilst conducting tests.

Procedure

1. Introduction / Definition / Purpose of Functional Testing

An overview of the testing philosophy for computerised systems is given in SOP VAL-040.  Systems are categorised in line with GAMP principles and their functions are listed and rated for GxP impact in Impact Assessment forms, per SOP VAL-045.  The level of functional testing applicable to each function within the computerised system is rated as “Minimal”, “Some” or “Extensive”.  All levels aim to verify that the system functions correctly (performs as specified in normal situations) whereas the increased levels also emphasise verification that the system does not function incorrectly (does not malfunction when challenged with unusual circumstances).  The selection of tests also takes the Risk Assessment process into account, per SOP VAL-055.

Functional Testing cannot be effective if specifications are either missing or inadequate, or if requirements are written in a manner which is not testable.  Refer to Appendix 1 for some examples of software specification requirements.

A range of detailed issues is described below for appropriate inclusion in Functional Test Scripts.  Functional Testing can be applied at all levels of software testing, from unit to system level testing.  Knowledge of the internal operation of the computerised system (e.g. from a Structural Code Review) and from processes and equipment relevant to the computerised system may be used to enhance Functional Testing and to determine the range of appropriate test conditions and inputs.  The extension of Functional Testing may be appropriate where the Structural Code Review yields insufficient information.

Computerised systems should be developed to a high standard before formal testing begins.  Testing alone cannot fully verify that software is complete and correct.  Detection of an undue number of issues during formal testing reduces confidence that the system will operate correctly, even after the known defects have been rectified.

In certain circumstances, the manufacturing site may accept statements of assurance from suppliers based on user experience, or evidence of the supplier’s software development standards and of the provisions the supplier makes to satisfy the site’s requirements.

Functional Testing establishes documented evidence that provides a high degree of assurance that a computerised system will consistently function in accordance with its pre-determined specifications and quality attributes throughout its lifecycle.

2. Design of Formal Functional Tests

2.1. Stages of Software Development

It is advisable to structure the test approach to capture problems as early as possible and at the appropriate level.  For that reason a ‘bottom-up’ approach is usually suggested, i.e. from the smallest level of operation up to the total system.  The usual test level sequence is Unit/Module Testing -> Integration Testing -> User Acceptance Testing.

Testing during software building addresses the issues of correct coding, correct functioning of modules and correct functioning when modules are integrated.  Reviewable documentation of testing becomes part of the development project.  Sufficient detail for the tests to be repeatable will facilitate any Regression Testing that may be required later.

2.1.1. Research Testing

Research programming is a pre-development technique sometimes used to investigate how programs might be constructed and whether deliverables are achievable.  Research testing is informal and aids the generation of Specifications.  Once the specifications are finalised, the system is ideally built from a fresh start without re-using code from research programming, as copied code may contain left-over elements with unintended consequences.

2.1.2. Unit Testing (Syn. Module Testing)

Unit Testing evaluates the code to ensure that controlled inputs deliver predicted outputs, i.e. compliance with detailed specifications.  Higher level testing, which concentrates on the macroscopic properties of the system, is dependent on the correct operation of small sections of code, particularly where insidious problems may occur in unusual circumstances.  Unit Testing examines the code as implemented rather than the functional output of the system, and requires an understanding of the design intent.  Testing may be performed using special test code or test programs, or the unit may be tested in its final environment with code inserted to force test cases.  Test programs establish the test environment and establish values for data structures, as well as providing dummy external functions for the test.  Testing appropriately includes Statement Testing, Branch Testing, Boundary Testing and Robustness Testing.

Unit Testing addresses:

a. Does the code implement what the designer intended,

b. For each conditional statement, is the condition correct,

c. Do all special cases work correctly,

d. Are error cases detected correctly,

e. Does it behave in the way the design assumes, over the full range of operational possibilities?

Each statement is ideally covered at least once and each conditional statement executed at least once each way.  Appropriate boundary cases need to be exercised.

The robustness of the programming needs also to be considered, for instance to ensure that inappropriate inputs are excluded and resultant error messages are generated, e.g. non-numeric or no data where a numeric entry is required; and that failure to perform a function is reported with no loss, e.g. failure to store data.
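
As an illustration only (not a requirement of this SOP), the following minimal sketch shows how unit-level statement, branch, boundary and robustness cases of this kind might be scripted in Python with pytest.  The routine parse_weight() and its limits are hypothetical stand-ins for a real unit under test.

```python
# Illustrative sketch only: unit tests for a hypothetical parse_weight() routine,
# covering the normal path (statement), the range check (branch), the defined
# limits (boundary) and invalid inputs (robustness).
import pytest


def parse_weight(text, low=0.0, high=100.0):
    """Hypothetical unit under test: convert operator input to a weight in kg."""
    value = float(text)                 # raises ValueError for non-numeric or empty input
    if not (low <= value <= high):      # branch point: in-range vs out-of-range
        raise ValueError("weight out of range")
    return value


def test_nominal_value():               # statement coverage of the normal path
    assert parse_weight("50.0") == 50.0


@pytest.mark.parametrize("text", ["0.0", "100.0"])
def test_boundary_values(text):          # boundary testing at the defined limits
    assert parse_weight(text) == float(text)


@pytest.mark.parametrize("text", ["-0.1", "100.1", "abc", ""])
def test_robustness_rejects_invalid(text):   # robustness: invalid inputs raise an error
    with pytest.raises(ValueError):
        parse_weight(text)
```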

For low-risk issues, a documented case may be made for desk review only.

2.1.3. Integration Testing

Integration testing identifies problems that occur when modules are combined.  By using a test plan that requires each module to be tested to ensure its viability before modules are combined, any errors discovered when combining units are likely to be related to the interface or interaction between modules.  Testing focuses on the appropriate transfer of data and control across a program’s internal and external interfaces.

Integration Testing may involve writing software shell “driver” programs, and calling up several modules to test flow and control between these modules.  Shell programs are purpose-written code that link modules for test purposes, where the code to perform this function in the final version has not yet been developed.  Shell programs are replaced in the final version.  An illustrative sketch of a simple driver/stub arrangement follows the list below.  Typical issues examined include:

a. Resource loss – examines the consumption of system resources over time.

b. Throughput – examines the amount of data transferred from one place to another or processed in a specified amount of time.

c. Performance – where the performance requirements are checked.  These may include the size of the software when installed, the amount of main memory and/or secondary storage it requires, the demands made of the operating system when running within normal limits, and the response time.

d. Security – where unauthorised attempts to operate the software, or parts of it, are attempted.  It might also include attempts to obtain access to the data, or harm the software installation or the system software.

e. Recovery – where the software is deliberately interrupted in a number of ways, for example, taking its hard disc off line or turning the computer off, to ensure that the appropriate techniques for restoring any lost data will function.

f. Transaction synchronization – examines requirements for a concurrency control algorithm, i.e. Time Sharing.

g. Stress Testing – where abnormal demands are made upon the software by increasing the rate at which it is asked to accept data, or the rate at which it is asked to produce information.  More complex tests may attempt to create very large data sets or cause the software to make excessive demands on the operating system.
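
As an illustration only, the sketch below shows a throw-away driver/stub arrangement of the kind described above.  The modules (a scale interface and a batch record) and their names are hypothetical; the real interfaces would come from the system’s design documentation.

```python
# Illustrative sketch only: a throw-away "driver" linking two hypothetical modules
# to check that data passes the interface intact.
class StubScaleReader:
    """Stand-in for the real scale interface, which is not yet integrated."""
    def read_weight(self):
        return 49.7  # known test value


def record_weight(reader, record):
    """Hypothetical glue code under integration test."""
    record["net_weight_kg"] = reader.read_weight()
    return record


def test_weight_transferred_to_batch_record():
    record = {"batch": "TEST-001"}
    result = record_weight(StubScaleReader(), record)
    # interface check: the value read is the value stored, with no transformation
    assert result["net_weight_kg"] == 49.7
```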

2.1.4. ‘Operational’ / User Acceptance Testing

User Acceptance Testing concentrates on the macroscopic properties of the system.  It is dependent upon successful Module and Integration testing.  It is not possible to ensure that all functions of a computerised system operate correctly in all permutations of execution and under all conditions by performing User Acceptance Testing alone.  Techniques for User Acceptance Testing (Functional) are described in the next section.

Testing design aims to:

a. Emphasize aspects that are of critical importance,

b. Find errors in the software, and not merely to confirm correct operation in normal conditions,

c. Challenge the limits of operations,

d. Check responses to invalid inputs or actions,

e. Show that both valid and invalid data or conditions are handled appropriately,

f. Exercise the system over its full range of operability.

2.2. Methods of Testing

It is not possible to generally prescribe which sorts of test must be applied for all cases and systems.  A variety of methods is available and described below to assist in the design of testing.

2.2.1. Functional Tests

Functional Testing verifies that:

a. The code branches correctly at each branch point identified in the Structural Code Review, when challenged with Test Cases designed to elicit each response.

b. The operations within each branch perform in accordance with their specification.

c. Calculations that are critical to product quality or product release decisions are accurate, using known input data.

d. Hard copy report data accurately reflects its source data.

e. Pass/Fail signals from interfacing computerised systems and sensors are processed correctly, e.g. accurately rejecting the failed unit of product.

f. Correct operation and indexing is maintained after a cycle stop, an emergency stop, or the opening of a safety guard.

g. Sensor signals for adverse conditions are responded to appropriately, e.g. jam-ups, label outages, etc.

h. Data is displayed accurately at operational screen displays and user interfaces.
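
As an illustration only, a minimal sketch of items c and d above: a hypothetical percent-yield calculation is exercised with known input data and an independently calculated expected result, and the formatted report figure is checked against the source value.  The function and figures are placeholders.

```python
# Illustrative sketch only: functional (black-box) check of a hypothetical yield
# calculation using known input data and a hand-calculated expected result.
def percent_yield(actual_kg, theoretical_kg):
    """Hypothetical calculation under test."""
    return round(100.0 * actual_kg / theoretical_kg, 2)


def test_yield_against_known_result():
    # input data and expected result worked out independently beforehand
    assert percent_yield(98.5, 100.0) == 98.50


def test_report_reflects_source_data():
    # report check (item d): the formatted figure matches the source value exactly
    assert f"{percent_yield(98.5, 100.0):.2f} %" == "98.50 %"
```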

2.2.2. Boundary Tests

Boundary Testing is used to challenge the specification limits of custom programs and of user-defined configurations. Maximum and minimum values are to be tested in critical input and output fields.
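
As an illustration only, a minimal sketch of boundary testing against a hypothetical user-defined setpoint field with configured limits of 2.0 and 8.0: values at and just inside the limits are expected to be accepted, and values just outside rejected.  The field, limits and function name are placeholders.

```python
# Illustrative sketch only: boundary tests for a hypothetical setpoint field
# with configured limits of 2.0 and 8.0.
import pytest

LOW, HIGH = 2.0, 8.0


def accept_setpoint(value):
    """Hypothetical input check under test."""
    if not (LOW <= value <= HIGH):
        raise ValueError("setpoint outside configured limits")
    return value


@pytest.mark.parametrize("value", [2.0, 2.1, 7.9, 8.0])   # at and just inside the limits
def test_values_within_limits_accepted(value):
    assert accept_setpoint(value) == value


@pytest.mark.parametrize("value", [1.9, 8.1])             # just outside the limits
def test_values_outside_limits_rejected(value):
    with pytest.raises(ValueError):
        accept_setpoint(value)
```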

2.2.3. Alarm Tests

Alarm conditions are tested to verify that:

a. Entry of invalid data causes an appropriate error message to be displayed.

b. Invalid, illegal or adverse conditions such as alarms, errors and hardware failures are identified and highlighted to the user in a way that allows appropriate response.

2.2.4. Error Tests

Error tests (sometimes called negative tests) examine responses to specifically selected invalid inputs or operations, and calculations based on this data.  Conditions that are expected to cause errors are thereby deliberately generated and the system response monitored and documented.

Test conditions might include:

a. Detection and recovery from incorrect data entry, (e.g. out of range, mismatch with data type).

b. Use of input values that seem likely to cause program errors; e.g., “0”, “1”, NULL, empty string.

c. Attempts to generate calculations with confounding special values e.g. division by zero, square root of a negative value.

d. Detection of failure of critical sensors and operational interfaces to other computerised systems, (e.g. checking of operation in both ON and OFF state; secondary ‘Healthy’ inputs; response to the connection being severed mid-operation).
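
As an illustration only, a minimal sketch of error (negative) testing using some of the confounding values listed above (zero, NULL, square root of a negative value).  The concentration() calculation is hypothetical.

```python
# Illustrative sketch only: negative tests for a hypothetical concentration
# calculation, deliberately supplying confounding inputs.
import math
import pytest


def concentration(mass_mg, volume_ml):
    """Hypothetical calculation under test."""
    if volume_ml is None or volume_ml == 0:
        raise ValueError("volume must be a non-zero number")
    return mass_mg / volume_ml


@pytest.mark.parametrize("volume", [0, None])
def test_confounding_volume_rejected(volume):    # division by zero / NULL input
    with pytest.raises(ValueError):
        concentration(10.0, volume)


def test_negative_sqrt_detected():               # square root of a negative value
    with pytest.raises(ValueError):
        math.sqrt(-1.0)
```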

2.2.5. Configuration Tests

Configuration testing confirms that the validated status of Recipes (user-defined GAMP 4 configuration files, such as files containing operating parameters and sequences, vision models, autoclave cycles, etc.) is defined, tested and documented, and that these files are subject to version control, back-up and archiving procedures.
It also confirms that appropriate SOPs define their use and updating, and that any recovered files are the validated versions.

Note: This is different from system configurations, which are normally defined in Installation Qualification.

2.2.6. Stress Test

The aim of Stress Testing is to detect any problems when the system is pushed to its performance limits, and to establish a performance baseline.  Test Cases need to be designed to simulate loads at and above the maximum projected load.  Functions that require significant time to complete, involve large amounts of data, or include complex processing will be selected for testing.  An attempt to identify any compatibility or interference issues with other applications on the server is also beneficial.
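
As an illustration only, a minimal sketch of how a simple timing baseline might be captured at and above a projected load.  The processing function, load figures and timing approach are placeholders; real stress testing would use representative data volumes and a production-like environment.

```python
# Illustrative sketch only: a crude stress/performance baseline for a hypothetical
# record-processing function, run at and above the maximum projected load.
import time


def process_records(records):
    """Hypothetical function under test: parse and total a batch of records."""
    return sum(int(r) for r in records)


def stress_baseline(load):
    records = ["1"] * load
    start = time.perf_counter()
    process_records(records)
    return time.perf_counter() - start


if __name__ == "__main__":
    # maximum projected load and double that load; record both as the baseline
    for load in (100_000, 200_000):
        print(f"{load} records processed in {stress_baseline(load):.3f} s")
```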

2.2.7. Security Tests

Security testing ensures that the system provides correct access for the specific responsibilities and associated functions according to the predefined security hierarchy.  Issues to be verified may include that:

a. Different levels of security restrict access as specified, (i.e. that unauthorised access is prevented in accordance with the profile design).

b. Passwords are required and meet formatting and updating rules.

c. Output data files are unable to be amended either from within the application or by using external programs such as text editors.

d. Critical data files are secure from unauthorised deletion or overwriting.

e. The ordinary user has access to data entry fields only.

f. Updates to program and user-defined configuration files require high-level access.

g. Attempts to access system functionality without the correct level of authorisation are logged for review.
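
As an illustration only, a minimal sketch of security testing against a hypothetical role/permission map.  The roles, actions and hierarchy shown are placeholders for the system’s actual security design.

```python
# Illustrative sketch only: checks that access matches a predefined (hypothetical)
# security hierarchy.
PERMISSIONS = {
    "operator":   {"enter_data"},
    "supervisor": {"enter_data", "approve_batch"},
    "admin":      {"enter_data", "approve_batch", "update_configuration"},
}


def is_allowed(role, action):
    """Hypothetical access check under test."""
    return action in PERMISSIONS.get(role, set())


def test_operator_limited_to_data_entry():
    assert is_allowed("operator", "enter_data")
    assert not is_allowed("operator", "update_configuration")


def test_configuration_update_needs_high_level_access():
    assert is_allowed("admin", "update_configuration")
    assert not is_allowed("supervisor", "update_configuration")


def test_unknown_role_denied():
    assert not is_allowed("visitor", "enter_data")
```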

2.2.8. Format Tests

Format testing verifies that displayed information and printed reports contain required data, and details of the product/event and of the context of the report.  It confirms that the report:

a. Identifies its title/purpose.

b. Contains descriptive information identifying product/lot and/or GxP event.

c. Contains the timestamp of the event being reported.

d. Gives the specified data in correct units of measure.

e. Contains the date the report was printed and identifies the responsible authority.

f. Identifies the application number and software version.
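
As an illustration only, a minimal sketch of a format check confirming that a hypothetical report record carries the required descriptive fields before it is rendered or printed.  The field names and values are placeholders for those defined in the report specification.

```python
# Illustrative sketch only: verify a hypothetical report record contains the
# required descriptive fields (title, product/lot, timestamp, units, version, etc.).
REQUIRED_FIELDS = {
    "title", "product", "lot", "event_timestamp",
    "value", "unit", "printed_on", "approved_by", "software_version",
}


def test_report_contains_required_fields():
    report = {
        "title": "Fill Weight Report",
        "product": "Example Product",
        "lot": "LOT-001",
        "event_timestamp": "2024-01-01T08:00:00",
        "value": 49.7,
        "unit": "g",
        "printed_on": "2024-01-01",
        "approved_by": "QA",
        "software_version": "1.0.0",
    }
    missing = REQUIRED_FIELDS - report.keys()
    assert not missing, f"report is missing fields: {sorted(missing)}"
```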

2.2.9. Data Integrity Tests

Data integrity testing focuses on data monitoring and controls.  This includes verification of transaction logging to the audit trail, invalid entry checking, error messages, etc.  The requirements of SOP VAL-060. may be used as a guide.

Testing of data migration programs to verify that data will be accurately converted will enhance confidence in their use.  This involves confirmation of data types, indexing of databases and alignment of Tag lists with correct PLC memory addresses.
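
As an illustration only, a minimal sketch of a field-by-field migration comparison, including data types, for a hypothetical table.  Real migration verification would also cover database indexing and Tag-list/PLC address alignment as noted above; the record layout and values are placeholders.

```python
# Illustrative sketch only: compare source and migrated records field by field,
# flagging value or data-type differences.
def compare_records(source, migrated, key="id"):
    """Report rows whose values or data types differ after migration."""
    by_key = {row[key]: row for row in migrated}
    problems = []
    for row in source:
        target = by_key.get(row[key])
        if target is None:
            problems.append((row[key], "missing after migration"))
            continue
        for field, value in row.items():
            if target.get(field) != value or type(target.get(field)) is not type(value):
                problems.append((row[key], field))
    return problems


if __name__ == "__main__":
    src = [{"id": 1, "weight": 49.7}, {"id": 2, "weight": 50.1}]
    dst = [{"id": 1, "weight": 49.7}, {"id": 2, "weight": "50.1"}]  # data type changed
    print(compare_records(src, dst))   # -> [(2, 'weight')]
```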

2.2.10. Restart and Recovery Tests

Testing focuses on specific conditions that may cause the system to terminate unexpectedly.  The testing will ensure that the integrity of the data within the application is not compromised in the event of a system, application, system interface, or network failure.  It will also ensure that the user can determine the status of operations in process at the time of termination.  Testing involves forcing unique conditions (outside of normal application operations) that will result in application termination or significant application errors, e.g.

a. Incorrect start-up and shutdown procedures.

b. Recovery after power outage without data loss, or loss of cycle information.

c. Disaster Recovery Plan for operating programs, user-defined configuration files and data files.
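
As an illustration only, a minimal sketch of a write-then-verify recovery check using an atomic file replacement, so that a simulated interruption cannot leave a partially written state file behind.  The file layout and function names are hypothetical.

```python
# Illustrative sketch only: save cycle state atomically, then verify that the
# last committed state can be read back intact after a (simulated) interruption.
import json
import os
import tempfile


def save_cycle_state(path, state):
    """Write state to a temporary file, then atomically replace the target."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "w") as handle:
        json.dump(state, handle)
    os.replace(tmp, path)   # atomic on the same filesystem


def test_state_survives_interrupted_rewrite(tmp_path):
    target = tmp_path / "cycle_state.json"
    save_cycle_state(str(target), {"cycle": 41, "status": "complete"})
    # a crash before os.replace() would leave the previous file untouched;
    # here we verify the last committed state reads back intact
    with open(target) as handle:
        assert json.load(handle)["cycle"] == 41
```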

2.3. Extent of Testing

Effective software testing requires that considerable effort be put into the definition of what is to be tested.  The appropriate amount of testing to perform depends on the potential impact on Quality and on the risks associated with the system.  SOP VAL-045 is followed to determine the C level (Impact Assessment Category) and consequently the general extent of testing.  This will be based on the use of the system, (i.e. its Impact on Quality) and the novelty of the system, (i.e. its GAMP rating).  The three GxP test extents of “Extensive”, “Some” and “Minimal” (which arise from the C levels of C5, C4 and C3 respectively) are defined in SOP VAL-045.  GEP Functional testing arises from the C levels of C2 and C1.

Within each Impact Assessment Category, Risk Assessment tools are used as described in SOP VAL-055 to determine the critical functions / components and the sources / consequences of potential hazards / faults, etc.  Test cases appropriately include the potential hazards and faults, challenging more than just normal function.  Areas with the most at stake merit an increased testing focus.

Where “Extensive” Functional Testing is specified, documented reductions in test requirements may be justified.

2.3.1. GEP Functional

GEP functional testing is used to confirm that the system operates as intended, per its URS and per industry standards.  It verifies commercial suitability.  Results may be noted in the IQ or Change Request.  Where the URS identifies GxP functions, these may require additional testing as per the Impact Assessment.

2.3.2. Minimal Functional

This testing level ignores the internal mechanism or structure of a software system and focuses on the outputs generated in response to selected inputs and execution conditions (sometimes this is described as “Black-box” testing), to confirm that:

Critical GxP functions operate in accordance with their specifications.

2.3.3. Some Functional

This testing takes into account the structure of a software system for the ‘Direct Impact’ or ‘Indirect Impact’ functions or modules.  In addition to the “Minimal” requirements, it confirms that:

a. Branch logic operates as specified for important and/or frequently used paths, (see Functional Tests),

b. Deliverables within important and/or frequently used paths are generated as specified, (see Functional Tests),

c. Inputs exceeding the allowable or expected, upper and lower limits result in defined responses, (i.e. Boundary Tests),

d. Alarms operate correctly when boundaries are exceeded, (i.e. Alarm Tests),

e. Inputs that are of an inappropriate type or size result in defined responses, and error messages are generated and recorded appropriately, (i.e. Error Tests).

f. Recipes are managed effectively, (user-defined GAMP 4 configuration files, such as for files containing operating parameters and sequences, vision models, autoclave cycles, etc.), (i.e. Configuration Tests).

2.3.4. Extensive Functional

This testing takes into account the structure of a software system for the ‘Direct Impact’ functions or modules.  In addition to the “Minimal” and “Some” requirements, it confirms that:

a. Branch logic operates as specified for all ‘Direct Impact’ paths, (see Functional Tests),

b. Deliverables within all ‘Direct Impact’ paths are generated as specified, (see Functional Tests),

c. Correct operation is maintained when the system is subjected to maximum load, (i.e. Stress Tests),

d. Access to the computerised system, its operating programs and its user-defined configuration files is within defined security parameters, (i.e. Security Tests),

e. Reports generated by the system contain the specified descriptive and numerical information, expressed in correct units of measure, (i.e. Format Tests),

f. Recovery procedures operate effectively for operating programs, user-defined configuration files and data records, (i.e. Restart and Recovery Tests),

g. All GxP records generated by the system are reliable, (i.e. Data Integrity Tests).

2.4. Standard of Testing

The system specification guides the design of Functional Testing.  It may also reference other relevant standards.  Structural Testing (Code Review) can also be used to guide the selection of tests.

Good testing practice ensures that tests:

a. Cover all relevant aspects of the system or piece of equipment,

b. Refer to specifications,

c. Are reproducible throughout the life cycle of the system or equipment, and

d. Are executed and documented well enough to enable traceability of:

*    The test,

*    The test results,

*    The handling of deviations, and

*    The responsible person for each activity.

Description of the test procedure in sufficient detail will enable repetition of the test.  Testing needs to be documented as it is performed, and any raw data referenced, dated and retained to demonstrate that the testing was performed to an agreed standard.  Results need to be recorded unambiguously as factual observations, including a clear pass or fail statement.  All results need to be kept as primary evidence of testing and as such, need to be filled out with care.  Each test needs to reference the relevant clause number of its controlling specification for traceability.

Functional Testing is performed in a test environment, which is representative of the target production environment.  Where use of a test environment is not possible due to the nature of the system, the live environment may be used without saleable product, and with a recovery plan to restore the previous system upon failure of testing of the new system.

Applicability of results between Test and Live environments is required to ensure that Functional Test results adequately represent the final operating system.  Functional Testing is only valid in the context of the technical architecture and system configuration that supports the computerised system.  Functional Testing also forms the basis of equipment qualification after transfer to the live environment.  Consistency of results with those achieved in the test environment is important.

Tests are optimally performed by persons who are familiar with the system or similar systems.  Testers need to be as independent as possible and, where possible, not the authors of the testing procedures.  Connected devices are to be within calibration.

2.5. Formality of Testing

Validation evidence is formal, which requires it to be structured, detailed, objective, documented and subject to review and approval.  Generating such evidence requires discipline and care.  Documented Evidence may be reviewed by Regulatory Auditors to support our Licence to Manufacture.

During the development of a system it may not be feasible to perform all testing at a formal level.  ‘Informal’ testing can be of benefit in building confidence in the system performance.  Changes as a result of issues discovered during informal testing are optimally controlled by a change management process involving logging of issues and version control, with the aim of enhancing the performance and assurance of the system.

Suppliers might perform formal testing with the provision of appropriate evidence.  This can be used by the site to reduce the scope of formal testing.  Systems that require ‘validation’ cannot rely solely on informal testing and / or statements of assurance from suppliers.

3. Guidance on Performing Functional Testing

3.1. Planning

The test strategy is planned as above, including the use of Risk Assessment to determine appropriate conditions.  Test Conditions and Expected Results are documented using the appropriate functional testing template. The Functional Test template should be approved prior to proceeding.

3.2. Review and Approval of the Plan

The Functional Test Script is reviewed by the Business/Process Owner (or the Project Manager who is acting on their behalf) to confirm that the proposed testing adequately demonstrates that the computerised system will meet their needs, i.e. complies with Specifications.

It is also reviewed by the Validation Manager or Q.A. Manager for GxP compliance.

The following types of information assist in review of the Functional Test Script or protocol:

a. Where the report fits into the overall test strategy, e.g. Module Test, Integration Test, SAT, etc.

b. Assumptions, e.g.

“Data used is representative of production, both in type and volume”; or,

“The network and technical architecture have remained in a qualified state via Change Control since the last formal qualification”.

c. Exclusions, e.g.

“Network and web portals are outside the scope of this testing” or,

“Modules of the program with no impact on GxP and no significant connection to GxP modules are tested under GEP”.

d. Limitations, e.g.

“Documented system limitations will not be tested, however workarounds defined for these limitations will be tested”.

e. Dependencies, e.g.

“Interfacing equipment and/or instrumentation upon which Functional Testing depends is in a static state and will not change during testing.  Any changes made after testing will be considered as possible grounds for re-testing the interfaces to the system”.

3.3. Execute the Test

The environment where the test is performed is recorded, e.g. Test, Live.  The software version is also recorded, especially where it changes during testing.  Where a test is dependent on other tests, the test sequence must be correct.

Test results are recorded with unambiguous “pass” / “fail” statements.  Individual results are signed/initialled and dated.  Attachment and referencing in the Test Script of screen prints of data values or error messages or prints of graphical displays provides supporting evidence.

Failed results and responses are to be preserved.  The Failures Retesting section is filled out where the result is not a pass, and a number is used to track defects.  New Test Conditions and Expected Results are recorded.

The Correction / Failure Cause column is used to:

a. Record that a system defect was identified and how it was corrected, (i.e. errors attributable to software or system design),

b. Record tester errors and their resolution, (i.e. any mistake that the individual executing the script makes),

c. Explain the impact of environment or set-up errors, (i.e. any anomalies that might occur due to the incorrect or incomplete set-up of the test environment or that are caused by environmental incidents such as blackouts).

d. Revise the test criteria used to validate a function, (i.e. where incorrect test instructions, data or expected results were initially used).

3.4. Evaluate the Test

The conclusion section of the Test Script will contain a summary of the overall test effort, including any discrepancies and their resolution.

A Test Report can be a helpful summary where there are several test scripts.

3.5. Review and Approval of Results

The Functional Test Script is again reviewed by the Business/Process Owner (or the Project Manager who is acting on their behalf) to verify that the computerised system meets their needs, i.e. complies with Specifications.

It is also reviewed by the Validation Manager or Q.A. Manager for GxP compliance.

4. Testing to Support Change

Due to the complexity of software, a seemingly small local change may have a significant global system impact.  Maintenance of the validated status of the software requires:

a. Testing of the individual change, and

b. Determination of the extent and impact of the change on the software system, or

c. An analysis to document the case for no impact, (if appropriate).

Regression Testing (or alternatively before-after tests) may be appropriate to show that unchanged but vulnerable portions of the system have not been adversely affected by the change.  Where possible, previous test scripts are re-run and the results compared with earlier results in order to verify that the system is working as a whole after the change.

The extent of testing, including Regression Testing is guided by the section above on “Extent of Testing”.

5. Appendix 1 – Software Specifications – Useful Criteria

Typical software requirements specify the following:

All software system inputs;

All software system outputs;

All functions that the software system will perform;

All performance requirements that the software will meet, (e.g. data throughput, reliability, and timing);

The definition of all external and user interfaces, as well as any internal software-to-system interfaces;

How users will interact with the system;

What constitutes an error and how errors are to be handled;

Required response times;

The intended operating environment for the software, if this is a design constraint,
(e.g. hardware platform, operating system);

All ranges, limits, defaults, and specific values that the software will accept; and

All safety related requirements, specifications, features, or functions that will be implemented in software.

6. Summary of Changes

Version #     Revision History
VAL-050       New