The tools that sponsors use in clinical research must be validated to ensure they work as intended. Regulators refer to the FDA's Code of Federal Regulations Title 21 Part 11 for the validation of electronic data capture (EDC) systems, to the International Society for Pharmaceutical Engineering's (ISPE) Good Automated Manufacturing Practice (GAMP) for the validation of systems used in the manufacturing of medicinal products, and to the Parenteral Drug Association's (PDA) Technical Report 80 for the validation of laboratory equipment. However, the centralized monitoring tools that clinical operations use to make quality-related decisions are not explicitly covered in any of these documents. How can one know what applies to centralized monitoring tools and to what extent they must be validated? I asked the Regulatory Affairs Professionals Society (RAPS), and their response can be summed up as follows: you don't need a guidance document. Just validate it.
Indeed, one should validate a centralized monitoring tool to gain assurance that it works as intended, not merely to satisfy some regulatory requirement. Nevertheless, the RAPS representatives did mention some regulatory guidances one should look at to get a better idea of what regulators expect. Keep in mind that no single guidance covers centralized monitoring tools; rather, several guidances collectively provide direction for their validation. The General Principles of Software Validation document specifically states that "it is not possible to state in one document all of the specific validation elements that are applicable. However, a general application of several broad concepts can be used successfully as guidance for software validation. These broad concepts provide an acceptable framework for building a comprehensive approach to software validation." Below are the broad concepts to take away from the different guidances.
ISO 13485 and the General Principles of Software Validation are intended mainly for medical devices and state that "all production and/or quality system software, even if purchased off-the-shelf, should have documented requirements that fully define its intended use, and information against which testing results and other evidence can be compared, to show that the software is validated for its intended use." Accordingly, one should state what a centralized monitoring tool is expected to do in the form of written requirements.
21 C.F.R. Part 820.70(i), AAMI TIR36 and GAMP 5 are intended for manufacturing systems and state that "[the manufacturer] shall validate computer software for its intended use according to an established protocol. All software changes shall be validated before approval and issuance. These validation activities and results shall be documented." Accordingly, one should write a validation plan stating the specific strategy used to prove that a tool works as intended. One should specifically prepare test cases related to each identified requirement. One should also prepare a validation report to document the outcome of testing and the important observations made while testing.
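To make the idea of tracing test cases back to requirements concrete, here is a minimal sketch. The requirement ID (REQ-01), the `flag_high_sites` function and the query-rate scenario are all hypothetical, chosen only to illustrate the structure; a real validation package would reference the tool's actual documented requirements.

```python
# Hypothetical requirement for illustration:
# REQ-01: the tool shall flag every site whose query rate exceeds the threshold.

def flag_high_sites(query_rates, threshold):
    """Return the site IDs whose query rate is strictly above the threshold."""
    return [site for site, rate in query_rates.items() if rate > threshold]

def test_tc01_req01_flags_sites_above_threshold():
    """TC-01 (traces to REQ-01): only the site above the threshold is flagged."""
    rates = {"site_101": 0.8, "site_102": 0.2}
    assert flag_high_sites(rates, threshold=0.5) == ["site_101"]

test_tc01_req01_flags_sites_above_threshold()
print("TC-01 passed")
```

Recording the requirement ID inside each test case is what later makes it trivial to assemble a traceability matrix showing that every requirement has at least one passing test.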
The PDA Technical Report 80 section 6.5.3 is intended for lab equipment and states that "many laboratories use customized spreadsheets for calculations. To avoid possible data breaches, the Quality Unit should validate the customized spreadsheet template for its intended use and protect it by restricting permissions to alter the template or delete data. Typically, customized spreadsheets are validated by customizing them to the intended use with a standardized formula in the USP or another valid source, comparing the manual calculations against the spreadsheet calculations, and testing boundaries and functions." Accordingly, one should prevent changes from being made to the tool while validation is being performed. This can be done by placing a read-only copy on a secure server with limited access before initiating validation. Most software does not come with access to its source code, but if the tool happens to be an Excel workbook, cells containing formulas should be made uneditable using the formula protection functionalities. Here are good resources for protecting worksheets and workbooks. Means of protection should remain in effect while the tool is being used in production, and any required changes should be made following a change control process. Another point to take away from the excerpt above is that boundaries should be tested. For instance, if a centralized monitoring tool is designed to flag calculated values that fall above a given threshold, one should test scenarios where calculated values fall on each side of the threshold.
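Boundary testing can be sketched in a few lines. This example assumes a hypothetical flagging rule (values strictly above the threshold are flagged) and exercises values just below, exactly at, and just above the boundary; whether the boundary value itself should be flagged depends on how the tool's requirement is actually worded.

```python
# Hypothetical flagging rule: a value is flagged only if it is strictly
# above the threshold. "At the boundary" is therefore not flagged.

def is_flagged(value, threshold):
    return value > threshold

THRESHOLD = 2.0

# Exercise both sides of the boundary, plus the boundary itself.
boundary_cases = [
    (1.99, False),  # just below the threshold
    (2.00, False),  # exactly at the threshold
    (2.01, True),   # just above the threshold
]

for value, expected in boundary_cases:
    result = is_flagged(value, THRESHOLD)
    assert result == expected, f"{value}: got {result}, expected {expected}"
print("All boundary cases passed")
```

The same three-point pattern (below, at, above) applies whether the flag lives in an Excel formula, a SAS program or a BI dashboard; for a spreadsheet, the equivalent is typing those three values into the input cells and comparing the flag cell against a manual calculation.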
Guidances are broad in scope and require some interpretation to determine how they apply to different situations. One must judge the relevance of directives to a given context and evaluate whether alternatives can comply with the intent of the guidelines.
For example, 21 CFR Part 11 states the necessity of preserving the trustworthiness and reliability of records with eSignature and audit trail functionalities, but centralized monitoring tools do not produce records that are used to perform statistical analyses and draw studies' conclusions. Centralized monitoring tools compute KRI metrics from raw data, which might actually be incomplete or incorrect since they are exported from EDC and CTMS systems in real time. Therefore, having eSignature and audit trail functions in centralized monitoring tools would do little to ensure safety and quality. On the other hand, centralized monitoring reports may be considered trial-monitoring records, and it is a good idea to ensure their trustworthiness and reliability by documenting their review. To keep things simple, one can avoid the hassles of eSignature (user access management, training, etc.) by producing hard copies of centralized monitoring reports and having them signed by the persons who review them.
As another example, an Excel workbook that is customized corresponds to GAMP Category 5 software, and the recommended validation approach includes performing Installation Qualification (IQ), Operational Qualification (OQ) and Performance Qualification (PQ). However, this approach may be excessive for the validation of tools that leverage Excel outside the manufacturing context, and the General Principles of Software Validation specifically recommends using the least burdensome approach. One may thus devise a simpler approach that does not include IQ or PQ and still ensures that a workbook has been properly customized for its intended purpose.
The goal of validation is to prove that a tool does what it is intended to do and to document the proof. Namely, validation documents should include requirements, which describe what a tool should do, and specifications, which describe how the tool meets the requirements. Test cases should be prepared to show that requirements are consistently met. The whole approach to validation should be described in a Validation Plan, and the outcome of testing should be documented in a Validation Summary Report. To help reviewers, it is a good idea to take screenshots of the tool while performing test cases, especially if charts and conditional formatting are involved.
Below is an example of the traceability matrix used to validate the RI Calculator, an Excel workbook designed to be customized for the centralized monitoring of different studies. It provides a clear link between requirements, specifications and test cases. It also provides the outcome of each test as well as the specific test dates and the identity of the testers.