May 02, 2018: Technological advancement has brought a paradigm shift in clinical trial design, execution and management. Yet despite this advancement and regular monitoring, the clinical trial industry and regulatory agencies face new challenges and continue to struggle to enhance patient safety.
Trials on human volunteers, or clinical trials, are performed to evaluate ‘outcomes’ or ‘effects’ through observations or interventions, essentially evaluating a hypothesis that would improve the way medical conditions are diagnosed and/or treated. The conduct of clinical trials has a very long history, dating back to medieval times. Since then, the concept has continuously evolved, becoming more structured and humane, holding the safety of volunteers as the paramount consideration. ICH-GCP is an overarching framework adopted across international regulatory agencies, providing guidance to ensure that trials are designed, conducted, recorded and reported in an ethical and scientific manner.
It goes without saying that all players in the ecosystem (sponsors, clinical sites/facilities, investigators and support structures) must follow ICH-GCP and applicable national and international regulatory requirements. The quality assurance processes currently practiced by most organizations and the research community are quite old-fashioned: QA processes are instituted only after a problem or crisis has developed. Unlike in other areas, flawed clinical data cannot simply be excluded or ignored during analysis. Any such attempt would significantly affect trial outcomes and compliance with ethical requirements.
There has been an argument that clinical trials are so highly self-regulated that an external concept of quality management would only be an additional attribute. However, regulatory agencies and international quality think tanks hold an opposing view. It is more meaningful to have a quality assurance process that detects and controls potential quality issues well before they become serious. This strategic approach of deploying a well-integrated Quality Management System for trial conduct, built on risk-based thinking, has more relevance than a conservative approach.
The current evolving landscape of clinical trials in an increasingly globalized society calls for clinical evaluation in different populations. This is leading to the application of complex designs and the involvement of multiple countries, numerous sites and investigators, and thereby an increase in trial outsourcing to Clinical Research Organizations (CROs) for competent trial management. Additionally, the governmental push to promote the availability of cost-effective generic medicines across different geographies has resulted in a flurry of BE/BA trials being conducted across the globe. This has placed a greater burden on regulators to confirm the authenticity and integrity of trial data before approving generic medicines for public use.
Since the development of the ICH-GCP guideline, the scale, complexity, and cost of clinical trials have increased. Growing innovations in technology such as electronic recording, electronic reporting and centralized risk-based monitoring present greater avenues to increase efficiency and direct attention to core objectives. Regulatory expectations in this regard have already been published and made effective. Agencies have deployed the technology required to receive data electronically and perform systematic scrutiny, which can be followed up with focused field inspections. In this scenario, it is imperative that organizations that want to hold their ground and remain competitive adapt themselves, embrace this technology, and provide the required risk-based monitoring analytics with suitable quality metrics from an early stage of the trial through to the final dossier submission.
Quality Management System and Good Clinical Practices – A synergistic combination
The Quality Management System as defined by the ISO 9001:2015 standard amalgamates effectively with Good Clinical Practice if implemented strategically and thoughtfully. The ISO 9001 (QMS) standard and ICH-GCP were revised around the same time, and both incorporated a risk-based approach to quality management. The former takes a ‘Plan-Do-Check-Act’ approach, using risk-based thinking as a means of achieving effective quality management, while the latter (ICH-GCP (R2)) recommends that a quality management system use a risk-based approach factored into prevalent guidelines on generating trial data. The illustration (an example, worked around the ISO 9001:2015 standard illustration) provided in Figure-1 demonstrates the adaptation of QMS in a clinical trial set-up from the perspective of a service provider. It shows that the requirements of QMS and clinical trial processes can be easily interconnected without any principle-based conflicts.
Risk-Based Thinking and Defining Quality Tolerance Limits
The guidance for industry issued by the USFDA and the reflection paper published by the EMA have already provided guidelines to enable risk-based thinking in clinical trial quality management. The thinking behind these recommendations is that trial-conduct approaches built on risk-based thinking give the sponsor better oversight and control over trial conduct and human subject protection. Additionally, the adoption of risk-based thinking is expected to remove bottlenecks, enable informed decisions and achieve the desired compliance goals in clinical trials. Over time, this approach significantly reduces the operational cost of conducting clinical trials while improving productivity and compliance. Identifying suitable quality metrics that provide early indications of the risk associated with the clinical study outcome is the cornerstone of the whole process. As appropriately echoed in the EMA reflection paper, risk-based thinking should start with protocol design to ensure that mitigation is built into the protocol and monitoring plan, and deployed through the course of a project.
In line with the establishment of risk assessments and quality metrics, the agencies expect sponsor organizations to define Quality Tolerance Limits, which are essentially practicable variation limits for an identified clinical trial data or procedural quality metric. The tolerance limits are designed with consideration of the trial objective, design, complexity, size, and endpoints. This approach allows the identification of potential issues in advance and contributes to focused data measurement, collection and reporting. Using the example of ‘protocol deviations’ as a quality metric: a deviation detected, for instance through a complaint, can be validated by a centralized monitoring system managed by well-trained monitors against the defined quality tolerance limits, based on which its impact and severity can be assessed and a CAPA initiated as required.
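The tolerance-limit logic described above can be sketched in code. This is a minimal, hypothetical illustration: the metric name, thresholds, and the severity bands are assumptions for the example, not values drawn from any guideline.

```python
# Hypothetical sketch: flagging a quality metric against a predefined
# Quality Tolerance Limit (QTL). All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class QualityMetric:
    name: str
    value: float            # observed value, e.g. deviation rate per site
    tolerance_limit: float  # predefined QTL for this metric

def assess(metric: QualityMetric) -> str:
    """Classify a metric reading against its tolerance limit."""
    if metric.value <= metric.tolerance_limit:
        return "within-limit"
    # Exceedance: severity is assessed and, if warranted, a CAPA follows.
    excess = metric.value / metric.tolerance_limit
    return "minor-exceedance" if excess <= 1.5 else "capa-required"

print(assess(QualityMetric("protocol_deviation_rate", 0.04, 0.05)))  # within-limit
print(assess(QualityMetric("protocol_deviation_rate", 0.09, 0.05)))  # capa-required
```

In practice the classification bands would be predefined per metric in the trial's quality management plan rather than hard-coded as here.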
Trial data integrity
Inspectional observations of varied severity are typically the result of trial data that significantly deviates from the pre-stated conditions, or of an application/device/methodology that does not fulfil the necessary validation requirements.
It is very important that the commercial off-the-shelf (COTS) and customized electronic systems, such as electronic case report forms (eCRFs), electronic data capture (EDC) systems, and mobile technology used for patient compliance and/or trial monitoring, owned or managed by sponsors and other regulated entities, are well qualified and validated to verify their conformity with the applicable standards and legislation (e.g., 21 CFR Part 11) and to demonstrate data integrity. Quality metrics and tolerance limits play a very important role in measuring the performance of those systems and the reliability of the data they generate. It is critical to assure the integrity of data when the electronic systems used in a clinical study are updated to a new software version, and when data relocation takes place. Accuracy, precision, user and time stamps of the executed clinical measurements, and selected elements of the electronic audit trail can be suitable quality indicators to demonstrate data integrity.
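As a simple illustration of how user and time stamps in an audit trail can serve as integrity indicators, the sketch below checks an exported trail for missing stamps and back-dated entries. The record field names are assumptions for the example and do not correspond to any particular EDC product.

```python
# Hypothetical sketch: minimal integrity checks over an electronic audit
# trail export. Field names ("user", "timestamp") are illustrative.
from datetime import datetime

def audit_trail_issues(records):
    """Return a list of integrity issues found in audit-trail records."""
    issues = []
    last_ts = None
    for i, rec in enumerate(records):
        # Every entry must carry a user stamp and a time stamp.
        if not rec.get("user"):
            issues.append(f"record {i}: missing user stamp")
        ts = rec.get("timestamp")
        if ts is None:
            issues.append(f"record {i}: missing time stamp")
            continue
        ts = datetime.fromisoformat(ts)
        # Entries must be chronologically ordered (no back-dating).
        if last_ts is not None and ts < last_ts:
            issues.append(f"record {i}: timestamp earlier than previous entry")
        last_ts = ts
    return issues

trail = [
    {"user": "monitor1", "timestamp": "2018-05-02T09:00:00"},
    {"user": "", "timestamp": "2018-05-02T08:30:00"},
]
print(audit_trail_issues(trail))
```

A real validation exercise would cover far more (record-level checksums, version migrations, access controls); this only shows how the stamp-based indicators named above translate into automatable checks.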
Having identified the quality metrics and their tolerance limits, attention should be focused on those situations where the established tolerance limits are exceeded by more than a set range or frequency. Taking the example of a pharmacokinetic set-up: if a blood sample is expected to be collected at a specified time point during an ambulatory visit and, for some reason, this was not possible, the applicable data collection tool (manual/electronic) should reflect that variation. In this case, the time of sample collection is the quality metric, and the variation observed between the specified and actual times of collection is evaluated against the predefined tolerance limit. Together, they help in assessing and mitigating the issue of delayed sample collection. Based on conformity with the tolerance limit, the pharmacokineticist can suitably adjust the calculations, resulting in a meaningful evaluation of the trial objective.
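The ambulatory-sample example above can be made concrete with a short sketch that compares scheduled and actual collection times against a tolerance window. The 15-minute window is an illustrative assumption, not a regulatory or guideline value; the acceptable deviation would be defined per protocol.

```python
# Hypothetical sketch of the pharmacokinetic sampling-time example:
# the deviation between specified and actual collection time is the
# quality metric, evaluated against an assumed tolerance window.
from datetime import datetime

TOLERANCE_MINUTES = 15  # assumed QTL for sampling-time deviation

def sampling_deviation(scheduled, actual):
    """Return (deviation in minutes, within-tolerance flag)."""
    delta = datetime.fromisoformat(actual) - datetime.fromisoformat(scheduled)
    minutes = abs(delta.total_seconds()) / 60
    return minutes, minutes <= TOLERANCE_MINUTES

# A near-on-time sample vs. a delayed ambulatory visit:
print(sampling_deviation("2018-05-02T08:00:00", "2018-05-02T08:10:00"))  # (10.0, True)
print(sampling_deviation("2018-05-02T08:00:00", "2018-05-02T09:05:00"))  # (65.0, False)
```

When the flag is False, the recorded actual time still allows the pharmacokineticist to use the true sampling time in the concentration-time calculations rather than the nominal one.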
The clinical research sphere is witnessing new challenges pertaining to the conduct and reliability of trials. Deployment of a technology-driven Quality Management System with risk-based thinking is both an expectation and a requirement. It is no exaggeration to say that the future belongs to those organizations that deploy technology-enabled systems and processes to eliminate systematic bottlenecks in clinical trial conduct and make trials efficiently managed, reliable, and compliant with regulatory requirements while upholding the rights and safety of trial volunteers. This integration of GCP and QMS with risk-based thinking will definitely have an overall positive impact on the very purpose of clinical trials.
General Manager – Head Quality Assurance and Computer System Validation,
Navitas Life Sciences Limited.
Aruna Sudheendra Theerthahalli
Assistant General Manager – Quality Assurance (BA/BE Services),
Navitas Life Sciences Limited.