Cloud Computing: Workaround for non-compliant PaaS

The trend in the pharmaceutical industry is also moving towards cloud computing. Financial as well as organizational advantages speak in favour of the cloud. At the same time, however, potential risks and regulatory restrictions should also be taken into account. Nine experts from the pharmaceutical industry and regulatory authorities answer a comprehensive catalog of questions on the following GxP-relevant topics:

  • Basics of Cloud Computing Technology
  • Regulations and Expectations of Inspectors
  • Customer-Supplier-Relationship
  • Requirements for Cloud Service Providers (CSP)
  • Requirements for Supplier Evaluation and Supplier Audits
  • Requirements for Qualification / Validation

The following question is one of a series of questions that we will publish in further GMP News articles on this site in the coming weeks.

Question 19: A non-(GxP-)qualified PaaS could change the versions of some of its generic microservices that are used by the application to be deployed as a GxP SaaS. Changing the versions of such generic microservices could be beyond the control of the SaaS provider. What would be required to make this scenario GxP-compliant? - Requirements for Cloud Service Providers (CSP).

Not qualifying a GxP-relevant IT infrastructure platform (= PaaS) contradicts basic GxP compliance requirements: if such an IT infrastructure platform is used in the context of an application that requires validation (= SaaS), the IT infrastructure concerned must be qualified (EU GMP Annex 11, Principle: "The application should be validated; IT infrastructure should be qualified"). Thus, the (trivial) answer to the question of how to make the presented scenario GxP-compliant is simply that the platform as a service concerned must be qualified.

Since the GxP world is not ideal either, one could speculate about workarounds to counter such non-compliance, e.g. by reducing the degree of risk for a transitional period with interim measures, or by generally becoming more resilient to such weaknesses through an appropriate process design. The following is certainly an incomplete list of possible countermeasures:

  • The massive security vulnerability in the Java library log4j has shown how important it is to have knowledge of and control over the software components used. Consequently, a complete overview of the building blocks or libraries contained in the application should be maintained as part of the validation. If vulnerabilities or security breaches become known, such a software bill of materials (SBOM) allows targeted measures (see the SBOM sketch after this list).
  • In general, it is a good idea to be aware of the cloud service provider's change management, regardless of whether or not the services are subject to validation/qualification. Active change management includes information about planned updates or (important) patches being made available to customers in a timely manner. For its part, the regulated user must assess such information in a controlled process (e.g. for relevance) and, if necessary, schedule its own measures to safeguard the announced change (e.g. regression tests). A prerequisite, of course, is that the regulated user has the necessary know-how and resources to be able to contribute to change management. One might also think of shifting change management completely to the side of the regulated operator; this will generally fail due to the lack of knowledge about which software components are used for the platform service.
  • If no information about upcoming platform changes is available, either as regular, planned updates or as part of patch management, the only control measures available to the regulated user are active monitoring or close-meshed regression tests:
    - Active monitoring of the application makes it possible to detect anomalies, limitations, or performance changes, provided that the monitoring is carried out in a tight timeframe and, above all, that appropriately qualified employees (IT or key users) assess the monitoring data in order to initiate suitable measures if necessary (a monitoring sketch follows after this list).
    - However, the effectiveness of such a monitoring process stands or falls with the existence of meaningful key figures that allow an objective assessment of the "health status" of the GxP-relevant application.
    - Regression tests can provide information about whether the existing (and validated) system functionality is still unchanged. In the scenario assumed here, however, the conditions for such tests are extremely unfavorable, since it is known neither when system changes will occur nor what, if anything, has been changed. This means that effective regression tests would have to be scheduled and performed with high frequency in order to detect impaired or missing system functions in time; the cost-benefit ratio of this workaround is therefore very poor (a regression-test sketch also follows after this list).
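
To illustrate the SBOM countermeasure, the following minimal Python sketch scans a CycloneDX-style SBOM export for a known-vulnerable component. The file name (application-sbom.json), the component name and the version threshold are illustrative assumptions, not values from the Q&A:

  import json

  VULNERABLE_COMPONENT = "log4j-core"   # component to look for (assumption)
  FIXED_VERSION = (2, 17, 1)            # first version considered fixed (assumption)

  def parse_version(text):
      # Turn "2.14.1" into (2, 14, 1) for a simple numeric comparison.
      return tuple(int(part) for part in text.split(".") if part.isdigit())

  def affected_components(sbom):
      # Return all SBOM components matching the vulnerable name/version range.
      hits = []
      for component in sbom.get("components", []):
          name = component.get("name", "")
          version = component.get("version", "")
          if name == VULNERABLE_COMPONENT and version and parse_version(version) < FIXED_VERSION:
              hits.append(f"{name} {version}")
      return hits

  if __name__ == "__main__":
      with open("application-sbom.json") as handle:   # hypothetical SBOM export
          sbom = json.load(handle)
      for hit in affected_components(sbom):
          print("Potentially affected:", hit)

Such a check can be tied into the periodic review or triggered whenever the cloud service provider announces a change or a vulnerability becomes public.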
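
As a sketch of the active-monitoring idea, the snippet below polls a hypothetical /health endpoint of the SaaS application and compares the returned key figures against predefined limits. Endpoint, field names and thresholds are assumptions for illustration only, and any finding still has to be assessed by qualified staff:

  import json
  import urllib.request
  from datetime import datetime, timezone

  HEALTH_URL = "https://saas.example.com/health"                    # hypothetical endpoint
  THRESHOLDS = {"response_time_ms": 500, "error_rate_percent": 1.0}  # assumed key figures

  def fetch_health(url):
      # Retrieve the current key figures from the health endpoint as JSON.
      with urllib.request.urlopen(url, timeout=10) as response:
          return json.load(response)

  def assess(metrics, last_known_version):
      # Compare key figures against limits and flag unannounced version changes.
      findings = []
      for key, limit in THRESHOLDS.items():
          value = metrics.get(key)
          if value is not None and value > limit:
              findings.append(f"{key}={value} exceeds limit {limit}")
      version = metrics.get("platform_version")
      if version and version != last_known_version:
          findings.append(f"platform version changed: {last_known_version} -> {version}")
      return findings

  if __name__ == "__main__":
      metrics = fetch_health(HEALTH_URL)
      for finding in assess(metrics, last_known_version="2024.06"):
          print(datetime.now(timezone.utc).isoformat(), "FINDING:", finding)

The script only delivers raw signals; the choice of meaningful key figures and their assessment by qualified employees remain the decisive element.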
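
Finally, a minimal regression (smoke) test sketch, assuming a hypothetical application URL and two validated core functions; in practice such tests would cover the validated GxP functionality and be run with high frequency, e.g. via a CI pipeline or cron:

  import unittest
  import urllib.request

  BASE_URL = "https://saas.example.com"   # hypothetical application URL

  class ValidatedFunctionSmokeTest(unittest.TestCase):
      def test_login_page_reachable(self):
          # The validated application should still answer with HTTP 200.
          with urllib.request.urlopen(f"{BASE_URL}/login", timeout=10) as response:
              self.assertEqual(response.status, 200)

      def test_report_export_available(self):
          # A core validated function (report export) should still be present.
          with urllib.request.urlopen(f"{BASE_URL}/api/reports?limit=1", timeout=10) as response:
              self.assertEqual(response.status, 200)

  if __name__ == "__main__":
      unittest.main()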

Find more Q&As on the topic of "Cloud Computing" which have been answered by the expert team.

The Experts

Frank Behnisch, CSL Behring GmbH, Marburg
Klaus Feuerhelm, Formerly Local GMP Inspectorate / Regierungspräsidium Tübingen
Oliver Herrmann; Q-FINITY Quality Management, Dillingen
Eberhard Kwiatkowski, PharmAdvantageIT GmbH, Neuschoo
Stefan Münch, Körber Pharma Consulting, Karlsruhe
Yves Samson, Kereon AG, Basel
Dr. Wolfgang Schumacher, Formerly F. Hoffmann-La Roche AG, Basel
Dr. Arno Terhechte, Local GMP Inspectorate / Bezirksregierung Münster
Sieghard Wagner, Chemgineering Germany GmbH, Stuttgart

