Impact of Statistical Tools on Process Performance Qualification

In 2011, FDA issued the Guidance for Industry “Process Validation: General Principles and Practices”, which calls for a lifecycle approach to process validation and heavily references the use of statistics throughout the product lifecycle. For many, the use of statistics is new and can seem daunting, given the large number of possible statistical tools and the complexity of understanding them and ensuring they are appropriately applied. Statistics are a powerful tool that can enhance our level of process understanding and ultimately guide us to improve process performance and product quality and reliability.

This discussion paper uses segments of typical validation case studies (validation of key attributes such as content uniformity, packaging key attributes, and packaging critical defects) to apply various statistical tools and compare the outcomes of each, pointing out the pros and cons of each application. General comments are also made on the statistical tools applied, with some advantages, disadvantages, and misuses briefly summarized.

This paper does not intend to teach the mathematics behind the tools; it only surveys the outcomes of applying each tool to the same data set. This comparison may help guide selection of the most appropriate tool, or in most cases combination of tools, to inform the scientist validating the process about the level of process variation and control within and across batches. Readers are encouraged to work with trained statistical experts to ensure the tools are applied appropriately to their specific scenario, taking into account the sampling plan and the intent of the analysis being conducted.

The team that worked on this paper hopes you find this exercise of value and appreciates your thoughts and suggestions to further the discussion on the topic of statistical analysis of validation data.

Please direct all feedback to pvpapers@ispe.org.

Authors: Jim Bergum (Bergum STATS, LLC), Richard Montes (Hospira), Tara Scherder (Arlenda), Helen
Strickland (GSK), Jenn Walsh (BMS)

Overview

In January 2011, the FDA issued a Guidance for Industry on “Process Validation: General Principles and Practices” [1]. In this guidance, process validation is defined “as the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product.” This paper examines numerous statistical analysis techniques for evaluating drug product performance data collected during Process Performance Qualification (i.e., Element 2 of the Process Qualification Stage). This paper also provides some general comments on the advantages and disadvantages of these statistical techniques.

1 Background and Scope

The 2011 FDA Process Validation guidance “outlines the general principles and approaches that FDA considers appropriate elements of process validation” and “aligns process validation activities with a product lifecycle concept and with existing FDA guidance, including the FDA/International Conference on Harmonization (ICH) Guidance for Industry, Q8 (R2) Pharmaceutical Development, Q9 Quality Risk Management, and Q10 Pharmaceutical Quality System.” The guidance states that “process validation involves a series of activities taking place over the lifecycle of the product and process” and that these activities are described in three stages: 1. Process Design, 2. Process Qualification, and 3. Continued Process Verification. The Process Qualification stage has two elements: 1. Facilities Design and Utilities and Equipment Qualification, and 2. Process Performance Qualification (PPQ). This paper focuses on the evaluation of data collected during PPQ. The objective of PPQ is to collect and evaluate product performance data in order to demonstrate whether the process is in a state of control and to confirm whether the process is capable of producing batches that meet its quality requirements. The sampling plan and acceptance criteria must be justified in the approved protocol; however, these elements are outside the scope of this paper. There are multiple statistical analysis tools available to confirm whether the process is operating in a state of control and/or to confirm that the product meets its quality requirements as measured via the output drug product quality. This paper presents some of the statistical tools which may be selected and the strengths and weaknesses of each.

2 Data Analysis and Review

This paper presents two data examples: uniformity of dosage units, and two packaging quality measurements. In each case, the data used for analysis were simulated to reflect data that might be expected from a typical manufacturing scenario.

2.1 Oral Solid Dosage Tablet Uniformity of Dosage Units (Content Uniformity)

Uniformity of dosage unit data were collected so that the drug content profile across each batch could be evaluated to determine whether there are unexplained, unexpected, or significant patterns in the attributes that could lead to bias or inaccurate interpretation of results during routine commercial distribution. Throughout the document, the term Content Uniformity (CU) is used to describe the drug content of the tablet as measured by the Content Uniformity method.

2.1.1 Exploratory Data Analysis of Content Uniformity – Two Sampling Plans

Sampling Plan 1 represents a systematic random sample of 30 dosage units across the batch: one unit at the beginning, one unit at the end, and one unit at each of 28 equally spaced locations throughout the batch, where the locations are based on product volume rather than time. Sampling Plan 2 represents a systematic random sample of 60 dosage units: four units at the beginning, four units at the end, and four units at each of 13 equally spaced locations throughout the batch. The data used for the Sampling Plan 1 and Sampling Plan 2 analyses are provided in Appendix 3 for reference. Table 15 provides a summary comparison of the impact of applying each of the statistical tools evaluated, and Appendix 1 provides a general statistical summary of the tools.
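To make the two plans concrete, a minimal sketch follows that computes the sampling locations for each plan as fractions of total product volume. The function names and the use of a unit interval are illustrative assumptions; only the unit counts and equal spacing reflect the plans described above.

```python
import numpy as np

def plan_1_locations():
    """Sampling Plan 1: 30 single dosage units -- one at the beginning,
    one at the end, and one at each of 28 equally spaced interior
    locations, expressed as fractions of total product volume."""
    return np.linspace(0.0, 1.0, 30)

def plan_2_locations():
    """Sampling Plan 2: 60 dosage units -- four at each of 15 equally
    spaced locations (beginning, end, and 13 interior)."""
    locations = np.linspace(0.0, 1.0, 15)
    return np.repeat(locations, 4)  # one entry per sampled dosage unit

print(len(plan_1_locations()), len(plan_2_locations()))  # prints: 30 60
```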

Graphs are critical tools in any data analysis and should be the first analysis performed. They can provide insight into a data set regarding relationships and data integrity that a distilled statistic, such as a mean or a p-value, cannot. This can be essential to the proper interpretation of the analysis. Conclusions regarding the significance of factors may first be assessed with simple graphs, followed by an appropriate analytical method such as hypothesis testing or regression.
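As one illustration of such an analytical follow-up, the sketch below runs a one-way ANOVA on hypothetical content uniformity data from three batches. The values are simulated assumptions, not the paper's data; the sketch only shows how a batch difference suggested by a graph might be tested formally.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical content uniformity results (% label claim) for three
# batches; the means and spread are illustrative assumptions only.
batch_a = rng.normal(loc=98.0, scale=1.5, size=30)
batch_b = rng.normal(loc=100.5, scale=1.5, size=30)
batch_c = rng.normal(loc=100.0, scale=1.5, size=30)

# One-way ANOVA: is the difference between batch means larger than
# within-batch variation alone would explain?
f_stat, p_value = stats.f_oneway(batch_a, batch_b, batch_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```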

Sampling Plan 1

Boxplots display groups of data by their quartiles and can be used to display differences between populations without making assumptions about their statistical distribution. Boxplots of the Sampling Plan 1 data are shown in Figure 1 and provide an overall sense of the distributions, including identification of potential outliers. These outliers can represent real data or, in some cases, serve as a quick indicator of data integrity issues. From this boxplot, it is clear that the content uniformity results for Batch A tend to be lower than those of the other batches. This may be due either to special cause effects driving Batch A lower on average or to random batch-to-batch variation. The within-batch variation looks similar across the three batches.
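A minimal sketch of how such a boxplot might be produced is shown below. The data are simulated stand-ins (the actual Sampling Plan 1 results are in Appendix 3); the batch means and standard deviation are assumed values chosen only to mimic the pattern described, with Batch A centered lower and similar within-batch spread.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

# Simulated stand-in for Sampling Plan 1: 30 CU results (% label claim)
# per batch. Means and spread are assumptions, not the paper's data.
batches = {
    "Batch A": rng.normal(loc=98.0, scale=1.5, size=30),
    "Batch B": rng.normal(loc=100.5, scale=1.5, size=30),
    "Batch C": rng.normal(loc=100.0, scale=1.5, size=30),
}

fig, ax = plt.subplots()
ax.boxplot(list(batches.values()))   # one box per batch
ax.set_xticklabels(batches.keys())
ax.set_ylabel("Content Uniformity (% label claim)")
ax.set_title("CU by batch, Sampling Plan 1 (simulated)")
plt.show()
```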

Read more by downloading Impact of Statistical Tools on Process Performance Qualification (PPQ) (Published August 2014).
