Chapter 32
Downstream Process Design, Scale-Up Principles, and Process Modeling*
Karol M. Łącki*, John Joseph†, Kjell O. Eriksson‡
*Karol Lacki Consulting AB, Höllviken, Sweden; †GE Healthcare Lifesciences, Amersham, United Kingdom; ‡Gozo Biotech Consulting, Gozo, Malta
32.1 INTRODUCTION
Once a therapeutic biological molecule has the potential to become a new product candidate, a complex, multi-faceted development program must begin [1]. A big part of this program is the chemistry, manufacturing, and control (CMC) activity that focuses on development of a reliable manufacturing process, including the necessary quality control system (see Chapter 50). As stated in ICH Q8 guidelines [2], "the aim of pharmaceutical development is to design a quality product and the manufacturing process to deliver the product in a reproducible manner." Therefore, the developed process must be validated to ensure that the produced product meets the safety requirement for human administration throughout the whole product lifecycle. Process validation is defined as the collection and evaluation of data, from development through to commercial production. It establishes scientific evidence that a process is capable of consistently delivering quality product and involves a series of activities taking place over the lifecycle of the product and process. These activities can be classified into three stages [3]:
● Stage 1—Process Design: The commercial manufacturing process is defined during this stage based on knowledge gained through development and scale-up activities.
● Stage 2—Process Qualification: During this stage, the process design is evaluated to determine if the process is capable of reproducible commercial manufacturing.
● Stage 3—Continued Process Verification: Ongoing assurance is gained during routine production that the process remains in a state of control.
This guidance describes activities typical of each stage, but in practice, some activities might occur in multiple stages.
In this chapter, we will focus only on Stage 1, the process design. The objective of designing a manufacturing process for a biopharmaceutical product is to find the best tools and procedures to consistently and economically make a sufficient quantity of the target molecule, isolate it from the production system, and then purify the target molecule to the level of purity specified for the final product, in other words, the active pharmaceutical ingredient (API). Each API must have well-understood characteristics, must meet predetermined quality attributes, and must be manufactured according to a robust and validated manufacturing process. As such, the process design should ensure that (i) the production process is suitable for routine commercial manufacturing and can consistently deliver a product that meets its quality attributes, and (ii) these attributes can be reflected in planned master production and control records [3]. Robustness of a manufacturing process has been of key importance since the early days of the biopharmaceutical industry. Rightfully, patient safety has been the focus for the early developed processes. However, as the industry has matured and manufacturing and regulatory experience has increased, an additional challenge has emerged: the cost of manufacturing. The pressure is now on to reduce the cost of manufacture without compromising product quality, and hence patient safety. The pathway to achieve a robust manufacturing process can be rather complex and, in some cases, lengthy. It involves several key business and scientific decisions. In this chapter, we will focus on the latter, thus assuming that the business case is solid and that the molecule to be produced will be introduced to the market, provided it fulfils all regulatory specifications.
*Parts of the text, figures, and tables used in this chapter are reproduced from L. Hagel, G. Jagschies and G. Sofer, 3—Process-design concepts, Handbook of Process Chromatography, second ed., 2008, Academic Press, Amsterdam, 41–80, with permission.
This chapter will provide the reader with the basic understanding of process design and introduce a few concepts important for consideration during this stage of the product lifecycle. The main focus of the chapter will be on the downstream purification part of a manufacturing process. However, because the successful design of a downstream process always takes into account the various interdependencies between upstream and downstream parts, some important aspects of the upstream process design will also be discussed. Throughout the chapter, an example of a universal roadmap for process and control strategy development will be outlined and discussed by focusing on general rules and dedicated tools for developing, characterizing, and scaling-up of downstream processes. Generally applicable methodologies for sizing of filtration and chromatography unit operations will be outlined. Because the process design should also account for the functionality and limitations of commercial manufacturing equipment [3], examples of relevant constraints will be given. The final section of this chapter discusses the use of various computational modeling strategies that could facilitate the design and scale-up of manufacturing processes.
32.2 PROCESS DESIGN LANDSCAPE
While the aim of every firm is to design a process that will be robust and consistent from the production of material for toxicology studies through to the manufacture of a licensed product, in reality, changes will need to be introduced as the development and clinical phases progress. The process evolves in several stages, each addressing a specific need: (1) the production of material for pre-clinical studies; (2) the production of material for clinical studies Phase I/II; and (3) large-scale production, usually first conducted for Phase III, but ultimately for the commercial market (see Fig. 32.1). During each of these stages, one or more well-managed transfers between the development lab and internal or external manufacturing sites and groups will be required. Costly changes of the process and concomitant critical attributes of the product will result in a risk for delays or even failure of the project. Therefore, the impact of these types of changes needs to be minimized during the project progression from toxicology studies to licensed product, through coordinated and well-documented activities related to process development and establishment of a process control strategy. Efficient and robust process design can help mitigate the risk of change and delay on the overall process.
32.3 THE CORE ELEMENTS OF PROCESS DESIGN
Process design relies on two equally important and interdependent elements: (i) process development, and (ii) process control.
32.3.1 Process Development
Process development (PD) activities should lead to the establishment of what is termed the "design space," which represents the multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality [4].
[Fig. 32.1 shows the process evolving through three stages, separated by internal or external process transfers that are high-risk points and sources of potential delays: Stage 1, manufacturing for toxicology, where the first version of the process is developed with the initial product quality specification; Stage 2, manufacturing for clinical trials (Phases I–III), where the process should have the potential to scale to full manufacturing scale without much modification; and Stage 3, manufacturing at scale, where a cost-efficient process is needed because high-dose indications and other high-dose therapies may not be covered by health insurance systems long term.]
FIG. 32.1 Evolution of a process to manufacture a biopharmaceutical from process development to commercialization. Artwork courtesy of GE Healthcare, reproduced with permission.
Regulatory authorities expect process development to be based
on sound scientific methods and principles, in combination with risk management tools which are applied throughout the development process. The current industry dogma is that quality, which is generally expressed as identity, concentration, and purity of the product cannot be tested, but instead should be assured through process understanding and process control. This dogma is commonly referred to as assuring quality by design (QbD). In general, process development studies should provide the basis for process improvement, process validation, continuous verification, and any process control requirements [4]. The PD program will identify any critical and key process parameters that should be monitored and controlled; that is, those parameters which could affect the product critical quality attributes (CQAs) and those that will be key to process performance from the economical perspective (that is, generally aiming for an optimization of process yield). Depending on the level of experience within the organization, overall process development efforts and timelines can vary tremendously. Even within an experienced organization, a new molecule type can generate unexpected delays and challenges. Process development is usually performed first on individual unit operations. These are then connected in a logical sequence to form a process capable of delivering a drug substance with the specified quality attributes. The process sequence itself may vary between the different product development stages (Fig. 32.1), in other words, between an early process development and the processes used to manufacture material for clinical phases I and II, and even phase III and commercial manufacturing. These changes might be dictated by large-scale manufacturing constraints, introduction of new technologies, or even an improved process understanding leading to a more robust process. Furthermore, because the manufacturing process is based on sequential operations, any change in one step may have some impact on subsequent steps. The earlier in the sequence the change will take place, the stronger might be its impact. To confound this situation further, development of the fermentation/cell culture process (upstream) and the recovery and purification process (downstream) is almost always carried out by different groups, often without much coordination of these activities. Hence, while the upstream team is already working on a new process version, the downstream team is still developing a process for material from the previous upstream process iteration. Similarly, development of process steps may occur without sufficient awareness of the existing operation limits and the capability of large-scale equipment in manufacturing. Therefore, an early awareness of the final scale of manufacturing and its potential constraints is essential for successful process design. The above-mentioned interdependencies have been realized over the years and companies have developed management frameworks and associated workflows to assure that potential effects of technology changes or process optimization are minimized. To manage this, companies have introduced decision points, or project progression toll gates [5], in their upstream process development by considering the impact of any key process variables on product manufacturability. For instance, as discussed in Chapter 4, should product titer be improved through optimization of the cell culture process, the impact is evaluated on the downstream purification steps of the process. 
Although a higher titer may lead to gains in product quantity, it may also lead to unfavorable changes in the impurity profile of feed material entering the purification operations. This could necessitate an increase in the number of purification steps or even change in methods, which may ultimately lead to a reduction in overall process yield.
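To make the yield arithmetic concrete: because the overall downstream yield is the product of the individual step yields, adding a step lowers the overall yield even when every step performs well. The short sketch below illustrates this with purely hypothetical step yields; the numbers and step names are assumptions, not data from any particular process.

```python
# Hypothetical illustration: overall downstream yield is the product of the
# individual step yields, so adding a purification step lowers overall yield
# even when each step performs well. Numbers are assumptions for illustration.
from math import prod

four_step_process = [0.95, 0.90, 0.92, 0.90]    # e.g., capture, polish 1, polish 2, UF/DF
five_step_process = four_step_process + [0.90]  # extra step added to handle a new impurity

print(f"4-step overall yield: {prod(four_step_process):.1%}")  # ~70.8%
print(f"5-step overall yield: {prod(five_step_process):.1%}")  # ~63.7%
```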
32.3.2 Control Strategy
Process control aims to ensure that process variability is kept within specifically defined boundaries, derived from current product and process understanding, to guarantee the quality of the product. At a minimum, the process control strategy should address monitoring and control of critical process parameters. These are parameters whose variability has a direct impact on a physical, chemical, biological, or microbiological property or characteristic of the product. These characteristics are more formally termed critical quality attributes (CQAs). CQAs should be within an appropriate limit, range, or distribution to ensure the desired product quality [2]. The control strategy can also encompass material analysis, equipment and facility monitoring, in-process controls, and the finished product specifications. It should even cover the description of associated methods and the frequency of monitoring and control [6]. In general, the concept of design space and the appropriate process control should lead to more flexible, and ideally cheaper, manufacturing processes over time. This can be achieved through process improvements and real-time quality control, eventually leading to a reduction of end-product release testing. In theory, improvements of an approved process, if they occur within the design space, should not be considered a change, and would thus not initiate a regulatory postapproval process change procedure. Process knowledge and understanding is the basis for establishing an approach to process control for each unit operation, and for the process overall [3]. Strategies for process control can be designed to reduce input variation, to adjust for input variation during manufacturing (and so reduce its impact on the output), or to combine both approaches.
As with the approach to process development, decisions regarding the type and extent of process controls can be aided by early risk assessments, and later enhanced and improved as process experience is gained during performance qualification (PQ) and continuous manufacturing.
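As a simple illustration of keeping a monitored attribute within defined boundaries, the sketch below derives alert limits from historical batch data and flags excursions in new batches. The measured attribute, values, and three-sigma limits are hypothetical and for illustration only; an actual control strategy would set limits based on the established design space and product specifications.

```python
# Hypothetical sketch: derive alert limits for an in-process measurement from
# historical batch data (mean +/- 3 standard deviations) and flag new batches
# that fall outside them. All values are invented for illustration.
import statistics

historical_hcp_ppm = [212, 198, 205, 220, 189, 207, 214, 195, 201, 210]  # ng HCP per mg product
mean = statistics.mean(historical_hcp_ppm)
sd = statistics.stdev(historical_hcp_ppm)
lower, upper = mean - 3 * sd, mean + 3 * sd

new_batches = {"B011": 208, "B012": 241}
for batch, value in new_batches.items():
    status = "within limits" if lower <= value <= upper else "EXCURSION - investigate"
    print(f"{batch}: {value} ppm ({status}); limits {lower:.0f}-{upper:.0f} ppm")
```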
32.4 A PROCESS DESIGN FRAMEWORK
To keep the time and cost of process development under control, a structured approach, or framework, to process design should be followed. The framework will guide different process-development groups and make sure that the experience of process-development staff is shared and recorded over time. This approach creates a corporate culture in which individual knowledge and experience is turned into corporately owned and easily accessible assets, yielding well-understood process solutions and ideally forming technology and process platforms. Although the structured approaches will differ in their details between companies, it could be argued that all should contain the following three core elements: selection of industrial tools, selection of industrial methods, and process integration (Table 32.1). The first element should address selection of relevant industrial technologies and materials. These will include cell lines with full traceability of their origin and history of development, preferred chromatography resins and filters, and raw materials (chemicals), again with full traceability of their origin. The second element will contain a list of selected methods for the intended purpose and subsequent process development and process control activities. The last element should address integration of all processing steps, including minimization of all associated activities, such as buffer preparation or column packing.
32.4.1 Example of Process Design Workflow
An example of a structured approach to process design incorporating these three elements is shown in Fig. 32.2. The workflow starts with a target molecule, which is chosen based on the medical indication and the molecule's mechanism of action. At this point, it is known whether the molecule belongs to a class of molecules that have already been used as APIs (e.g., monoclonal antibodies), whether the class the molecule belongs to has the potential of becoming the next API platform, or whether its uniqueness suggests that the process design project will be a one-time development effort. In the first case, the process design project should be based on the already existing knowledge and experience about processes designed for similar molecules, to take full advantage of the similarities of these molecules. Such processes are referred to as platform processes, and consist of standardized technologies, procedures, and methods. If the organization has already worked with a similar molecule, the process developer should be able to find instructions and guidance in the use of the company platform concepts referenced in the company development plan.
TABLE 32.1 Three Core Elements of Process Design Framework

| Element | Example | Comment |
| Selection of industrial tools | Cell lines; raw materials; consumables | Documented evidence, internal and vendor audits, manufacturing experience |
| Selection of technologies and methods | Analytical methods; cell separation methods; purification methods; viral clearance | Product and impurity profiles, risk analysis, heuristic designs, experimental performance evaluation |
| Integration | Use of one buffer system for multiple steps; column packing; use of disposables | Reduction of time-consuming associated activities; elimination of non-productive steps |

Reprinted from L. Hagel, G. Jagschies, G. Sofer, 3—Process-design concepts, in Handbook of Process Chromatography (second ed.), Academic Press, Amsterdam, 2008, pp. 41–80, with permission.
[Fig. 32.2 presents a decision workflow: a target molecule, defined by its medical indication, mechanism of action, and product quality attributes, enters either platform or non-platform process development depending on whether a platform approach is feasible and whether a platform already exists; if a feasible platform does not yet exist, platform process development is initiated. In the figure, a technology platform is a common or standard method, equipment, procedure, or work practice that may be applied across multiple products under development or manufacture, and a process platform is a common or standard set of technology platforms combined in a logical sequence that may be applied across multiple products under development or manufacture; the analytical technology platform should use validated methods for CQA determination. The figure also lists focus areas. Analytics: product and impurity profile characterization; detection and quantification limits; throughput; reporting system; prioritization of in-process measurements versus QC lab analysis. Upstream: cell line development; cell bank preparation, validation, and maintenance; media system; product and impurity profile characterization; clarification and recovery; model systems; scale-up principles. Downstream: product and impurity profile characterization; buffer system; model systems for unit operations; purification step sequence (capture, removal/intermediate, polishing); virus clearance methods (if necessary); risk analysis and task prioritization; purification step integration; on-/in-line process monitoring; scale-up principles.]
FIG. 32.2 An example of a generic process design workflow. Adapted from L. Hagel, G. Jagschies, G. Sofer, 3—Process-design concepts, in Handbook of Process Chromatography (second ed.), Academic Press, Amsterdam, 2008, pp. 41–80.
If the developed molecule is the first in its class and there are indications that this class has the potential of becoming the next biopharmaceutical platform, it may be advisable already at this point to consider the desired features of a platform and to invest in high-throughput process development to study as many process parameters as possible for a solid understanding of
the process. Specialized analytical instruments, and even novel upstream and downstream technologies, etc. may be part of such extended development plan. In the long run, when new processes for the next candidate molecules from this molecule class are developed, this would result in significant time and resource savings. Examples of respective focus areas within those parts that need to be addressed during the process design are given in Fig. 32.2 and discussed in the paragraphs below. Regardless whether the process might become a part of a platform or not, it needs to be thoroughly developed, characterized, and, finally, described so the relations between process parameters and CQAs of the product are well-understood. This is accomplished by following a generic process-design guide that is based on a commanding set of rules that reduce process failure, risk for product quality problems, and time to reach an acceptable process. The process design will need to focus on the upstream and downstream processes, and on the development of an analytical package that will support development of these processes. Each process design starts with a selection of an expression system that will be used to produce the drug candidate. The expression system will, to some extent, define parts of the impurities profile the purification process will need to deal with. Therefore, from a process design perspective, the best place to start is with careful characterization of the target product and its impurity profile, followed by a risk assessment that leads to prioritization of the purification process tasks. Defining the different steps in the most appropriate sequence is the next activity. Already at this stage, initial process integration should be discussed. Process integration facilitates the transfer of intermediate product between steps by optimizing the steps’ links, and should reduce the amount of non-productive activities required for each process step (e.g., preparation of too many buffers, unnecessary hold times, the need for buffer exchange between steps, extensive column packing, etc.). Product stability under a range of typical process conditions needs to be in focus as early as possible into the process design activities. Understanding of the product stability will help to avoid process-related issues that impact the product’s biological activity. The analytical methods that are suitable for assessing stability and biological activity need to be available early in process design. As a matter of fact, they should be a part of the analytical package that is used for product and impurity profile characterization. Furthermore, the process development team needs to understand the capability of each assay (be familiar with the variability, the limits of quantification, LQ, and limits of detection, LD, for each of the methods) to avoid making false conclusions (e.g., about satisfactory levels of impurity clearance). At the same time, the analytical methods development team needs information about acceptable levels of removal of each key impurity category so that assays with sufficient sensitivity and specificity can be developed. More details on this topic can be found in Chapter 47.
A development timeframe should be developed by the analytics and validation teams. Those teams are also best suited to select assays that can support ongoing development, process monitoring, and process validation. Furthermore, while the development of the different parts of the process is carried out by different scientists and/or separate groups or sites, understanding limitations associated with different scale of operations is very important. What may be practical in the laboratory can lead to equipment design needs that are very tricky and costly to realize on the manufacturing floor, for example, a complex product peak fractionation scheme easily realized with a laboratory fraction collector would need to be translated into a cascade of valves with a dead volume large enough to risk elimination of some of the resolution achieved by the purification step. Establishing communication channels through proper reporting procedure will reduce potential surprises. For instance, communicating with manufacturing teams can eliminate issues related to constraints in the manufacturing facility. Examples of such constraints could be pumping capabilities (flow rates required for operating within the desired time frame), a different sensitivity of process monitoring/measurement equipment used in development labs and in manufacturing, or the type of wetted material and column distribution system used at the two scales. Finally, a word about reporting in a process-design project: it is essential that activities and studies resulting in process understanding be documented at every stage, and that documentation should reflect the basis for decisions made about the process. A good reporting practice includes both describing what has been done and documenting what has been left out. The development records make the selection of methods and all major decisions on options clearly understandable and all open issues easily identifiable. Modern electronic document storage and retrieval systems are recommended to enable appropriate documentation management with the lowest risk for error and most efficient use of time. Use of template reports for the same or similar types of studies is recommended, as it allows for easier comparison of experiments performed at different time points, both from the data as well as the troubleshooting perspective.
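As an illustration of the kind of facility constraint mentioned above (pumping capability versus the flow rate required to keep a target residence time), the sketch below checks a hypothetical large-scale column against an assumed pump capacity. All numbers are invented for illustration.

```python
# Hypothetical check of a manufacturing-floor constraint: does the skid pump
# cover the flow rate needed to keep the intended residence time on a
# large-scale column? All numbers are assumptions for illustration.
import math

bed_height_cm = 20.0
column_diameter_cm = 100.0     # 1 m diameter production column (assumed)
residence_time_min = 4.0       # target residence time carried over from the lab process
pump_max_L_per_h = 3000.0      # assumed skid pump capacity

column_volume_L = math.pi * (column_diameter_cm / 2) ** 2 * bed_height_cm / 1000.0
required_flow_L_per_h = column_volume_L / residence_time_min * 60.0
linear_velocity_cm_per_h = bed_height_cm / residence_time_min * 60.0

print(f"Column volume: {column_volume_L:.0f} L")
print(f"Required flow: {required_flow_L_per_h:.0f} L/h "
      f"({linear_velocity_cm_per_h:.0f} cm/h linear velocity)")
print("Pump OK" if required_flow_L_per_h <= pump_max_L_per_h else "Pump capacity exceeded")
```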
32.4.2 Platform Processes
In some instances, the process design activity can be significantly simplified if experience with process development and with manufacturing of a similar product exists within the organization. In those cases, one can utilize so-called platform technology and platform manufacturing concepts. A platform technology/process is "a common or standard method, equipment, procedure, or work practice that may be applied to the research, development, or manufacture of different products sharing a set of common properties" [7]. Platform manufacturing is defined as "implementation of standard technologies, systems, and work practices within manufacturing facilities, and their use for the manufacture of different products"; thus it is the approach of developing a production strategy for a new drug starting from manufacturing processes similar to those used by the same manufacturer to manufacture other drugs of the same type [7]. In principle, any platform technology, regardless of the industry or area in which it is applied, must be built on a foundation of knowledge and experience. The same applies to platform approaches employed in process design for a biologic. Experience in processing the same class of proteins, such as IgG antibodies, provides the basis for successful repeated use of platform technologies, such as Protein A chromatography and the subsequent low-pH virus inactivation step, as well as for applying the same or a very similar design of the cell culture sequence and the methods to run the different culture stages. The evolution of platform-based approaches in biotechnology product development and manufacture is presented in Fig. 32.3. In the 1980s, platform approaches were not employed, but as the industry matured, the concept of a platform approach to PD and manufacture started to emerge in the late 1990s within companies' development programs, and started to be discussed in public in the early 2000s [8]. Currently, platforms are widespread within product development organizations and many production facilities [7]. The advantages of using more standardization within the development process are multi-fold. One of the more attractive benefits is that the time required for process design and development can be more accurately estimated [8]. Because platform processes are composed of several defined unit operations and methods, they lend themselves to simplified process design. Not only can the order and type of steps be templated, but many process conditions (e.g., buffer, flow rate) can also be fixed. The use of platform technologies for cell culture and cell clarification provides a greater likelihood of success for the downstream platform because the process impurities, as well as most host cell impurities, will be similar. Consequently, analytical approaches will also be the same, or can be leveraged, and as such require reduced development time. A reduction in overall development time facilitates a reduction in the time to toxicology and first-in-human (FIH) studies. Process standardization also delivers benefits at manufacturing scale, as modern commercial manufacturing facilities use standard/platform technologies and work practices to improve efficiencies and execution timelines. Platform-process derived data at many drug developers/manufacturers have been accumulated and input into extensive databases. The biopharmaceutical industry, including the regulatory bodies/health authorities, is now able to exploit these data to improve biopharmaceutical drug development, manufacture, and regulation.
[Fig. 32.3 is a timeline with two tracks, downstream (purification) and general industry developments. 1980s: a limited number of products in an innovation- and research-driven environment, with little focus on manufacturability and no need for platform-like processes; any purification technology that worked was considered good enough (aqueous two-phase extraction, membrane filtration, precipitation, nonspecific modes of preparative chromatography). 1990s: chromatography established as the workhorse for purification, commercial Protein A resins introduced, and process development tools enabling a standardized approach to PD; biotechnology products approved, new dedicated but inflexible facilities established, a broad array of upstream technologies, and the first platform approaches introduced within company development programs. 2000s: alkali-stable Protein A (longer lifetime, more economic operation), large numbers of molecules and shortened development timelines, introduction of the HTPD concept, multimodal chromatography, and the QbD paradigm. 2010+: prepacked columns, single-pass filtration, two-step purification, weak partitioning, maturity of HTPD, establishment of continuous technologies, and the modularity concept. General developments over the 2000s and beyond include matured platforms within development programs, multiproduct facilities, mergers leading to larger internal manufacturing networks, establishment of CMOs, localized manufacturing for some products, establishment of single-use (SU) technologies, improved process and product flexibility, many manufacturing facilities using standardized practices (some capable of multiproduct operation), approval of biosimilars, and a quest for manufacturing platforms for new molecular entities (Fabs, nanobodies, etc.); new types of molecules without a common backbone may pose implementation issues in established facilities.]
FIG. 32.3 The evolution of general platform-based approaches in biotechnology product development and manufacture. Adapted from E. Moran, et al., Platform Manufacturing of Biopharmaceuticals: Putting Accumulated Data and Experience to Work. EBE Publications, 2013, pp. 1–21.
It is also expected that platform-derived data at many drug developers/manufacturers will allow the biopharmaceutical industry, in cooperation with regulatory bodies/health authorities, to exploit this data set to improve regulatory procedures based on more solid, in-depth information. Although it is well understood that there will always be some product-specific nuances to deal with in every development program, the available platform data could be presented for review to support clinical trial and marketing authorization applications, instead of presenting the same type of data every time a new molecule is developed on the platform. These platform data packages might include those describing viral clearance capabilities of process steps, clearance of process-related impurities, cleaning regimes for process equipment, etc. [7]. Other potential advantages are the reduction of the number of suppliers for raw materials and established waste disposal routines for the selected consumables [5]. One of the most widely used platform approaches is that for the development of monoclonal antibodies. This is described in more detail within Chapters 39 and 51; readers are also referred to a recent review of platform monoclonal antibody processes from several biopharma companies and future trends in platform processing for mAbs [9].
32.5 DOWNSTREAM PROCESS DESIGN METHODOLOGIES AND TOOLS
Details of process design for the upstream and the recovery parts of a representative manufacturing process are provided within Chapter 31, so here we only briefly summarize the most important facts about these parts. Typically, the upstream process is started from a working cell bank (WCB), which has been prepared from a master cell bank (MCB). A seed train with several stages of increasing culture volume and cell mass leads to the final-scale fermenter or bioreactor production phase. Fermentation (microbial cells) and cell culture (mammalian and insect cells) require carefully controlled growth conditions that are supported by the culture medium, which contains nutrients and chemicals in dilute aqueous solution. Whether mammalian or microbial cells are used, recovery of the product involves product isolation from the cell mass. Cell debris or whole cell removal is achieved with techniques such as centrifugation or filtration. A similar basic process layout is also applied to transgenic and insect cell production systems. More detailed descriptions of the recovery part of downstream processing are provided in Section III of this book (Chapters 9 and 15). From the downstream process development perspective, the type of expression system has no impact on the development methodology itself. The type of process-related impurities, and most likely even the product-related impurities, will vary with the expression system, but the development workflow and the goals for downstream process development will be the same (i.e., establishing a design space that will guarantee manufacturing of a quality product with its specific characteristics and purity profile). Therefore, a thorough characterization of the product and its impurity profile, together with a listing of potential contaminants, should be the focus from the beginning of the downstream process development project.
32.5.1 Product in-Process Stability and Impurity Profiles
With the expression system chosen, and with the information about the target molecule acquired, downstream process design can begin. Characteristic features of the target molecule that are relevant to process designers include: (i) its physicochemical and biological properties, (ii) its stability under the chemical and physical conditions it might be in contact with throughout the process, and (iii) the type of impurities that will need to be removed before the target molecule can be formulated into the drug product. For a successful process design, all these features need to be characterized, and their dependency on the process conditions should be investigated, understood, and described. Methods for process and product characterization are discussed in Chapter 47.
Stability
In general, keeping the molecule in its natural environment will help preserve its activity. It should be noted, though, that this environment may contain harmful enzymes or lack stabilizing conditions that the production cell did provide. During biosynthesis of a therapeutic protein and its isolation from other biological material, the environmental conditions the target molecule is exposed to will almost certainly need to be changed, sometimes toward harsh ones. Various changes to process conditions will affect the protein properties to different extents, but the goal of the process development scientist is to characterize these changes and to make sure they are minimized or, preferably, prevented. Factors affecting the stability of protein-based target molecules include the presence of proteases, protein concentration, pH, temperature, co-solvents, salts (concentration and type), co-factors, and redox potentials [5]. Knowledge of the conditions, such as pH, salt concentrations, additives, etc., that preserve the product will be crucial to a cost-effective manufacturing strategy. Some of this information will be obtained during preliminary testing of the starting material, some during process development. Furthermore, because molecule stability is almost always related to some type of chemical reaction, the effect of time of exposure to a given condition needs to be considered, and the rates of target molecule degradation need to be determined. Finally, stability of the target molecule may also strongly affect its propensity toward certain types of interactions that can be exploited in a purification process. For instance, chromatographic separations are based upon the controlled interaction between the solute and a sorbent (resin). The properties of the protein that are critical from an interaction perspective may be exposed on the surface of the molecule or hidden in its interior and only available for interaction after modification (e.g., denaturation). This exposure of interior parts can be used to achieve a high degree of separation (e.g., as in reversed-phase chromatography), but may also result in irreversible loss of material and/or activity. Understanding the limits of the stability window for the target molecule, as well as for the environment it is kept in, is thus essential knowledge to be obtained during process development.
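Because exposure time matters, degradation under a given hold condition is often summarized by a simple rate expression. The sketch below assumes first-order degradation kinetics and entirely hypothetical hold-time stability data to estimate a rate constant and the hold time at which purity would fall below a limit; a real study would of course use the product's own stability-indicating assays and data.

```python
# Minimal sketch: estimate a first-order degradation rate constant from
# hypothetical hold-time data (monomer content vs. time at a given condition)
# and predict the hold time at which purity drops below a limit.
import math

time_h = [0, 4, 8, 24]                   # hold times at the studied condition (assumed)
monomer_pct = [99.0, 98.6, 98.2, 96.8]   # assumed measurements

# Linear regression of ln(C/C0) vs. time; the negative slope is the rate constant k.
x = time_h
y = [math.log(c / monomer_pct[0]) for c in monomer_pct]
n = len(x)
slope_num = n * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y)
slope_den = n * sum(xi * xi for xi in x) - sum(x) ** 2
k = -slope_num / slope_den

limit_pct = 95.0
max_hold_h = math.log(monomer_pct[0] / limit_pct) / k
print(f"k = {k:.5f} 1/h; predicted hold time to reach {limit_pct}% monomer: {max_hold_h:.0f} h")
```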
Impurities and Contaminants
The objective of purification is either to completely remove any quality-deteriorating components from the drug substance, or to reduce their content below levels acceptable from the patient safety perspective. As discussed in Chapter 4, in general, one distinguishes between product-related and process-related impurities, which differ both in origin and in strategy for prevention or removal. Product-related impurities are molecular variants arising during manufacture and/or storage, which do not have properties comparable to those of the desired product with respect to activity, efficacy, and safety [10]. They may be present as a result of an inherent degree of structural heterogeneity occurring in proteins due to the biosynthetic processes used by living organisms to produce them [10], or can be formed during processing as a consequence of enzymatic activity or exposure to certain process conditions such as high or low pH, extended storage, high shear conditions, etc. Any of these modifications will lead to different forms of the product, which may or may not be acceptable from the patient safety or mechanism of action perspective. When variants of the desired product are acceptable, they are considered product-related substances, and not impurities [10]. Consequently, purity demands will need to be determined based on the efficacy, potency, and safety profiles of the various forms. Removal of product-related impurities is among the most challenging tasks for purification technology, as it usually requires resolution of substances with a very similar chemical composition and molecular structure. This similarity also creates a challenge for analytical departments, not only because applicable methods need to be available to help develop and later monitor the manufacturing process, but also because the exact composition of the final product could be considered an important company asset that could be used in legal disputes when a biosimilar version of the drug is developed. More information about different properties of biological products and analytical methods used in bioprocessing can be found in Chapters 3 and 47, respectively. Process-related impurities can be classified into three groups (Table 32.2), where two of them are related to the upstream operation and one to the downstream operation. The upstream-derived process impurities are dependent on the selected expression system (cell-substrate derived) and process conditions (cell-culture derived).
TABLE 32.2 Examples of Typical Process-Related Impurities
Description: Molecules and compounds that are derived from the manufacturing process, and are classified into three general categories [10]: (i) cell substrate-derived (e.g., host cell proteins, host cell DNA); (ii) cell culture-derived (e.g., inducers, antibiotics, or media components, and extractables and leachables); (iii) downstream processing-derived (e.g., enzymes, chemical and biochemical processing reagents, inorganic salts, solvents, ligands, and other leachables and extractables).
● Cell-culture nutrients, chemicals: Non-consumed components of cell culture media and byproducts from the cell metabolic pathways.
● Host cell proteins: Any production source has its own native proteins, which will need to be removed to obtain the target product with the desired purity. In cellular production systems these are referred to as host cell proteins (HCP). Typically, with mammalian cells such as CHO, HCP are released when the cells are damaged during recovery of the product from the bioreactor, or because of natural cell death toward the end of cell culture. With E. coli cells, the isolation of inclusion bodies and extensive washing steps allow removal of most of the bacterial HCP despite their significant release upon cell disruption (see Chapter 9).
● Proteolytic enzymes, other enzymatic activity: A particularly important category of HCPs characterized by enzymatic activity causing degradation and/or modification of the desired product and other HCPs. Examples: proteases and glycosidases.
● Endotoxins: In E. coli and other Gram-negative bacteria, endotoxins are present in the cell wall and are released during cell disruption—one reason why there has been an effort to design the cells so they can secrete product rather than sequester it in inclusion bodies. For mammalian cultures, endotoxin should not be an issue, or at least it is one that is controlled by compliance with good manufacturing practices. In cell culture, endotoxins are considered a contaminant, not an impurity derived from the host organism. Endotoxins can be introduced into purification processes by contaminated water, buffers, additives, and resins.
● Cellular DNA, other nucleic acids: Cellular nucleic acids, such as genomic DNA, are released in the same way that HCP are released. The amount of nucleic acid in the starting material is also dependent on the degree of cell death during late cell culture or the disruption of producer cells during isolation of the product.
● Virus: Production sources of mammalian origin such as cells, tissue, human plasma, and transgenic milk can be contaminated by exogenous virus. Mammalian cells often carry endogenous virus, e.g., CHO cells inherently contain retroviral particles. There is also a risk that transmissible spongiform encephalopathy (TSE) agents can contaminate mammalian production sources as well as in-process raw materials that are of mammalian origin or manufactured with materials of mammalian origin. From a potential virus contamination perspective, use of a bacterial expression system could remove the contamination risk, but those systems suffer from other limitations, e.g., higher HCP levels and endotoxins.
● Cell debris, lipids: The lowest initial impurity levels are generally achieved with secretion systems grown in chemically defined, protein-free culture media and when the product is secreted into the cell culture medium. The highest levels are present when production cells need to be disrupted to release the intracellular product and cellular debris contaminates the target product during the initial recovery steps.
● Antifoams, antibiotics: Antifoams are used to reduce foaming during the cell culture process and are usually based on surface-active agents (surfactants) that have a deteriorating effect on columns and filters. Antibiotics are considered APIs and should not be present in another drug formulation.
● Leakage: Leached ligands present in the elution pools. Example: Protein A.
● Extractables, e.g., from plastic surfaces: With the widespread use of single-use components (e.g., bags, flow paths, connectors), potentially harmful leachable or extractable impurities that could be released from the plastic material into the product stream under harsh conditions also fall into the process impurities.
A list of typical cell-substrate derived process impurities can be found in Chapter 4 (Table 4.5). These impurities will also depend on the methods required to isolate the product. The lowest initial impurity levels are generally achieved with secretion systems grown in chemically defined, protein-free culture media and when the product is secreted into the cell culture medium. The highest levels are present when production cells need to be disrupted in order to release an intracellular product. The resulting cellular debris contaminates the target product during the initial recovery steps. From the process design perspective, it should be
remembered that the amount and type of the process-related impurities present in the starting material for purification operations can be minimized through selection of processing methods or process controls. Downstream-derived impurities include column leachables, buffer components, surfactants, and flocculation and precipitation agents. These days, they also include compounds introduced via the use of single-use components (e.g., bags, flow paths, connectors): potentially harmful leachable or extractable impurities that could be released from the plastic material into the product stream under harsh conditions also fall into the process impurities.1 In general, the level and type of leachables and extractables will depend on process conditions including exposure time, temperature, pH, and the type of buffer salts used. Attempts to align consensus on standardization of procedures and methods for single-use system testing have resulted in the Biophorum Operations Group's (BPOG) extractables protocol [11]. The protocol provides a consensus of testing needs as discussed and agreed upon by the members of BPOG and can be followed by non-members as well. As discussed in Chapter 4, a high level of process-related impurities and cell debris requires greater efforts during recovery and purification, as they might have detrimental effects on the recovery and/or purification steps, leading to a higher cost of the downstream process.
1. The same impurity concern regarding the presence of leachable and extractable compounds applies to the use of single-use bioreactors (SUBs) commonly utilized at different stages of the upstream operation.
32.5.2 Basics of Downstream Process Development
With the product characteristics and impurity profile in hand, process development can focus on identifying potential technologies to be used in the process being designed. As discussed herein, if the process is to be based on a platform process, the choice of these technologies is already made. If, on the other hand, a process needs to be developed from scratch, then based on the product characteristics and types of impurities, process data available from the product development activities, heuristic information, and literature reviews, a list of the most promising technologies can be compiled and a few process sequence alternatives proposed. These alternatives should be ranked and/or quickly tested to determine those on which focus should be placed. Also at this stage, it could be advisable to consider the functionality and limitations of commercial manufacturing equipment. Furthermore, before starting any extensive process development work, care must be taken to assure that PD can be performed using established laboratory and pilot-scale scale-down models, and that the scientific principles employed will assure that results obtained and conclusions drawn from PD studies are representative of commercial manufacturing. Use of high-throughput process development, high-end modern analytical techniques, and often modeling, both statistical and mechanistic, should also be discussed at this stage and a decision made about which of those will be applied to what question or challenge throughout the PD activities. Although in the past experimental work was addressed in a random fashion, today the approach to process development is based on systematic risk management, structured experimental planning, and execution. The focus of the new approach, sometimes called the enhanced approach, as compared with the traditional approach that focused on specifying set points and operating ranges for process parameters, is to use risk management and scientific knowledge to identify and understand the material attributes and process parameters that influence the critical quality attributes of a product [12]. There are a few methods to assure that the right level of scientific knowledge is attained. These include (i) evaluation of the effect of process variables one at a time, (ii) applying mathematical/mechanistic modeling to support the experimental effort and formulate a model that is later used to describe the process, and (iii) use of the concept of design of experiments (DoE). Of course, any combination of these methods can be applied in practice, depending on the level of complexity and initial knowledge about the challenge at hand. Evaluation of the process variables one at a time is not recommended as a first choice, unless it is known from previous studies that the effect of one variable on the process outcome can be decoupled from the effect of other variables. In the case of protein purification, this is rarely the case. A few simple examples include the effect of load concentration in a capture step or of elution pH (more examples can be found in Table 32.5). Mechanistic modeling provides a very powerful tool for guiding process design. With a formulated, and validated, model, an in silico assessment of the impact of process parameters can be performed, thus saving time and resources.
However, what needs to be remembered is that a mechanistic model does not replace experimental effort; after all, in a majority of cases, initial experimental data are needed to formulate the model. Instead, the model is used as a guiding tool for designing more relevant experiments and for testing the effects of various process variables (including those difficult to test experimentally, e.g., lot-to-lot variability of chromatography resins and filtration membranes). The validated model can, and should, be used for optimization to find the best compromise between the optimum conditions and process robustness. Model validation is very important, as use of an incorrect physical model, or even wrong model assumptions, will lead to erroneous results.
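As a minimal, illustrative sketch of what a mechanistic building block might look like (not a validated process model), the following simulates batch binding of a protein to a resin with Langmuir kinetics and checks the result against the analytical isotherm. All parameter values are assumptions chosen for illustration; in practice, such a model would be calibrated against experimental data before being used to guide experiments.

```python
# Minimal mechanistic sketch (not a validated process model): batch binding of a
# protein to a chromatography resin described by Langmuir kinetics, integrated
# with a simple Euler scheme. Parameter values are assumptions for illustration.
q_max = 60.0        # resin capacity, mg protein per mL resin (assumed)
k_a   = 0.05        # association rate constant, mL/(mg*min) (assumed)
k_d   = 0.01        # desorption rate constant, 1/min (assumed)
c0    = 2.0         # initial liquid-phase concentration, mg/mL (assumed)
phase_ratio = 0.05  # mL resin per mL liquid (assumed)

dt, t_end = 0.01, 240.0
c, q, t = c0, 0.0, 0.0
while t < t_end:
    dq = (k_a * c * (q_max - q) - k_d * q) * dt   # Langmuir binding kinetics
    q += dq
    c = c0 - q * phase_ratio                      # closed-system mass balance
    t += dt

K_d = k_d / k_a
q_eq = q_max * c / (K_d + c)                      # analytical Langmuir isotherm
print(f"simulated q = {q:.1f} mg/mL, isotherm q* = {q_eq:.1f} mg/mL at c = {c:.2f} mg/mL")
```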
However, it could be argued, at least from the scientific perspective, that the underlying principles of almost all unit operations employed in today's downstream processing are well understood and described in the scientific literature. This fact, together with the unprecedented pace of improvement in computational resources, makes mechanistic modeling the future method of choice for process development and process control. While mechanistic modeling is gaining popularity, the use of DoE is still the industry standard. DoE, if applied correctly, will help, from a statistical perspective, in quantifying relationships between process parameters and process outcomes, including the product CQAs. The correctness of this approach is built on the foundation of process know-how and all available empirical knowledge. Results from DoE studies are evaluated using statistical analysis and are expressed in terms of a so-called statistical model. The statistical model can be used to quantify the impact of the important process variables (chosen from the list of the variables tested experimentally) on the process output, and to define the robustness of the process through analysis of normal process variability [13].
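As a minimal sketch of the DoE-plus-statistical-model idea, the following builds a two-level full-factorial design for three hypothetical chromatography parameters and fits a linear model with two-factor interactions to invented yield responses. The factors, coded levels, and responses are illustrative assumptions only.

```python
# Minimal sketch of a two-level full-factorial DoE and a linear "statistical
# model" fit (main effects + two-factor interactions). Factors and responses
# are invented for illustration only.
import itertools
import numpy as np

# Coded factor levels (-1/+1) for, e.g., load density, elution pH, wash conductivity.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
# Hypothetical measured step yields (%) for the 8 runs, in the same order.
y = np.array([88.0, 86.5, 90.5, 87.0, 91.0, 89.5, 94.0, 90.0])

# Model matrix: intercept, 3 main effects, 3 two-factor interactions.
x1, x2, x3 = runs.T
X = np.column_stack([np.ones(8), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

terms = ["intercept", "load", "pH", "cond", "load*pH", "load*cond", "pH*cond"]
for name, b in zip(terms, coef):
    print(f"{name:10s} {b:+.2f}")
```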
32.5.3 Introduction to Risk Analysis, Design Space Concept, and Process Control
Risk Analysis
Regulatory authorities endorse a "science and risk-based approach" for designing and validating biomanufacturing processes. They recommend that process design, including development and control strategies, is facilitated by early risk assessments that are subsequently enhanced and improved as process experience is gained. Process development activities play a pivotal role in this process as they provide answers on what risks may be present, and how they can be eliminated. Elimination of these risks should be based on thorough characterization of different process steps to establish relations between process parameters/variables and the process outcome from the product quality perspective. These relations will allow a multidimensional (several variables) operating space, the so-called design space, to be defined, either for the whole process or for separate unit operations. Different types of risks associated with bioprocesses have been discussed in Chapter 4 (Table 4.3). Some of them can be used in the early risk assessment process, which should help in specifying a preliminary list of process parameters, and their interdependencies, to be considered in the process development.
Design Space
The concept of the design space is based on identification of process parameters and their subsequent classification into: (1) those affecting the product CQAs, the so-called critical process parameters (CPPs); (2) those that need to be controlled to maintain process performance but do not influence CQAs, the so-called key process parameters (KPPs); and (3) the non-key process parameters, also referred to as general process parameters (GPPs), that do not have a meaningful effect on product quality or process performance (Table 32.3). The parameter classification process starts with identification of potential process variables that might need to be tested (e.g., process parameters such as flow rate, pH, and temperature). Although the initial list of potential parameters can be quite extensive, typically it can be simplified by performing a preliminary ranking of parameters applying risk assessment tools based on prior knowledge and existing experimental data. In principle, the list will be refined (both the parameters and their ranking) throughout the whole process development program by determining the significance of individual process parameters and their potential interactions.
TABLE 32.3 Classification of Process Parameters
● Critical (quality-related; narrow range(a)): An adjustable parameter (variable) of the process that should be maintained within a narrow range so as not to affect critical product quality attributes.
● Non-key (general) (quality-related; wide range(b)): An adjustable parameter of the process that has been demonstrated to be well controlled within a wide range, although at extremes it could have an impact on quality.
● Key (process-related; narrow range(a)): An adjustable parameter of the process that, when maintained within a narrow range, ensures operational reliability.
(a) And/or difficult to control. (b) And/or easy to control.
Adapted from PDA, Technical Report No. 42: Process Validation and Protein Manufacturing, PDA J. Pharm. Sci. Technol. 59(S-4) (2005).
At the end of PD, the list will yield a ranking of the significant process variables identified in the studies performed to achieve a higher level of process understanding and to establish the design space. The types of studies that can be considered to achieve those goals include a combination of design of experiments, mathematical models, or studies that lead to mechanistic understanding [2]. It is recommended by regulatory authorities that the rationale for inclusion of a given parameter in the design space is presented. Also, it can be expected that, in some cases, arguments behind exclusion of parameters should be provided. Although the inclusion of the parameters should be self-evident from the results describing the design space, the exclusion rationale might be more difficult to explain, as it can be based on multiple factors. An example of a risk-assessment approach and subsequent process characterization workflow is presented as follows. It is based on one of the approaches described in the CMC Biotech Working Group A-mAb case study [14]. It uses risk ranking to classify process variables based on their potential impact on CQAs, process performance, and possible interactions with other parameters. In this risk assessment method, each parameter is assigned two rankings: one addressing the parameter's potential impact on CQAs (main effect) and the other describing the parameter's potential interactions with other parameters. The rankings for impact on the main effects have higher weights assigned than the rankings for impact on lower-criticality quality and process attributes. In the case of lack of data and/or a weak rationale for the assessment, the parameter should always be ranked at the highest level. It should be remembered that the impact assessment is always based on the variation of the parameters within the anticipated design space. The assessed main and interaction effects are then multiplied to calculate an overall "severity score" (SS). The severity score is used to classify parameters and determine which type of studies (i.e., DoE, univariate, or none) should be performed during process characterization (Table 32.4). Severity score calculations help to assign process parameters into three categories: (i) primary parameters warranting multivariate evaluation (e.g., DoE studies); (ii) secondary parameters whose evaluation could be based on univariate studies either if a proper justification can be provided or if the severity score is low; and (iii) parameters that do not require new studies, whose ranges could be established based on prior knowledge or modular claims. Similar to the logic used in the impact assessment step, if no data or rationale is available to justify either no study or univariate studies, the parameter will be considered as part of the multivariate studies. An example of risk ranking for the Protein A chromatography step is shown in Table 32.5.
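A minimal sketch of the severity-score bookkeeping described above is shown below, using a few parameter rankings taken from Table 32.5; the mapping from severity score to study type mirrors the spirit of Table 32.4, and the exact cut-offs used here are illustrative assumptions.

```python
# Minimal sketch of the severity-score bookkeeping described above, using a few
# parameters from Table 32.5. The threshold-to-study mapping follows the idea of
# Table 32.4; the exact cut-offs shown here are illustrative assumptions.
params = {
    # name: (main CQA, main PA, interaction CQA, interaction PA)
    "Protein load (g/L bed)":   (4, 4, 4, 1),
    "Load flow rate (CV/h)":    (4, 2, 2, 2),
    "Elution end pool (CV)":    (1, 1, 8, 1),
    "Load concentration (g/L)": (1, 1, 1, 1),
}

def classify(severity):
    if severity >= 8:
        return "primary - multivariate (DoE) study"
    if severity >= 4:
        return "secondary - univariate study (with justification)"
    return "no additional study required"

for name, (m_cqa, m_pa, i_cqa, i_pa) in params.items():
    h_main = max(m_cqa, m_pa)      # highest main-effect score (hM)
    h_int = max(i_cqa, i_pa)       # highest interaction score (hI)
    severity = h_main * h_int      # SS = hM x hI
    print(f"{name:28s} SS = {severity:>2d} -> {classify(severity)}")
```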
In this case, all variables with a severity score equal to or greater than 8 would be identified for the multivariate studies: load flow rate, protein load, equilibration/wash flow rate, elution buffer molarity, and end pool collection point; all parameters with a score of 4 would warrant univariate studies; and the remaining parameters would be classified as having no impact (within the ranges included in the expected design space). The results obtained in those studies, and possibly in follow-up experiments, would lead to establishment of the parameter classification (CPP, KPP, etc.), their ranges, and corresponding controls.

One of the inherent drawbacks of the DoE approach used for process validation studies is that the number of experiments increases exponentially with the number of parameters. Consequently, for purely practical reasons, it is expected that the risk assessment exercise should end up with a suitably low number of relevant parameters (4-6), which in turn can compromise its credibility [15]. To address this DoE drawback and to align with the process validation guidelines [3], a new approach to process validation, and thus to classification of process parameters, has been proposed [15,16]. The approach is based on what is known in the field of mathematics as Latin Hypercube Sampling (LHS). In the LHS approach adopted for process design, all the process parameters that are considered for the risk assessment exercise are tested experimentally, but only in combination with other process variables/parameters. Thus, each parameter is tested at N distinct levels (values) chosen from a statistically relevant parameter range (assuming a relevant probability density function describing the parameter variability, e.g., normal or uniform), where N is equal to the number of process parameters to be tested.

TABLE 32.4 Parameter and Severity Classification

Severity Score    Parameter Classification    Type of Studies
Very high         Primary                     Multivariate study
Medium-high       Primary or secondary        Multivariate or univariate with justification
Low-medium        Secondary                   Univariate
Low               No impact                   No additional study required
TABLE 32.5 Example of Risk Ranking Analysis for Protein A Chromatography Step(a)

                                                        Main Effect        Interaction Effect
Phase                       Parameter                   CQA  PA  hM        CQA  PA  hI       Severity (hM x hI)
All phases                  Column bed height (cm)       1    1   1         4    2   4        4
Load (CCCF)                 Flow rate (CV/h)             4    2   4         2    2   2        8
                            Protein load (g/L bed)       4    4   4         4    1   4        16
                            Load concentration (g/L)     1    1   1         1    1   1        1
                            Buffer pH                    1    1   1         1    1   1        1
                            Buffer molarity (mM TRIS)    1    1   1         4    1   4        4
                            Buffer molarity (mM NaCl)    1    1   1         4    1   4        4
Equilibration and wash(es)  Flow rate (CV/h)             4    2   4         4    1   4        16
                            Volume (CV)                  1    1   1         4    1   4        4
Elution                     Buffer molarity (mM acid)    4    1   4         4    1   4        16
                            Flow rate (CV/h)             1    2   2         1    1   1        2
                            Start pool (CV)              1    1   1         1    1   1        1
                            End pool (CV)                1    1   1         8    1   8        8

hM, highest main effect score; hI, highest interaction score.
(a) The list of parameters is provided here for illustration purposes only. Adapted from CMC Biotech Working Group, A-Mab: A Case Study in Bioprocess Development, 2009, pp. 1-278.
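To make the severity-score logic concrete, the short Python sketch below multiplies a highest main-effect ranking (hM) by a highest interaction ranking (hI) for a few parameters and maps the result onto the study types of Table 32.4, using the thresholds from the worked example above (a score of 8 or more triggers multivariate studies, a score of 4 warrants univariate studies with justification, and lower scores require no new study). The parameter names and rankings are illustrative values mirroring Table 32.5, not data from an actual characterization study.

# Illustrative severity-score classification (hypothetical rankings mirroring Table 32.5).
parameters = {
    # name: (highest main-effect score hM, highest interaction score hI)
    "Load flow rate (CV/h)":        (4, 2),
    "Protein load (g/L bed)":       (4, 4),
    "Load concentration (g/L)":     (1, 1),
    "Elution buffer molarity (mM)": (4, 4),
    "Elution end pool (CV)":        (1, 8),
}

def classify(severity):
    # Thresholds taken from the worked example in the text.
    if severity >= 8:
        return "multivariate (DoE) study"
    if severity >= 4:
        return "univariate study with justification"
    return "no additional study required"

for name, (h_main, h_interaction) in parameters.items():
    severity = h_main * h_interaction
    print(f"{name:30s} severity = {severity:2d} -> {classify(severity)}")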
As a result, the LHS approach delivers a statistically relevant experimental design, which produces data representative of a "routine manufacturing outcome" (i.e., a sort of process control chart). In principle, the LHS approach has great potential to simplify risk assessment discussions and to identify process parameters based on hard evidence rather than on qualitative historical data that are often not even properly archived. However, widespread implementation of this method will require examples to be presented to regulatory authorities and discussed within the bioprocessing community. Until then, other approaches will need to be used to justify process design based on a risk assessment approach.

A design space can be established for each single-unit operation, or it can be developed for a series of unit operations constituting a part of, or the whole, process. The former approach is simpler to develop, but it might require more effort overall to characterize the whole process and carries a risk that potential interactions between unit operations are not uncovered. The latter approach might, on the other hand, provide higher operational flexibility. Indeed, some novel DoE methods [17] and risk assessment approaches [18] can be used to characterize multiple sequential process steps simultaneously. These new approaches hold the potential for uncovering hidden interdependencies between steps while still being in line with the design space concept.

A design space can be developed at any scale [2]. From the scientific perspective, this is correct as long as the model systems used at the lab and/or pilot scale are representative of the final manufacturing-scale operation. However, if potential risks exist in the scaled-up operations, they should be described and a control strategy proposed. In general, it is recommended that the design space be described in terms of relevant scale-independent parameters (e.g., column volumes, residence and contact times, loads normalized per volume or surface area, etc.). This approach allows the same design space description to be applicable to multiple operational scales. Principles of scale-up of downstream processes are described later in this chapter.

As discussed earlier in connection with the process design workflow, good documentation practice is pivotal when designing a process, and it therefore plays an important role in the establishment of the design space. Good documentation of development and characterization studies, even those yielding unexpected results, is critical in enabling technology transfer, troubleshooting, and investigations of out-of-specification (OOS) results. Development reports are also critical for future process changes. When a process change is made and there are no development data, it may not be clear what effect the change will have on product quality.
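As an illustration of the LHS sampling described above, the sketch below uses SciPy's quasi-Monte Carlo module to generate a Latin hypercube design over five hypothetical process parameters and scales it onto assumed operating ranges. The parameter names, ranges, and the use of a uniform distribution are assumptions for demonstration only; a real study would take the number of levels, the ranges, and any non-uniform probability density functions from the risk assessment and prior knowledge.

# Latin hypercube sampling sketch: each parameter is tested at N distinct levels,
# where N equals the number of parameters, with levels combined across parameters.
from scipy.stats import qmc

# Hypothetical parameters and operating ranges (lower, upper), illustrative only.
params = {
    "load flow rate (CV/h)":    (10.0, 30.0),
    "protein load (g/L bed)":   (25.0, 45.0),
    "wash flow rate (CV/h)":    (10.0, 30.0),
    "elution molarity (mM)":    (50.0, 150.0),
    "end pool collection (CV)": (1.0, 3.0),
}
n_params = len(params)

sampler = qmc.LatinHypercube(d=n_params, seed=1)
unit_design = sampler.random(n=n_params)          # N runs x N parameters in [0, 1)
lower = [bounds[0] for bounds in params.values()]
upper = [bounds[1] for bounds in params.values()]
design = qmc.scale(unit_design, lower, upper)     # map onto the operating ranges

for run, levels in enumerate(design, start=1):
    settings = ", ".join(f"{name} = {value:.1f}" for name, value in zip(params, levels))
    print(f"run {run}: {settings}")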
Finally, a word about an important, yet non-mandatory, part of the design space analysis: the so-called edge-of-failure analysis, where the edge of failure is defined as the point in the design space where a boundary to a variable or parameter exists, beyond which the relevant quality attributes or specifications cannot be met [14]. Although the FDA does not generally expect manufacturers to develop and test the process until it fails [3], edge-of-failure analysis might help with assessing and defining process risks [19]. The edge of failure can be determined experimentally, by exploring the design space until failures are found, or by simulating the extrapolated design space even though failures were not experimentally detected. The latter approach is preferred simply because of the cost argument, but it requires a validated model to be developed.
Process Control
The critical and key process parameters identified, and their operating ranges, become an essential part of the process control strategy. The process understanding gained during process development should, in addition to classification of process parameters, identify sources of process variability and propose adequate control mechanisms to reduce their impact on product quality and process performance. As discussed in Chapter 4, variability can be caused by the biological production system, but also by post-cell culture effects, such as the impact of process conditions on the biological molecule, incompletely removed enzymatic activity, or inadequate process control. Additionally, chromatographic resins, filter membranes, and other re-usable consumables exhibit a certain batch-to-batch variability during processing, especially as the end of their usable lifetime approaches. As such, process design includes the definition of a window of operation for each step that ensures consistency of product quality and accommodates the 'natural' variability of the process as well as that of raw materials and their performance. In practice, this is applied through the use of a safety factor/margin that involves underutilising the full capacity of a specific operation (e.g., chromatographic resin or filter loading). Underutilising the capacity of a resin or filter ensures a constant level of performance for a longer period before degradation causes a reduction or variability in specified performance.

At the same time, the success of the proposed control strategy will rely heavily on all sources of variability, such as different component lots, production operators, environmental conditions, and differences in measurement systems in the production and laboratory settings, being considered. Furthermore, it should be assured during the PD phase that variability typical of commercial processes is tested while establishing the design space. The importance of having a focus on process variability is exemplified by the notion that the ability to minimize process variability as early as possible in the process train will simplify the overall process control strategy, improve process robustness, and even minimize the need for end-product testing [2]. On the other hand, it could be envisioned that, with the right level of process understanding, future manufacturing control paradigms will allow the variability of input materials to be less tightly constrained. Instead, the process parameters can be automatically adjusted to account for the variability of incoming material through an appropriate process control strategy to ensure consistent product quality. An example of such an approach, dealing with effects related to lot-to-lot variability of an HIC resin, has been proposed [20] (see Chapter 19).
32.6 COMBINING STEPS FOR AN EFFICIENT PROCESS
As already mentioned when discussing the design space for sequential operations, in a manufacturing process for biologics the outcome from a previous step will influence the performance of the subsequent steps. However, the steps are not necessarily linked to each other in an optimal fashion in the first version of the process. Sub-optimal links can be time consuming and costly; they may even force the introduction of additional steps or adjustments to the intermediate product (associated operations). Process integration is the part of the process development phase where these issues are resolved. For process integration to be successful, the effect of process stream variability must be accounted for when characterizing different purification stages. Furthermore, strategies for minimizing these effects need to be developed. If process steps are developed independently of each other, it is likely that they will use different buffers and that the elution conditions of one step will not be adjusted to allow direct loading onto the next. This can be changed so that each step uses only one or two basic buffers, and elution conditions match loading conditions for the next step as closely as is feasible for good overall performance. An extreme, yet elegant, example of such a strategy was proposed by Mothes et al. [21]. The strategy, also referred to as ASAP (Accelerated, Seamless Antibody Purification), is based on a buffer system that allows seamless integration of three chromatography steps, where the elution buffer from one step is the load buffer for the subsequent step. Specifically, the eluate from the Protein A step can be loaded directly onto a second-step column, and the eluate from the second step is applied directly onto the polishing column. A key benefit of ASAP continuous processing is the elimination of intermediate product storage, adjustment of the intermediate for loading onto the next step, and potential extra steps, such as UF/DF, associated with pH, buffer molarity, and protein concentration adjustment. With those steps removed, either process cycle times or column sizes can be reduced.
If the use of this, or a similar, buffer system is not possible, one may still design for the possibility of modifying the intermediate product composition with in-line adjustments (i.e., without hold-tanks between the steps). An example of a technology enabling operation of several chromatography steps connected in series is the so-called straight-through processing (STP) concept developed in collaboration between GE Healthcare and Janssen (see Chapter 27). Alternatively, the recently introduced single-pass tangential flow filtration (SPTFF) technology [22] makes in-line adjustment, including a concentration step, feasible. In general, however, the number of different buffers and cleaning solutions for the whole process should be minimized to three or four, and sodium hydroxide can be made the standard for cleaning columns and filters in many processes. With this strategy, process integration will be much simpler.
32.7 SCALE-UP AS PART OF PROCESS DESIGN
Once the correct steps and sequence of unit operations have been identified for achieving the required product purity and quality, the process design itself should be evaluated for its feasibility of operation at the manufacturing scale. Process parameters, including considerations for the constraints at larger scales, should be accounted for at the bench level. However, quite often not all data required for successful scale-up can be collated from the bench, particularly for new processes without any historical implementation at larger scales. Therefore, refinements of the small-scale process may need to be evaluated and tested at an intermediate scale between that of the bench and production to allow a more efficient scale-up. Thus, small-scale processes are generally scaled to a bridging, pilot scale for further process optimization before scaling to the final manufacturing level.

Before performing scale-up, one needs to decide at which scale the final process is to be operated. Scale-up is driven by market demand, but also by the manufacturing capacity of the facility where the process is to be installed. As discussed in Chapter 4, the demand is addressed by calculating how many batches will need to be run per year, at a given overall process yield, assuming: (i) a range of realistic titer levels and (ii) feasible working volumes of the bioreactors. The former will be based on the advances in cell culture technologies and the latter on facility capability (e.g., available floor space, auxiliary equipment, ability to run multiple bioreactors in parallel and/or staggered fashion, etc.). Once feasible scenarios are identified and the mass of product per batch is known, it is a simple task to scale all unit operations in the manufacturing process. For instance, knowing how much product is produced in a single bioreactor, the size of the DSP equipment is determined from the available processing time for production. The DSP may be sized to accommodate the batch to be processed either in one single lot or in several cycles using smaller equipment. Although the latter approach is sometimes more cost effective, it should be noted that cycle time is independent of the scale of operation; hence, more cycles will take longer to complete. Once equipment sizes are determined, their feasibility should be evaluated against any constraints of the manufacturing facility. For instance, work shift patterns for batch operations, and process scheduling in cases where the facility is to manufacture multiple products, will need to be considered. This is to ensure that the total process time does not exceed the allocated manufacturing campaign window. If a process is to be retro-fitted into an existing facility, there may be space restrictions for certain equipment, limiting the feasible sizes that can be utilized. In the case of a new facility, constraints related to equipment size can be relaxed by purchasing larger equipment or more units. Flexibility for equipment dimensioning at large scale can be created by avoiding locking the key dimensioning parameters too early. In general, a successful scale-up should balance the needs of the technical solutions utilized with the economy of a manufacturing run, while ensuring the product quality requirements for patient safety.
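As a simple illustration of this dimensioning logic, the sketch below estimates how many batches per year would be needed to meet an assumed annual demand for a range of titers and bioreactor working volumes, given an assumed overall downstream yield. All numbers are hypothetical and serve only to show the arithmetic.

import math

annual_demand_kg = 100.0     # hypothetical annual market demand for the API
overall_yield = 0.70         # assumed overall downstream process yield

titers_g_per_l = [2.0, 3.5, 5.0]          # assumed cell culture titers
working_volumes_l = [2000, 5000, 10000]   # feasible bioreactor working volumes

for titer in titers_g_per_l:
    for volume in working_volumes_l:
        kg_per_batch = titer * volume * overall_yield / 1000.0   # purified product per batch
        batches_per_year = math.ceil(annual_demand_kg / kg_per_batch)
        print(f"titer {titer} g/L, {volume} L bioreactor: "
              f"{kg_per_batch:.1f} kg/batch -> {batches_per_year} batches/year")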
32.7.1 Upstream
A cell culture process is typically developed in bench-scale bioreactors and then scaled up for commercial production. However, although recent advances in cell technologies have enabled high-titer processes, these high cell density cultures pose a challenge from the perspective of process scale-up. Factors such as mixing time, oxygen transfer, and carbon dioxide removal in particular need to be carefully considered when scaling up high cell density processes. For example, poor mixing can result in local nutrient gradients within the bioreactor, leading to reduced cell growth and productivity. At the same time, however, low-power-input mixing is recommended because of the high sensitivity of mammalian cells to shear stress. The mixing will, in turn, affect both oxygen transfer and dissolved CO2 removal. The former is of critical importance, as mammalian cultures are aerobic processes and oxygen is often the limiting nutrient, while inadequate CO2 removal has been linked to lower cell productivities and even to altered product quality due to changes in glycosylation. In other words, optimizing oxygen supply and carbon dioxide removal, while avoiding cell damage, is the key to mammalian cell culture scale-up.
TABLE 32.6 Three Criteria for Scaling Suspension Cell Cultures
Scale-up criteria combined in the three alternative scale-up parameter sets (A, B, and C):
● Geometric similarity (bioreactor/fermenter)
● Constant oxygen mass transfer coefficient, kLa
● Constant impeller tip speed (maximum shear rate)
● Constant impeller circulation (specific impeller pumping) rate
Each of the parameter sets A, B, and C holds a different combination of these criteria constant on scale-up.
A more detailed discussion of cell culture scale-up is provided in Chapter 31. For the purpose of this chapter, we provide only a very brief example of three criteria for scaling suspension cell cultures [23]. The criteria are listed in Table 32.6. In terms of manufacturability and scalability, mammalian cells have historically been considered difficult to work with due to factors such as low yield, medium complexity, serum requirement, and shear sensitivity, although the latter has generally been incorrectly overemphasized. After two decades of intensive development work on cell line, media, and bioreactor condition optimization, cell densities of 15-25 million viable cells/mL can be routinely achieved for monoclonal antibody fed-batch processes, giving production titers of 3-5 g/L; titers of up to ~10 g/L and cell densities of more than 50 million viable cells/mL in fed-batch processes have recently been reported by a few companies at major conferences [24,25]. The enhancement of specific productivity per cell is achieved not only by selection of highly productive clones, but also by optimization of medium composition and bioreactor operating conditions.

Cell line stability is another factor that should be considered, because volumetric and specific productivity decline as cell age increases for some cell lines. Such unstable clones are not suitable for large-scale production, because cell age increases with scale as the cell culture process is scaled up through serial culture passages of the seed train and inoculum train. In addition to cell line stability, growth and metabolite characteristics that can affect process robustness and scalability also need to be assessed. Robust cell growth with high viability and low lactate synthesis is usually desirable. High-lactate-producing clones are not preferred, because of the osmolality increase that accompanies the addition of base needed to maintain pH. Therapeutic antibodies are produced in mammalian host cell lines, including NS0 murine myeloma cells and Chinese hamster ovary (CHO) cells [25-27]. The selection of expression system is determined by its ability to deliver high productivity with acceptable product quality attributes and by the preferences of individual companies, which are often influenced by their historical experience.

As discussed in Chapters 5 and 31, a typical cell culture manufacturing process begins with thawing of a cryopreserved cell-bank vial, followed by successive expansions into larger culture vessels such as shake flasks, spinners, rocking bags, and stirred bioreactors [27]. When culture volume and cell density meet predetermined criteria, the culture is transferred to the production bioreactor, in which cells continue to grow and eventually express product. Getting the required cell density and volume of culture to inoculate the production bioreactor is achieved via a cell culture seed train. The cells are usually run through many cultivation systems that become larger with each passage. The seed train steps have a significant impact on the product titer and cell growth at production scale, as well as on the success and reproducibility of the seed train as a whole. Batch-to-batch transfers from small to subsequently larger bioreactors commonly have split ratios from 1:5 to 1:10 [28]. Generally, the scale of the production bioreactor is determined by the capacity needs of the facility to meet the market requirements.
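Given split ratios of 1:5 to 1:10 per passage, the number of seed passages required to reach the production volume can be estimated directly, as in the minimal sketch below; the starting and final volumes are hypothetical.

import math

production_volume_l = 5000.0   # target production bioreactor working volume (assumed)
starting_volume_l = 0.5        # hypothetical culture volume available after the flask stage

for split_ratio in (5, 10):
    # Each passage expands the culture volume roughly by the split ratio.
    passages = math.ceil(math.log(production_volume_l / starting_volume_l, split_ratio))
    print(f"1:{split_ratio} splits: about {passages} passages "
          f"to expand {starting_volume_l} L to {production_volume_l} L")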
However, the duration of cell culturing and the length and composition of the seed culture train are determined by the cell clone and culturing conditions. Fig. 32.4 shows an example of a cell culture train from vial to production bioreactor for a hypothetical mammalian cell process. The seed train expansion will typically be determined during the process development phase, based on the expansion capabilities of the chosen cell line. Typically, durations are assumed for the culture phases of each bioreactor. It can be noted that the production bioreactor is usually the longest step in a batch or fed-batch cell culture train and process. As such, it becomes the rate-determining step within the overall production process. Correct assumptions about the cell culture durations are important, as they will determine whether the required annual number of batches can be delivered. At the commercial scale, facility utilization is key (see Chapters 45 and 55); therefore, to increase the productivity of a batch or fed-batch cell culture train, the upstream process design may consider the use of multiple production bioreactors to increase the batch throughput of the process. The simplest form of this approach would be replication of the entire cell culture line, as shown in Fig. 32.5A. This allows either parallel processing (i.e., multiple batches to be harvested at the same time) or staggered processing, whereby line 2 is started a few days after line 1 and line 3 is started a few days after line 2.
FIG. 32.4 Cell culture train from vial to production bioreactor for a hypothetical 5000 L batch: WCB vial → shake/spinner flasks (<4 days) → N-4 bioreactor (w.v. = 5 L) → N-3 bioreactor (w.v. = 20 L) → N-2 bioreactor (w.v. = 200 L) → N-1 bioreactor (w.v. = 1000 L), approximately 4 days in each seed bioreactor → production bioreactor (w.v. = 5000 L, 14 days). Time durations stated are assumed to represent equipment occupancy times.
FIG. 32.5 (A) Dedicated cell culture lines (Lines 1-3), each comprising the full WCB vial → shake/spinner flasks → N-4 (w.v. = 5 L) → N-3 (w.v. = 20 L) → N-2 (w.v. = 200 L) → N-1 (w.v. = 1000 L) → production bioreactor (w.v. = 5000 L) train, supporting multiple production bioreactors. (B) Cell culture train showing a single seed train (spinner flasks → N-3 bioreactor (w.v. = 7/11 L) → N-2 bioreactor (w.v. = 70/107 L) → N-1 bioreactor (w.v. = 500/800 L)) supporting three production bioreactors (w.v. = 1000/2000 L) operated in a staggered mode. Time durations stated are assumed to represent equipment occupancy times.
This approach allows for a high degree of flexibility of operation; however, it is expensive, given that multiple seed and production bioreactors are needed with large space requirements within the facility. An alternative option is that of inoculating multiple production bioreactors with a single seed train as depicted in Fig. 32.5B. The rationale behind this approach is that the seed bioreactors are typically culturing cells for less time than the production bioreactor. Therefore, when batches are run sequentially with the minimum downtime between operations, the
production bioreactor is usually still culturing a batch while the seed train may already be completed for the next successive batch. Increasing the number of production bioreactors therefore enables the seed train to run in the most productive manner. Further advantages of this approach are that the total number of bioreactors is minimized, reducing the impact on cost and facility space. Within this approach, however, the flexibility of operation is slightly diminished compared with having multiple parallel lines supporting each production bioreactor. With the use of dedicated seed trains supporting each production bioreactor, batch frequency is limited only by the number of production bioreactors chosen. In the case where the number of seed lines is minimized to support multiple production bioreactors, batch frequency is a function of the number of production bioreactors chosen, as well as the number of seed lines. Table 32.7 shows how the batch frequency reduces with an increasing number of production bioreactors, assuming the culture times stated in Fig. 32.5B. It can be noted that as the number of production bioreactors increases, the minimum number of seed trains required to support their inoculation also increases. The reason for this can be seen from Eq. (32.1):

NPPB = (TPB × NST) / max(TST)    (32.1)
where NPPB is the number of possible production bioreactors, TPB is the total equipment occupancy time in the production bioreactor, NST is the number of seed trains, and TST is the equipment occupancy time of a seed train step. The availability of production bioreactors for inoculation from the seed line increases as their number increases. As Table 32.7 shows, should one production bioreactor be used, its availability for inoculation is only once every 14 days. Should six bioreactors be utilized, a production bioreactor would be available for inoculation once every ~2 days. As the number of production bioreactors is increased, the rate-determining step of the cell culture line falls more on the seed bioreactors. In Fig. 32.5, we see that the longest seed culture step is 4 days. As such, maintaining this as a single seed bioreactor would limit the operation of the six production bioreactors to a batch once every 4 days, as opposed to the potential of one every ~2 days. To make full use of the maximum productivity of, for instance, six production bioreactors, the seed bioreactor with the longest occupancy duration would need to be replicated. Should the seed bioreactors all have the same occupancy duration, as is the case outlined in Fig. 32.5, then the whole line would need to be replicated to ensure maximum productivity can be achieved. It should be noted that the determination of the number of seed trains may not come down solely to productivity-related aspects. Running multiple bioreactors intensively with a single seed train does carry risks associated with equipment breakdown. If one or more bioreactors break down, the whole cell culture cycle may need to be started again, resulting in a significant loss of production time. As such, multiple seed bioreactors may be designed in as redundancy to allow mitigation against any equipment failure. Given that productivity considerations are a concern in large-scale process design, another option open to the process designer is changing the cell culturing technology from batch/fed-batch to a more continuous operation, such as a perfusion process. This could be implemented within the seed train or the production bioreactor. In comparison with batch operation, a seed bioreactor in perfusion mode is extremely flexible in allowing an extended window of inoculation (see Chapter 31). Because higher cell densities can be achieved in perfusion mode relative to batch mode, a perfusion seed reactor can inoculate larger reactor volumes or multiple bioreactors simultaneously [28].

TABLE 32.7 Batch Frequency and Number of Seed Trains Required to Support Multiple Production Bioreactors Run in a Staggered Mode

Number of Production Bioreactors    Maximum Batch Frequency (Days)    Minimum Number of Required Seed Trains
1                                   14.0                              1
2                                   7.0                               1
3                                   4.7                               1
4                                   3.5                               2
5                                   2.8                               2
6                                   2.3                               2

Assumptions for culture durations are taken as the same as stated within Fig. 32.5B.
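The relationships behind Table 32.7 can be reproduced with a few lines of code. The sketch below assumes the occupancy times discussed in the text (14 days in the production bioreactor and a longest seed train step of 4 days) and, for a given number of production bioreactors, computes the maximum batch frequency and the minimum number of seed trains implied by rearranging Eq. (32.1).

import math

t_production_days = 14.0   # TPB, equipment occupancy time of the production bioreactor
t_seed_max_days = 4.0      # max(TST), longest equipment occupancy time in the seed train

for n_production in range(1, 7):
    # A staggered schedule yields one harvest every TPB / (number of production bioreactors).
    batch_frequency_days = t_production_days / n_production
    # Rearranging Eq. (32.1): NST >= NPPB * max(TST) / TPB.
    min_seed_trains = math.ceil(n_production * t_seed_max_days / t_production_days)
    print(f"{n_production} production bioreactor(s): batch every "
          f"{batch_frequency_days:.1f} days, at least {min_seed_trains} seed train(s)")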
Alternatively, use of a production bioreactor in perfusion mode would allow a continuous daily harvest from one bioreactor (Chapters 4 and 31); hence, overall productivity per bioreactor volume would increase compared with fed-batch/batch technologies. The use of perfusion technology will require significant process development work at the bench scale, and the decision to use it should be driven by results at that stage of activity. It would not be advisable to develop a fed-batch process and then try to switch to perfusion mode to increase productivity at the large scale. In addition to the cell line adaptation required for such technology, the facility requirements of a perfusion process differ significantly from those of a fed-batch process. Most notably, on the upstream side the cell culture media requirement will be much greater in the perfusion case, and tasks such as media preparation and requirements such as hold vessels will need to be designed and sized appropriately in the facility. Regardless of the upstream technology used, the batch frequency emanating from the cell culture will significantly impact the sizing of the downstream purification (DSP) train. The DSP will therefore need to be sized to match the frequency of batches passed on from the upstream. The solutions available to the designer may be sizing the equipment appropriately to meet the productivity need or, in the worst case, designing additional DSP lines.
32.8 DOWNSTREAM
With the scale of operation for the upstream chosen to fulfil the mass demand per batch/campaign (see also Chapter 4), scaling up the downstream process becomes the next task. Taking as an example a typical process for purification of monoclonal antibodies, it consists of three main types of unit operations: (1) liquid/solid handling; (2) filtration; and (3) chromatography. Liquid handling includes storage, transfer, and mixing of process solutions, including product pools, buffers, and cleaning solutions. Solid handling specifically refers to handling of biomass by centrifugation. Filtration includes all procedures where either filters or membranes are used, and can be further divided into normal flow filtration (NFF) and cross-flow or tangential flow filtration (CFF or TFF). Chromatography includes both slurry (batch) and packed-column separations that are based on interactions, size, or both. Membrane chromatography is also included in this category. What follows is a brief description of scale-up guidelines for each of these unit operations. Although these rules are relatively simple, it is important to bear in mind that they are guidelines and that detailed studies should always be performed to ensure that no loss in product activity and/or process yield is encountered at the final scale due to unaccounted-for phenomena.
32.8.1 Liquid Handling
Liquid handling operations include liquid storage (buffer preparation and hold, and in-process hold steps), liquid transfer, and mixing. Of these, only liquid transfer will not be covered further here; from the scale-up perspective, care must be taken that no excessive shear stress is introduced during transfer of product-containing solutions, to avoid unnecessary yield losses due to antibody unfolding or aggregation [29]. It has been suggested that a shear rate as low as 10,000 s−1 could induce unfolding of an mAb [30].
Intermediate Product Hold Vessels
The simplest approach to scaling up hold vessels in a downstream process is to use the concept of concentration. By setting a minimum final concentration of product in a tank at any stage of the process, the maximum operating volume for each tank can be easily calculated. The same approach can also be used to identify bottlenecks in an existing facility [31]. The actual size (volume) of a vessel will be larger, as the operating range of volumes for typical tanks is around 80%-90% of the total tank volume. In cases where a product hold step requires adjustment of the liquid composition and this cannot be done in a dynamic mode using a static mixer (in-line adjustment), the scale-up exercise must account for the extra volume needed and for effects related to mixing efficiency, as variation in local composition within a vessel can affect product quality. For instance, during pH adjustment prior to the post-Protein A low-pH virus inactivation step, a local decrease in pH can cause irreversible aggregation of an antibody. Similarly, product solutions may require agitation to ensure a homogeneous mixture is achieved for further unit operation processing (e.g., loading onto a chromatography column). Agitation speeds, and the shear impact they may or may not have on the product in solution, need to be investigated and considered. Additionally, the temperature needs of a specific step should be considered. Although parameters established at the small scale would help determine the operational temperature for processing, care must be taken to ensure product stability should the product need to be held for extended periods, such as overnight, due to shift schedule restrictions, unit operation failure, or an emergency shut-down situation.
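A minimal sketch of this concentration-based sizing approach, using hypothetical numbers for the batch mass, the minimum allowed in-tank concentration, and a typical 80% usable fraction of the vessel volume, is shown below.

# Hold-vessel sizing from a minimum allowed product concentration (illustrative values).
batch_mass_g = 5000.0             # product mass entering the hold step
min_concentration_g_per_l = 2.0   # lowest acceptable concentration in the tank
usable_fraction = 0.8             # typical usable fraction of the total vessel volume

max_operating_volume_l = batch_mass_g / min_concentration_g_per_l
required_vessel_volume_l = max_operating_volume_l / usable_fraction

print(f"maximum operating volume: {max_operating_volume_l:.0f} L")
print(f"required total vessel volume: {required_vessel_volume_l:.0f} L")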
In recent years, the use of single-use bags for buffer distribution and intermediate product hold has changed the way processes are designed. It has been shown [32] that significant savings can be realized when replacing a stainless steel tank park with single-use bags. Due to limitations in available bag sizes, there is a certain limit to the scale at which single-use bags can be used. Currently, the single-use bags used in the manufacture of biologics range from 1 to 3000 L in volume. One approach to extending the use of single-use bags to larger-than-available scales is to use several smaller bags to reach the total volume required, linked together with a manifold to facilitate transfer. However, this approach would need to be evaluated against the increased consumable, labor, and space management it would require compared with the use of a single large re-usable tank. Care must be taken to ensure the selected bag volume can accommodate the mixing, temperature, and solution monitoring needs of the process. As the purification process progresses, the product becomes more concentrated and hence its volume reduces. As such, product hold vessels need to accommodate these reduced volumes. In general, the industry trend is to utilize single-use technology at smaller scales; currently, only limited mixing technology exists for smaller 2D single-use bags (1-20 L). Either rocking-technology mixers or pump recirculation, where a single-use pump is used to recirculate the product within its hold bag, can be utilized at this scale. Recirculation options should be evaluated for their impact on the product in terms of any temperature increase or shear. Temperature control can also be problematic for 2D single-use bags because these are typically not available in a jacketed format. However, newer technology, such as single-use heat exchangers, may be utilized to resolve this issue.
Solution Preparation Vessels
The scale-up of a solution preparation vessel is a little more complicated because a mixing process is involved. Generally, these vessels are limited to the preparation of buffers and cell culture media, in which dry raw material components are added to water or in which two or more liquid solutions are combined. From the perspective of mixing efficiency, the criteria for scale-up are geometry, impeller speed, and mixing time. A successful scale-up should maintain these factors at the pilot and full scale; however, this is rarely practical with any significant scale change. From a practical mixing scale-up perspective, a concept that greatly simplifies design calculations is geometric similarity. Geometric similarity means that a single ratio between the small scale and the large scale applies to every length dimension. With geometric similarity, the only remaining variable for scale-up to large-scale mixing is the rotational speed of the impeller, which, in many cases, can be made fairly similar to the respective speed at the smaller scale. However, even with geometric similarity, scale-up will result in less surface per volume, because surface area increases as length squared while volume increases as length cubed. For this reason, vessel shape has also been a significant area of consideration in selecting solution preparation unit operations. Larger stainless steel vessels have traditionally been cylindrical in shape, owing to the mechanical stability and footprint considerations that come with their use at large volumes (5000 L and greater). Newer, single-use solution preparation devices have explored different vessel geometries, limited as they are to smaller volumes (up to 5000 L presently). Fig. 32.6 shows the tank geometries evaluated in a study conducted by GE Healthcare on the mixing performance of single-use systems [33]. The authors concluded that the lateral cuboid tank shape with a dual impeller was very good at dispersing solid particle additions, with negligible settling compared with other tank shapes. Generally, such single-use mixing systems have tended to be of a lateral cuboid shape, not just for mixing optimization, but also to facilitate bag exchange. The technology intended for use at large scale should be considered at the process development stage to ensure the appropriate scaling parameters are chosen.
FIG. 32.6 Tank shapes evaluated in a study on solution mixing [33]: (A) vertical cuboid, single impeller; (B) cylindrical, single impeller; (C) hexagonal, single impeller; (D) lateral cuboid, single impeller; (E) lateral cuboid, dual impeller.
Solutions are traditionally prepared in a batch-wise manner, which involves off-line quality control measures to ensure solution accuracy and potentially large solution volumes. One possibility for increasing productivity and reducing the footprint of solution preparation is the use of concentrates and in-line dilution. In-line buffer dilution involves the use of concentrated buffer solutions that are diluted with water, pH-adjusted, and mixed as they are sent to the downstream processing step. Because concentrated solutions are used, much smaller storage equipment and less space are required. For instance, it has been shown that using buffer concentrates and static mixers could reduce: (i) tank sizes twofold [34]; (ii) the number of buffer preparations and CIP operations per batch by more than 30%; and (iii) labor requirements by 31% [35]. It should be noted, however, that in-line buffer mixing can be challenging, because the buffer solutions must be well mixed and must meet tight specifications for pH, temperature, and other critical parameters when they are delivered to the process step. Because of this, strict control over the solutions is necessary. The consequences of poor mixing can be significant and range from reduced process performance to the production of out-of-specification product [36]. Newer technology, such as the in-line conditioning technology from GE Healthcare, builds on this concept and mitigates the drawbacks of a simple in-line dilution approach [34] (see Chapter 27).
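The concentrate-to-water flow split required for in-line dilution follows from a simple mass balance, as in the sketch below; the concentrate strength, target buffer strength, and delivery flow rate are hypothetical.

# In-line dilution mass balance for a buffer concentrate (hypothetical values).
concentrate_molarity_mM = 500.0   # strength of the buffer concentrate (a 10x stock)
target_molarity_mM = 50.0         # buffer strength required at the process step
delivery_flow_l_per_min = 20.0    # total diluted-buffer flow sent to the step

dilution_factor = concentrate_molarity_mM / target_molarity_mM
concentrate_flow = delivery_flow_l_per_min / dilution_factor
water_flow = delivery_flow_l_per_min - concentrate_flow

print(f"dilution factor: {dilution_factor:.0f}x")
print(f"concentrate flow: {concentrate_flow:.1f} L/min, water flow: {water_flow:.1f} L/min")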
Solids Handling (Solids Removal)
Centrifugation
Separation of solids in a monoclonal antibody purification process is limited to handling of biomass during the primary recovery of the antibody. Most industrial processes use disc stack centrifuges, as these apparatuses are scalable, operate continuously, and can handle a wide variety of feedstocks [37]. Typically, secondary clarification using a depth filter after centrifugation is required prior to further downstream processing. The efficiency of the centrifugation step depends on the solids volume fraction, the effective clarifying surface (V/D), and the acceleration factor (ω²r/g). Typically, acceleration factors of 1500 g are used for harvesting cells. The product of the latter two factors (ω²rV/gD) is called the sigma factor (Σ) and is used in scale-up calculations. The sigma factor represents the equivalent settling area of the centrifuge and is unique to each disc stack centrifuge and angular velocity. For continuous operation, the ratio between the flow rate through the centrifuge, Q, and the sigma factor (Q/Σ) should be kept constant during scale-up. The sigma factor can also be used to scale disc stack centrifuges from a laboratory bottle centrifuge, by replacing the flow rate in the above-mentioned ratio with the centrifuged volume divided by the centrifugation time [37]. However, keeping the ratio Q/Σ constant may still lead to inadequate operation at large scale due to hindered settling and the presence of sub-micron particles generated during the centrifugation process itself. These particles are formed from cells and cell debris upon exposure to high shear (see the discussion on the effect of mixing on bioreactor performance) and are removed from the centrifuge in the concentrate stream. In addition, shear damage can cause release of proteases that could affect the stability of the antibody.

Depth Filtration
Although the use of a centrifuge is fully accepted, its application is most suited to primary harvest activities with a high solids content or those that require the processing of large volumes. For pilot- to mid-scale operations (e.g., up to 2000 L), depth filtration has recently been utilized instead of centrifugation for the separation of cell debris and other solids in extracellular mammalian culture applications. Depth filters used in bioprocessing are typically composed of a fibrous bed of cellulose or polypropylene fibers along with a filter aid (e.g., diatomaceous earth) and a binder that is used to create flat sheets of filter medium. The filter aids provide a high surface area to the filter and are sometimes used by themselves in clarification applications [38]. An additional charge can be imparted to some depth filters, either from the binder polymer or from other charged polymers incorporated into the filter. Sometimes, a microfiltration membrane with an absolute pore size rating is integrated into the depth filter sheet as the bottom layer. Porous depth filters can retain particles in their tortuous flow channels to a level that size-based screening alone cannot achieve [39]. For process-scale applications, depth filters are often fabricated into cells consisting of two layers of filter separated from each other such that flow occurs from the outside into the space between the layers and is then collected. Multiple cells can be stacked into a housing in which pressure is used to drive flow through the assembly.
Depth filters are usually single-use devices, which enables a reduction in the extent of process validation required for their use in biopharmaceutical applications [39]. Scale-up of depth filtration is typically achieved by keeping the filtration flux, as defined by Eq. (32.2), constant:

PF = VF / (A0 × TF)    (32.2)
where VF is the volume of solution to be filtered (L), A0 is the filter membrane area (m2), and TF is the filtration time (h). One issue with the use of filtration for solids separation is that it can be prone to membrane blocking, which increases the pressure of the process. Given this issue, it is typical to have two stages of depth filters for the filtration of a mammalian cell broth, with each stage reducing in porosity. Scaling up at constant flux, and often keeping the filtration time constant, however, leads to a linear increase in the required filtration area as the process volume increases. At very large volumes, the
membrane area required for filtration becomes infeasible, and in this case a centrifuge may be employed as a precursor to the depth filter, as previously mentioned. An often-cited issue with depth filtration is the significant membrane flushing volume (typically with water for injection or purified water) required as a precursor to the filtration itself. This is to ensure that any contaminant particles existing within the membrane itself are flushed from the system. For certain membranes, this may require flushing with at least 100 L/m2 of water prior to filtration. In most cases, an additional buffer flush may be undertaken after water flushing to equilibrate the membrane, particularly for those that are charged. Sizing of a depth filtration step for the manufacturing scale should therefore account for these ancillary steps to ensure the facility can accommodate the water flushing needs of the system. More details on depth filtration can be found in Chapter 15.
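A rough sizing of a depth filtration step at constant flux, including the pre-use water flush, can be sketched as below. The flux, harvest volume, processing time, and the 100 L/m2 flush requirement are assumptions used only to show the arithmetic of Eq. (32.2).

# Depth filter sizing at constant flux, per Eq. (32.2): PF = VF / (A0 * TF).
harvest_volume_l = 2000.0        # VF, volume of cell culture broth to clarify (assumed)
target_flux_l_m2_h = 100.0       # PF, flux sustained by the filter train (assumed)
filtration_time_h = 3.0          # TF, time available for the clarification step
flush_volume_l_per_m2 = 100.0    # pre-use water flush requirement (assumed)

required_area_m2 = harvest_volume_l / (target_flux_l_m2_h * filtration_time_h)
flush_water_l = required_area_m2 * flush_volume_l_per_m2

print(f"required filter area: {required_area_m2:.1f} m2")
print(f"pre-use flush water: {flush_water_l:.0f} L")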
32.8.2 Membrane Filtration
Membrane filtration is one of the most frequently used unit operations in biopharmaceutical manufacturing. Filters are used for depth filtration, ultrafiltration and diafiltration applications, sterilization, gas filtration, and virus filtration. In a typical monoclonal antibody purification process (see Chapter 39), dead-end filtration accounts for more than 12% of the total purification process cost [40]. Hence, properly designed and optimized filtration steps can improve the economy of the manufacturing process. However, any optimized process will underperform if the filter performance changes with scale. Scaling up filtration steps from a laboratory-scale to a production-scale system poses several challenges related not only to the change in size, but also to changes in filter format (disc versus pleated) and mode of operation (in parallel or in series). The challenges include [41]: (i) limited access to representative material due to high production costs and/or limited production runs; (ii) surrogate fluids may not be representative of the actual process fluid; (iii) small-scale studies may not represent the conditions under which the filtration occurs at the manufacturing scale; and (iv) laboratory-scale filter elements used for testing may not be predictive of large-scale filter capacity. To overcome these challenges, so-called "safety factors" are used to ensure successful operation at scale. Safety factors can vary depending on the type of filtration step (see Chapters 14, 15, and 23). These factors will also be briefly discussed later in the text. From the scale-up perspective, filtration can be divided into NFF (dead-end) and tangential flow filtration (sometimes referred to as cross-flow filtration). Dead-end filtration includes bioburden reduction filtration, virus filtration, and depth filtration, while cross-flow filtration includes ultrafiltration/diafiltration and microfiltration applications (see Chapter 14).
Normal Flow Filtration
Dead-end filtration applications can be divided into flux-limited and capacity-limited cases [42]. Filtrate flux (Eq. 32.2) depends on membrane permeability (a function of pore size distribution, porosity, and thickness) and on the solution properties (e.g., viscosity, density, and temperature). Capacity, on the other hand, is related to the rate of fouling of the membrane. Fouling depends on solution composition as well as on process conditions. Fouling increases the pressure drop over the filter; thus, should the filtration process be operated at constant pressure, the flow rate will decrease as fouling increases, and as such, flux will decrease as process time increases. Design of a filtration step always starts with a decision over which type of membrane will be most effective for a given filtration task. If heuristic information is not available, screening of different filters is the first step in designing a filtration step. Typical experiments involve determination of flux, filter capacity, and step yield for a given feed stream. Scaling up a filter used in a flux-limited application is fairly simple, because the assumption that filter performance scales linearly with filtration area is typically correct, and the filter is sized based on the total volume to be processed and the processing time available for the filtration step (Eq. 32.2). Regardless of the method used to determine filter capacity, the scale-up is accomplished by assuming that between 50% and 80% of the filter capacity (Vmax) scales linearly with the filter area [42]. Based on this assumption, the minimum filtration area necessary to accomplish a given filtration task within the time specified, when operating at a given constant pressure, can be calculated. With the minimum filtration area known, the final size of the filtration unit is determined by applying a safety factor to account for feed and membrane variability. Typically, a safety factor of 1.5 is used, but larger safety factors can be applied if a more variable feed stream, such as harvested cell culture fluid, is used [42]. Because normal flow filters are usually available in finite-size cartridges, the final sizing of a filtration step must account for the available cartridge configuration, including the filter housing aspect. The filter housings typically used in manufacturing are designed to accept either single or multiple cartridges. These cartridges come in standard lengths of 10, 20, 30, and 40 in., and are generally slightly less than 3 in. in diameter. Among
many types of housings available, the most common is the T-style, which is designed for installation into fixed piping systems, and is well suited for installation in filtration skids.
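Putting these rules together, a capacity-limited normal flow filter might be sized as in the sketch below. The batch volume, measured Vmax, usable fraction of capacity, 1.5 safety factor, and per-cartridge area are all assumptions chosen for illustration; actual values would come from small-scale studies and vendor data.

import math

# Sizing a capacity-limited NFF step from small-scale Vmax data (hypothetical values).
batch_volume_l = 1000.0          # volume to be filtered
vmax_l_per_m2 = 300.0            # filter capacity measured at small scale
usable_capacity_fraction = 0.7   # use 50%-80% of Vmax for scale-up (0.7 assumed here)
safety_factor = 1.5              # typical allowance for feed and membrane variability
area_per_cartridge_m2 = 0.7      # assumed effective area of one 10-in. cartridge

min_area_m2 = batch_volume_l / (vmax_l_per_m2 * usable_capacity_fraction)
installed_area_m2 = min_area_m2 * safety_factor
cartridges = math.ceil(installed_area_m2 / area_per_cartridge_m2)

print(f"minimum area: {min_area_m2:.1f} m2; with safety factor: {installed_area_m2:.1f} m2")
print(f"install at least {cartridges} cartridge(s) of {area_per_cartridge_m2} m2 each")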
Virus Filtration
When discussing scale-up of NFF operations, most attention should be paid to virus filtration, as it is one of the most important and costly operations during purification of a monoclonal antibody. While, in principle, scale-up of virus filters is done using the same basic strategy as for sterile filtration [42], it is advisable to extend the capacity study beyond the expected manufacturing scale by a factor of 1.5 to 2 to deal with feed stream and membrane lot-to-lot variability. Furthermore, because of the relatively low fluxes achieved with current virus filters and steadily increasing batch sizes, it can be expected that several virus filter elements operating in parallel will be necessary [43]. Generally, both low-pressure and high-pressure virus filtration technologies exist. Although operating at increased pressures improves filtration flux, line pressures should be minimized due to safety concerns [44]. Further, the use of an adsorptive pre-filter to bind, and thus remove, smaller charged impurities that cannot be removed by a 0.2 or 0.1 μm pre-filter could reduce the minimum filter area required by up to ten-fold [45,46]. Care must be taken to avoid yield losses due to product adsorption on the charged filter. The scale-up of virus filtration must also address integrity testing of the filter before and after processing. Too many cartridges in a single housing can create a challenge from the cartridge integrity test perspective. However, a recent analysis showed that, from the integrity test perspective, up to 20 filters could be placed in the same housing if the diffusion integrity test is used [43], which should circumvent filter integrity issues for most, if not all, virus filtration steps.
Tangential Flow Filtration
Typical tangential flow filtration (TFF) applications include ultrafiltration/diafiltration (UF/DF) processes and microfiltration steps. In TFF, the product stream flows parallel to the membrane surface, that is, in the direction perpendicular to the filtrate flow. The sweeping action of the product stream flow reduces concentration polarization effects, such as build-up of retained solutes on the membrane surface and/or effects related to osmotic pressure. The sweeping action of the product stream can be enhanced by introducing secondary flow involving Taylor or Dean vortices [42]. The presence of the cross-flow in TFF units provides a challenge from a scale-up perspective, as the hydrodynamic conditions, and thus all physical phenomena related to these, must be preserved at different scales in order to maintain the same flux characteristics. Despite the same basic modus operandi for UF/DF and microfiltration steps, these two applications of cross-flow filtration are rather different. For one, microfiltration is used early in the process train, where it can be used for initial harvest of proteins from mammalian, yeast, or bacterial cell cultures, whereas ultrafiltration is used for protein concentration and buffer exchange. Thus, the two applications require membranes with different pore sizes and different cartridge designs to accommodate feed streams with different characteristics. Because the feed streams are different, different phenomena need to be considered when optimizing the two applications. On the other hand, the scale-up process is fairly similar, if not the same. As in the case of dead-end/normal flow filtration, the size of a filtration system will depend on the filter capacity, defined as the volume of feed that can be processed per unit membrane area before a new membrane needs to be used or the old membrane needs to be regenerated. Depending on whether the filtration is operated at constant flux or constant pressure conditions, this volume will be linked to the moment at which the pressure drop in the system reaches a pre-set maximum value or the permeate flow rate drops to an unacceptable level, respectively. For the latter, as a rule of thumb, a permeate flux at approximately 80% of the maximum mass flux should be selected for stable process operation [45]. Given that filtration is achieved through recirculation of the retentate (feed) solution, the permeate flux calculation (Eq. 32.2) used for NFF applications is slightly refined and given as:

PF,TFF = (VF,ini − VF,fin) / (A0 × TF)    (32.3)
where VF,ini is the initial volume of solution to be filtered (L), VF,fin is the final volume of solution at end of process (L), CF is the desired concentration factor, A0 is the filter membrane area (m2), TF is the total filtration time for concentration (h). Scale-up of a TFF step can be generally considered simple because membrane cartridges (cassettes or hollow-fibres) are linearly scalable. This linear scalability is achieved by the geometrical similarity of the membrane cartridges at different scales. Geometrical similarity ensures that the scale up, and scale down for process validation purposes, can be based on keeping volume processed per area of membrane constant at different scales without changing process performance. The geometrical similarity relies on two factors: (1) keeping channel length constant and (2) keeping the same hydrodynamic regime within the channels. In combination, these two factors guarantee that trans-membrane pressure, local flux, pressure drop across the channel and protein concentration at the membrane wall are as close as possible at all scales of operation [47].
With the current cassette design, the flow path length is kept constant and the desired membrane area is achieved by increasing the total number of channels per cassette. Linear scaling of hollow fiber cartridges is also achievable, provided the length of the hollow fibers is kept constant. In the case of hollow fiber cartridges, equal flow distribution and manifold design are easily achievable, as the filtrate pressure losses are often insignificant, which results in reproducible fluid dynamic conditions within cartridges of different scales [47]. Sizing of a UF/DF unit to process a specific volume within a given processing time, to reach a desired concentration factor and a final composition, is performed based on average permeate fluxes measured at process conditions. With the fluxes known, the minimum filter area for a UF/DF step for different processing times can be quickly estimated following the procedure outlined in Fig. 32.7. For a given average flux, the ratio between the volume to be processed and the average flux is calculated. The intersection between a desired process time (e.g., the ultrafiltration time), tUF, and the line representing the calculated ratio is found. The ordinate of the intersection point provides the minimum membrane area necessary to process the volume in the desired time. Given the increasing protein concentration in the ultrafiltration step and its effect on the filtration flux, this stage is used for sizing the filter membrane. The permeate flux and filtration area, once determined, are then used to estimate the ultrafiltration time, tUF. With reference to Fig. 32.7, a reverse procedure is used to find the diafiltration time, tDF (i.e., the intersection between the line representing the membrane area and the line representing the ratio between the total volume of diafiltration buffer and the average flux is found). The abscissa of the intersection point gives the duration of the diafiltration operation, tDF. The average fluxes for the UF and DF steps are determined by measuring the volume of filtrate collected and the time needed to reach the chosen concentration factor, or to permeate the desired number of dia-volumes. It should be noted that when highly viscous streams are to be processed, the linear scale-up concept can be a challenge, as the typical cassettes used in biomanufacturing were not designed for this type of feed stream. As discussed by Daniels et al. [48], the combination of high-viscosity fluids and high flow rates exposes any minor shortcomings in cassette construction, which may lead to unpredictable effects at a larger scale.
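The graphical procedure of Fig. 32.7 amounts to dividing the volume to be permeated by the average flux and the available time. The sketch below applies it to a hypothetical UF/DF step, sizing the membrane on the ultrafiltration (concentration) stage and then estimating the diafiltration time with that fixed area; all volumes, fluxes, and times are assumed values.

# Minimum UF/DF membrane area from average permeate fluxes (hypothetical values).
feed_volume_l = 500.0         # volume entering the UF stage
concentration_factor = 10.0   # target volumetric concentration factor (CF)
diavolumes = 7.0              # number of diafiltration volumes
avg_flux_uf_l_m2_h = 40.0     # average permeate flux during concentration
avg_flux_df_l_m2_h = 50.0     # average permeate flux during diafiltration
uf_time_h = 3.0               # time allocated to the concentration stage

# Permeate volume removed during concentration (numerator of Eq. 32.3).
uf_permeate_l = feed_volume_l - feed_volume_l / concentration_factor
membrane_area_m2 = uf_permeate_l / (avg_flux_uf_l_m2_h * uf_time_h)

# Diafiltration buffer volume equals the number of diavolumes times the retentate volume.
df_permeate_l = diavolumes * (feed_volume_l / concentration_factor)
df_time_h = df_permeate_l / (avg_flux_df_l_m2_h * membrane_area_m2)

print(f"membrane area sized on the UF stage: {membrane_area_m2:.1f} m2")
print(f"estimated diafiltration time: {df_time_h:.1f} h")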
Systems and Filter Cartridges
Different system configurations are available for various filtration tasks. The systems vary in scale and degree of automation. The minimum configuration should include holders for the filter membranes and modules, a feed pump, pressure and UV sensors for the feed and permeate lines, and a recirculation vessel. In the case of microfiltration, a permeate pump may also be considered. Current systems use components that enable processing at flow rates from 50 mL/min to 1400 L/min, using pipes with diameters from 6 mm to 152 mm, respectively [42]. Cartridge filters and capsules for bioprocess filtration are available with approximately 0.05 to 36 m2 of effective filter area in one filter device.
FIG. 32.7 Effect of processing time on the minimum membrane area for a UF/DF process, for different levels of the ratio of permeate volume to average flux through the membrane: the red, green, and blue curves represent the reference Vproc/Javg ratio, half the reference value (X/2), and 32 times the reference value (32X), respectively. The other curves represent the data as per the labels to the right. (The graph was constructed using the definition of the average flux that relates membrane area, process time, and permeate volume.) Work from GE Healthcare, reproduced with permission.
For UF/DF applications, different types of membrane modules are available. Most common among these are hollow fiber and spiral wound cartridges, and flat sheet cassettes. Overall filtration time should be a main consideration for the UF/DF step. Attaining high concentration factors or washing with many dia-volumes requires significant recirculation time, which could expose the protein solution to shear and, in some cases, increase the temperature of the solution. If the product solution is deemed temperature sensitive, a jacketed recirculation vessel should be utilized. One of the important aspects of UF/DF or microfiltration system design should be minimization of yield losses and reduction of dead volume, which in turn reduces wash volumes. From a large-scale process perspective, this is achieved through the use of a final flush with ~10 L/m2 of flush buffer to recover any protein product retained within the pores of the membrane [45]. The quantity of flush buffer used is critical when UF/DF is used for the final formulation of bulk drug substance at the very end of a process. Here, the final concentration of the product is of importance, and therefore, the volume of flush/recovery buffer utilized must be taken into account. However, the amount of protein recovered can vary and is not easily predictable. In this case, the designer may consider a second concentration step after flushing to ensure that accurate concentration levels are achieved, or may over-concentrate the solution in the first place and use the flush to achieve the desired concentration. Another important aspect of scaling up an ultrafiltration step is related to the system limitations with respect to the maximum volume concentration factor and the maximum number of dia-volumes beyond which no change in composition can be guaranteed. Thus, it is not recommended to design processes that require volumetric concentration factors greater than 50 for the concentration stage or more than 14 dia-volumes for the diafiltration stage (see discussions in Chapter 23). As a final word in this section, it should be emphasised that determination of the optimum membrane and process conditions, including the combination of pre-filter and final filter to increase the filter step capacity, relies heavily on empirical testing, and although the filtration data obtained in small-scale experiments can be used for scale-up calculations, it is recommended that these data be used only as an indication of filterability. Pilot-scale studies should always be conducted under actual process conditions, preferably using filters of the same design type as those intended for the final process scale, to ensure successful scale-up.
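The over-concentration strategy mentioned above is a straightforward mass balance. The sketch below uses hypothetical values and, for illustration only, assumes that the held-up product is fully recovered by the flush (the chapter notes that recovery is in fact variable); it computes how far the retentate must be over-concentrated so that the ~10 L/m2 recovery flush dilutes the pool back to the target bulk concentration.

```python
def overconcentration_target(mass_g, target_conc_g_per_l, area_m2,
                             flush_l_per_m2=10.0):
    """Retentate concentration (g/L) to aim for before the recovery flush,
    so that adding the flush volume dilutes the pool back to the target.
    Assumes (for illustration) that all product is recovered in the flush."""
    v_final = mass_g / target_conc_g_per_l        # L, final bulk volume
    v_flush = flush_l_per_m2 * area_m2            # L, recovery flush volume
    v_retentate = v_final - v_flush               # L, pre-flush retentate volume
    if v_retentate <= 0:
        raise ValueError("Flush volume exceeds the final bulk volume")
    return mass_g / v_retentate


# Hypothetical example: 5 kg of antibody, 50 g/L target, 2.5 m2 of membrane.
print(f"{overconcentration_target(5000, 50, 2.5):.1f} g/L before flush")
```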
32.8.3 Chromatography
With mass of the antibody and its concentration known at any stage in the process, sizing of a chromatography column can be performed based on both chromatographic and non-chromatographic factors, as well as by considering all the facility and product quality related constraints. Typically, the column size is chosen following fairly simple guidelines, based on the following set of equations:

CV = Mass_batch / (Load_cycle × N_cycles)   (32.4a)

N_cycles = Time_Purif / Time_cycle   (32.4b)
where CV is the column/resin volume [L], Load_cycle is the load per cycle [kg/L resin], including a safety factor to account for feed and resin variability, Mass_batch is the mass delivered from a bioreactor [kg], N_cycles is the number of cycles per batch, Time_Purif is the total allocated purification time for a single batch [h], and Time_cycle is the time for a single chromatography cycle [h]. Eq. (32.4a) yields the volume of packed resin, CV, necessary to purify a given mass within the allocated time. Preferably, the number of cycles should be an integer, as this means that each cycle receives the same load. CV can be used to normalize the volumes of different solutions used in a chromatography step (e.g., volumes required for buffers or product loading). Use of CV for describing a chromatography method is scale invariant (i.e., the same method can be used at different scales). For instance, an instruction to use 5 CV of buffer applies whatever the actual CV, whether in a development laboratory or in a manufacturing facility. Of course, the size must be chosen in relation to the chromatographic protocol/method optimized at small scale and the general scaling principles described in Chapter 16. As in the case of filtration, safety factors are used to account for process variability, but these are already established during the process development phase of process design.
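Eqs. (32.4a) and (32.4b) translate directly into a sizing calculation. The sketch below uses hypothetical inputs and rounds the number of cycles down to the nearest whole number so that the batch still fits within the allocated time (the chapter recommends an integer number of cycles).

```python
import math


def size_column(mass_batch_kg, load_per_cycle_kg_per_l,
                time_purif_h, time_cycle_h):
    """Resin volume (L) and cycles per batch, per Eqs. (32.4a) and (32.4b)."""
    n_cycles = math.floor(time_purif_h / time_cycle_h)   # Eq. (32.4b), whole cycles
    if n_cycles < 1:
        raise ValueError("Allocated time is shorter than one cycle")
    cv_l = mass_batch_kg / (load_per_cycle_kg_per_l * n_cycles)  # Eq. (32.4a)
    return cv_l, n_cycles


# Hypothetical capture step: 10 kg batch, 0.035 kg/L load (safety factor
# included), 20 h allocated purification time, 4 h per chromatography cycle.
cv, n = size_column(10, 0.035, 20, 4)
print(f"{n} cycles of a {cv:.0f} L column")
```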
Although the scale-up guidelines are relatively simple, there are some factors of a non-chromatographic nature that need to be considered to ensure a successful scale-up. The simplest rules for scaling up a chromatography process are based on a direct (linear) scale-up. These rules require that bed height, residence time, sample concentration, and the ratio of gradient volume to resin volume remain constant at all scales. This implies that sample load, volumetric flow rate, and column cross-sectional area increase by the same scaling factor. The direct scale-up principle is depicted in Fig. 32.8 and the guidelines are given in Table 32.8. One drawback of the direct scale-up approach is related to the hardware: chromatography columns are available in discrete dimensions, at least from the column diameter perspective, and this fact must be accounted for in all scale-up calculations. As a result, columns are typically slightly oversized, or multiple columns are used. Per the rules of linear scale-up (Table 32.8), increasing the column diameter so that the column cross-sectional area increases in proportion to the process volume (keeping the bed height constant) should be enough for a successful scale-up. In practice, however, increasing the column diameter beyond approximately 30 cm reduces the wall support available to the resin. Depending on the resin type, this effect may vary in magnitude, with resins introduced in the early years of bioprocessing usually being mechanically somewhat less stable. This may, in turn, result in a need to deviate from the highest possible flow for a chromatographic step to minimize bed compression and the chromatographic effects associated with it, most notably increased back pressure. As discussed in Chapter 16, the effect of wall support on the compressibility of a packed bed has been investigated in great detail, both by academic groups [50–52] and by industry [53,54]. When performing optimization of a chromatography step, a constraint on the maximum operating velocity after scale-up needs to be imposed to account for a possible increase in pressure drop over the packed bed in the larger column as a result of these differences between scales.
FIG. 32.8 Principle of linear scale-up (SF stands for scale-up factor). Artwork courtesy of GE Healthcare, reproduced with permission.
TABLE 32.8 Guidelines for Linear Scale-Up of Chromatography Purifications [49]
Maintain: bed height; eluent velocity; sample concentration; gradient slope/bed volume.
Increase: column diameter to reach the required column volume; volumetric flow rate in proportion to column volume; sample volume in proportion to column volume; gradient volume in proportion to column volume.
Check: reduction in wall support (increased pressure drop); sample distribution; piping and system dead volumes.
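Applied literally, the rules in Table 32.8 reduce to multiplying everything except the bed height by a single scale factor. A minimal sketch with hypothetical lab-scale values (the function name and the numbers are illustrative, not taken from the chapter):

```python
import math


def linear_scale_up(d_small_cm, flow_small_l_h, load_small_g, scale_factor):
    """Linear scale-up per Table 32.8: bed height is kept constant, the
    diameter is chosen so the cross-sectional area (and hence column volume)
    grows by scale_factor, and flow rate and load grow by the same factor."""
    d_large_cm = d_small_cm * math.sqrt(scale_factor)
    return d_large_cm, flow_small_l_h * scale_factor, load_small_g * scale_factor


# Hypothetical example: 1.0 cm diameter lab column run at 0.06 L/h (1 mL/min)
# with a 0.5 g load, scaled up 400-fold.
d, q, m = linear_scale_up(1.0, 0.06, 0.5, 400)
print(f"Diameter {d:.0f} cm, flow {q:.0f} L/h, load {m:.0f} g")
```

The computed diameter would then be rounded up to the nearest commercially available column size, which is one reason why production columns tend to be slightly oversized.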
Therefore, non-chromatographic effects that need to be considered when performing scale-up include changes in the size of the monitoring cell and different lengths and diameters of outlet pipes or tubing, each of which can contribute to greater dilution (zone spreading) in the system. In addition, time delays from valve switching and extra system volumes, which would otherwise result in an incorrectly timed start and end of pool/fraction collection, must be taken into account. Additional factors that will affect process performance after scale-up include changes in sample concentration and composition due to variation in cell culture and buffer quality. The latter is especially true if the buffer composition is complex and based on additives. However, these latter factors can typically be accounted for by decreasing the sample size by a safety factor, as previously mentioned. Finally, a successful and robust scale-up requires that flow rate, pressure, tank levels, feed concentration, conductivity, and pH be monitored continuously. Although scale-up is typically done by keeping the bed height and linear velocity constant, other scale-up criteria also exist. One of the most successful, especially in the case of monoclonal antibody purification using Protein A affinity chromatography, is the criterion based on the concept of constant residence time, where the residence time is defined as the ratio of column height to liquid velocity or, equivalently, column volume to volumetric flow rate. In fact, with the constant residence time criterion, it is possible to scale up a chromatographic process at constant productivity while changing both the bed height and the column diameter [55]. Therefore, when scaling up with this criterion, more flexibility exists in deciding on the right hardware dimensions. This principle is illustrated in Figs. 32.9 and 32.10, which show that the same residence time can be obtained either for different combinations of column diameters and bed heights at a constant volumetric flow rate (Fig. 32.9), or for different flow rates and bed heights at a constant column diameter (Fig. 32.10). The constant residence time approach is often used during initial development of chromatography protocols, which is done with small columns to save valuable sample. It could be argued that scale-up based on constant residence time is the most general scaling concept in chromatography, providing the separation does not depend on hydrodynamic conditions in the proximity of the surface of a chromatography particle. In the majority of cases, the constant residence time scale-up criterion will hold for heavily loaded columns and for gradient and isocratic elution [56]. In the case of a typical monoclonal antibody purification process, the constant residence time criterion is applicable, especially for the bind-elute steps, such as Protein A chromatography, cation exchange, or HIC. However, in the case of convection-governed adsorption (where the slowest mass transfer step depends on the local velocity, as discussed in Chapter 16), the separation may not be the same if the column height is lowered, even though data for identical residence times are compared. Clearance of critical impurities during a chromatography step operated in flow-through mode may therefore require performing both the optimization and the scale-up applying the constant bed height and the constant velocity criterion
FIG. 32.9 Effect of column bed height on residence time for different column diameters (Dcol). The grey rectangle describes an operating window defined by minimum (Hmin) and maximum (Hmax) bed height for a given column and minimum (tmin) and maximum (tmax) residence time, defined by rate of adsorption and stability of a product, respectively. (The straight lines represent data for constant column diameters and the arrow indicates the direction of decreasing column diameters). Figure courtesy of GE Healthcare.
FIG. 32.10 Effect of column bed height on residence time for different flow rates. Graph annotations as per Fig. 32.9. (The straight lines represent data for constant flow rate and the arrow indicates the direction of increasing flow rates). Figure courtesy of GE Healthcare.
(i.e., the linear scale-up). This is especially true when very large impurities, such as DNA, viruses, and certain host cell proteins (HCPs), are present in the product stream. Because these large molecules cannot access the intraparticle pores of commonly used chromatography resins, the overall rate of their adsorption onto the surface of these resins will depend largely on the local liquid velocity in the proximity of the surface. Therefore, in separations where large molecules are to be adsorbed, to keep the same separation performance at different scales, the residence time should be kept constant and the liquid velocity should be at least the same as that used when developing the process. In such cases, the best approach is to perform the scale-up based on the constant bed height criterion. Sometimes, scale-up based on residence time is also referred to as scale-up on a volume basis, where the volumetric flow is defined in multiples of column volumes (CV/h). With this approach, a successful scale-up by a factor of 274 was realized, as long as the gradient was appropriately characterized and accounted for at each scale [55]. As discussed, the criterion of constant residence time is valid for gradient elution, as long as the gradient strength is kept constant in scaled form [57]. In other words, the number of theoretical plates necessary to maintain resolution needs to be kept constant for a given gradient slope. Another scaling principle for linear gradient elution states that resolution is kept constant provided that the column length increases in proportion to the normalized gradient slope times the height equivalent to a theoretical plate for gradient elution [57]. Following this approach, a successful 500-fold scale-up was reported [58]. Recently, the same approach was used to design and optimize the separation of aggregates from monomer in an antibody purification process [59]. Table 32.9 summarizes the scale-up rules for the chromatography steps constituting a typical antibody purification process. The rules are given as recommendations based on the common understanding of the challenges in mAb purification and on the reported examples. These rules should be applied to each step in a chromatographic cycle (i.e., load, wash, and elution). Wash and elution steps are each characterized by their own characteristic time constant, and to reach the same efficiency of each step, the ratio of residence time to these characteristic times must not change during scale-up. For instance, it is fairly common to scale up wash and elution steps by keeping the amount of buffer used proportional to the column volume and operating these steps at the maximum linear velocity. This ignores the time constant aspect, and such an approach may result in lower purity and lower process yield if an excess of the respective buffer is not used once the velocity is increased. No rules for scale-up of cleaning-in-place (CIP) are provided, as the cleaning step should typically be based on the contact time during which the resin is exposed to the cleaning solution, making the scale-up of this step straightforward. More details on the cleaning of chromatography resins can be found in Chapter 33.
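For the constant residence time criterion, a brief sketch (hypothetical values only) of how the large-scale diameter and flow rate follow once the residence time from development, a target resin volume, and a chosen bed height are fixed; the computed linear velocity is what would then be checked against the pressure-drop constraints discussed earlier.

```python
import math


def scale_by_residence_time(res_time_min, cv_large_l, bed_height_large_cm):
    """Large-scale diameter (cm) and flow (L/min) that keep the residence time
    (column volume divided by volumetric flow) equal to the small-scale value."""
    area_cm2 = (cv_large_l * 1000.0) / bed_height_large_cm   # cross-sectional area
    diameter_cm = 2.0 * math.sqrt(area_cm2 / math.pi)
    flow_l_min = cv_large_l / res_time_min                   # one CV per residence time
    velocity_cm_h = (flow_l_min * 1000.0 * 60.0) / area_cm2  # check vs pressure limit
    return diameter_cm, flow_l_min, velocity_cm_h


# Hypothetical example: 6 min residence time from development, 60 L of resin
# packed to a 15 cm bed height.
d, q, u = scale_by_residence_time(6.0, 60.0, 15.0)
print(f"Diameter {d:.0f} cm, flow {q:.0f} L/min, velocity {u:.0f} cm/h")
```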
For a polishing step based on flow-through mode, where the remaining impurities are adsorbed on the column and the antibody flows through, the linear scale-up criterion can be recommended, as binding capacities for large-molecule impurities can depend on the local velocity in the proximity of the chromatography matrix surface and, therefore, varying the velocity will affect column performance. In order to determine whether this is the case for a specific process, a simple experiment can be performed during the optimization phase in which the separation is compared at two bed heights at the same residence time. If the results obtained differ, the linear scale-up criterion should be used. Of course, as discussed earlier, all the steps can be scaled following the principle behind the linear scale-up criterion.
TABLE 32.9 Recommended Scale-Up Criteria for Typical Chromatography Steps Used in Antibody Purification
Protein A: residence time criterion. All steps in a chromatography cycle should be scaled based on this concept, including load, wash, and elution.
Cation exchange: residence time criterion.
HIC: residence time criterion.
Anion exchange (FT): linear criterion; residence time if a large impurity is to be removed.
Anion exchange (B/E): residence time criterion (bind/elute mode).
FT, flow through; B/E, bind/elute.
32.8.4 Chromatography Systems
When scaling up a chromatography step, not only the column but also the chromatography system needs to be considered. Most chromatography column manufacturers offer chromatography systems, either in a standard or a custom-made design. Typically, chromatography systems are chosen based on the flow and pressure specifications required to match the desired production rates. The systems are available in stainless steel, hard plastic, or flexible single-use configurations, with varying degrees of automation as well as expansion capabilities. Another aspect of scale-up related to hardware design is not immediately obvious, but may cause very practical and costly issues later in the lifetime of a production process. Larger companies with the corresponding experience, and also engineering firms involved in facility design projects, tend to develop their own engineering solutions for chromatography and/or filtration skids. Although this may, at times, look like an appealing option to stay within a tight project budget (these solutions carry neither the cost of a vendor R&D project nor much in the way of selling, general, and administration costs), such solutions should be reviewed very critically from a medium- to long-term cost-of-use perspective. For instance, seamless maintenance and possible production expansions, including transfer to other production sites, whether owned or CMO owned, are all affected by nonstandardized engineering solutions. The simplest solution to these problems is to work with vendors on a custom-designed chromatography skid. Such skids are designed based on customer specifications but are made of standard and proven components tested extensively in the system vendor's R&D program. Thus, scale-up decisions on hardware have certain managerial, long-term aspects to them as well. However, regardless of the system design, an important route to achieving a long lifetime of a resin packed into a column, and thus minimizing the need for column repacking, is to make sure that nothing that disturbs performance over time accumulates on the packed bed or the column parts. This can be achieved by ensuring that the feed applied to the column is as clean as possible, for instance by applying column pre-filters, and/or by performing efficient and regular cleaning-in-place steps (Chapter 33). These preventive maintenance measures (the use of pre-filters and cleaning-in-place) should be developed to prevent column fouling early in the process development phase, almost concurrently with the development of cell culture conditions, by systematically looking for possible solutions based either on heuristic information or on newly observed phenomena.
32.9 PROCESS MODELING AND OPTIMIZATION
Within the biopharmaceutical industry, stronger competition among manufacturers and detailed government regulations have put increasing pressure on lowering development and production costs. Methods and tools that can be applied to reduce costs associated with the whole development process, from discovery to R&D, process development, and facility design, are covered in other parts of the book. The main focus of this section is an overview of the modeling of manufacturing processes.
As the industry has matured and competition has increased, more and more focus has been put on cost and manufacturability (e.g., scalable processes) [60]. Although a common opinion is that manufacturing costs today account for a considerable part of the total revenue [61], alternative points of view have been proposed [62] (see Chapter 55). One of the most basic questions in process design modeling and simulation is: what resources are needed to produce the amount of product that is to be produced? A simple numerical illustration of this question is given after this paragraph. In addition, there are other questions to be considered when evaluating different process scenarios, including:
● What is the cost for a new facility or an existing one, and what are the depreciation costs?
● How long does it take to produce one batch, how many batches need to be run, and what is the lead time between batches?
● What is the cost of goods (COG)?
● Where are the bottlenecks in the process, and are there scaling issues?
● What are the environmental impact and the energy balances?
● How should the process be scheduled?
● How do we handle variability in, for example, product concentration and fermentation time?
Answers to these questions are very often interrelated and require complex analysis, often described as process modeling. The task of process modeling, and of calculating production costs, is not trivial, especially because many factors are not directly related to the process itself [63,64]. At the initial stages, not all relevant information might be available and, therefore, the accuracy of process modeling increases with process and product maturity. Consequently, process options initially considered should not be discarded if the differences between them are within 30%–40%, as in the end the less attractive process might still prove to be the most desirable from the final operational scale perspective. The stages involved in process design, together with the general modeling needs connected to these phases, are shown in Fig. 32.11 [65]. At an early stage (the process development stage), modeling tools are primarily used for looking at different production scenarios from an economic, scheduling, and environmental point of view [65]. Steps with a low ratio of yield to cost are easily identified and can be exchanged already at this early stage. Further down the development process, in the facility design phase, modeling and simulations are used for technology transfer, process fitting, and scheduling. At this stage, equipment sizing and cycling patterns are defined, and the needs for supporting utilities and systems, such as purified water, steam, and electricity, are evaluated. In the manufacturing phase, other factors become predominant, and simulation tools are used for continuous process optimization and debottlenecking [65]. A similar view on the use of modeling tools at different stages of process development is described by Sinclair and Monge [66]. For production planning and/or scheduling of a multiproduct plant, modeling is an important day-to-day planning tool. At manufacturing scale, it is important to follow constraints in resources, such as equipment, labor, raw materials, and utilities, and also to deal with unexpected variations in available resources, potential process failures, and scheduled maintenance operations. In the following, the tasks of process modeling and scheduling simulations are described in more detail, and some software packages for process modeling are briefly discussed.
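As the minimal illustration promised above, the sketch below (all figures hypothetical, not taken from the chapter) estimates how many batches per year are needed to meet an annual demand, given the harvest titer, working bioreactor volume, and overall downstream yield.

```python
import math


def batches_needed(annual_demand_kg, titer_g_per_l, bioreactor_vol_l,
                   overall_yield):
    """Number of batches per year to meet demand, given the harvest titer,
    working bioreactor volume, and overall downstream yield (fractional)."""
    kg_per_batch = titer_g_per_l * bioreactor_vol_l * overall_yield / 1000.0
    return math.ceil(annual_demand_kg / kg_per_batch), kg_per_batch


# Hypothetical example: 100 kg/year demand, 4 g/L titer, 2000 L bioreactor,
# 70% overall downstream yield.
n, per_batch = batches_needed(100, 4.0, 2000, 0.70)
print(f"{n} batches per year at {per_batch:.1f} kg purified product per batch")
```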
32.9.1 Modeling of Biotech Processes
There are two basic modes of process design that rely either on static or on dynamic models [64,67]. Most often, static models are spreadsheet-based and are particularly useful early in a project, when sizing of equipment, duration of process steps, and ballpark figures for the process cost are considered to be "good enough."
FIG. 32.11 Different phases in process development and the general need for modeling. From D. Petrides, et al., Biopharmaceutical process optimization with simulation and scheduling tools. Bioengineering 1 (2014) 154–187 with permission.
The dynamic models are particularly useful for modeling dynamic workflows and the logistics of operations [64]. For instance, activities competing for resources due to unforeseen events could be modeled using dynamic tools. The dynamic models are very often based on discrete-event modeling approaches and are fairly complex to build [67]. On the other hand, these models provide much more accurate process scheduling and, therefore, yield a better estimate of facility throughput and, hence, an even better cost estimate, especially when uncertainty analyses are performed [64]. Although it has been suggested that static (spreadsheet-based) models are best suited for process scale-up and economy simulations, and that dynamic models are better suited for capturing logistics and manufacturing variability [68], the dynamic models are without a doubt more versatile, as they can easily be converted to static models by assuming uninterrupted operation. In other words, the dynamic and static models could be, and are, linked together, as has been shown in several cases [69–71]. See Table 32.10 for a comparison between the static and dynamic approaches to modeling. As seen from the table, the main differences between the models are linked to the type of input data and the type of output results. The static models require average values of the input parameters and deliver results that represent aggregated data for a longer period of time. However, because biotech processes are, by nature, uncertain when it comes to product titers, process-step yields, and rates of failure [72–74], these uncertainties should be taken into account when the processes are modeled. A simplistic method of conducting such a sensitivity analysis (i.e., adjusting the process simulation outcome based on variability), consisting of changing each variable over a given range, gives the results for that range but does not take into account the frequency with which the changes occur. The methods of risk assessment and Monte Carlo simulation take this variability into account. Risk assessment means that all input variables are weighted according to the expected outcome, and the outputs are expected average values that account for possible variability. The probability distribution used for each variable is normally based on historical data, as well as on more subjective input from experts [64]. Monte Carlo simulations have become popular, partly due to the spreadsheet add-ons that are available, for example Crystal Ball (Oracle, Redwood Shores, CA, USA) and @RISK (Palisade Corporation, Newfields, NY, USA). One drawback with self-constructed spreadsheet-based tools used for static modeling is that they soon become rather complicated, and only the person who made them can use them [66,75]. On the other hand, these tools provide control over the type of calculations performed, the calculation methods used, and the assumptions made. Most published work on modeling and optimization of biotech processes deals with manufacturing of mAbs, with a few exceptions. Table 32.11 lists a few of the reported biotechnology process modeling studies. Some of these represent an interesting combination of standard process modeling work and other computational methods. For instance, the case has been made for rational design and optimization of a downstream process for virus particles, partly based on a platform approach [85]. In this approach, mathematical modeling and Design of Experiments (DoE) are crucial components.
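The Monte Carlo approach mentioned above can also be prototyped in a few lines without commercial add-ins. The distributions in the sketch below are purely illustrative assumptions (the titer and step-yield means and standard deviations are not taken from the chapter); the output is the kind of spread in purified mass per batch that such a simulation is used to estimate.

```python
import random
import statistics


def simulate_batch_output(n_trials=10_000, seed=1):
    """Monte Carlo estimate of purified product mass per batch (kg) under
    assumed, illustrative variability in titer and step yields."""
    random.seed(seed)
    results = []
    for _ in range(n_trials):
        titer = random.gauss(4.0, 0.5)            # g/L, harvest titer
        volume = 2000.0                           # L, working volume (fixed)
        yields = [random.gauss(0.95, 0.02),       # harvest/clarification
                  random.gauss(0.90, 0.03),       # capture
                  random.gauss(0.92, 0.03)]       # polishing + UF/DF
        mass_kg = titer * volume / 1000.0
        for y in yields:
            mass_kg *= min(max(y, 0.0), 1.0)      # clamp to a physical yield
        results.append(mass_kg)
    return results


out = simulate_batch_output()
print(f"mean {statistics.mean(out):.2f} kg, "
      f"5th percentile {sorted(out)[len(out) // 20]:.2f} kg")
```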
The conceptual work done prior to building, expanding, or retrofitting a process plant is called process design, and it consists of two main activities: process synthesis and process analysis. Process synthesis is the selection and arrangement of a set of unit operations capable of producing the desired product at an acceptable cost and quality, and process analysis is the evaluation and comparison of different process synthesis solutions. Several commercial software tools for process design, process economy, and process scheduling simulations are available. A few examples of these tools have already been given in Table 32.11. In what follows, brief summaries of selected software packages are given.
TABLE 32.10 Pros and Cons of Static and Dynamic Models
Simplicity. Static modeling: easier to build and use, depending on detail level. Dynamic modeling: challenging, requires a skilled user to build the model.
Software platform. Static modeling: often a custom application built in Excel or Access. Dynamic modeling: off-the-shelf systems (ProModel, Extend, Arena, others).
Required inputs. Static modeling: averages for key parameters. Dynamic modeling: averages, variability, and distribution of data for key parameters.
Outputs. Static modeling: equipment, labor, and materials utilization, aggregated for a longer time. Dynamic modeling: bottlenecks, minimum and maximum labor use per day, locations of production delays, cycle times, impacts of operation scenarios, locations of inventory build-up, space/storage utilization.
Time to build. Static modeling: 2–8 weeks, depending on the complexity. Dynamic modeling: 2–5 months for a detailed model.
Adopted from M. Puich, A. Paz, Simulations improve production capacity. BioPharm Int. 17(5) (2004) with permission.
TABLE 32.11 A Selection of Some References Where Commercial Modeling Software Has Been Used
(Area of modeling: software used; reference.)
CoGs modeling and QbD for developing cost-effective processes: SuperPro Designer; Costioli et al. [76].
Design of a large-scale biopharmaceutical facility: SuperPro Designer and SchedulePro; Toumi et al. [77].
CoGs modeling and QbD: SuperPro Designer; Broly et al. [63].
Combined ultra scale-down and financial modeling of milk proteins: SuperPro Designer; Chhatre et al. [78].
Process optimization with simulation and scheduling tools, mAb case: SuperPro Designer and SchedulePro; Petrides et al. [65].
Manufacturing economics of plant based therapeutic and industrial enzymes: SuperPro Designer; Tusé et al. [79].
Influence of process development on manufacturing cost: BioSolve; Sinclair & Monge [60].
Manufacturing cost and its impact on organizations: BioSolve; Sinclair & Monge [66].
Disposable costs, a sensitivity analysis: BioSolve; Monge & Sinclair [80].
Continuous biomanufacturing of recombinant protein production: BioSolve; Walthe et al. [81].
Production of royalactin under uncertain conditions: BioSolve; Torres Acosta et al. [82].
Debottlenecking and process optimization: Bio-G; Johnston [83].
Optimization of scheduling in multiproduct facility: Aspen Tech software; O'Connor et al. [84].
Modeling of a multi-step protein synthesis and purification process: Aspen Tech Batch Plus; Kahn et al. [75].
Driving efficiencies in the pharmaceutical industry: aspenONE; Tabor [61].
SuperPro Designer
SuperPro Designer is a process flow-sheeting software tool for engineers and scientists in process development, process engineering, and manufacturing groups. The software enables creation of process flow diagrams (flow sheets) using a library of standard unit operation templates and a fairly user-friendly graphical interface (see Fig. 32.12 for an example of a flow diagram for a mAb process). Each unit operation (i.e., processing step) is editable and can be populated with relevant data. Based on all the input data, the simulator formulates material and energy balances for each unit operation, considering its position within the process sequence, and uses this information to calculate either the appropriate size of the equipment associated with the unit operation, including auxiliary equipment such as tanks, CIP skids, etc., or the time necessary to process the required amount of material if the equipment size is fixed. In addition to SuperPro Designer, Intelligen Inc. also offers specific software tools for related analyses, such as scheduling and environmental impact assessment. The modeling suite includes built-in databases for raw materials, consumables, heat transfer agents, etc.; user databases are also supported. Standard outputs from SuperPro Designer include a visual representation of the process, material and energy balances, sizing of equipment and utilities, an estimation of capital and operating costs, scheduling and cycle time analysis, throughput analysis and debottlenecking, waste stream characterization, and a limited environmental impact assessment.
BioSolve
BioSolve is an Excel-based bioprocess modeling tool from Biopharm Services (Biopharm Services Ltd., Chesham, United Kingdom, http://biopharmservices.com). BioSolve Process can be used for all types of biotech products, and its use has been illustrated in several publications, a few examples being [60,66,80–82]. The general structure of the software is shown in Fig. 32.13. Due to the spreadsheet-driven nature of the tool, models can be generated relatively quickly. Several databases are connected to the software. The first step in the modeling is to create the process configuration, followed by entering all the data needed into the tool, after which verification of the inputs is done, followed by analysis. Recent releases of the software include multivariate analysis and variability simulations using Excel add-ins (e.g., Crystal Ball by Oracle Corp), as well as scheduling operations with Gantt charts as outputs. The dashboard function allows configuration of user-defined inputs and outputs to be tracked.
FIG. 32.12 Monoclonal antibody production flow sheet. From A. Toumi, et al., Design and optimization of a large scale biopharmaceutical facility using process simulation and scheduling tools. Pharma. Eng. (2010) 1–9 with permission.
FIG. 32.13 The structure of the BioSolve process software. From Biopharm Services Ltd. with permission.
Bio-G
Bio-G is a real-time modeling system specifically designed for biomanufacturing operations by the Bioproduction Group (Bioproduction Group Inc., Berkeley, CA, USA, www.bio-g.com). It is designed to be used from late-stage process development to large-scale production. The software can be linked to enterprise platforms (SAS, automation systems) and systems such as DeltaV. The software uses variability as the lowest building block; thus, the inherent variability of biological systems is taken into consideration and its effect on manufacturing can be monitored. The use of the software is described in Reference [86]. An integral part of the software is a data translation component that makes it possible to perform analysis on real-time data from the running process. Acuna et al. [87] described the use of Bio-G for the simulation of a perfusion process, which is more complex to model than a traditional batch process.
aspenONE Pharmaceutical Solutions
aspenONE is a modeling software suite from Aspen Technology (Aspen Technology, Inc., Bedford, MA, USA, www.aspentech.com) that is used by several biotech and pharma companies [61]. aspenONE is an integrated life-cycle simulation tool that can be used from initial design through plant start-up to operational support. It is intended to help reduce capital and operating costs, increase throughput, and accelerate development timelines [88]. One example of the implementation of AspenTech software is given in Kahn et al. [75], where Batch Plus (currently known as Aspen Batch Process Developer) was used; this software is a recipe-based modeling tool and an important part of the aspenONE package. The development of the model was performed in incremental steps: gathering of information, recipe details, backbone assembly, workarounds, model refinement, error checking, and updates. The output from the model is available as Excel spreadsheets.
Other Approaches and Applications
Some of these tools have been compared for their applicability to the analysis of processes for manufacturing protein products [64,89]. In this section, we describe a few examples, using primarily non-commercial software. The literature in this area is plentiful, so the description of the different approaches is not detailed, and the coverage is almost certainly not complete. SIMBIOPHARMA is a tool that can be used to evaluate manufacturing from a cost, time, yield, resource allocation, and risk perspective [90]. This software has a flexible modeling environment that includes interactive graphics, task-oriented representation, and dynamic simulation. Debottlenecking is a key objective in modeling and simulation work. It is the process of improving efficiency by finding the rate-limiting operations in the facility and correcting them. This activity is an important approach to handling the inherent variability of a biotech process and the complexity of these types of processes [83], and it is important in both resource-constrained and throughput-constrained facilities. For this activity, one should look at the whole process without any bias, and listening to experts in a narrow area of the process could be counterproductive [83,91]. It has been reported that running a debottlenecking simulation with and without taking variability into account gives different results; thus, running a simulation without taking variability into account could lead to the wrong unit operation being optimized. The process of debottlenecking is also discussed by Sengar and Rathore [92]. Yet another approach is described by Yang et al. [93], with the use of data mining for facility fit and debottlenecking. One tool that can be used to monitor process effectiveness is Overall Equipment Effectiveness (OEE), which helps maximize value-added activities by showing where potential improvements could be implemented. This approach is described by Junker [94]; a minimal numerical sketch is given at the end of this section. A statistical approach for handling the challenges of modeling (e.g., limited data availability and variability) has been proposed [95], using elastic net regression with Monte Carlo sampling. For optimization of column sizing and chromatography step sequencing, the technique of mixed integer fractional programming has been proposed [96]. Another approach is the integration of an ultra scale-down model and financial modeling for the purification of a milk protein [78]. Modeling from an environmental and economic standpoint has been illustrated by Grote et al. [97], who described a recycling strategy for a biotech process and its economic implications. Modeling can also be used to advantage for comparing different production technologies, such as single-use versus conventional technologies [80,98,99] and fed-batch versus perfusion cell cultures [69,87].
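The OEE metric mentioned above is conventionally the product of three fractions (availability, performance, and quality). The minimal sketch below follows that conventional definition with hypothetical inputs, rather than any formulation specific to [94].

```python
def overall_equipment_effectiveness(availability, performance, quality):
    """OEE as the product of availability, performance, and quality rates,
    each expressed as a fraction between 0 and 1."""
    for name, value in (("availability", availability),
                        ("performance", performance),
                        ("quality", quality)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1")
    return availability * performance * quality


# Hypothetical example: equipment available 85% of planned time, running at
# 90% of its rated throughput, with 98% of batches meeting specification.
print(f"OEE = {overall_equipment_effectiveness(0.85, 0.90, 0.98):.2f}")
```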
32.10 SUMMARY
In this chapter, we have focused on the basics of downstream process design. We defined process design as it is described in Stage 1 of the process validation guidelines [3]. We have tried to highlight the importance of identifying the best tools and procedures to design a robust process that consistently and economically makes a sufficient quantity of the target molecule, isolates it from the production system, and then purifies it to the level of purity specified for the final product. We hope to have shown that, while the pathway to a robust manufacturing process can be rather complex and, in some cases, lengthy, a robust process can be designed by following regulatory guidelines and recommendations and by applying the right tools and basic scientific understanding. For readers who would like to find out more about the regulatory aspects of process design and about the scientific principles behind filtration and chromatography, further reading of Chapters 48 and 50 and Chapters 14 and 16, respectively, can be recommended. Although the main focus of the chapter is the downstream purification part of a manufacturing process, we have emphasised the importance of the various interdependencies between the upstream and downstream parts, and briefly discussed some important aspects of upstream process design. Throughout the chapter, an example of a universal roadmap for process and control strategy development was outlined and discussed by focusing on general rules and dedicated tools for developing, characterizing, and scaling up downstream processes. We outlined general methodologies for sizing of filtration and chromatography unit operations, and because the process design should account for the functionality and limitations of commercial manufacturing equipment, we discussed examples of relevant facility-related constraints. In the final section of the chapter, a brief overview of process modeling strategies that could facilitate the design and scale-up of manufacturing processes was also given.
REFERENCES [1] Chapter 3: CMC Activities for Monoclonal Antibody Development, in H.L. Levine, G. Jagschies (Eds.), The Development of Therapeutic Monoclonal Antibody Products: A Comprehensive Guide to CMC Activities from Clone to Clinic, Sweden, 2010, pp. 36–59. [2] ICH Q8 Pharmaceutical Development Step 4. ICH Harmonised Tripartite Guideline, Q8, 2009, pp. 1–28. [3] FDA, Process Validation: General Principles and Practices U.S.D.o.H.a.H.S.F.a.D. Administration, Editor, 2011. [4] ICH, ICH guideline Q8 (R2) on pharmaceutical development: Step 5, in EMA/CHMP/ICH/167068/2004, E.M. Agency (Ed.), Committee for Human Medicinal Products, 2015. [5] L. Hagel, G. Jagschies, G. Sofer, 3—Process-design concepts, Handbook of Process Chromatography, second ed., Academic Press, Amsterdam, 2008, pp. 41–80. [6] ICH, ICH Q10 Pharmaceutical quality system, in ICH Harmonised Tripartite Guideline, ICH (Ed.), 2007, p. 20. [7] E. Moran, et al., Platform Manufacturing of Biopharmaceuticals: Putting Accumulated Data and Experience to Work, EBE Publications, Brussels, BE, 2013. pp. 1–21. [8] G. Slaff, Application of technology platforms to the purification of monoclonal antibodies, in: BioProcess International Conference, Berlin, Germany, 2005. [9] A.A. Shukla, et al., Evolving trends in mAb production processes, Bioeng. Transl. Med. 2 (2017) 58–69. [10] ICH, ICH guideline Q6B on Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products, in CPMP/ ICH/365/96, E.M. Agency (Ed.), Committee for Human Medicinal Products, 1999. [11] Biophorum Operations Group (BPOG), Best Practices Guide for Evaluating Leachables Risk From Polymeric Single-Use Systems Used in Biopharmaceutical Manufacturing, 2017. [12] European Medicines Agency, Guideline on Process Validation for the Manufacture of Biotechnology-Derived Active Substances and Data to Be Provided in the Regulatory Submission, 2014. [13] D.C. Montgomery, Design and Analysis of Experiments, eighth ed., John Wiley & Sons Inc., New York, 2000. [14] CBW Group, A-Mab: A Case Study in Bioprocess Development, 2009, pp. 1–278. [15] L. Sejergaard, T. Hansen, E.B. Hansen, Process validation (stage 1) using latin hypercube sampling, in Recovery of Biological Products. Bermuda, 2016. [16] T. Hansen, L. Sejergaard, E.B. Hansen, Process validation (stage 1) using latin hypercube sampling: chromatography case study, in Recovery of Biological Products. Bermuda, 2016. [17] J. Pieracci, L. Perry, L. Conley, Using partition designs to enhance purification process understanding, Biotechnol. Bioeng. 107 (5) (2010) 814–824. [18] A. Meitz, et al., An integrated downstream process development strategy along qbd principles, Bioengineering 1 (4) (2014) 213. [19] ICH, Q9, Quality Risk Management, 2005. [20] J.T. McCue, et al., Modeling of protein monomer/aggregate purification and separation using hydrophobic interaction chromatography, Bioprocess Biosyst. Eng. 31 (3) (2008) 261–275. [21] B. Mothes, et al., Accelerated, seamless antibody purification: process intensification with continuous disposable technology, BioProcess Int. 14 (5) (2016) 34–58.
[22] C. Casey, et al., Protein concentration with single-pass tangential flow filtration (SPTFF), J. Membr. Sci. 384 (1–2) (2011) 82–88. [23] Z. Xing, et al., Scale-up analysis for a CHO cell culture process in large scale biorectors, Biotechnol. Bioeng. 103 (4) (2009) 733–746. [24] P. Gronemeyer, R. Ditz, J. Strube, Trends in upstream and downstream process development for antibody manufacturing, Bioengineering 1 (4) (2014) 188. [25] F.M. Wurm, Production of recombinant protein therapeutics in cultivated mammalian cells, Nat. Biotechnol. 22 (11) (2004) 1393–1398. [26] E. Spens, L. Häggström, Defined protein and animal component-free NS0 fed-batch culture, Biotechnol. Bioeng. 98 (6) (2007) 1183–1194. [27] F. Li, et al., Current therapeutic antibody production and process optimization, BioProcess J. 5 (4) (2006) 16–25. [28] C. Kloth, et al., inocoulum expansion methods, animal cell lines, in: M. Flickinger (Ed.), Upstream Industrial Biotechnology—Expression Systems and Process Development, John Wiley & Sons, Hoboken, NJ, 2013. [29] J.S. Bee, et al., Response of a concentrated monoclonal antibody formulation to high shear, Biotechnol. Bioeng. 103 (5) (2009) 936–943. [30] J. Jaspe, S.J. Hagen, Do protein molecules unfold in a simple shear flow? Biophys. J. 91 (9) (2006) 3415–3424. [31] S. Guham, O. Kaltenbrunner, How not to squander cell culture titer improvements during downstream processing, in: SBE’s 2nd International Conferene on Accelerating Biopharmaceutical Development, Coronado, SBE, 2009. [32] A. Sinclair, M. Monge, Biomanufacturing for the 21st century: designing a concept facility based on single-use systems, BioProcess Int. 2 (9, supplement) (2004) 26–32. [33] S. Kandala, et al., Mixing Dynamics in a Large Volume Single Use Mixer—Effect of Tank Shape and Impeller Quantity on Salt Settling and Dispersion, GE Healthcare Biosceinces Corp, Marlborough, MA, 2015. [34] M.P. Smith, Strategies for the purification of high titre, high volume mammalian cell culture batches, in: BioProcess International European Conference and Exhibition, Berlin, 2005. [35] C. Antoniou, Methods and guidelines for use of buffer concentrates in antibody manufacturing, in: IBC’s Bioprocess International Conference, San Diego, 2008. [36] C. Challenger, Behind the scenes with Buffers, BioPharm Int. 28 (2) (2015) 27–28. [37] E. Russell, A. Wang, A.S. Rathore, Chapter 1: harvest of a therapeutic protein product from high cell density fermentation broths: principles and case study, in: A.A. Shukla, M.R. Etzel, et al. (Eds.), Process scale Bioseparations for the Biopharmaceutical Industry, Biotechnology and Bioprocessing Series, CRC, Taylor & Francis, London, 2007, pp. 1–59. [38] R. Knight, E. Ostreicher, Charge-modified filter media, in: T. Meltzer, M. Jornitz (Eds.), Filtration in the Biopharmaceutical Industry, Marcel Dekker, New York, 1998, pp. 95–125. [39] A. Shukla, J. Kandula, Harvest and recovery of monoclonal antibodies from large-scale mammalian cell culture, BioPharm Int. 21 (5) (2008). Accessed from http://www.biopharminternational.com/biopharm-international-05-01-2008. [40] B. Kelley, Very large scale monoclonal antibody purification: the case for conventional unit operations, Biotechnol. Prog. 23 (5) (2007) 995–1008. [41] PDA, Technical Report No. 26 Revised 2008 Sterilizing Filtration of Liquids. PDA J. Pharm. Sci. Technol. 62(S-5) (2008). [42] R. van Reis, A. Zydney, Bioprocess membrane technology, J. Membr. Sci. 297 (1–2) (2007) 16–50. [43] C.J. 
Dowd, Multi-round virus filter integrity test sensitivity, Biotechnol. Bioeng. 103 (3) (2009) 574–581. [44] P. Genest, K. Scott, J. De Souza, Virus-filtration process development optimization: the key to a more efficient and cost-effective step, BioProcess Int. (2016) 62–74. [45] M.W. Phillips, et al., Virus filtration process design and implementation, in: A.A. Shukla, M.R. Etzel, et al. (Eds.), Process scale Bioseparations for the Biopharmaceutical Industry, CRC, Taylor & Francis, London, 2007, pp. 333–365. [46] M. Siwak, Process for prefiltration of a protein solution. US Patent Office, US20030201229 (A1), 2003. [47] E.M.G. Robert van Reis, C.L. Yson, L.N. Frautschy, S. Dzengeleski, H. Lutz, Linear scale ultrafiltration, Biotechnol. Bioeng. 55 (5) (1997) 737–746. [48] C. Daniels, et al., Chapter 19: linear scale-up of ultrafiltration of high viscosity process streams, in: A.A. Shukla, M.R. Etzel, et al. (Eds.), Biotechnology and Bioprocessing Series, CRC, Taylor & Francis, London, 2007, pp. 523–539. [49] G.H. Lars Hagel, G.H. Günter Jagschies, G.H. Gail Sofer, Development, manufacturing, validation and economics, Handbook of Process Chromatography, second ed., Academic Press, Oxford, UK, 2007, p. 382. [50] J.J. Stickel, A. Fotopoulos, Pressure-flow relationships for packed beds of compressible chromatography media at laboratory and production scale, Biotechnol. Prog. 17 (4) (2001) 744–751. [51] R.N. Keener, J.E. Maneval, E.J. Fernandez, Toward a robust model of packing and scale-up for chromatographic beds. 1. Mechanical compression, Biotechnol. Prog. 20 (4) (2004) 1146–1158. [52] R.N. Keener, J.E. Maneval, E.J. Fernandez, Toward a robust model of packing and scale-up for chromatographic beds. 2. Flow packing, Biotechnol. Prog. 20 (4) (2004) 1159–1168. [53] R.N. Keener III, et al., Advancement in the modeling of pressure-flow for the guidance of development and scale-up of commercial-scale biopharmaceutical chromatography, J. Chromatogr. A 1190 (1–2) (2008) 127–140. [54] J.T. McCue, et al., Application of a two-dimensional model for predicting the pressure-flow and compression properties during column packing scale-up, J. Chromatogr. A 1145 (1–2) (2007) 89–101. [55] S. Kidal, O.E. Jensen, Using volumetric flow to scale up chromatographic processes, BioPharm Int. 19 (3) (2006) 34–43. [56] E.N. Lightfoot, et al., Reffining the description of protein chromatography, J. Chromatogr. A 760 (1997) 139–149. [57] S. Yamamoto, Plate height determination for gradient elution chromatography of proteins, Biotechnol. Bioeng. 48 (5) (1995) 444–451. [58] T. Ishihara, T. Kadoya, S. Yamamoto, Application of a chromatography model with linear gradient elution experimental data to the rapid scale-up in ion-exchange process chromatography of proteins, J. Chromatogr. A 1162 (1) (2007) 34–40. 11th International Symposium on Preparative and Industrial Chromatography and Allied Techniques.
[59] E.J. Suda, et al., Comparison of agarose and dextran-grafted agarose strong ion exchangers for the separation of protein aggregates, J. Chromatogr. A 1216 (27) (2009) 5256–5264. [60] A. Sinclair, M. Monge, Influence of process development decision on manufacturing costs, BioProcess Int. 9 (2010) 36–40. [61] A. Tabor, Driving efficiencies in the pharmaceutical industry, Pharma Mag. 5 (4) (2009) 38–39. [62] B. Kelley, Very large scale monoclonal antibody purification: the case for conventional unit operation, Biotechnol. Prog. 23 (5) (2007) 995–1008. [63] H. Broly, et al., Cost of goods modelling and quality by design for developing cost-effective processes, BioPharm Int. 23 (6) (2010) 26–35. [64] S.S. Farid, Process economics of industrial monoclonal antibody manufacture, J. Chromatogr. B Analyt. Technol. Biomed. Life Sci. 848 (1) (2007) 8–18. [65] D. Petrides, et al., Biopharmaceutical process optimization with simulation and scheduling tools, Bioengineering 1 (2014) 154–187. [66] A. Sinclair, M. Monge, Measuring manufacturing cost and its impact on organizations, BioProcess Int. 8 (6) (2010) 2–4. [67] M. Puich, A. Paz, Simulations improve production capacity, BioPharm Int. 17 (5) (2004) 52–58. [68] A. Rathore, et al., Costing issues in the production of biopharmaceuticals, BioPharm Int. 17 (2) (2004) 46–55. [69] A.C. Lim, et al., A computer-aided approach to compare the production economics of fed-batch and perfusion culture under uncertainty, Biotechnol. Bioeng. 93 (4) (2006) 687–697. [70] M.A. Mustafa, et al., A software tool to assist business-process decision-making in the biopharmaceutical industry, Biotechnol. Prog. 20 (4) (2004) 1096–1102. [71] A.C. Lim, et al., Application of a decision-support tool to assess pooling strategies in perfusion culture processes under uncertainty, Biotechnol. Prog. 21 (4) (2005) 1231–1242. [72] S.S. Farid, J. Washbrook, N.J. Titchener-Hooker, Decision-support tool for assessing biomanufacturing strategies under uncertainty: stainless steel versus disposable equipment for clinical trial material preparation, Biotechnol. Prog. 21 (2) (2005) 486–497. [73] A. Biwer, S. Griffith, C. Cooney, Uncertainty analysis of penicillin V production using Monte Carlo simulation, Biotechnol. Bioeng. 90 (2) (2005) 167–179. [74] S. Sommerfeld, J. Strube, Challenges in biotechnology production—generic processes and process optimization of monoclonal antibodies, Chem. Eng. Process. 44 (2005) 1123–1137. [75] D. Kahn, R. Plapp, A. Modi, Modeling a multi-step protein synthesis and purification process: a case study of a CAPE application in the pharmaceutical industry, in: EACAPE-11, 2011. [76] M.D. Costioli, et al., Cost of goods modeling and quality by design for cost-effective processes, BioPharm Int. 23 (6) (2010) 26–35. [77] A. Toumi, et al., Design and optimization of a large scale biopharmaceutical facility using process simulation and scheduling tools, Pharm. Eng. 30 (2) (2010) 1–9. [78] S. Chhatre, L. Pampel, N.J. Titchener-Hooker, Integrated use of ultra scale-down and financial modeling to identify optimal conditions for the precipitation and centrifugal recovery of milk proteins, Biotechnol. Prog. 27 (4) (2011) 998–1008. [79] D. Tuse, T. Tu, K.A. McDonald, Manufacturing economics of plant-made biologics: case studies in therapeutic and industrial enzymes, Biomed. Res. Int. 2014 (2014) 256135. [80] M. Monge, A. Sinclair, Disposables cost contributions: a sensitivity analysis, BioPharm Int. 22 (4) (2000) 14–18. [81] J. 
Walthe, et al., The business impact of an integrated continous biomanufacturing platform for recombinant protein production, J. Biotechnol. 213 (2015) 3–12. [82] M.A. Torres-Acosta, et al., Economic analysis of Royalactin production under uncertainty: evaluating the effect of parameter optimization, Biotechnol. Prog. 31 (3) (2015) 744–749. [83] R. Johnston, Approaches to debottlenecking and process optimization, BioProcess Int. 10 (5) (2012) 44–53. [84] J. O’Connor, A. Sanford, F. Nasuti, Taming the scheduling beast, Pharm. Manuf. 9 (2) (2010). [85] T. Vicente, et al., Rational design and optimization of downstream processes of virus particles for biopharmaceutical applications: current advances, Biotechnol. Adv. 29 (6) (2011) 869–878. [86] R. Johnston, D. Zhang, Garbage in, garbage out: the case for more accurate process modeling in manufacturing economics, BioPharm Int. 22 (8) (2009) 60–68. [87] J. Acuna, et al., Modeling perfusion processes in biopharmaceutical production, BioProcess Int. 9 (2) (2011) 52–58. [88] W. De Bruyn, D. Borodin, B. Van Vreckem, Full Scale Product Lifecycle Management in Biotech Production Usin Advanced Control and Production Information Systems. Newark, DE, USA, 2011. [89] T. Shanklin, et al., Selection of bioprocess simulation software for industrial applications, Biotechnol. Bioeng. 72 (4) (2001) 483–489. [90] S.S. Farid, J. Washbrook, N. Titchener-Hooker, Modeling biopharmaceutical manufacturing: design and implementation of simbiopharma, Comput. Chem. Eng. 31 (2007) 1141–1158. [91] G. Jagschies, A. O'Hara, Debunking downstream bottleneck myth, Genetic Eng. Biotechnol. News 27 (14) (2007) 62–64. [92] T. Sengar, A.S. Rathore, Achieving process intensification by scheduling and debottlenecking biotech processes, BioPharm Int. 24 (2) (2011) 44–52. [93] Y. Yang, S.S. Farid, N.F. Thornhill, Data mining for rapid prediction of facility fit and debottlenecking of biomanufacturing facilities, J. Biotechnol. 179 (2014) 17–25. [94] B.H. Junker, Application of overall equipment effeciveness to biopharmaceutical manufacturing, BioPharm Int. 22 (5) (2009) 40–50. [95] K. Severson, et al., Elastic net with Monte Carlo sampling for data-based modeling in biopharmaceutical manufacturing facilities, Comput. Chem. Eng. 80 (2015) 30–36. [96] S. Liu, et al., Optimising chromatography strategies of antibody purification by integer fractional programming techniques, Comput. Chem. Eng. 68 (2014) 151–164.
[97] F. Grote, R. Ditz, J. Strube, Downstream of downstream processing: development of recycling strategies for biopharmaceutical processes, J. Chem. Technol. Biotechnol. 87 (2012) 481–497. [98] J.L. Novais, N.J. Titchener-Hooker, M. Hoare, Economic comparison between conventional and disposables-based technology for the production of biopharmaceuticals, Biotechnol. Bioeng. 75 (2) (2001) 143–153. [99] GE Healthcare, Process Economy and Production Capacity Using Single-Use Versus Stainless Steel Fermentation Equipment, GE Healthcare BioSciences AB, Uppsala, 2015.