By Toni Manzano (Aizon), Mario Stassen (Stassen Pharmaconsult BV), William Whitford (DPS Group) and AIO Team - AI in Operations

AI in Pharma Adoption, Part 4: Three Real Use Cases of AI Implementation in Pharma – NOT Science Fiction!

In our last blog entry, we discussed the whats and whys of artificial intelligence (AI) transforming manufacturing. This entry sheds light on some of the hows, through anonymized accounts of real-life implementations. AI presents opportunities to improve production processes that are already in place, introducing more efficient, automated operations that incorporate data-driven decisions and predictive analytics tools. The pharma and biotech industries are currently looking at ways to leverage AI in their lines of business.


Case 1: Predictive Pooling Strategy: Global Biotech Company Reduces Risk in Downstream Purification

A large biopharma company has a very modern perfusion (continuous) upstream process. The unpurified bulk (UPB) material is harvested each day into single-use bags, assigned a batch identification, and stored in a deep-freeze inventory. Each bag of harvest presents significantly different purification-influencing characteristics, such as protein concentration. As required by the site production plan, these bags are later used in purification campaigns for the downstream purification process. Campaigns typically consist of five to six purification runs that each employ up to 20 bags (25 L each), representing multiple batches of UPB material. In designing a campaign, the operations team carefully selects and pools bags for each downstream run, with a total of roughly 120 bags per campaign being pulled from inventory.

To optimize a campaign, the team must consider all measured characteristics of each bag when designing a pool for each run. Their goal was to determine how to select the bags that provide an optimal total yield of protein in a particular run, without compromising the yield of later runs by consuming all the ‘best bags’ early on. More specifically, how could they pool a selection of bags for each run that would both 1) fulfill the ‘hard requirements’ of the control strategy (e.g., total starting protein concentration, harvest-day distribution, etc.) and 2) optimize the predicted yield both per run and in aggregate for the entire campaign?
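The blog does not describe the selection algorithm itself, so the following is only a minimal sketch of the idea in Python: a greedy heuristic that balances predicted protein across runs instead of letting the earliest runs consume the richest bags. All field names, limits, and the heuristic itself are illustrative assumptions, not the sponsor’s actual model.

# Illustrative sketch only: field names, limits, and the greedy heuristic
# are assumptions, not the sponsor's actual pooling model.
from dataclasses import dataclass

@dataclass
class Bag:
    bag_id: str
    protein_g: float    # estimated recoverable protein in this bag
    harvest_day: int

def pool_campaign(bags, n_runs=6, max_bags_per_run=20, min_protein_per_run=400.0):
    """Assign bags to runs so each run can meet a minimum total protein
    (a stand-in 'hard requirement') while keeping runs balanced, so the
    'best bags' are not exhausted by the earliest runs."""
    runs = [[] for _ in range(n_runs)]
    totals = [0.0] * n_runs
    # Consider the richest bags first, but always give the next bag to the
    # currently weakest run that still has capacity.
    for bag in sorted(bags, key=lambda b: b.protein_g, reverse=True):
        open_runs = [i for i in range(n_runs) if len(runs[i]) < max_bags_per_run]
        if not open_runs:
            break
        weakest = min(open_runs, key=lambda i: totals[i])
        runs[weakest].append(bag)
        totals[weakest] += bag.protein_g
    shortfalls = [i for i, t in enumerate(totals) if t < min_protein_per_run]
    return runs, totals, shortfalls

A real pool also has to respect harvest-day distribution and the other control-strategy limits, which is where a predictive model that scores whole candidate pools adds value beyond a simple heuristic like this.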

The initial step was to develop a single platform by ingesting data from five different data sources pertaining to how each bag was made in the upstream fermentation process. Data sources included material genealogy, electronic batch records, process data (Critical Process Parameters, CPPs), ERP planning data, and LIMS data (Critical Quality Attributes, CQAs). Thankfully, this sponsor’s award-winning platform provides flexible contextual models to link and relate information from multiple disparate data sources, and ensures the data is stored in a compliant way in an underlying cloud data lake.
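Conceptually, that contextualization step amounts to linking records from each source on shared identifiers such as the bag and UPB batch IDs. A minimal sketch in Python with pandas, where the file and column names are hypothetical placeholders rather than the platform’s real data model:

# Minimal contextualization sketch; source files and column names are hypothetical.
import pandas as pd

genealogy = pd.read_csv("material_genealogy.csv")  # bag_id, upb_batch_id
ebr       = pd.read_csv("batch_records.csv")       # upb_batch_id, harvest_day
cpps      = pd.read_csv("process_data.csv")        # upb_batch_id, CPP columns
erp       = pd.read_csv("erp_plan.csv")            # bag_id, campaign_id
lims      = pd.read_csv("lims_cqas.csv")           # upb_batch_id, protein concentration

bag_context = (genealogy
               .merge(ebr,  on="upb_batch_id", how="left")
               .merge(cpps, on="upb_batch_id", how="left")
               .merge(lims, on="upb_batch_id", how="left")
               .merge(erp,  on="bag_id",       how="left"))
# One row per bag, carrying its upstream CPPs and CQAs, ready for modeling.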

By implementing AI-based predictive models, they were able to simulate decisions, see likely outcomes, and ultimately reach optimal results with lower risk and a higher degree of confidence. In its trials so far, the company is seeing a 93% reduction in pooling time.


Case 2: Using AI to Improve mAb Yield

A leading biotech company needed to improve their centrifugation step, which is where the harvesting process starts once fermentation is completed in the bioreactor. Average yield in this step was already more than 97%, higher than the industry standard, so improving on that baseline would be surprising but immensely valuable.

A non-linear correlation between hold-up volume and time between process operations was discovered by analyzing five years of historical batch data with AI algorithms designed for cause-effect detection.
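The blog does not say which algorithms were used; a simple way to screen for this kind of non-linear dependence, which a plain Pearson correlation can understate, is to compare linear correlation against mutual information. The sketch below assumes a flat extract of the historical batches with hypothetical column names:

# Screening sketch only; the extract and column names are assumptions.
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

batches = pd.read_csv("five_years_of_batches.csv")
X = batches[["holdup_volume_l", "inter_op_hold_time_min"]]
y = batches["mab_recovered_g"]

pearson = X.corrwith(y)              # captures linear association only
mi = mutual_info_regression(X, y)    # also picks up non-linear dependence

for col, score in zip(X.columns, mi):
    print(f"{col}: pearson={pearson[col]:+.2f}  mutual_info={score:.3f}")

A variable that shows weak Pearson correlation but high mutual information is a candidate for exactly the kind of non-linear relationship described here.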

It was determined that using this analysis to vary how much time should elapse between these operations for each batch would recover an additional 277 grams of monoclonal antibody (mAb), resulting in $5 million in additional recovered revenue.
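As an illustration of how such a finding could be turned into a per-batch recommendation, one could fit a non-linear regressor on the historical data and scan candidate hold times for each batch’s measured hold-up volume. The model family, column names, and time range below are assumptions, not the company’s validated approach:

# Illustrative recommendation sketch; model choice, names, and ranges are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

batches = pd.read_csv("five_years_of_batches.csv")
X = batches[["holdup_volume_l", "inter_op_hold_time_min"]]
y = batches["mab_recovered_g"]
model = GradientBoostingRegressor().fit(X, y)

def recommend_hold_time(holdup_volume_l, hold_times=np.arange(10, 241, 5)):
    """Return the candidate hold time with the highest predicted mAb recovery
    for this batch's measured hold-up volume."""
    candidates = pd.DataFrame({
        "holdup_volume_l": holdup_volume_l,      # scalar, broadcast per row
        "inter_op_hold_time_min": hold_times,
    })
    return int(hold_times[np.argmax(model.predict(candidates))])

For a batch with, say, 12 L of hold-up volume, recommend_hold_time(12.0) would return the candidate hold time with the best predicted recovery.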

This leading biotech company is now considering a full implementation of this process and is also exploring other areas where they can use AI to derive even more insights and value. They estimate that scaling the engagement across the entire line could lead to as much as $680 million in additional revenue per year.


Case 3: “Right First Time” Operation for Improved Process Robustness

A multinational biopharmaceutical company was looking to reduce operational inefficiencies and costs by avoiding numerous recirculations at their ultrafiltration process step. Even with a lot of effort and analysis, they struggled to get the targeted concentration of the drug product (measured by polarimetry) correct, or “Right First Time.” Polarimetry is one of the Critical Quality Attributes (CQAs); it represents the concentration (g/L) of the product, which has to be between certain limits before moving to the next process step. Both their process and material data were spread across different data silos, such as Oracle, DeltaV, and GLIMS. In fact, more than 100 parameters controlled the ultrafiltration process!

Even if the data were centralized, there were too many factors involved to simply rely on classical statistical methods for any meaningful insights. Two questions needed to be answered: (1) Which Critical Process Parameter (CPP) has the biggest impact on the polarimetry measured out of the ultrafiltration unit? and (2) What is that CPP’s set point to obtain the optimal polarimetry to avoid recirculation?

First, their data needed to be collected and normalized into a central location. With the data centralized, they applied real-time, continuous Principal Component Analysis (PCA) to discover and analyze the co-dependent relationships among all of these process variables. With that, they were able to reduce the complexity of the problem and determine that the speed of the pump feeding the bulk material through the filtration unit had the single biggest effect on the target polarimetry value.
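As a rough illustration of that analysis, with scikit-learn standing in offline for the platform’s real-time PCA and with hypothetical tag names, the dominant variables can be read off the component loadings:

# Offline PCA sketch; tag names and the extract are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

runs = pd.read_csv("ultrafiltration_runs.csv")      # one row per run
cpps = runs.drop(columns=["polarimetry_g_per_l"]).select_dtypes("number")
polarimetry = runs["polarimetry_g_per_l"]

pca = PCA(n_components=5)
scores = pca.fit_transform(StandardScaler().fit_transform(cpps))

# Find the component that tracks the CQA, then the raw variables that dominate it.
corr_with_cqa = pd.DataFrame(scores).corrwith(polarimetry).abs()
key_pc = corr_with_cqa.idxmax()
loadings = pd.Series(pca.components_[key_pc], index=cpps.columns)
print(loadings.abs().sort_values(ascending=False).head(10))

In this sketch, a pump-speed tag appearing at the top of that loading ranking would correspond to the conclusion the team reached.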

Finally, having quantified to what extent the pump speed was affecting the CQA, AI was used to predict the optimal pump speed to obtain a CQA value within the target range, thereby avoiding recirculations. Through an iterative process, a highly accurate machine learning (ML) model was established with which the team could predict the pump speed needed to achieve the targeted polarimetry after the filtration process.
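The blog does not name the model type; the sketch below simply fits a regressor on historical runs and scans candidate pump speeds to find set points whose predicted polarimetry falls inside the target range. The model family, tag names, target limits, and speed range are placeholders:

# Set-point sketch; model family, tag names, target limits, and ranges are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

runs = pd.read_csv("ultrafiltration_runs.csv")
X = runs[["pump_speed_rpm", "feed_temp_c", "inlet_pressure_bar"]]
y = runs["polarimetry_g_per_l"]
model = RandomForestRegressor(n_estimators=200).fit(X, y)

def recommend_pump_speed(feed_temp_c, inlet_pressure_bar,
                         target=(48.0, 52.0), speeds=np.arange(100, 401, 5)):
    """Return candidate pump speeds whose predicted polarimetry lands inside
    the target range for the current run conditions."""
    candidates = pd.DataFrame({
        "pump_speed_rpm": speeds,
        "feed_temp_c": feed_temp_c,               # scalars, broadcast per row
        "inlet_pressure_bar": inlet_pressure_bar,
    })
    predicted = model.predict(candidates)
    within = (predicted >= target[0]) & (predicted <= target[1])
    return speeds[within]

Iterating on a model like this against new runs, and retraining as data accumulates, mirrors the iterative process described above.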

Leveraging an accurate prediction model, this biopharmaceutical company has proven the ability to go from multiple recirculations to getting the ultrafiltration process step “Right First Time.” Multiple recirculations are no longer necessary, and the team estimated a 61% reduction in the total number of runs and a 53% boost in process effectiveness. Moving forward, they now have all of the data they need for a sound AI approach. Data generated in the manufacturing and quality systems is available for building AI models, and the information arriving in real time allows predictions and recommendations around the downstream operations. The data is no longer lost; it is all captured in a GxP-compliant manner, making any future audits easier and smoother.


AI and ML come from the field of computer science, and they are delivering continuous benefits to the industry in general, assisting in novel process development and extending current capabilities into a new world of opportunities. We are now exploring how AI can push pharmaceutical knowledge toward a seemingly unlimited sphere, and that is not science fiction; it is actually happening today!

https://www.xavierhealth.org/ai-blog/2021/6/29