
The modern pharmaceutical industry is evolving rapidly with Artificial Intelligence (AI), which offers unprecedented opportunities in drug discovery, production optimization, and quality control. Implementing these technologies in a regulated environment requires strict adherence to standards. A key element in ensuring patient safety and the quality of medicinal products is the validation of computerized systems in the AI era, in line with the draft new EU guidelines for the pharmaceutical sector. This article discusses the latest EU regulations, including the Artificial Intelligence Act and the specific EudraLex guidelines, and presents practical aspects of implementing and validating AI in pharmaceuticals.
1. AI Regulations in the EU 2025: The Artificial Intelligence Act (AI Act)
The Artificial Intelligence Act (AI Act) represents the world’s first comprehensive legal framework for AI, aiming to build trust in the technology across Europe. It introduces a risk-based approach, classifying AI systems by their level of potential threat.
The AI Act entered into force on August 1, 2024, with its requirements being phased in gradually. The first provisions, including the ban on “unacceptable risk” practices and the “AI literacy” requirement, are effective from February 2, 2025. Obligations for providers of general-purpose AI models (GPAI) apply from August 2, 2025, though finalization of the GPAI Code of Practice has been delayed until August 2025. Most provisions, including those concerning “high-risk” systems, will apply by August 2, 2026, with further implementation phases extending into summer 2027. This staggered timeline creates a complex and dynamic regulatory compliance landscape.
The AI Act defines four levels of risk: unacceptable, high, limited, and minimal. Unacceptable risk systems are strictly prohibited as they pose a clear threat to safety and fundamental rights (e.g., subliminal manipulation, social scoring, untargeted facial scanning). Provisions regarding penalties for violations of Article 5 come into force on August 2, 2025.
High-risk systems are those that pose a significant risk to health, safety, or fundamental rights. This includes AI used as safety components of products covered by EU harmonization legislation (e.g., in medicine) or listed in Annex III, unless they do not pose a significant risk. High-risk systems are subject to stringent obligations, such as adequate risk assessment, high data quality, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, cybersecurity, and accuracy.
A key requirement of the AI Act is also “AI literacy.” From February 2, 2025, providers and users of AI systems must ensure their personnel possess a “sufficient level of AI literacy.” This requirement applies to all AI systems, not just high-risk ones, and includes the ability to assess legal and ethical implications and critically interpret results.
The AI Act is a horizontal framework designed to coexist with sectoral law. There is a need for clarity on the extent to which the general principles of the AI Act will regulate the use of AI by pharmaceutical companies, especially in the context of high-risk systems. The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) are actively working on their own guidelines for AI in the medicinal product lifecycle, indicating the need for specific industry regulations.
2. Practical Aspects of AI Implementations in Pharmaceutical Manufacturing: EudraLex Annex 22 Guidelines
Within the general framework of the AI Act, the pharmaceutical sector receives more detailed guidance through the update of EudraLex Volume 4. The revised Annex 11 concerning computerized systems and the entirely new Annex 22 dedicated to artificial intelligence are of crucial importance.
The revised Annex 11 – Computerised Systems strengthens the requirements for managing the lifecycle of computerized systems, emphasizing the comprehensive application of Quality Risk Management (QRM) principles at all stages. Controls related to ensuring data integrity, audit trails, electronic signatures, and system security have been clarified. The new Annex 22 – Artificial Intelligence establishes specific requirements for the use of AI and machine learning in the manufacture of active substances and medicinal products.
- Scope of Application: Annex 22 applies to computerized systems where AI models are used in critical applications, i.e., those with a direct impact on patient safety, product quality, or data integrity, e.g., for data prediction or classification. This specifically concerns machine learning (AI/ML) models that gain functionality through training on data.
- Key Limitations and Exclusions: Annex 22 has very precise limitations. It applies exclusively to static models (non-adaptive during use) and deterministic models (identical inputs always yield identical outputs). Dynamic models (continuously learning) and probabilistic models (where identical inputs may yield different results) should not be used in critical GMP applications.
- Generative AI and LLMs: Generative AI and Large Language Models (LLMs) are explicitly excluded from critical GMP applications. If these models are used in non-critical applications, qualified and trained personnel must verify that their outputs are appropriate, implying a “human-in-the-loop” (HITL) approach.
- General Principles: Close collaboration is required among all involved parties (Subject Matter Experts (SMEs), QA, data scientists, IT) during algorithm selection, training, validation, testing, and operations. Personnel must be appropriately qualified. Full documentation of all activities must be available and reviewed. All activities must be implemented based on the risk to patient safety, product quality, and data integrity.
- Intended Use: The intended use of the model and its specific tasks should be described in detail, based on in-depth process knowledge. This includes characterizing input data and identifying limitations.
- Acceptance Criteria: Appropriate test metrics must be defined to measure model performance (e.g., confusion matrix, sensitivity, specificity, accuracy, precision, and/or F1 score). Acceptance criteria must be at least as high as the performance of the replaced process.
- Test Data: Test data must be representative of and extend the full sample space of the intended use. They should be stratified, cover all subgroups, and reflect limitations. The test dataset must be sufficiently large to calculate metrics with appropriate statistical confidence. Labeling of test data must be verified.
- Independence of Test Data: Technical and/or procedural controls must ensure the independence of test data, meaning that data used for testing cannot be used during model development, training, or validation.
- Execution of Tests: Tests must ensure that the model is suitable for its intended use and “generalizes well.” A prepared and approved test plan is required. Any deviations must be documented and justified.
- Explainability: Systems must log features in the test data that contributed to classification or decisions. Feature attribution techniques (e.g., SHAP, LIME) or visual tools should be used.
- Confidence: The system should log the model’s confidence score for each result. Low confidence scores should be flagged as “undecided.”
- Operations (Continuous Use): The model, system, and process must be under change control. Regular monitoring of model performance and input sample space (data drift) is required.

3. AI in Drug Manufacturing: Applications and Benefits
The integration of artificial intelligence in the pharmaceutical industry is leading to significant transformations in drug discovery and development, as well as pharmaceutical sector management. AI streamlines every stage, from drug discovery to clinical trials, manufacturing, and supply chain management.
In drug discovery and design, AI accelerates the analysis of vast datasets, identifies molecular targets, and predicts drug-target interactions, reducing time and costs. It enables virtual screening of chemical libraries, proposes new structures (de novo drug design), and optimizes drug candidates.
AI support in clinical trials is equally significant. AI systems shorten the duration of clinical trial cycles by using predictive models to identify relevant information in real-world data (RWD). AI helps in more effective patient matching for studies and in their design. An important innovation is the use of digital twins – virtual patient models that simulate individual responses to therapies.
In production processes, AI is revolutionizing many aspects:
- Process Automation: By automating repetitive operations, AI streamlines production and ensures their consistency.
- Predictive Maintenance: Continuous monitoring of production operations allows AI to identify the need for part replacement or repair before it halts operations.
- Waste Reduction: AI assists in analyzing drug batches to determine where improvements can be made. AI-powered quality control systems can detect defects early, reducing waste by up to 25%.
- Production Scheduling: AI optimizes schedules, minimizing changes, enabling just-in-time production, and maximizing delivery efficiency.
- Anomaly Detection and Digital Factory Twin: Combining anomaly detection with digital twins enables the identification and replication of the “golden batch,” minimizing deviations.
- Demand Forecasting and Inventory Management: AI transforms demand forecasting and inventory management, providing more accurate forecasts.
- Smart Logistics and Supply Chain: AI optimizes routes, reducing costs, delivery time, and emissions, and improves information flow and collaboration.
The applications of AI extend throughout the entire pharmaceutical product lifecycle, from research and development, through production, to logistics and personalized medicine. The success of AI implementation in pharma is inextricably linked to a company’s data management maturity.
4. Ensuring Pharmaceutical Product Quality with Artificial Intelligence
Ensuring the quality and safety of pharmaceutical products is of paramount importance. Artificial Intelligence (AI) emerges as a transformative force, capable of redefining the landscape of quality control in pharmaceuticals.
One of AI’s most significant contributions in QC laboratories is its ability to handle and interpret colossal amounts of data. AI algorithms, particularly machine learning models, excel at processing complex datasets, uncovering hidden correlations, and providing actionable insights. This predictive analytics capability shifts quality control from a reactive to a proactive function, allowing laboratories to anticipate issues before they escalate. For example, AI can analyze spectroscopic data to predict critical quality attributes or forecast the probability of batch non-compliance.
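As a toy illustration of such predictive analytics, the sketch below classifies a batch spectrum as compliant or non-compliant with a nearest-centroid rule. The synthetic absorbance values and labels are invented for illustration; this is not a validated analytical method, and production systems would use far richer models and real process data.

```python
# Toy sketch of predictive QC on spectroscopic data: a nearest-centroid
# classifier labels a new batch spectrum by its closest class average.
# All spectra below are synthetic illustrations, not real process data.
import math

def centroid(spectra):
    """Element-wise mean of a list of equal-length spectra."""
    n = len(spectra)
    return [sum(values) / n for values in zip(*spectra)]

def distance(a, b):
    """Euclidean distance between two spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(spectrum, centroids):
    """Return the label of the closest class centroid."""
    return min(centroids, key=lambda label: distance(spectrum, centroids[label]))

# hypothetical historical spectra (3 wavelengths each)
compliant = [[0.10, 0.52, 0.33], [0.12, 0.50, 0.31], [0.11, 0.49, 0.35]]
non_compliant = [[0.30, 0.20, 0.60], [0.28, 0.22, 0.58]]
centroids = {"compliant": centroid(compliant),
             "non_compliant": centroid(non_compliant)}

new_batch = [0.11, 0.51, 0.32]
label = predict(new_batch, centroids)
```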
AI, through computer vision and deep learning, is revolutionizing visual inspection, providing highly accurate and consistent automated inspection capabilities. AI-powered vision systems offer automated detection of subtle defects with greater speed and accuracy than human inspectors.
AI enhances data integrity by automating data collection, reducing manual entry errors, and applying algorithms to detect anomalies or inconsistencies in datasets. It can also provide continuous monitoring of data streams for compliance with GxP principles.
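A minimal version of such automated anomaly flagging can be sketched with a simple z-score rule: readings that deviate strongly from the running mean are flagged for human review. The data, the 2.5-sigma threshold, and the "tablet hardness" framing are illustrative assumptions, not regulatory values; real systems would use more robust statistics.

```python
# Minimal sketch of automated anomaly flagging on a QC data stream:
# values more than 2.5 standard deviations from the mean are flagged
# for human review. Threshold and data are illustrative assumptions.
import statistics

def find_anomalies(readings, z_threshold=2.5):
    """Return indices of readings whose z-score exceeds the threshold."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, x in enumerate(readings)
            if stdev > 0 and abs(x - mean) / stdev > z_threshold]

# e.g. hypothetical tablet-hardness readings with one out-of-trend entry
readings = [101.2, 99.8, 100.5, 100.1, 99.9, 100.3, 135.0, 100.0, 99.7, 100.4]
anomalies = find_anomalies(readings)
```

Flagged entries would then be routed to a qualified reviewer rather than silently corrected, preserving the audit trail.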
Additionally, AI improves visibility and control throughout the supply chain, from supplier qualification to the distribution of the finished product, mitigating risks associated with counterfeit drugs and low-quality materials. It also contributes to reducing the potential for human error, which is a major cost driver in pharmaceutical manufacturing.
5. AI Implementation and Validation According to New Guidelines: Practical Aspects
The implementation and validation of AI systems in pharmaceuticals require an integrated approach, combining the general principles of the AI Act, the reinforced requirements of Annex 11, and the specific guidelines of Annex 22. Annex 11 provides the foundation for managing the lifecycle of computerized systems, while Annex 22 adds AI-specific layers.
Quality Risk Management (QRM) principles must be comprehensively applied at all stages of the AI model’s lifecycle: from algorithm selection, through training, validation, testing, to operations.
Key stages of AI model validation, detailed in Annex 22, include:
- Definition of Intended Use: A detailed description of the model and its tasks, based on in-depth knowledge of the process into which it is integrated.
- Establishing Acceptance Criteria: Defining appropriate test metrics and acceptance criteria, which should be at least as high as the performance of the replaced process.
- Rigorous Test Data Management: Test data must be representative, stratified, sufficiently large, and have verified labeling. The independence of test data from training/validation data is crucial.
- Test Execution and Documentation: Tests must ensure that the model is suitable for its intended use and “generalizes well.” An approved test plan is required, and any deviations must be documented.
- Explainability and Confidence: Systems should record features that contributed to decisions (e.g., SHAP, LIME) and log the model’s confidence score for each result. Low confidence scores should be flagged as “undecided.”
- Continuous Monitoring and Change Control: The model and system must be under change control. Model performance and input data sample space must be regularly monitored to detect data drift.
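The test-data independence requirement above lends itself to a simple technical control: fingerprint every training record and verify that no test record reappears in the training set. The sketch below uses SHA-256 content hashes; the record format and names are assumptions for illustration, and a real control would also cover near-duplicates and access restrictions.

```python
# Sketch of a technical control for test-data independence (Annex 22):
# hash every training record and check that no test record leaked into
# the training set. Record format is an assumption for illustration.
import hashlib

def fingerprint(record: str) -> str:
    """Stable content hash of one normalized data record."""
    return hashlib.sha256(record.strip().lower().encode("utf-8")).hexdigest()

def check_independence(training_records, test_records):
    """Return the test records that also appear in the training data."""
    train_hashes = {fingerprint(r) for r in training_records}
    return [r for r in test_records if fingerprint(r) in train_hashes]

# hypothetical labeled records: "batch id, outcome"
training = ["batch-001,pass", "batch-002,fail", "batch-003,pass"]
test = ["batch-010,pass", "batch-002,fail", "batch-011,fail"]

leaked = check_independence(training, test)   # non-empty => independence violated
```

A non-empty result would block the validation run and trigger an investigation, complementing procedural controls such as restricted access and the audit trail.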
In the context of human oversight (“Human-in-the-Loop” – HITL), the human role remains crucial. For systems where testing effort has been reduced, or in non-critical applications for Generative AI/LLM, consistent review and/or testing of each model output by an operator is required.
Practical challenges arise from the limitations of Annex 22. Companies must accurately classify their AI systems to ensure that only static and deterministic models are used in critical GMP applications.
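One practical smoke check for the "deterministic model" constraint is to run the deployed inference function repeatedly on identical inputs and confirm the outputs never vary. The toy model below is a stand-in invented for illustration; a real check would wrap the actual inference endpoint and cover a representative input set.

```python
# Smoke check for the Annex 22 deterministic-model constraint:
# identical inputs must always yield identical outputs.
# toy_model is a stand-in; wrap the real inference function in practice.

def is_deterministic(model, inputs, runs=3):
    """Run the model repeatedly on the same inputs; True if outputs never vary."""
    reference = [model(x) for x in inputs]
    return all([model(x) for x in inputs] == reference for _ in range(runs - 1))

def toy_model(x):
    """Stand-in classifier: a fixed rule, so trivially deterministic."""
    return "fail" if x > 10.0 else "pass"

inputs = [2.0, 9.9, 10.05, 15.0]
deterministic_ok = is_deterministic(toy_model, inputs)
```

A model that fails such a check (e.g., because of unseeded sampling or non-deterministic GPU kernels) would be ineligible for critical GMP use under the draft.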
The table below provides a practical checklist and guide for validation specialists, systematizing the detailed requirements of Annex 22.
Table 1: Key Validation Requirements for AI Models in Critical GMP Applications (based on Draft Annex 22)
Validation Aspect | Requirement (per Annex 22) | Key Considerations/Examples | Responsibility (per Annex 22)
---|---|---|---
1. Intended Use | Detailed description of the model and its tasks; characterization of input data and limitations. | Assistance vs. automation; division into subgroups; operator's role in HITL. | Process SME
2. Acceptance Criteria | Definition of test metrics (e.g., confusion matrix, sensitivity, accuracy). Model performance at least equal to that of the replaced process. | Metrics may vary per subgroup; knowledge of the replaced process's performance. | Process SME
3. Test Data | Representative of, and extending, the full sample space; stratified, covering all subgroups. Dataset large enough for statistical confidence. Verified labeling. | Justification of pre-processing and exclusions. | N/A (general requirement)
4. Test Data Independence | Test data must not be used in development, training, or validation. Technical/procedural controls (access, audit trail). | Securing test data; "four-eyes" principle. | N/A (general requirement)
5. Test Execution | Ensuring the model is suitable for its intended use and "generalizes well" (detection of over/underfitting). | Approved test plan; documentation of deviations and failures. | Process SME (involvement in the plan)
6. Explainability | Logging of features in the test data that contributed to a decision/classification. | Use of techniques (SHAP, LIME) or visual tools (heat maps); review of features. | N/A (system requirement)
7. Confidence | Logging of the model's confidence score for each result. | Setting a threshold; flagging results as "undecided" at low confidence. | N/A (system requirement)
8. Operations (Continuous Use) | Change and configuration control. Regular monitoring of system performance and data drift. | Assessing changes for retesting; human review procedures (HITL). | N/A (operational requirement)

6. Summary: AI Software in the Pharmaceutical Industry
The integration of artificial intelligence in the pharmaceutical industry is inevitable and offers enormous benefits. However, implementing these technologies requires a proactive and rigorous approach to regulatory compliance. It is crucial to understand and implement the requirements stemming from both general legal frameworks, such as the Artificial Intelligence Act, and industry-specific EudraLex guidelines, particularly the updated Annex 11 and the new Annex 22.
For computerized system validation specialists, this means adapting to new standards that emphasize comprehensive risk management, data integrity (especially test data), rigorous validation (including test data independence, explainability, and model confidence), and maintaining the crucial role of human oversight. The explicit limitations on the types of AI permissible in critical GMP applications (static and deterministic models) necessitate a cautious choice of technology.
The pharmaceutical industry must be prepared for the continuous evolution of regulations and invest in developing “AI literacy” competencies among personnel. The future of AI in pharma will be shaped by the ability to innovate within clearly defined and stringent regulatory frameworks, while ensuring the highest standards of patient safety and quality.
7. How TTMS can help you leverage AI in pharmaceuticals
At TTMS, we understand how challenging it is to combine innovative AI technologies with rigorous pharmaceutical regulations. Our experts support companies in implementing solutions safely and compliantly, increasing efficiency while maintaining patient trust.
Want to take the next step? Contact us and see how we can accelerate your path to safe and innovative pharmaceuticals.