
The Rise of AI in Clinical Trial Recruitment: Mapping the Market and Its Regulatory Fault Lines


As AI reshapes how patients are identified, matched, and enrolled in clinical research, regulatory frameworks must evolve to address the structural diversity, data flows, and ethical risks embedded in these emerging platforms.


AI-generated image of the global map of generative AI in clinical trial recruitment
Image generated by ChatGPT from OpenAI.

Introduction


The global AI in clinical trials market, valued at US$1.20 billion in 2023, is projected to grow at 12.4% annually, reaching US$2.74 billion by 2030. Key applications include using AI to optimize trial design, automate data processing, and facilitate patient recruitment. One example is Insilico Medicine, an AI-driven biotech company that offers generative AI platforms capable of creating novel molecules and predicting clinical trial outcomes, with the promise of radically reducing trial failures and shortening drug development cycles.


Clinical trial recruitment, specifically, is projected to take the largest share of the AI in clinical trials market. This comes as no surprise, given that participant recruitment is widely regarded as one of the most significant bottlenecks in successful trial execution. In fact, in 2018, data analytics and consulting company GlobalData found that 55% of clinical trials in its database were terminated due to low enrolment rates, the single most common reason for failure. Another report, published by IQVIA, an American company based in Durham, North Carolina, indicated that 11% of clinical research sites fail to enrol a single patient and 37% of sites under-enrol. Insufficient clinical trial recruitment leads to trial delays, financial loss, and poor-quality results.


While generative AI holds immense promise to overcome enrolment challenges in clinical studies, there is no shortage of skepticism and ethical critique. For example, Renan Leonel, a researcher at the New Jersey Institute of Technology, has conducted an anticipatory ethical analysis of the US National Institutes of Health (NIH)'s trial-matching platform, TrialGPT, highlighting several bioethical concerns, including algorithmic bias, challenges to informed consent, and the epistemic opacity of AI systems.


In this blog article, I provide an overview of the current commercial landscape of AI development for clinical trial recruitment and highlight emerging regulatory challenges. This analysis intends to help researchers, policymakers, and stakeholders identify strategic gaps and anticipate potential regulatory considerations as generative AI technology and its health applications continue to evolve.


Overview of AI in Clinical Trial Recruitment

TrialGPT (NIH/NCI)

The US NIH/NCI TrialGPT project was first introduced as a research prototype in 2023 and later technically and institutionally validated in 2024, culminating in a landmark paper in Nature Communications. It is a publicly available system built on Azure OpenAI that cuts trial-matching time by 40%, leveraging GPT-3.5/GPT-4 to understand and analyze patient notes and trial criteria. TrialGPT has three primary functions: trial retrieval, matching, and ranking. First, it generates a list of keywords from a patient summary and retrieves a small subset of relevant trials. Then, it predicts patient eligibility against each trial's criteria, generating explanations and identifying the specific relevant sentences. Finally, it produces a ranked list of trials based on eligibility. TrialGPT approaches expert performance, with an accuracy of 87.3% and faithful explanations, and outperforms the best competing models by 28.8% to 53.5% in ranking and excluding clinical trials.
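To make the three-stage pattern concrete, here is a minimal sketch of a retrieve–match–rank pipeline. This is not the NIH implementation: the keyword retrieval and criterion checks below are naive stand-ins for what TrialGPT delegates to an LLM, and all function names, the trial record layout, and the scoring are invented for illustration.

```python
import re

def tokens(text):
    """Crude tokenizer used by all three stages below."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(patient_summary, trials, top_k=3):
    """Stage 1: keep trials with the most word overlap with the summary
    (a real system would generate keywords with an LLM and query an index)."""
    scored = sorted(trials,
                    key=lambda t: len(tokens(patient_summary) & tokens(t["criteria"])),
                    reverse=True)
    return [t for t in scored[:top_k]
            if tokens(patient_summary) & tokens(t["criteria"])]

def match(patient_summary, trial):
    """Stage 2: label each criterion. A real system would ask an LLM for a
    met / not-met / unclear label plus supporting sentences; here we just
    check for shared tokens as a placeholder."""
    summary_tokens = tokens(patient_summary)
    return {c.strip(): ("met" if tokens(c) & summary_tokens else "unclear")
            for c in trial["criteria"].split(";")}

def rank(patient_summary, trials):
    """Stage 3: order retrieved trials by fraction of criteria labelled met."""
    results = []
    for trial in retrieve(patient_summary, trials):
        labels = match(patient_summary, trial)
        score = sum(v == "met" for v in labels.values()) / len(labels)
        results.append((trial["id"], round(score, 2)))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

trials = [
    {"id": "NCT-A", "criteria": "diabetes; age over 18"},
    {"id": "NCT-B", "criteria": "melanoma; prior immunotherapy"},
]
print(rank("62-year-old patient with type 2 diabetes", trials))
# → [('NCT-A', 0.5)]: one of NCT-A's two criteria matched, NCT-B not retrieved
```

The design point the sketch preserves is that each stage narrows and annotates the candidate pool — cheap retrieval first, criterion-level reasoning only on the survivors — which is what makes the approach tractable at the scale of a full trial registry.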


TrialMind

TrialMind is an all-in-one clinical trial AI system developed by US company Keiji AI that remains in a pre-launch phase of development. Its foundation model, Panacea, is trained on TrialAlign, an expansive dataset comprising 793,279 clinical trial documents and 1,113,207 trial-related scientific papers, and further fine-tuned on TrialInstruct, a set of 200,866 instruction-style data points. Functionally, TrialMind supports a broad range of clinical research tasks, including clinical trial search, trial summarization, trial design support, and patient-trial matching, leveraging the NIH TrialGPT project for trial matching. Performance evaluations indicate that TrialMind surpasses state-of-the-art generic and medicine-specific large language models (LLMs), with a reported 14.42% improvement in patient-trial matching accuracy and a 43% reduction in recruitment time.


TrialX

Launching its first clinical trial–focused application in 2008, US company TrialX uses AI to help patients find relevant clinical trials by entering key information, such as medical condition and geographic location, into its search engine. TrialX matches user queries to eligibility criteria, ranks trials by fit and proximity, and provides simplified trial summaries along with pre-screening features to support downstream recruitment. As of January 2026, TrialX supports research across 18 countries and 14 languages, hosts more than 21,000 study listings, and attracts over one million visitors annually.
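Ranking by "fit and proximity" implies blending two signals into one ordering. The sketch below shows one common way to do that with a weighted score; the weights, field names, and linear distance decay are invented for illustration and are not TrialX's actual method, which is not publicly documented.

```python
# Hypothetical blended ranking: condition fit (0-1) weighted against
# geographic proximity (0-1). All parameters here are illustrative.

def proximity_score(distance_km, max_km=500):
    """1.0 at the patient's location, falling linearly to 0 at max_km."""
    return max(0.0, 1.0 - distance_km / max_km)

def rank_trials(trials, fit_weight=0.7):
    """Order trials by a weighted sum of fit and proximity scores."""
    def combined(trial):
        return (fit_weight * trial["fit"]
                + (1 - fit_weight) * proximity_score(trial["distance_km"]))
    return sorted(trials, key=combined, reverse=True)

trials = [
    {"id": "NCT-1", "fit": 0.9, "distance_km": 400},  # better fit, far away
    {"id": "NCT-2", "fit": 0.7, "distance_km": 10},   # decent fit, nearby
]
print([t["id"] for t in rank_trials(trials)])
# → ['NCT-2', 'NCT-1']: proximity lifts the nearby trial above the better fit
```

The weighting choice matters ethically as well as technically: a heavy proximity weight quietly disadvantages patients in rural areas, which is one concrete way algorithmic design decisions feed the bias concerns discussed later in this article.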


TrialGPT (Foundation29)

Spanish non-profit organization Foundation29's TrialGPT is a user-facing, AI-powered clinical trial database. Development began in 2023, and the tool is currently in an institutional validation phase as an open, publicly available resource not intended for market. It allows users to upload medical records, search by condition, and specify a location in order to receive a list of potentially relevant clinical trials. TrialGPT (Foundation29) generates a plain-language description of eligible clinical trials and an explanation of why a given trial may be a good match for the data provided.


IQVIA

Released in 2021, US company IQVIA Biotech's AI-powered direct-to-patient recruitment technology creates targeted ad campaigns that direct potential participants to pre-screening websites. These campaigns are informed by consumer data (e.g., online searches) and health data collected from IQVIA's own network of medical partners. Through these targeted recruitment efforts, the company claims a 57% reduction in non-enrolling sites and a 114% higher enrolment rate.


Opyl

Founded in 2013, Australian company Opyl focuses on AI-driven clinical trial recruitment using social media and other digital engagement tools informed by analysis of consumer-generated data. Its AI model was first trained in early 2020 on approximately 300,000 registered clinical trials dating back to 2005, with additional training in mid-2020 on a cohort of 475 COVID-19 trials. Opyl demonstrated successful proof of concept and reliability testing in 2020 and is currently running pilots in preparation for launch as a full enterprise solution, with a focus on UX/UI refinement and feature enrichment. Opyl reports strong model performance using standard statistical techniques, but large-scale deployment figures and user adoption metrics are not yet available.


Tempus One/Tempus Link

US company Tempus, founded in 2015, offers AI-powered diagnostics and research solutions across oncology, neurology, psychiatry, cardiology, and radiology. Its clinical trial-related products include Tempus One, a provider-facing solution for rapid trial eligibility screening based on enrolment criteria, and Tempus Link, a researcher-facing platform that identifies and pre-screens oncology patients for trials within Tempus’ network, called the TIME program. The system reportedly screens out about 72% of ineligible patients early in the pipeline. The Tempus TIME program additionally offers a patient-facing pathway to connect eligible patients with trials while reducing geographic barriers, incorporating clinical context such as treatment decision points and upcoming visits. The integrated network spans approximately 100 sites representing over 800 clinical locations, covering more than 5 million cancer patients, with over 1,500 active clinical trials. 


Viz Recruit

Founded in 2016, US company Viz.ai offers AI-powered solutions across multiple medical specialties, including oncology, radiology, and trauma care. Its clinical trial enrolment offering, Viz Recruit, is a researcher-facing tool that identifies eligible trial candidates from within its network in real time at the point of clinical evaluation. The system connects potential participants directly with research teams, reportedly accelerating enrolment threefold. As of January 2025, Viz.ai served roughly 1,700 hospitals and more than 60,000 healthcare providers in the United States.


Ethical and Regulatory Considerations


The shift towards AI-assisted clinical trial recruitment without centralized oversight and clear ethical guidelines raises significant concerns related to patient data privacy, standards for informed consent, transparency in clinical decision-making, explainability, and algorithmic bias. For instance, some technologies, such as NIH TrialGPT, place explicit emphasis on ensuring that AI-powered trial-matching decisions are transparent and explainable. By contrast, TrialX does not provide the same level of explainability to patients or practitioners, at least based on publicly available information.


In the absence of formal laws and policy guidelines governing the function and use of AI technologies in clinical settings, we are left to defer to existing legal mechanisms. Yet applying these frameworks to AI-assisted clinical trial recruitment can be difficult, especially given the diversity and novelty of current platforms. Consider the principle of personal data privacy in this context. In Canada, consumer data collected by private-sector organizations in the course of commercial activity is protected by the federal Personal Information Protection and Electronic Documents Act (PIPEDA). Provinces also have their own health information protection acts that apply to public-sector institutions such as hospitals and universities. Determining which statute applies depends on whether the information-holding entity is public or private, its function, the nature of the information in question, and the purposes for which that information is used.


PIPEDA dictates that private-sector organizations abide by the following ten principles:


  1. Accountability: "An organization is responsible for personal information under its control."


  2. Identifying purposes: "The purposes for which the personal information is being collected must be identified by the organization before or at the time of collection."


  3. Consent: "The knowledge and consent of the individual are required for the collection, use, or disclosure of personal information."


  4. Limiting collection: "The collection of personal information must be limited to that which is needed for the purposes identified by the organization. Information must be collected by fair and lawful means."


  5. Limiting use, disclosure, and retention: "Unless the individual consents otherwise or it is required by law, personal information can only be used or disclosed for the purposes for which it was collected. Personal information must only be kept as long as required to serve those purposes."


  6. Accuracy: "Personal information must be as accurate, complete, and up-to-date as possible in order to properly satisfy the purposes for which it is to be used."


  7. Safeguards: "Personal information must be protected by appropriate security relative to the sensitivity of the information."


  8. Openness: "An organization must make detailed information about its policies and practices relating to the management of personal information publicly and readily available."


  9. Individual access: "Upon request, an individual must be informed of the existence, use, and disclosure of their personal information and be given access to that information. An individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate."


  10. Challenging compliance: "An individual shall be able to challenge an organization’s compliance with the above principles."


In the case of clinical trial recruitment, the principles of consent, identifying purposes, and limiting use are particularly salient. First, under PIPEDA, consent must be meaningful, which means that the consenting party must understand the nature, purpose, and foreseeable consequences of the collection and use of their personal information. However, as Renan Leonel pointed out in his article, there is an epistemic imbalance between patients and AI developers. It is therefore unclear what degree of technical familiarity with how an AI system functions, stores data, or generates outputs constitutes "adequate" patient understanding. Misconceptions about AI capabilities, whether overestimating or underestimating them, as well as the inherent opacity of many AI systems, can further complicate standards of informed decision-making.


Moreover, PIPEDA requires organizations to disclose all of the purposes for which personal information is collected at the time of collection, to use the information exclusively for the disclosed purpose(s), and to permit the withdrawal of consent at any time. Saving collected patient data and trial outcomes to train or refine future models therefore requires that additional consent be obtained, particularly if this use was not originally disclosed.
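The purpose-limitation logic described above can be expressed as a simple gate that any data reuse must pass. The sketch below is illustrative only, in the spirit of PIPEDA principles 2, 3, and 5: the record layout, field names, and purpose labels are hypothetical, and real compliance involves far more than a lookup.

```python
# Hypothetical consent record and purpose-limitation check: before reusing
# collected data (e.g., for model retraining), verify the proposed purpose
# was disclosed at collection and that consent has not been withdrawn.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    patient_id: str
    disclosed_purposes: set = field(default_factory=set)  # purposes stated at collection
    withdrawn: bool = False                               # consent is revocable at any time

def may_use(record: ConsentRecord, proposed_purpose: str) -> bool:
    """Permit a data use only if consent is in force and the purpose
    was among those disclosed when the data was collected."""
    if record.withdrawn:
        return False
    return proposed_purpose in record.disclosed_purposes

record = ConsentRecord("P-001", {"trial_matching"})
print(may_use(record, "trial_matching"))  # True: disclosed at collection
print(may_use(record, "model_training"))  # False: new purpose needs fresh consent
```

The point of the sketch is that "model training" is a distinct purpose, not an automatic extension of "trial matching": under a disclosed-purposes model, an undisclosed reuse fails the check by default rather than succeeding by default.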


Appropriate systems for collecting consent also vary. Patient-facing platforms such as TrialX and TrialGPT (Foundation29) generate outputs only when patients actively volunteer their data, and so effectively have consent and opt-out processes embedded in their very structure, though full disclosure of all anticipated uses of the information remains essential. Other technologies, such as Tempus One and Viz Recruit, automatically analyze in-network patient data for trial eligibility at the point of clinical evaluation; it is less clear whether patients are informed of this use or whether they are able to opt out.


Finally, it is important to note that most of the AI platforms examined are developed by American companies and operate under US laws and regulatory frameworks. Cross-border deployment introduces additional complexity regarding jurisdiction, enforcement, and harmonization of consent standards.


Conclusion


There are vast structural differences across current AI platforms for clinical trial recruitment, namely in how data is collected and processed (e.g., voluntary submission versus automated analysis of live-updated records versus analysis of consumer data). This structural variability requires a careful, often case-by-case application of consent and disclosure principles. More generally, because AI-assisted clinical trial recruitment encompasses a wide range of technological architectures, data flows, and business models, safety and regulatory standards cannot be uniformly evaluated or enforced. "Safe" AI does not reduce to a single technical or procedural standard, and emerging AI policies must account for this structural and functional diversity. Such regulatory complexity calls for further work to guide the responsible use of generative AI in clinical settings.


Hannah So is a Research Associate in Bioethics at Genomics4S, where she supports research projects on the ethical use of AI in clinical research and contributes to public outreach initiatives. 


Recommended citation: So, H. (2026). The Rise of AI in Clinical Trial Recruitment: Mapping the Market and Its Regulatory Fault Lines. Zenodo. Canadian Institute for Genomics and Society (Blog). March 3, 2026. https://doi.org/10.5281/zenodo.18855021



