GLOSSARY

Multi-Site Imaging Trial

A multi-site imaging trial is a clinical study that collects medical images from participants enrolled at two or more separate sites — hospitals, imaging centres, or academic medical centres — and uses those images to assess efficacy, safety, or diagnostic criteria as trial endpoints. Managing imaging data consistently across sites is the central operational challenge.

 

What is a multi-site imaging trial?

A multi-site imaging trial — also called a multi-centre imaging study or multi-center imaging study — is a clinical study that collects medical images from participants enrolled at two or more geographically separate sites and uses those images to assess efficacy, safety, or diagnostic criteria as part of the trial's primary or secondary endpoints.

The majority of Phase II and Phase III clinical trials are multi-site by design.¹ Distributing enrolment across multiple sites enables sponsors to recruit the patient numbers required for statistical power more quickly than any single centre could, and produces a more demographically representative trial population.

When imaging is a primary endpoint, the multi-site structure introduces challenges that non-imaging trials do not face. A non-imaging multi-site trial primarily manages structured data — EDC records, laboratory results — which are more readily standardised than imaging data. A multi-site imaging trial adds scanner variability, DICOM routing complexity, reader qualification, protocol deviation risk, and harmonization requirements on top of that baseline. Each additional site increases the surface area for data quality failures.

Why multi-site imaging trials are harder than other multi-site trials

In a non-imaging multi-site trial, a site submits data through an EDC form. Standardisation is achieved through protocol training and edit checks. In a multi-site imaging trial, data is an image file produced by a physical machine — and different scanners do not produce perfectly identical outputs, even when running the same protocol.

This means multi-site imaging trials require a layer of infrastructure that non-imaging trials do not: acquisitions from different vendors, field strengths, and software versions must be pushed through standardized pipelines — de-identification, conversion to analysis-ready formats (e.g., BIDS), gradient nonlinearity and distortion correction, motion and eddy-current correction, intensity normalization, registration to a common template or atlas, and segmentation or quantitative map generation — before any site-to-site comparison is statistically defensible. Each of these stages is a potential source of site-level bias: a pipeline that silently fails on one vendor's DWI, a FreeSurfer version mismatch between sites, or a B1 inhomogeneity that skews cortical thickness estimates can all confound the endpoint just as readily as a reader disagreement. Sponsors therefore need pipeline version control, per-site QC gating, and harmonization methods (ComBat and related approaches) applied consistently across the study.

Each of these steps introduces failure modes that compound across a large site network.
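As a concrete illustration, the location/scale adjustment at the heart of statistical harmonization can be sketched in a few lines. This is a deliberate simplification of ComBat: it omits the empirical-Bayes shrinkage of site parameters and ignores biological covariates such as age and diagnosis, both of which a real implementation must preserve.

```python
from statistics import mean, stdev

def harmonize(values_by_site):
    # Rescale each site's values to the pooled mean and SD, removing
    # per-site location and scale shifts. This is the core adjustment
    # ComBat makes, minus empirical-Bayes shrinkage and covariate
    # preservation (illustrative only). Assumes each site contributes
    # at least two values with nonzero spread.
    pooled = [v for vals in values_by_site.values() for v in vals]
    grand_mean, grand_sd = mean(pooled), stdev(pooled)
    harmonized = {}
    for site, vals in values_by_site.items():
        site_mean, site_sd = mean(vals), stdev(vals)
        harmonized[site] = [
            grand_mean + (v - site_mean) / site_sd * grand_sd for v in vals
        ]
    return harmonized
```

After this adjustment, the per-site means and variances coincide, which is exactly why a naive version like this can also erase real biological differences between site populations if covariates are not modelled.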

Centralised imaging oversight — where all images are routed through a single platform, quality-checked at receipt, and processed by the same algorithms — is the most common approach to controlling this complexity in large multi-site imaging trials. It does not eliminate site variability, but it makes variability visible, measurable, and correctable before it contaminates the endpoint dataset.

Seven operational challenges in multi-site imaging trials

1. Protocol adherence

Every site must acquire images using the correct parameters: field strength, sequence type, slice thickness, contrast dosing, and timing relative to treatment. Enforcing this remotely across dozens of sites — with different scanner models, different technicians, and different local practices — is difficult and cannot rely on site self-reporting alone.

Protocol deviations that compromise image evaluability can result in that patient's data being excluded from endpoint analysis. At scale, a 10% deviation rate across a 200-patient trial represents 20 patients whose imaging data cannot be used — a meaningful reduction in statistical power.

Automated quality checking at the point of upload is the most effective mitigation: flagging deviation patterns before images enter the analysis pipeline, while there is still time to correct them.
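A minimal sketch of such a check, assuming the acquisition parameters have already been extracted from the DICOM header into a plain dictionary. The tag names follow DICOM keyword conventions, but the tolerance ranges below are illustrative, not taken from any specific trial protocol.

```python
# Illustrative protocol specification: tag -> (min, max) accepted range.
# A real system would load this from the trial's imaging charter.
PROTOCOL = {
    "SliceThickness": (0.9, 1.1),        # mm
    "RepetitionTime": (2200, 2400),      # ms
    "MagneticFieldStrength": (2.89, 3.01),  # T; 3T scanners often report just under 3.0
}

def check_acquisition(header):
    # Return a list of human-readable deviation flags for one acquisition;
    # an empty list means the scan passed the automated parameter check.
    flags = []
    for tag, (lo, hi) in PROTOCOL.items():
        value = header.get(tag)
        if value is None:
            flags.append(f"{tag}: missing from header")
        elif not lo <= value <= hi:
            flags.append(f"{tag}: {value} outside [{lo}, {hi}]")
    return flags
```

Because the check runs at upload, a flagged acquisition can be queried back to the site while the patient and the protocol window are potentially still available.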

2. Data completeness

Tracking which sites have submitted images for which patients at which visits — and following up on missing submissions before the collection window closes — is one of the most time-consuming operational tasks in a large imaging trial.

Missing imaging data at a single visit is often unrecoverable within protocol-defined windows: patients have left the clinic, scanner time is gone, and the protocol window has passed. Real-time submission dashboards that flag missing data across all sites simultaneously, rather than surfacing gaps during manual reconciliation weeks later, are a key factor in preventing unrecoverable data loss.
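The core triage logic of such a dashboard, separating gaps that can still be corrected from those already past the protocol window, can be sketched as follows (the visit keys and dates are illustrative):

```python
from datetime import date

def triage_missing(expected_windows, received, today):
    # expected_windows: (site, patient, visit) -> window-close date
    # received: set of (site, patient, visit) keys with an accepted image
    # Returns visits still correctable vs. those now unrecoverable.
    still_open, unrecoverable = [], []
    for key, window_close in expected_windows.items():
        if key in received:
            continue
        (still_open if today <= window_close else unrecoverable).append(key)
    return still_open, unrecoverable
```

Run daily across the whole site network, this is the difference between a query the site can still act on and a line in a missing-data report weeks later.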

3. Site variability

Even when imaging protocols are followed correctly, scanner hardware differences between sites introduce systematic variability in image characteristics. MRI scanners from Siemens, GE, Philips, Canon, and United Imaging use different reconstruction algorithms; the same brain imaged on two different 3T scanners will produce images with measurably different signal properties. This variability must be addressed through standardization and harmonization before images from different sites can be combined in a single analysis.

4. PACS heterogeneity

Each site's Picture Archiving and Communication System (PACS) is configured differently — different vendor implementations, different DICOM tag populations, different local anonymisation practices. Connecting dozens of different PACS configurations to a single central imaging repository requires per-site integration work and flexible anonymisation workflows that account for site-level variability rather than assuming a fixed tag structure.
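One way to accommodate site-level variability is an anonymisation step that layers per-site rules on top of a base rule set. Headers are shown as plain dictionaries keyed by DICOM keyword for illustration; a production pipeline would use a DICOM library and a far more extensive tag list than the three shown here.

```python
# Base identifying tags removed for every site (illustrative subset only).
BASE_REMOVE = {"PatientName", "PatientBirthDate", "InstitutionName"}

def anonymise(header, site_extra_remove=(), id_map=None):
    # Strip the base identifying tags plus any per-site additions, then
    # swap the local PatientID for the trial-assigned subject code.
    remove = BASE_REMOVE | set(site_extra_remove)
    out = {k: v for k, v in header.items() if k not in remove}
    if id_map is not None and "PatientID" in out:
        out["PatientID"] = id_map[out["PatientID"]]
    return out
```

The per-site `site_extra_remove` argument is the point: a site that populates, say, physician names in non-standard tags gets those tags added to its own rule set without changing the base configuration used everywhere else.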

5. Imaging biomarkers

Quantitative biomarkers such as hippocampal volume, cortical thickness, white matter lesion load, DWI-derived metrics, and PET SUVR are only as trustworthy as the consistency with which they are derived across sites and timepoints. Scanner differences, coil changes mid-trial, software upgrades at a site, and pipeline version drift all introduce non-biological variance that can swamp the treatment effect the trial is powered to detect. Operationally, this means sponsors need locked pipeline versions with a documented re-processing policy, phantom and traveling-subject calibration where feasible, and statistical harmonization (ComBat or equivalent) applied consistently — plus per-site QC gating so that a biomarker value never reaches the analysis dataset without provenance attached.
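One way to make "provenance attached" concrete is a record type that carries the QC result and pipeline version alongside the value, with a gate that filters on them. The field names below are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BiomarkerValue:
    # One quantitative value plus the provenance needed to audit it.
    subject_id: str
    visit: str
    site: str
    name: str              # e.g. "hippocampal_volume_mm3"
    value: float
    pipeline_version: str  # pinned version that produced the value
    qc_passed: bool

def analysis_ready(records):
    # Gate: only QC-passed values produced by a pinned pipeline version
    # are released into the analysis dataset.
    return [r for r in records if r.qc_passed and r.pipeline_version]
```

The gate is deliberately dumb: it does not decide whether a value is plausible, only whether the evidence needed to defend it at inspection travels with it.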

6. Central review

Long neuro trials run for years, and a reader's calibration at month 36 is not the same as at month 3 — a problem compounded when endpoints require subjective judgment (ARIA-E/ARIA-H detection, RANO response, lesion counting). Central review operations need scheduled re-qualification reads, blinded duplicate reads at a defined sampling rate, and live kappa/ICC monitoring with pre-specified thresholds that trigger retraining before drift contaminates the endpoint. Layered on top is blinding integrity: randomized presentation order, scrubbed DICOM headers, clean adjudicator assignment rules, and a full audit trail — because a single leaked site ID or timepoint marker can invalidate reads retroactively and force costly re-reads.
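Cohen's kappa, one of the agreement statistics mentioned above, is simple to compute for two readers making categorical calls on the same scans (the category labels in the example are illustrative; any alert threshold applied to the result would be trial-specific):

```python
def cohens_kappa(reads_a, reads_b):
    # Chance-corrected agreement between two readers' categorical calls
    # on the same scans: (p_observed - p_expected) / (1 - p_expected).
    n = len(reads_a)
    assert n == len(reads_b) and n > 0
    categories = set(reads_a) | set(reads_b)
    p_obs = sum(a == b for a, b in zip(reads_a, reads_b)) / n
    p_exp = sum((reads_a.count(c) / n) * (reads_b.count(c) / n)
                for c in categories)
    if p_exp == 1.0:  # degenerate case: both readers use one category only
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)
```

Computed continuously over blinded duplicate reads, a falling kappa is the early signal that triggers re-qualification before drift reaches the endpoint data.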

7. Regulatory documentation

Multi-site imaging trials require site-level regulatory documentation for every participating centre: qualification records, training logs, protocol confirmation records, deviation logs, and audit trails covering every image submission, quality check, and processing event throughout the trial. This documentation must be maintained in inspection-ready form for the full retention period — not assembled retrospectively before a regulatory filing.

→ See also: Audit trail in clinical imaging · PACS integration in clinical trials · DICOM anonymization

Site qualification in multi-site imaging trials

Before a site can submit images, it must be qualified — a process confirming that the site's scanner meets the trial's technical requirements, its staff understand the imaging protocol, and its data transfer infrastructure is connected and tested.

Site qualification in imaging trials typically covers:

Scanner assessment — Confirmation of field strength, software version, coil configuration, and compliance with the trial's imaging protocol specification.

Protocol review — Formal review and approval of the site's local acquisition settings against the master imaging protocol, including any site-specific adaptations approved by the sponsor.

Staff training — Training of site radiographers and coordinators on the imaging protocol, patient preparation requirements, and image submission procedure — documented with completion records.

Qualification scan — A phantom scan or first patient scan submitted and reviewed by the central imaging team before the site is authorised to enrol. This is the operational gate that prevents protocol-deviant images from entering the trial dataset from day one.

PACS integration and testing — Setup of the DICOM routing connection between the site's PACS and the central imaging platform, including a test transfer confirming anonymisation, image quality checks, and receipt confirmation are all functioning correctly.

Sites that fail qualification must remediate the identified issues before being permitted to submit trial images.

 

Setting up a new multi-site imaging study?
QMENTA's site onboarding team handles scanner qualification, PACS integration, and protocol training across all sites — reducing setup time so your clinical team can focus on science.
Talk to site operations →

 

Managing imaging data at scale

Effective multi-site imaging trial management requires a centralised platform providing real-time visibility across all sites simultaneously — not a spreadsheet-based tracking system reconciled weekly.

The platform must receive images from each site, perform automated quality checks and anonymisation at receipt, track submissions against the expected visit schedule, flag missing or protocol-deviant submissions, route confirmed images to the central review facility or analysis pipeline, and maintain a complete audit trail of every event.

Sponsors and imaging CROs monitor site performance through dashboards showing submission rates, query volumes, and protocol deviation rates by site — enabling proactive intervention before data gaps become unrecoverable.

QMENTA's Imaging Hub is built for this operational model, supporting multi-centre imaging studies across sites globally. The platform served as the operational backbone for a multi-centre MS study connecting ten leading academic institutions that contributed imaging data to research supporting the revision of the McDonald Criteria.²

Key takeaways

  • Multi-site imaging trials face challenges non-imaging trials do not — scanner variability, DICOM complexity, reader qualification, and harmonization requirements on top of standard multi-site coordination
  • Protocol deviation at a single site can render that patient's imaging data unusable — automated quality checking at upload is the primary mitigation
  • Site qualification is typically a required operational gate in regulated imaging trials before any site can submit trial images — consistent with Good Clinical Practice (ICH E6) requirements for data integrity and traceability
  • Centralised cloud-based imaging management provides real-time visibility across all sites and prevents version drift in AI analysis tools
  • Multi-site and multi-centre are interchangeable terms — multi-centre is more common in EU and academic contexts, multi-site in North American industry usage

 

By Paulo Rodrigues, PhD, Chief Technology Officer and Co-Founder at QMENTA
Paulo Rodrigues leads technology strategy at QMENTA and writes about imaging clinical trials, protocol standardization, real-time QC, and compliance-ready neuroimaging workflows for multi-site studies.

 

¹ IQVIA Institute for Human Data Science. Global Trends in R&D. 2023. iqvia.com/insights/the-iqvia-institute

² QMENTA. 2025 Year in Review: Clinical Imaging Infrastructure Milestones. qmenta.com/blog/qmenta-2025-year-in-review-clinical-imaging-infrastructure-milestones

Running a multi-site imaging trial?

QMENTA's Imaging Hub manages site qualification, PACS integration, real-time submission tracking, and centralised review across trials of any scale — from five sites to several hundred.

See the platform →


Frequently asked questions

What is the difference between a multi-site and a multi-centre imaging trial?

The terms are generally used interchangeably. Both refer to clinical studies that enrol participants and collect imaging data at more than one geographical location. Multi-centre is the more common usage in European regulatory and academic contexts; multi-site is used more frequently in North American industry settings. The operational challenges and regulatory requirements are identical regardless of which term is used.

How many sites can a clinical imaging trial support?

There is effectively no fixed technical upper limit in modern cloud-based imaging platforms. Trials range from five to ten sites in early-phase studies to several hundred in large global Phase III programmes. The practical limits are operational — the imaging team's capacity to qualify, train, and monitor sites — rather than technical. Platforms with automated site monitoring, real-time submission tracking, and centralised query management can scale to large site networks without proportionally increasing operational overhead.

What is a protocol deviation in a multi-site imaging trial?

A protocol deviation occurs when a site acquires images that do not conform to the trial's specified parameters — using the wrong MRI sequence, missing a required contrast injection, scanning at the wrong time point relative to treatment, or using scanner settings outside the approved range. Deviations that affect image evaluability are classified as major deviations and may result in the image being excluded from endpoint analysis. Automated quality checks at the point of submission can detect common deviation patterns before the image enters the analysis pipeline, when there is still an opportunity for the site to reacquire if the protocol window permits.

What happens if a site has persistent imaging quality problems?

Persistent quality problems — repeated protocol deviations, high query volumes, or systematic submission delays — are typically addressed through a structured remediation process: targeted retraining, protocol clarification, and additional monitoring. Sites that cannot achieve acceptable quality levels after remediation may be restricted or suspended from imaging collection for the remainder of the trial. Sponsors and imaging CROs should monitor site performance metrics continuously throughout the trial, not only at scheduled site visits, so that emerging quality problems are identified and addressed while they are still recoverable.

Can a new site join a multi-site imaging trial after it has already started?

Yes — late site additions are common in large trials when enrolment is slower than projected or additional geographic coverage is needed. Late sites must complete the full qualification process — scanner assessment, protocol review, staff training, qualification scan, and PACS integration — before submitting trial images. Their imaging data is subject to the same quality checking, harmonization, and audit trail requirements as data from sites that were present at trial initiation. The qualification timeline for a late site typically runs a few weeks, depending on site IT responsiveness and scanner configuration.