
AI Agent Facilitation

Feature Status: Phase 1 Complete (Foundation Infrastructure)
Last Updated: 2026-01-21
Parent Document: Project Plan
Implementation Plan: ai-agent-implementation-plan.md


Implementation Status

| Phase | Description | Status |
| --- | --- | --- |
| Phase 1 | Foundation - Database models, LLM service, ChangeSet infrastructure | ✅ Complete |
| Phase 2 | Protocol Ingestion & Study Structure Generation | 🔜 Next |
| Phase 3 | Battery & Assessment Configuration Assistance | 🔜 Planned |
| Phase 4 | Amendment Impact Analysis & Cutover Planning | 🔜 Planned |

Phase 1 Deliverables

Database Models (server/alembic/versions/025_ai_agent_foundation.py):

  • AgentDocument - Uploaded context documents (protocols, SoAs)
  • AgentRun - AI task execution tracking
  • ChangeSet - Container for AI-generated proposals
  • ChangeSetItem - Individual reviewable artifact changes
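For orientation, here is a minimal sketch of how the ChangeSet tables might be modeled; the column names are illustrative assumptions, not the actual contents of the migration:

```python
# Illustrative sketch only; column names are assumptions, not the
# actual contents of 025_ai_agent_foundation.py.
import enum

from sqlalchemy import Column, DateTime, Enum, ForeignKey, String, Text
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class ChangeSetStatus(enum.Enum):
    DRAFT = "draft"
    VALIDATED = "validated"
    APPLIED = "applied"
    REJECTED = "rejected"

class AgentRun(Base):
    __tablename__ = "agent_runs"
    id = Column(String, primary_key=True)            # UUID
    task = Column(String)                            # e.g. "protocol_ingestion"
    status = Column(String)

class ChangeSet(Base):
    __tablename__ = "change_sets"
    id = Column(String, primary_key=True)
    agent_run_id = Column(String, ForeignKey("agent_runs.id"))
    status = Column(Enum(ChangeSetStatus), default=ChangeSetStatus.DRAFT)
    created_at = Column(DateTime)                    # server-side UTC
    items = relationship("ChangeSetItem", back_populates="change_set")

class ChangeSetItem(Base):
    __tablename__ = "change_set_items"
    id = Column(String, primary_key=True)
    change_set_id = Column(String, ForeignKey("change_sets.id"))
    target_type = Column(String)                     # "StudyEventDef", "FormDef", ...
    action = Column(String)                          # CREATE / UPDATE
    payload = Column(Text)                           # proposed object as JSON
    rationale = Column(Text)                         # why the agent proposed it
    change_set = relationship("ChangeSet", back_populates="items")
```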

Services:

  • LLMService (server/app/services/llm_service.py) - Anthropic Claude integration with structured output, streaming, and token tracking
  • AgentService (server/app/services/agent_service.py) - Task orchestration, document and run management
  • ChangeSetValidator (server/app/services/changeset_validator.py) - Pre-apply validation with risk flagging
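The structured-output piece typically relies on Anthropic's tool-use API to force JSON rather than free text. A hedged sketch of the pattern LLMService likely wraps; the tool schema, model id, and prompt are illustrative assumptions:

```python
# Sketch of forced structured output via Anthropic tool use; the tool
# schema, model id, and prompt here are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

visit_schedule_tool = {
    "name": "emit_visit_schedule",
    "description": "Return the extracted visit schedule as structured data.",
    "input_schema": {
        "type": "object",
        "properties": {
            "visits": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "day": {"type": "integer"},
                        "window_days": {"type": "integer"},
                    },
                    "required": ["name", "day"],
                },
            }
        },
        "required": ["visits"],
    },
}

soa_text = "Visit 1 (Day 0), Visit 2 (Day 14 +/- 3) ..."  # extracted SoA text

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=2048,
    tools=[visit_schedule_tool],
    tool_choice={"type": "tool", "name": "emit_visit_schedule"},
    messages=[{"role": "user", "content": f"Extract the visit schedule:\n{soa_text}"}],
)
draft = response.content[0].input          # machine-checkable dict, never prose
tokens_used = response.usage.input_tokens + response.usage.output_tokens
```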

API Router (server/app/routers/ai_assistant.py):

  • Document management endpoints
  • Agent run management endpoints
  • ChangeSet lifecycle endpoints
  • Full audit logging for all operations
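A minimal sketch of what one lifecycle endpoint might look like; the paths, in-memory store, auth dependency, and audit print are stand-ins, not the actual router:

```python
# Hypothetical shape of one ChangeSet lifecycle endpoint; the in-memory
# store, auth stub, and audit print are stand-ins for the real services.
from fastapi import APIRouter, Depends, HTTPException

router = APIRouter(prefix="/ai-assistant", tags=["ai-assistant"])

_changesets: dict[str, dict] = {}        # stand-in for the database layer

def get_current_user() -> str:
    return "dm_user"                     # stand-in for real authentication

@router.post("/changesets/{changeset_id}/validate")
def validate_changeset(changeset_id: str, user: str = Depends(get_current_user)):
    changeset = _changesets.get(changeset_id)
    if changeset is None:
        raise HTTPException(status_code=404, detail="ChangeSet not found")
    # run the pre-apply validators, then write the audit record
    report = {"changeset_id": changeset_id, "status": "validated", "warnings": []}
    print(f"AUDIT actor={user} action=VALIDATE target={changeset_id}")  # stand-in
    return report
```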


Overview

This document specifies the AI Agent Facilitation capability for the Metricis EDC platform. AI agents are embedded as governed co-pilots for configuration and governance, not autonomous actors.

The winning pattern is Agent-as-Co-Pilot, with (1) defined scopes, (2) verifiable outputs, (3) approval gates, and (4) audit trails—rather than a generic "chatbot."


Where it belongs in the UI

1) New top-level item under Configure

Add a first-class item:

  • Configure → Study Assistant (AI)

This is the workspace for authoring, reviewing, validating, and promoting configuration artifacts.

2) Contextual “Assist” entry points across Configure

In addition to the main workspace, add “Assist with AI” actions in:

  • Study Design (visits/forms)
  • Assessments Designer (batteries/modules/event linking)
  • eConsent Designer (consent templates/versioning/triggers)
  • Data Standards (terminology, rules, mappings)
  • Metadata Versions (diff explanation, impact analysis, release notes)

The main workspace is the “control room,” while contextual assists are “inline power tools.”


What the agent should do for study initiation/setup

Think in terms of setup jobs that produce structured artifacts, not free-form advice. High-value tasks:

A. Protocol ingestion → structured study skeleton

Inputs:

  • Protocol PDF (or synopsis)
  • Schedule of Activities table (SoA)
  • CRF spec (if available)
  • Consent doc(s)
  • Assessment battery plan

Outputs (agent drafts):

  • StudyEventDefs (visit schedule, windows, repeats, unscheduled events)
  • FormDefs and ItemDefs stubs
  • Assessment event binding plan (which battery at which visit)
  • Draft rules checklist (required fields, cross-checks)
  • Initial mapping stubs (ODM OIDs, export domains)

Human gate:

  • DM reviews/edits; PI approves publication.

B. Battery builder acceleration (jsPsych)

Inputs:

  • Battery specification (domains, modules, order, timing)
  • Language variants
  • Device constraints

Outputs:

  • Battery configuration drafts
  • Event Linking proposals
  • Scoring metadata stub (analysis-ready fields vs raw signals)
  • ODM ItemDef mapping table for each output

C. eConsent configuration acceleration

Inputs:

  • Consent text + required signature workflow
  • Trigger rules (re-consent conditions)

Outputs:

  • ConsentVersion drafts, language variants
  • Workflow config (signers, sequencing)
  • Consent gating rules tied to StudyEventDefs
  • Re-consent triggers tied to Metadata Version publish

D. Standards alignment and edit-check authoring

Inputs:

  • Data dictionary
  • Terminology requirements (MedDRA/WHO-DD; local codelists)
  • CDISC expectations

Outputs:

  • CodeLists drafts
  • Units definitions
  • Range checks, missingness checks
  • Cross-form checks (e.g., sex vs pregnancy fields)
  • Query triggers aligned with monitoring plan

E. Amendment planning assistant (stress-test driven)

Inputs:

  • Proposed amendment description
  • Which events/forms/batteries/consents change

Outputs:

  • Impact analysis (“who/what is affected”)
  • Cutover plan (queue reconciliation policy)
  • Participant cohorts (pending re-consent; in-flight assessments)
  • Draft “Amendment SOP run sheet” and required audit events

How it should work (the operational model)

1) Treat the agent like a “generator of draft change-sets”

Core principle: the agent does not “edit the study.” It produces a ChangeSet that can be reviewed and applied.

ChangeSet contains:

  • Proposed objects to create/update (StudyEventDefs, FormDefs, BatteryVersions, ConsentVersions, CodeLists, Rules)
  • Diff against current metadata version
  • Rationale for each change
  • Risk flags (breaking changes, unmapped outputs, consent gating gaps)
  • Suggested tests to run

This is essential for regulatory defensibility.
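As a concrete illustration, a serialized ChangeSet might look like the following; the field names are assumptions, not the shipped schema:

```python
# Illustrative ChangeSet payload; all field names are assumptions.
changeset = {
    "id": "cs-0042",
    "base_metadata_version": "MDV-1.3",
    "items": [
        {
            "action": "CREATE",
            "target_type": "StudyEventDef",
            "payload": {"oid": "SE.V3_FOLLOWUP", "name": "Visit 3 - Follow-up",
                        "window_days": 7},
            "rationale": "Derived from SoA row 'Follow-up (Week 4)'",
            "risk_flags": [],
        },
        {
            "action": "UPDATE",
            "target_type": "FormDef",
            "payload": {"oid": "F.VITALS", "add_items": ["IT.WEIGHT"]},
            "rationale": "Protocol adds weight measurement at all visits",
            "risk_flags": ["breaking_change: participants mid-visit affected"],
        },
    ],
    "suggested_tests": ["validate ODM references", "compile edit checks"],
}
```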

2) Two-stage commit with approvals

  • Generate draft (agent)
  • Review and edit (DM/authorized config roles)
  • Approve and publish (PI signature where required)

This aligns with your Metadata Versions governance and Amendment SOP.

3) Enforce “agent scopes” and “no silent actions”

Define scopes per task:

  • “Draft only” (default)
  • “Apply in draft metadata version” (DM only)
  • “Never publish/activate” (agent cannot publish; PI must sign)

Even if you eventually allow auto-application, the default should be draft + review.
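A minimal sketch of how the Policy Engine could enforce these scopes; the enum values and role strings are illustrative:

```python
# Minimal scope-enforcement sketch; enum values and roles are assumptions.
from enum import Enum

class AgentScope(Enum):
    DRAFT_ONLY = "draft_only"                    # default
    APPLY_TO_DRAFT_VERSION = "apply_to_draft"    # DM only
    # deliberately no PUBLISH scope: the agent can never publish

def can_apply(scope: AgentScope, actor_role: str) -> bool:
    """Only a DM may apply, and only under the apply-to-draft scope."""
    return scope is AgentScope.APPLY_TO_DRAFT_VERSION and actor_role == "DM"

assert not can_apply(AgentScope.DRAFT_ONLY, "DM")   # drafts are never auto-applied
assert can_apply(AgentScope.APPLY_TO_DRAFT_VERSION, "DM")
```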


What backend architecture you need

A. Agent Service as a first-class subsystem

Create an internal service (or module) with:

  • Agent Orchestrator
     • Routes tasks, manages context, handles tools
  • Artifact Store
     • Stores prompts, inputs, outputs, and structured drafts
  • ChangeSet Engine
     • Converts agent suggestions into validated, typed objects
  • Policy Engine
     • Enforces role scopes and action restrictions
  • Audit Hooks
     • Logs every agent run, suggestion, applied change, and approval

B. Typed outputs (non-negotiable)

The agent must output machine-checkable structures, e.g.:

  • JSON for StudyEventDefs, FormDefs
  • JSON/YAML for CodeLists and rules
  • Mapping tables (CSV/JSON) for ODM/SDTM alignment

This allows automatic validation and prevents “chatty” ambiguity.
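For example, a Pydantic model can reject malformed agent output before it ever reaches the ChangeSet Engine; the field names below follow ODM loosely and are assumptions:

```python
# Sketch: validating an agent-emitted StudyEventDef draft with Pydantic.
from pydantic import BaseModel, Field, ValidationError

class StudyEventDefDraft(BaseModel):
    oid: str = Field(pattern=r"^SE\.[A-Z0-9_]+$")   # ODM-style OID
    name: str
    repeating: bool = False
    window_days: int | None = None

raw = {"oid": "SE.V3_FOLLOWUP", "name": "Visit 3 - Follow-up", "window_days": 7}
try:
    draft = StudyEventDefDraft(**raw)   # malformed agent output raises here
except ValidationError as exc:
    print(exc)                          # route back to the agent or a human
```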

C. Validation pipeline before anything is applied

Every ChangeSet should run validators:

  • ODM integrity checks (OIDs, references)
  • Mapping completeness checks
  • Rule compilation checks
  • Consent gating coverage checks
  • Assessment delivery feasibility checks (device, language, timing)
  • “Breaking change” detection

Only validated ChangeSets can be applied to a draft metadata version.
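A sketch of how the pipeline could be wired, with one example validator (duplicate-OID detection) standing in for the full list above; the names are illustrative, not the actual ChangeSetValidator internals:

```python
# Validation-pipeline sketch; validator names mirror the checks above
# and are illustrative, not the actual ChangeSetValidator internals.
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    check: str
    passed: bool
    messages: list[str] = field(default_factory=list)

def check_odm_integrity(changeset: dict) -> ValidationResult:
    """Flag duplicate OIDs across proposed objects."""
    seen, dupes = set(), []
    for item in changeset["items"]:
        oid = item["payload"].get("oid")
        if oid is None:
            continue
        if oid in seen:
            dupes.append(oid)
        seen.add(oid)
    return ValidationResult("odm_integrity", not dupes,
                            [f"duplicate OID: {o}" for o in dupes])

VALIDATORS = [check_odm_integrity]  # plus mapping, rules, consent, feasibility...

def validate_changeset(changeset: dict) -> bool:
    results = [v(changeset) for v in VALIDATORS]
    return all(r.passed for r in results)  # only fully valid sets may be applied
```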


How it appears to users (UX pattern)

Configure → Study Assistant (AI)

A workspace with tabs:

  1. Intake
     • Upload protocol/SoA/CRF/consent docs
     • Provide a structured “study facts” form (phase, arms, sites, visits)
  2. Drafts
     • Generated objects grouped by category (Visits, Forms, Assessments, Consent, Standards, Rules)
     • Each draft has: preview, rationale, confidence flags, “needs human input”
  3. Diff & Impact
     • Shows what changes relative to current metadata
     • Who is affected (participant cohorts) if already live
  4. Validation
     • Pass/fail checks, warnings, required fixes
  5. Apply to Draft Version
     • DM applies to draft metadata version
     • Produces a release note summary for PI review
  6. Audit Log
     • Every run, input sources, outputs, who approved what

Inline assists

Inside Study Design / Assessments Designer / eConsent Designer, add:

  • “Ask AI to draft…” buttons that route to the assistant but keep context.

Where it adds the most value in study initiation

If you prioritize, do these first:

  1. SoA → Visit schedule + windows + event linking
  2. Battery plan → Batteries + event linking + mapping stubs
  3. Consent docs → Consent versions + triggers + gating
  4. Rules starter set (missingness, ranges, critical cross-checks)
  5. Release note generation for Metadata Version publish

These reduce setup time dramatically while remaining safe.


Compliance and risk controls you should implement up front

  • Agent outputs are always labeled Draft until approved.
  • Agent cannot:
     • publish metadata versions
     • activate consent versions
     • lock data
     • override consent gating
  • Every agent action is auditable:
     • input artifacts referenced
     • output artifacts generated
     • user who applied changes
     • approvals and signatures

Minimal sidebar revision to support “first-class” AI

Under Configure, add:

  • Study Assistant (AI)

Optionally, also add under Operations:

  • AI Settings
     • model selection, redaction policy, retention, allowed tools

But keep “Study Assistant” in Configure where the work lives.


Below is a UI wireframe description (pages + components) for making AI agent facilitation a first-class function in your EDC, specifically focused on study initiation and setup. This is written so it can be handed directly to a product designer, frontend lead, or AI-agent engineer.

I will describe:

  1. Global UI patterns
  2. The Study Assistant (AI) primary workspace
  3. Contextual inline AI assists
  4. Cross-cutting components (diffs, validation, approvals, audit)
  5. Role-specific affordances (DM vs PI vs CRC)

No visual mockups are assumed; this is a structural wireframe spec.


Global UI Patterns (Applies Everywhere)

Persistent Elements

  • Left Sidebar (as defined in sidebar-menu.md)
  • Top Context Bar
     • Active Study
     • Metadata Version (e.g., “Draft – Amendment v2.0”)
     • Environment (Draft / Published)
     • Role indicator
  • Right Utility Panel (collapsible)
     • Contextual AI suggestions
     • Validation warnings
     • Help / SOP links

AI Interaction Pattern (Standardized)

Every AI interaction must:

  • Declare scope (“Draft only”, “Draft metadata version”)
  • Declare outputs (what artifacts will be created)
  • Produce structured drafts, not free text
  • Be reviewable, editable, and auditable

1. Configure → Study Assistant (AI)

This is the primary AI workspace. Think of it as a controlled “command center,” not a chat UI.


Page 1: Study Assistant – Overview

Purpose:

Orient the user, show current state, and launch AI-assisted setup jobs.

Components

A. Header

  • Title: Study Assistant (AI)
  • Subtitle: “AI-assisted study setup and amendment planning”
  • Status badges:
     • Active Study
     • Metadata Version (Draft / Published)
     • Last AI run timestamp

B. Job Launcher Panel (Primary CTAs)

Card-based actions:

  • Ingest Protocol & Draft Study Structure
  • Draft Visit Schedule from SoA
  • Build Assessments & Batteries
  • Configure eConsent
  • Generate Data Standards & Rules
  • Plan Protocol Amendment

Each card shows:

  • Inputs required
  • Outputs generated
  • Roles required to apply results

C. Recent AI Jobs

Table:

  • Job name
  • Triggered by
  • Status (Draft ready / Needs input / Failed validation)
  • Affected objects
  • Link to results

Page 2: Intake (per AI job)

Purpose:

Collect structured inputs and source documents.

Components

A. Input Checklist (Left)

  • Required inputs (with completion status)
     • Protocol document
     • Schedule of Activities
     • Consent document(s)
     • Assessment plan
  • Upload controls
  • Metadata form:
     • Study phase
     • Arms/cohorts
     • Countries/sites
     • Visit cadence assumptions

B. Source Preview (Center)

  • Embedded PDF/doc viewer
  • Highlighted sections (AI-extracted)
  • Manual annotations allowed

C. AI Configuration Panel (Right)

  • Model selection (if allowed)
  • Output scope:
     • Draft only
     • Apply to draft metadata version
  • Constraints:
     • “Do not create new visits”
     • “Do not change existing batteries”
  • Run AI Draft button

Page 3: Drafts & Artifacts

Purpose:

Review AI-generated outputs as **first-class configuration artifacts**.

Layout

Tabbed by artifact type:

  • Visits & Schedule
  • Forms & Items
  • Assessments & Batteries
  • eConsent
  • Standards & Rules
  • Mappings

Artifact Card (reusable component)

Each draft artifact appears as a card with:

  • Name (e.g., “Visit 3 – Follow-up”)
  • Type (StudyEventDef, BatteryVersion, ConsentVersion)
  • Status:
     • Draft
     • Needs review
     • Validation error
  • Confidence indicators:
     • “Derived from protocol section 6.2”
  • Actions:
     • View
     • Edit
     • Reject
     • Accept into ChangeSet

Clicking View opens a structured detail panel.


Page 4: Artifact Detail View

Purpose:

Allow precise, human-controlled review and editing.

Components

A. Structured Definition (Center)

  • JSON-like form editor (not raw JSON)
  • Fields aligned to domain model:
     • For visits: windows, repeats, consent requirement
     • For batteries: modules, order, scoring
     • For consent: signatures, triggers

B. AI Rationale Panel (Right)

  • “Why this was generated”
  • Source references
  • Assumptions made
  • Risk flags (“Potential breaking change”)

C. Review Actions

  • Accept
  • Edit
  • Mark as needs human input
  • Reject (with reason)

Page 5: Diff & Impact Analysis

Purpose:

Show exactly what will change if drafts are applied.

Components

A. Metadata Diff Viewer

  • Side-by-side:
     • Current metadata version
     • Proposed version
  • Highlighted changes:
     • Added
     • Modified
     • Deprecated

B. Impact Summary

  • Affected visits
  • Affected participants (if live)
  • Affected assessments
  • Consent implications

C. Risk Flags

  • Requires re-consent
  • Breaks longitudinal consistency
  • Requires PI approval

Page 6: Validation & Checks

Purpose:

Prevent unsafe or non-compliant application.

Validation Panels

Each with pass/warn/fail:

  • ODM integrity (OIDs, references)
  • Assessment mapping completeness
  • Consent gating coverage
  • Rule compilation
  • Device feasibility (jsPsych)
  • Amendment SOP alignment

Blocking failures are explicit.


Page 7: Apply to Draft Metadata Version

Purpose:

Controlled promotion into governance workflow.

Components

  • Summary of accepted drafts
  • Target metadata version selector
  • Auto-generated release notes
  • Apply Changes button (DM only)

System actions:

  • Creates ChangeSet
  • Updates draft metadata version
  • Emits audit events

Page 8: Audit & History

Purpose:

Inspection-grade traceability.

Table

  • Timestamp
  • User
  • AI task
  • Inputs used
  • Outputs generated
  • Applied? (Y/N)
  • Approval references

All immutable.


2. Inline AI Assists (Contextual)

These appear as “Ask AI to Draft…” buttons within existing Configure pages.


Study Design (Authoring)

Button: “Draft visits from protocol”

Opens:

  • Mini intake modal
  • Produces Visit drafts
  • Routes results to Study Assistant Drafts tab

Assessments Designer

Buttons:

  • “Draft battery from spec”
  • “Suggest event linking”
  • “Generate ODM mappings”

Outputs:

  • BatteryVersion drafts
  • Event binding proposals
  • Mapping tables

eConsent Designer

Buttons:

  • “Draft consent version”
  • “Suggest re-consent triggers”

Outputs:

  • ConsentVersion drafts
  • Trigger rules

Metadata Versions

Button:

  • “Explain differences”
  • “Assess amendment impact”

Outputs:

  • Human-readable diff explanation
  • Participant cohort analysis

3. Cross-Cutting Components

ChangeSet Viewer

  • Aggregates accepted drafts
  • Shows dependency graph
  • Required approvals listed

Approval Panel (PI)

  • Review summary
  • Sign & approve
  • Linked SOP checklist

Override Dialog (Rare)

  • Requires explicit justification
  • Restricted roles
  • Heavy audit logging

4. Role-Specific UI Behavior

Data Manager

  • Full access to Study Assistant
  • Can apply drafts to metadata
  • Cannot publish without PI

Principal Investigator

  • Read-only drafts
  • Approval/signature panels only
  • No editing

CRC / Monitor

  • No access to Study Assistant
  • May see amendment summaries post-publication

5. Why this wireframe works

  • AI is **embedded in governance, not bolted on**
  • Every AI action produces typed, reviewable artifacts
  • Approvals and audit are first-class
  • Matches mental models of DM/PI/Inspector
  • Scales from study initiation to amendments

Below is a formal Audit Event Taxonomy designed specifically for your EDC architecture, including AI-assisted study setup, metadata versioning, jsPsych assessments, eConsent, and protocol amendments. This taxonomy is intended to be:

  • Inspection-ready (ICH-GCP, 21 CFR Part 11)
  • Machine-enforceable (for logging and querying)
  • Human-interpretable (for monitors, auditors, inspectors)
  • Compatible with ODM concepts and your Amendment SOP

This is written as a normative specification, not commentary.


Audit Event Taxonomy

EDC Platform – Regulated Clinical Trials


1. Design Principles

All audit events MUST:

  1. Be immutable once written
  2. Be timestamped with server-side UTC time
  3. Capture actor identity (human or AI agent)
  4. Capture object identity and version
  5. Capture intent and outcome
  6. Be queryable by role, object, participant, and time
  7. Be retained for the regulatory retention period

AI-generated actions are not exempt and must be distinguishable from human actions.


2. Core Audit Event Schema (Conceptual)

Every audit event MUST include the following canonical fields:

| Field | Description |
| --- | --- |
| event_id | Globally unique identifier |
| event_category | High-level category (see Section 3) |
| event_type | Specific event type |
| actor_type | HUMAN or AI_AGENT |
| actor_id | User ID or Agent ID |
| actor_role | CRC, PI, DM, Monitor, Safety, System |
| study_id | Study identifier |
| metadata_version_id | Metadata version in effect |
| target_type | Object type acted upon |
| target_id | Object identifier (OID/UUID) |
| target_version | Version of the object |
| participant_id | Nullable; required for participant-level events |
| timestamp_utc | Server-side timestamp |
| action | CREATE, UPDATE, DELETE, VIEW, SIGN, APPLY, OVERRIDE |
| reason | Nullable free-text justification |
| source | UI, API, BackgroundJob, AI_Assistant |
| outcome | SUCCESS, FAILURE, PARTIAL |
| hash | Integrity hash of event payload |
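For illustration, one event instance conforming to this schema might look like the following; all values are invented for the example:

```python
# Illustrative audit event; all values are invented for the example.
event = {
    "event_id": "c2b7e3a0-0000-4000-8000-000000000000",
    "event_category": "AI_AGENT_OPERATIONS",
    "event_type": "AI_DRAFT_APPLIED",
    "actor_type": "HUMAN",              # a DM applied the draft; the AI
    "actor_id": "user-117",             # provenance lives on the target
    "actor_role": "DM",
    "study_id": "STUDY-001",
    "metadata_version_id": "MDV-1.3-draft",
    "target_type": "ChangeSet",
    "target_id": "cs-0042",
    "target_version": "1",
    "participant_id": None,
    "timestamp_utc": "2026-01-21T14:05:00Z",
    "action": "APPLY",
    "reason": None,
    "source": "UI",
    "outcome": "SUCCESS",
    "hash": "sha256:9f2c...",
}
```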

3. Audit Event Categories

All events MUST fall into exactly one category.

3.1 AUTHENTICATION & SESSION

Purpose: User identity and access control.

Event Type
USER_LOGIN
USER_LOGOUT
SESSION_TIMEOUT
MFA_CHALLENGE
MFA_SUCCESS
MFA_FAILURE

3.2 AUTHORIZATION & RBAC

Purpose: Permission enforcement and security controls.

Event Type
ROLE_ASSIGNED
ROLE_REVOKED
PERMISSION_DENIED
DELEGATION_GRANTED
DELEGATION_REVOKED

3.3 STUDY & METADATA GOVERNANCE

Purpose: Study structure, ODM-aligned metadata, and versioning.

Event Type
STUDY_CREATED
STUDY_UPDATED
METADATA_VERSION_CREATED
METADATA_VERSION_MODIFIED
METADATA_VERSION_APPROVED
METADATA_VERSION_PUBLISHED
METADATA_VERSION_ROLLED_BACK
METADATA_DIFF_VIEWED

Inspection relevance:

Demonstrates controlled lifecycle and PI oversight.


3.4 AI AGENT OPERATIONS

Purpose: Full traceability of AI-assisted actions.

Event Type
AI_JOB_STARTED
AI_JOB_COMPLETED
AI_JOB_FAILED
AI_DRAFT_CREATED
AI_DRAFT_REJECTED
AI_DRAFT_ACCEPTED
AI_DRAFT_APPLIED
AI_RISK_FLAG_RAISED

Required metadata additions:

  • AI model identifier/version
  • Input artifacts referenced
  • Output artifact IDs

3.5 STUDY DESIGN & CONFIGURATION

Purpose: Authoring and configuration actions.

Event Type
STUDY_EVENT_CREATED
STUDY_EVENT_UPDATED
FORM_CREATED
FORM_UPDATED
ITEM_CREATED
ITEM_UPDATED
RULE_CREATED
RULE_UPDATED
RULE_DEACTIVATED

3.6 ASSESSMENTS (jsPsych / eCOA)

Purpose: Assessment lifecycle and participant interaction.

Event Type
BATTERY_CREATED
BATTERY_VERSION_CREATED
BATTERY_VERSION_ACTIVATED
ASSESSMENT_QUEUED
ASSESSMENT_CANCELED
ASSESSMENT_REISSUED
ASSESSMENT_STARTED
ASSESSMENT_COMPLETED
ASSESSMENT_FAILED
ASSESSMENT_EXPIRED

Participant-level events MUST include participant_id.


3.7 ECONSENT

Purpose: Consent authoring, execution, and governance.

Event Type
CONSENT_VERSION_CREATED
CONSENT_VERSION_APPROVED
CONSENT_VERSION_ACTIVATED
CONSENT_SENT
CONSENT_VIEWED
CONSENT_SIGNED
CONSENT_FAILED
CONSENT_WITHDRAWN
RECONSENT_TRIGGERED

Regulatory note:

Consent signature events must be non-repudiable.


3.8 DATA ENTRY & MODIFICATION

Purpose: Capture of CRF and derived data.

Event Type
FORM_OPENED
FORM_SAVED
FORM_SUBMITTED
ITEM_VALUE_ENTERED
ITEM_VALUE_MODIFIED
ITEM_VALUE_DELETED

3.9 QUERIES & DISCREPANCIES

Purpose: Data quality lifecycle.

Event Type
QUERY_CREATED
QUERY_UPDATED
QUERY_RESPONDED
QUERY_CLOSED
QUERY_REOPENED

3.10 SAFETY

Purpose: Adverse event handling.

Event Type
AE_CREATED
AE_UPDATED
SAE_SUBMITTED
SAE_ACKNOWLEDGED
SAFETY_REPORT_EXPORTED

3.11 CODING (MedDRA / WHO-DD)

Purpose: Medical coding traceability.

Event Type
TERM_CODED
CODE_UPDATED
CODE_VERSION_CHANGED

3.12 DATA REVIEW, FREEZE & LOCK

Purpose: Dataset finalization.

Event Type
DATA_REVIEW_STARTED
DATA_REVIEW_COMPLETED
DATA_FROZEN
DATA_UNFROZEN
DATA_LOCK_REQUESTED
DATA_LOCK_APPROVED

3.13 REPORTING & EXPORTS

Purpose: Regulatory outputs.

Event Type
REPORT_GENERATED
EXPORT_REQUESTED
EXPORT_COMPLETED
EXPORT_FAILED

3.14 OVERRIDES & EXCEPTIONS

Purpose: Controlled deviations.

Event Type
CONSENT_OVERRIDE
ASSESSMENT_OVERRIDE
VISIT_OVERRIDE
RULE_OVERRIDE

Overrides MUST include:

  • Reason
  • Approving role
  • Linked SOP reference

3.15 SYSTEM & BACKGROUND TASKS

Purpose: Automated operations.

Event Type
BACKGROUND_JOB_STARTED
BACKGROUND_JOB_COMPLETED
QUEUE_RECONCILIATION_RUN
NOTIFICATION_SENT
NOTIFICATION_FAILED

4. Event Severity

| Severity | Meaning |
| --- | --- |
| INFO | Routine operation |
| WARNING | Potential compliance risk |
| CRITICAL | Regulatory-impacting action |

Examples:

  • CONSENT_SIGNED → INFO
  • CONSENT_OVERRIDE → CRITICAL
  • AI_DRAFT_APPLIED → INFO
  • METADATA_VERSION_PUBLISHED → CRITICAL

5. Required Audit Reports (Derived Views)

Your UI should support generating:

  1. Participant Audit Timeline
  2. Amendment Cutover Audit Report
  3. Consent Provenance Report
  4. Assessment Version Compliance Report
  5. AI Agent Activity Report
  6. Override & Exception Report
  7. Metadata Governance Report

Each report is a filtered projection of the same underlying event table.
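As a sketch, a derived report is just a filter over that table, e.g. the AI Agent Activity Report (field names match the schema in Section 2):

```python
# Sketch: every report is a projection over the same append-only table.
def ai_agent_activity_report(events: list[dict], study_id: str) -> list[dict]:
    """Filter the event log down to AI-related activity for one study."""
    return [
        e for e in events
        if e["study_id"] == study_id
        and (e["event_category"] == "AI_AGENT_OPERATIONS"
             or e["source"] == "AI_Assistant")
    ]
```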


6. Inspector Test Question Mapping

| Inspector Question | Audit Coverage |
| --- | --- |
| Who approved the amendment? | METADATA_VERSION_APPROVED |
| When was re-consent triggered? | RECONSENT_TRIGGERED |
| Which battery version was used? | ASSESSMENT_STARTED / COMPLETED |
| Was AI involved? | AI_JOB_* events |
| Were overrides used? | *_OVERRIDE events |
| Was data locked correctly? | DATA_LOCK_APPROVED |

7. Non-Negotiable Implementation Rules

  • No audit gaps: every state change emits an event
  • AI actions are never silent
  • Actor identity is always explicit
  • Events are append-only
  • Audit views are read-only

Below is a **Health Canada–specific audit trail narrative template**, aligned with the **Food and Drug Regulations (C.05)**, **ICH-GCP E6 (R2/R3)**, and Health Canada inspection expectations for **electronic systems used in clinical trials**.

This version uses Health Canada terminology and emphasis, and is suitable for direct inclusion in an inspection binder or as a controlled document.


Audit Trail Narrative

Electronic Data Capture (EDC) System

Health Canada Inspection Version


1. Purpose

This document describes how the Electronic Data Capture (EDC) system used for [Study Title / Protocol Number] maintains a complete, secure, and reliable audit trail in compliance with:

  • Food and Drug Regulations, Division 5 (C.05)
  • ICH-GCP E6 (R2/R3)
  • Health Canada guidance on computerized systems used in clinical trials

The audit trail supports verification of data integrity, participant protection, investigator oversight, and regulatory compliance.


2. System Description

The EDC system is a validated, role-based computerized system used to manage clinical trial data, participant consent, assessments, and study configuration. The system includes:

  • Version-controlled study metadata
  • Role-based access control
  • Electronic signatures
  • Participant-level consent management
  • Participant-completed electronic assessments
  • Immutable, system-generated audit trails

The system is designed so that **all changes to data, configuration, and permissions are fully traceable**.


3. Audit Trail Principles (Health Canada Alignment)

The audit trail is implemented according to the following principles, consistent with Health Canada expectations:

  1. Completeness: All creation, modification, review, approval, and deletion actions are recorded.
  2. Independence: Audit trail records are generated automatically by the system and cannot be modified by users.
  3. Attribution: Each record identifies:
     • The individual or system process performing the action
     • The role held at the time of the action
  4. Chronology: All events are time-stamped using server-side Coordinated Universal Time (UTC).
  5. Traceability: Each data point can be traced to:
     • The study metadata version in effect
     • The participant and study visit
     • The consent version under which the data were collected

4. Roles and Responsibilities

The EDC system enforces role-based access consistent with study delegation logs and ICH-GCP:

  • Principal Investigator (PI)
     • Approval and electronic signature authority
     • Oversight of study conduct and amendments
  • Data Manager (DM)
     • Study configuration and metadata management
     • Data review and lock initiation
  • Clinical Research Coordinator (CRC)
     • Participant enrollment, consent execution, and operational tasks
  • Monitor (CRA)
     • Independent verification and query management (read-only data access)
  • Safety Officer
     • Adverse event and serious adverse event oversight

All role assignments and changes are recorded in the audit trail.


5. Scope of the Audit Trail

The audit trail captures events including, but not limited to:

  • User authentication and access control
  • Study and metadata version creation, approval, and publication
  • Protocol amendments
  • Participant enrollment and status changes
  • Informed consent creation, delivery, and signing
  • Electronic assessment delivery and completion
  • Data entry, modification, and review
  • Query creation and resolution
  • Adverse event reporting
  • Data freeze, lock, and export
  • System-generated and AI-assisted configuration actions

6. Example Scenario: Protocol Amendment

6.1 Amendment Identification

  • Protocol Amendment: [e.g., Version 2.0]
  • REB Approval Date: [Date]
  • Implementation Date in EDC: [Date and Time UTC]

6.2 Study Metadata Control

The amendment was implemented through the creation of a new, versioned study metadata configuration.

The audit trail records:

  • Creation of the amended metadata version
  • Review and approval by the Principal Investigator
  • Publication of the metadata version with an effective date

This demonstrates controlled change management and investigator oversight, as required under ICH-GCP and Division 5.


6.3 Participant Re-Consent

Where the amendment required participant re-consent:

  • A new consent version was created and approved
  • Re-consent was triggered for affected participants
  • Participant actions (viewing and signing) were recorded
  • The system prevented further study procedures requiring consent until re-consent was completed

The audit trail documents **who consented, which version was signed, and when**.


6.4 Electronic Assessments

The amendment included changes to participant-completed electronic assessments.

The audit trail records:

  • Creation and activation of new assessment versions
  • Cancellation or reissue of assessments, where applicable
  • Participant start and completion of assessments
  • Association of each assessment with:
  • The correct assessment version
  • The applicable metadata version
  • The corresponding study visit

This ensures that all assessment data are collected in accordance with the approved protocol.


6.5 Data Integrity and Oversight

Throughout amendment implementation:

  • No data were collected without valid consent
  • No data were overwritten or deleted
  • All actions were attributable and time-stamped
  • Any deviations or overrides required justification and were logged

7. Data Review, Freeze, and Lock

The audit trail documents:

  • Data review activities
  • Medical coding actions
  • Data freeze initiation
  • Data lock approval by the Principal Investigator

Once locked, data cannot be modified without a documented and audited unlock process.


8. Use of AI-Assisted Tools

The EDC system includes AI-assisted tools used to support study configuration and amendment planning.

For regulatory compliance:

  • AI tools generate draft recommendations only
  • All AI-generated outputs require human review and approval
  • No AI-generated changes are implemented without PI approval
  • All AI activity is clearly identified and audited

This ensures transparency and accountability, consistent with Health Canada expectations.


9. Audit Trail Review and Availability

Authorized users can generate audit trail reports including:

  • Participant-level timelines
  • Amendment-specific audit summaries
  • Consent provenance reports
  • Assessment version tracking reports

Audit trail data are available for review by Health Canada inspectors upon request and are retained for the required regulatory retention period.


10. Conclusion

The audit trail for [Study Title / Protocol Number] demonstrates that:

  • Study conduct was controlled and traceable
  • Participant safety and consent requirements were enforced
  • Data integrity was maintained throughout the study lifecycle
  • Oversight and accountability were clearly documented

The EDC system supports compliance with Health Canada regulations and facilitates regulatory inspection.


11. Supporting Documentation (Available Upon Request)

  • Audit trail exports
  • Metadata version histories
  • Consent forms and signatures
  • Assessment configuration records
  • SOPs governing computerized systems, amendments, and data management