Testing Workflow Guide

This document describes how to test batteries, assessments, and the complete data flow (including REDCap sync) before going live with real participants.

Development Server Ports

Service              Port   URL
-------------------  -----  ---------------------
Server (FastAPI)     8030   http://localhost:8030
Portal (Researcher)  3030   http://localhost:3030
Client (jsPsych)     3031   http://localhost:3031
Patient Portal       3032   http://localhost:3032

Start all servers:

npm run dev:all

Or start individually:

npm run dev:server   # Port 8030
npm run dev:portal   # Port 3030
npm run dev:client   # Port 3031
npm run dev:patient-portal  # Port 3032
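
To confirm everything started, here is a minimal sketch that probes each default port with curl. Any HTTP response counts as UP; only a connection failure counts as DOWN, so this checks connectivity rather than application health:

```shell
# Probe each dev server port. Any HTTP response (even a 404) means the
# process is listening; a connection failure means it is not.
check_port() {
  name="$1"; port="$2"
  if curl -s -o /dev/null --max-time 2 "http://localhost:${port}/"; then
    echo "${name}: UP (port ${port})"
  else
    echo "${name}: DOWN (port ${port})"
  fi
}

check_port "server"         8030
check_port "portal"         3030
check_port "client"         3031
check_port "patient-portal" 3032
```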

Testing Methods

1. Quick Test Mode (Direct URL)

The fastest way to test a battery without manual form entry: the battery opens directly in jsPsych with pre-populated participant data.

URL Format:

http://localhost:3031/?battery=<battery-uuid>&study=<study-code>&participant=<participant-code>&age=<age>&site=<site-uuid>

Example:

http://localhost:3031/?battery=2ef1f656-38e1-4483-980b-f6fdc3e8014c&study=STUDY010&participant=TEST-001&age=25&site=demo

Parameters:

  • battery - UUID of the battery to test (required)
  • study - Study code (required for site selection)
  • participant - Participant code (required for quick test mode)
  • age - Participant age (required for age-adaptive tasks)
  • site - Site UUID or "demo" (required for quick test mode)
  • test_mode=true - Optional, allows testing draft batteries

How to get the battery UUID:

  1. Open Portal → Studies → Select Study → Batteries
  2. Click on a battery to view details
  3. Copy the UUID from the URL or battery info panel
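
For repeated testing, the Quick Test URL above can be assembled with a small helper. This is a sketch; the parameter names come from the format above, and the values in the usage line are the same example values used earlier:

```shell
# Build a Quick Test URL from its parts. Parameter names match the
# documented URL format; values passed below are the example values.
quick_test_url() {
  battery="$1"; study="$2"; participant="$3"; age="$4"; site="$5"
  echo "http://localhost:3031/?battery=${battery}&study=${study}&participant=${participant}&age=${age}&site=${site}"
}

url="$(quick_test_url 2ef1f656-38e1-4483-980b-f6fdc3e8014c STUDY010 TEST-001 25 demo)"
echo "$url"
# Open it in the default browser (macOS):
# open "$url"
```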

2. Dev Menu (Single Task Testing)

For testing individual cognitive tasks without a full battery:

http://localhost:3031/?task=cpt
http://localhost:3031/?task=simpleRT
http://localhost:3031/?task=all  # Full battery

3. Portal Test Mode

Testing from the Portal UI with participant links:

  1. Enable Test Mode:
     • Portal → Studies → Select Study
     • Toggle "Test Mode" in the study selector dropdown

  2. Generate a Test Link:
     • Navigate to Participants
     • Select a test participant
     • Click "Generate Assessment Link"
     • Copy and open the link

  3. Or use Quick Test from the Battery Page:
     • Navigate to Batteries
     • Select battery → "Test" button
     • Enter test participant details

4. iOS Simulator Testing

For testing native Capacitor builds:

  1. Start Vite dev server with host exposure:

    cd client && npm run dev -- --host 0.0.0.0
    

  2. Find your Mac's IP:

    ipconfig getifaddr en0  # WiFi
    

  3. Set environment for hot reload:

    export CAPACITOR_DEV_SERVER="http://192.168.1.xxx:3031/?battery=<uuid>&test_mode=true"
    

  4. Open in Xcode:

    cd client && npx cap open ios
    

  5. Run in simulator - The app will load from your dev server with hot reload.

Debugging iOS:

  • Safari → Develop → Simulator → app.metricis.app
  • View console logs and network requests

REDCap Sync Testing

Prerequisites

  • REDCap site configured in Portal (Settings → REDCap)
  • Valid REDCap API token
  • Participant enrolled in REDCap

Test Flow

  1. Verify the REDCap configuration:

    curl -X GET "http://localhost:8030/api/redcap/config/<study-id>"

  2. Check that the participant exists in REDCap:
     • Open the REDCap project
     • Verify the participant record exists

  3. Run an assessment with a test participant:
     • Use Quick Test Mode or a Portal test link
     • Complete the battery

  4. Verify the sync:
     • Check server logs for sync status
     • Verify data appears in the REDCap project
     • Check sync_status in the session records:

    curl "http://localhost:8030/api/sessions?participant_id=<id>" | jq '.[] | {id, sync_status}'

Troubleshooting Sync Failures

If sync fails:

  1. Check server logs for CRITICAL errors
  2. Verify the REDCap API token is valid
  3. Check the REDCap field mappings
  4. Manually retry the sync via the API:

curl -X POST "http://localhost:8030/api/sessions/<session-id>/resync"
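
The session-list and resync endpoints above can be combined into a one-shot helper that re-queues every failed session for a participant. This is a sketch: it assumes jq is installed, and "failed" as the sync_status value for failed syncs is an assumption; only the endpoint paths come from this guide:

```shell
# Resync all failed sessions for one participant.
# Assumes jq is installed and that failed syncs carry
# sync_status == "failed" (an assumption, not documented above).
resync_failed() {
  participant_id="$1"
  curl -s "http://localhost:8030/api/sessions?participant_id=${participant_id}" \
    | jq -r '.[] | select(.sync_status == "failed") | .id' \
    | while read -r session_id; do
        echo "Resyncing session ${session_id}"
        curl -s -X POST "http://localhost:8030/api/sessions/${session_id}/resync"
      done
}

# Usage: resync_failed <participant-id>
```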

Patient Portal Testing

  1. Generate a magic link:
     • Portal → Participants → Select Participant
     • Click "Send Login Link" or generate manually

  2. Or create a JWT token directly (dev only):

    cd server && source .venv/bin/activate
    python scripts/generate_portal_token.py --email <participant-email>

  3. Access the patient portal:

    http://localhost:3032/auth/verify?token=<jwt-token>
    

Testing Patient Portal Tasks

  • The patient portal requires valid authentication
  • The participant must have scheduled visits/tasks
  • Tasks are fetched from /api/portal/tasks
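
To inspect what the portal would show without clicking through the UI, the tasks endpoint can be queried directly. A sketch; the Bearer-token Authorization header is an assumption, and the token is the JWT obtained in the steps above:

```shell
# Fetch a participant's scheduled tasks straight from the API.
# The Bearer-token header format is an assumption.
portal_tasks() {
  token="$1"   # JWT from the magic link or the generator script above
  curl -s "http://localhost:8030/api/portal/tasks" \
    -H "Authorization: Bearer ${token}"
}

# Usage: portal_tasks "<jwt-token>"
```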

Environment Configuration

All ports are configurable via environment variables for deployment flexibility.

Server (.env)

ENVIRONMENT=development
API_HOST=0.0.0.0  # Required for mobile testing
API_PORT=8030     # Server port (default: 8030)
ALLOWED_ORIGINS=http://localhost:3030,http://localhost:3031,http://localhost:3032,capacitor://localhost,capacitor://app.metricis.app

Client (.env or .env.local)

# API Configuration
VITE_API_URL=                    # Override API URL for all environments
VITE_DEV_API_PORT=8030           # Development API port (default: 8030)
VITE_CLIENT_PORT=3031            # Client dev server port (default: 3031)

# For iOS/Android simulator testing - use your Mac's IP
VITE_NATIVE_API_URL=http://192.168.1.xxx:8030/api

Portal (portal/.env.local)

VITE_API_PROXY_TARGET=http://localhost:8030

Patient Portal (patient-portal/.env.local)

VITE_API_PROXY_TARGET=http://localhost:8030

Changing Ports

To use different ports (e.g., to avoid conflicts):

  1. Update server port in server/.env:

    API_PORT=9030
    

  2. Update client environment:

    VITE_DEV_API_PORT=9030
    VITE_NATIVE_API_URL=http://192.168.1.xxx:9030/api
    

  3. Update portal/patient-portal proxy targets if needed

  4. Update CORS origins in server/.env with new client ports

Longitudinal Study Testing

Testing multi-visit longitudinal studies requires simulating time passage and generating data at multiple timepoints. This section covers the complete workflow for testing longitudinal studies.

Overview

Longitudinal study testing uses three main features:

  1. Time Simulation - Simulate future dates to test visit windows and scheduling
  2. Synthetic Data - Generate test participants with realistic assessment data
  3. Dev Mode - Bypass consent and other constraints for testing

Enable Test Mode for a Study

  1. Navigate to Portal → Studies → Select Study → Overview
  2. Scroll to the "Development & Testing Mode" section
  3. Configure the test settings:
     • Notification Dry-Run: Log notifications without sending
     • Time Simulation: Advance study time for testing
     • Synthetic Data Generation: Enable to allow test data creation

Step 1: Generate Synthetic Participants

Generate test participants with enrollment dates and visit schedules:

From Portal UI:

  1. Enable the "Synthetic Data Generation" toggle
  2. Click "Generate Data"
  3. Configure options:
     • Participants: Number to generate (1-100)
     • Enrollment Spread: Spread enrollment over N days (simulates staggered enrollment)
     • Generate assessments: Create initial assessment data
     • Assign to arms: Randomly assign study arms
     • Assign to sites: Randomly assign sites
  4. Click "Generate"

From API:

curl -X POST "http://localhost:8030/api/testing/studies/{study_id}/generate-synthetic-data" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "participant_count": 10,
    "enrollment_spread_days": 30,
    "generate_assessments": true,
    "generate_schedules": true,
    "assign_arms": true,
    "assign_sites": true
  }'

Synthetic participants are marked with codes like STUDY001-SYN-0001 and can be cleared later.

Step 2: Simulate Time Passage

Advance the study's effective date to test visit window transitions:

From Portal UI:

  1. Go to the "Development & Testing Mode" section
  2. Use the time control buttons: -1 Week, -1 Day, +1 Day, +1 Week
  3. The "Effective Date" display shows the simulated date
  4. Click "Update Visit Statuses" to recalculate visit states

From API:

# Set time offset (+30 days = 1 month in the future)
curl -X POST "http://localhost:8030/api/testing/studies/{study_id}/time-simulation" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"offset_days": 30}'

# Update visit statuses based on new effective date
curl -X POST "http://localhost:8030/api/testing/studies/{study_id}/update-visit-statuses" \
  -H "Authorization: Bearer <token>"

Visit Status Transitions: As the effective date advances, visits transition through these states:

  • scheduled → in_window (within visit window)
  • in_window → overdue (past target date)
  • overdue → grace_period (if configured)
  • grace_period → missed (past grace period)
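
To watch these transitions happen, you can advance the simulated clock one week at a time and recalculate statuses after each jump, using the two endpoints shown above. A sketch; study ID and token are placeholders:

```shell
# Advance the simulated study clock in weekly steps, recalculating visit
# statuses after each jump so you can observe the state transitions.
advance_weeks() {
  study_id="$1"; token="$2"; weeks="$3"
  week=1
  while [ "$week" -le "$weeks" ]; do
    offset=$((week * 7))
    echo "Simulating day +${offset}"
    curl -s -X POST "http://localhost:8030/api/testing/studies/${study_id}/time-simulation" \
      -H "Authorization: Bearer ${token}" \
      -H "Content-Type: application/json" \
      -d "{\"offset_days\": ${offset}}" > /dev/null
    curl -s -X POST "http://localhost:8030/api/testing/studies/${study_id}/update-visit-statuses" \
      -H "Authorization: Bearer ${token}" > /dev/null
    week=$((week + 1))
  done
}

# Usage: advance_weeks <study-id> "<token>" 12   # simulate 12 weeks
```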

Step 3: Generate Longitudinal Data

Generate assessment data across multiple visits for each participant:

From API:

curl -X POST "http://localhost:8030/api/testing/studies/{study_id}/generate-longitudinal-data" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "visits_to_complete": 3,
    "participants": null
  }'

Options: - visits_to_complete: Number of visits to complete (0 = all scheduled visits) - participants: List of participant IDs, or null for all synthetic participants

Step 4: Bulk Anchor Date Setting

Set anchor dates for multiple participants at once (useful for testing time-based workflows):

curl -X POST "http://localhost:8030/api/testing/studies/{study_id}/bulk-anchor-date" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "anchor_date": "2024-01-15",
    "participants": null,
    "reason": "testing_longitudinal_workflow"
  }'

This will:

  1. Set anchor dates for all synthetic participants
  2. Regenerate visit schedules based on the new anchor date

Step 5: Test Patient Portal Experience

When time simulation is active, the Patient Portal shows a purple banner displaying the simulated date. This helps testers understand what date the system is using.

Patient Portal Features in Test Mode:

  • Purple "Simulated Date" banner shows the effective date and offset
  • Task due dates reflect simulated time
  • Visit windows are calculated from the simulated date

Dev Mode Panel (Per-Participant Testing)

For detailed participant-level testing, use the Dev Mode Panel on the Participant Detail page:

  1. Navigate to Portal → Participants → Select Participant
  2. Expand the "Dev Mode Tools" panel
  3. Available actions:
     • Override Anchor Date: Set enrollment date directly
     • Trigger Message Now: Send scheduled notifications immediately
     • Force Consent Bypass: Allow scheduling without signed consent

Bypass Status Indicators:

  • Consent: Valid | Bypassed | Missing
  • Scheduling: Allowed | Bypassed
  • Messaging: Allowed | Bypassed

Clear Test Data

When done testing, remove all synthetic data:

From Portal UI:

  1. Go to the "Development & Testing Mode" section
  2. Click "Clear Synthetic Data"
  3. Confirm deletion

From API:

curl -X DELETE "http://localhost:8030/api/testing/studies/{study_id}/synthetic-data" \
  -H "Authorization: Bearer <token>"

This removes:

  • Synthetic participants (codes containing -SYN-)
  • Their assessment sessions and responses
  • Scheduled visits
  • Anchor date records

Real participant data is never affected.

Complete Longitudinal Test Workflow

Here's the recommended sequence for comprehensive longitudinal study testing:

1. Create/configure study with visit schedule
   └─ Define visit windows (Baseline, Week 4, Week 12, etc.)

2. Enable test mode
   └─ Enable synthetic data generation
   └─ Enable notifications dry-run

3. Generate synthetic participants (10-20)
   └─ Spread enrollment over 30 days
   └─ Generate schedules automatically

4. Verify baseline visits
   └─ Check all participants have scheduled visits
   └─ Verify visit windows are correct

5. Simulate +30 days (Week 4)
   └─ Update visit statuses
   └─ Verify Week 4 visits are "in_window"
   └─ Test reminder triggering via Dev Mode Panel

6. Generate longitudinal data (2 visits)
   └─ Creates data for Baseline and Week 4

7. Simulate +90 days (Week 12)
   └─ Update visit statuses
   └─ Verify Week 4 visits show as "completed"
   └─ Verify Week 12 visits are "in_window"

8. Test patient portal experience
   └─ Login as test participant
   └─ Verify correct tasks are shown
   └─ Complete an assessment

9. Test consent withdrawal
   └─ Remove consent from a participant
   └─ Verify scheduling is blocked
   └─ Verify messaging is blocked

10. Reset and clean up
    └─ Reset time simulation
    └─ Clear synthetic data
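
The sequence above (steps 3, 5-7, and the cleanup in step 10) can be scripted against the testing API. This is a sketch: the endpoint paths are those documented in this guide, while BASE, STUDY_ID, and TOKEN are placeholders you must fill in. Nothing runs until you call run_workflow:

```shell
# End-to-end longitudinal test run via the testing API.
BASE="http://localhost:8030/api/testing/studies"
STUDY_ID="<study-id>"   # placeholder
TOKEN="<token>"         # placeholder

# Small wrapper: $1 = HTTP method, $2 = path under the study, $3 = optional JSON body
api() {
  method="$1"; path="$2"; body="$3"
  if [ -n "$body" ]; then
    curl -s -X "$method" "${BASE}/${STUDY_ID}/${path}" \
      -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d "$body"
  else
    curl -s -X "$method" "${BASE}/${STUDY_ID}/${path}" \
      -H "Authorization: Bearer ${TOKEN}"
  fi
}

run_workflow() {
  # Step 3: generate synthetic participants with schedules
  api POST generate-synthetic-data \
    '{"participant_count": 10, "enrollment_spread_days": 30, "generate_schedules": true}'
  # Step 5: simulate Week 4 and recalculate visit statuses
  api POST time-simulation '{"offset_days": 30}'
  api POST update-visit-statuses
  # Step 6: generate data for the first two visits (Baseline, Week 4)
  api POST generate-longitudinal-data '{"visits_to_complete": 2, "participants": null}'
  # Step 7: simulate Week 12 and recalculate again
  api POST time-simulation '{"offset_days": 90}'
  api POST update-visit-statuses
  # Step 10: reset the clock and remove synthetic data
  api POST time-simulation/reset
  api DELETE synthetic-data
}

# Usage: run_workflow
```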

Testing API Reference

Endpoint                                              Method  Description
----------------------------------------------------  ------  ------------------------------------------
/api/testing/studies/{id}/config                      GET     Get test mode config
/api/testing/studies/{id}/config                      PUT     Update test mode config
/api/testing/studies/{id}/time-simulation             POST    Set time offset
/api/testing/studies/{id}/time-simulation/reset       POST    Reset to today
/api/testing/studies/{id}/update-visit-statuses       POST    Recalculate visit statuses
/api/testing/studies/{id}/generate-synthetic-data     POST    Create test participants
/api/testing/studies/{id}/generate-longitudinal-data  POST    Create multi-visit data
/api/testing/studies/{id}/bulk-anchor-date            POST    Set anchor dates for multiple participants
/api/testing/studies/{id}/synthetic-data              DELETE  Clear all synthetic data

Dev Mode API Reference

Endpoint                                          Method  Description
------------------------------------------------  ------  -------------------------------------
/api/dev/status                                   GET     Get dev mode status
/api/dev/anchor-date/override                     POST    Override participant's anchor date
/api/dev/message/trigger-now/{id}                 POST    Trigger scheduled message immediately
/api/dev/participants/{id}/bypass-preview         GET     Preview what would be bypassed
/api/dev/participants/{id}/consent-status         GET     Get consent status
/api/dev/participants/{id}/force-consent-bypass   POST    Force bypass consent gate
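
As a quick sanity check during per-participant testing, the two read-only endpoints above can be combined into one status dump. A sketch; the endpoint paths come from the table above, while the Bearer-token header and the shape of the responses are assumptions:

```shell
# Dump a participant's dev-mode state: consent status plus a preview of
# which gates would be bypassed. The Authorization header is an assumption.
dev_participant_status() {
  pid="$1"; token="$2"
  echo "--- consent status ---"
  curl -s "http://localhost:8030/api/dev/participants/${pid}/consent-status" \
    -H "Authorization: Bearer ${token}"
  echo "--- bypass preview ---"
  curl -s "http://localhost:8030/api/dev/participants/${pid}/bypass-preview" \
    -H "Authorization: Bearer ${token}"
}

# Usage: dev_participant_status <participant-id> "<token>"
```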

Testing Checklist

Before going live with a study:

Battery Testing

  • [ ] Test battery loads correctly in browser
  • [ ] Test all modules complete without errors
  • [ ] Verify data submission succeeds
  • [ ] Check summaries are calculated correctly

Longitudinal Study Testing

  • [ ] Generate synthetic participants with schedules
  • [ ] Simulate time passage through all visit windows
  • [ ] Verify visit status transitions (scheduled → in_window → overdue)
  • [ ] Test reminder triggering via Dev Mode Panel
  • [ ] Generate longitudinal data across multiple visits
  • [ ] Test consent withdrawal blocks scheduling/messaging
  • [ ] Clear synthetic data after testing

REDCap Integration (if applicable)

  • [ ] REDCap config is correct
  • [ ] Test sync with a test participant
  • [ ] Verify data appears in REDCap
  • [ ] Check field mappings are correct

Patient Portal

  • [ ] Magic link authentication works
  • [ ] Scheduled tasks appear
  • [ ] Assessment launch works
  • [ ] Consent flow (if required) works
  • [ ] Time simulation banner shows when active

Mobile Testing (if applicable)

  • [ ] Test on iOS simulator
  • [ ] Test on Android emulator
  • [ ] Verify touch interactions work
  • [ ] Check offline behavior (if applicable)

Common Issues

"Failed to Load Battery"

  • Check API port (should be 8030)
  • Verify battery UUID is correct
  • Check CORS configuration
  • Look for errors in browser console

iOS Simulator Network Errors

  • Ensure server is running on 0.0.0.0
  • Check VITE_NATIVE_API_URL uses correct IP
  • Verify CORS includes capacitor:// origins

REDCap Sync Failures

  • Check API token validity
  • Verify event names match
  • Check field mappings
  • Review server logs for details

Authentication Issues (Patient Portal)

  • Magic link tokens are single-use
  • Check token hasn't expired
  • Verify CORS allows portal origin