Development Plan: Pediatric Cognitive Assessment Battery (jsPsych-Based)

Goals and Design Principles

  • Open-Source and Browser-Based: The assessment battery will be entirely open-source and run in a web browser. This ensures broad accessibility and adaptability – researchers and clinicians can inspect and modify the code freely. Unlike many proprietary test batteries, an open approach allows flexible tuning of tasks (e.g. adjusting difficulty or adding new tasks) to suit different research or clinical needs. The use of standard web technologies (JavaScript/HTML/CSS via the jsPsych framework) means no special software is required beyond a modern browser.
  • Unsupervised Use on Tablets: The design will accommodate unsupervised administration at home or in clinics, especially on touchscreen tablets. All tasks will be optimized for touch input as well as mouse, with large on-screen buttons or swipe/tap interactions to eliminate the need for keyboards. Research shows that cognitive tasks (e.g. Stroop, visual search) yield equivalent results on touchscreens and keyboards, including in children as young as 7–9 years old who completed tablet-based attention tasks with high reliability. This confirms the feasibility of tablet-based testing for our target age range (5–17). We will include a brief device check (using jsPsych’s built-in browser-check or similar) to ensure the battery runs smoothly on the user’s device and screen orientation.
  • Multilingual (EN/FR) Support: All instructions, task content, and survey questionnaires will be provided in English and French, reflecting the Canadian context and ensuring accessibility to a broader pediatric population. The battery will allow the user (or administrator) to select language at the start. Internally, text strings will be parameterized or loaded from language-specific files, so switching languages is straightforward without modifying core logic. For example, task instructions and survey questions will be stored in a JSON or translation module for each language. We will leverage any existing translations of instruments (e.g. a French version of a questionnaire if available) and ensure culturally appropriate wording. The content of cognitive tasks mostly involves non-verbal stimuli (shapes, letters, numbers), but any language-dependent elements (e.g. word stimuli for a Stroop task, or yes/no buttons) will be translated. By designing with localization in mind from the start, adding additional languages in the future will be feasible with minimal code changes.
  • Modular Battery Structure: The battery will consist of a suite of cognitive tests (modules), each targeting a specific cognitive domain. These modules can be administered together as a full battery or individually, allowing flexibility for different use cases. Each task will be implemented as a self-contained jsPsych timeline or plugin, so that tasks can be re-ordered, added, or removed without affecting others. In unsupervised settings, a fixed sequence can be used (with built-in rest breaks), but the modular design means, for instance, a clinic could choose to deploy only an attention and memory module if desired. In the full battery mode, we will include an overview and progress indicator to guide participants through the series of tasks. The tasks will be automatically sequenced via jsPsych’s timeline, and can even be randomized or counterbalanced if needed (as was done in the CAM battery, where tasks began one after another in random order). Each module will include its own instructions and practice trials, then report results, before moving to the next module.
  • Child-Friendly Engagement: Since the target users are children and adolescents (5–17) and many may have attention/fatigue issues, the interface will be designed to maximize engagement and clarity. We will use pediatric-friendly graphics and possibly a narrative theme to motivate the child. For example, the Computerized Attention Measure (CAM) battery for preteens introduced a friendly cartoon guide (“Martin”) and a storyline (visiting a zoo) to make tasks feel like games. Adopting a similar approach, our battery could feature a mascot or simple story context linking the tasks (e.g. a space adventure or treasure hunt where each cognitive task is a “mission”). Each task’s instructions will use simple language (with voice-over or images if possible for younger children) and positive encouragement. We will incorporate gamification elements such as feedback animations, score progress, or badges when a child completes a task, to sustain motivation. Importantly, these features will not compromise the cognitive rigor of tasks but will make the experience less tedious. Task durations will be kept relatively short (a few minutes each) and we will allow short breaks between tasks, with on-screen prompts like “Great job! Take a quick rest before the next game.” The overall goal is an engaging, game-like assessment that still yields valid cognitive measures.
  • Ethical and Privacy Considerations: Although not explicitly requested, it’s worth noting as a design principle that working with minors and clinical populations demands attention to privacy and informed consent. The battery will not collect any identifying personal data except perhaps age and an anonymous participant ID. All data transmission will be encrypted, and if data are stored locally or sent to a server, it will be done securely with parental consent as required. Being open-source also allows transparency in what data is collected. We will include a child-friendly assent screen or an introduction telling the child that this is not a timed test in the sense of pressure (even though we measure speed, we frame it as just doing one’s best). These considerations ensure the battery adheres to ethical standards for pediatric assessments.
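The language-file approach described under Multilingual (EN/FR) Support can be sketched as follows. This is a minimal illustration: the string IDs, the shape of the translation object, and the helper name `t()` are our own conventions, not a fixed API.

```javascript
// Hypothetical translation store: one table per language, keyed by string ID.
// In production these would live in per-language JSON files loaded at startup.
const translations = {
  en: {
    welcome: "Welcome! Let's play some games.",
    rest_prompt: "Great job! Take a quick rest before the next game."
  },
  fr: {
    welcome: "Bienvenue ! Jouons à quelques jeux.",
    rest_prompt: "Bravo ! Repose-toi un peu avant le prochain jeu."
  }
};

// Resolve a string ID in the selected language, falling back to English
// so a missing translation never leaves a blank screen.
function t(lang, key) {
  const table = translations[lang] || translations.en;
  return table[key] !== undefined ? table[key] : translations.en[key];
}

// A jsPsych trial would then reference the helper rather than a literal string:
// { type: jsPsychHtmlButtonResponse, stimulus: t(lang, 'welcome'), choices: [t(lang, 'ok')] }
```

Because core task logic only ever sees string IDs, adding a third language later means adding one more table, with no changes to the timelines themselves.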

Target Cognitive Domains (Long COVID, POTS, CFS)

Children and adolescents suffering from long COVID, Postural Orthostatic Tachycardia Syndrome (POTS), or Chronic Fatigue Syndrome (CFS) often experience “brain fog” and other cognitive difficulties. Based on clinical reports and literature, the following cognitive domains are prioritized (ordered by their relevance in these conditions):

  1. Attention: This includes both sustained attention (the ability to maintain focus over time) and selective or divided attention (the ability to concentrate on relevant stimuli while ignoring or splitting attention among distractors). Attention problems are among the most common cognitive complaints in pediatric long COVID – for example, about one-third of children with long COVID show objective weaknesses in sustained and divided attention tasks. POTS patients frequently report attention and concentration difficulties as well. Given that attention is foundational for other cognitive functions, it is our top priority domain.
  2. Working Memory (and Short-Term Memory): Working memory refers to holding and manipulating information over brief periods. Many youths with post-viral or chronic fatigue conditions struggle with short-term memory, such as forgetting instructions or what they were about to do. In long COVID, memory problems (especially short-term memory) are commonly reported – one review found memory issues to be among the most frequent cognitive symptoms in pediatric long COVID. POTS patients similarly can experience short-term memory lapses. Assessing working memory can capture these “holding information online” difficulties that contribute to brain fog.
  3. Processing Speed: Cognitive processing speed is the pace at which one can perceive, interpret, and respond to information. Children with these conditions often describe slowed thinking or mental fatigue impacting their quickness in tasks. Indeed, “slowed processing” is a key feature of brain fog in long COVID, and studies in POTS have found impairments in cognitive processing speed on formal tests. Measuring processing speed provides an index of how efficiently a child’s brain is working, which may be especially sensitive to fatigue effects.
  4. Executive Function: This is a broad domain encompassing higher-order skills like inhibitory control (impulse control), cognitive flexibility (task switching), planning, and organization. Executive function issues can manifest as trouble multitasking, switching between tasks, or inhibiting distractions – all relevant in brain fog and fatigue syndromes. While core executive test scores in some long COVID case series were often in the average range, parents still report high rates of executive function difficulties in daily life (e.g. 53% of long COVID patients in one study had concerns with cognitive regulation/executive control). Additionally, POTS patients in cognitive studies have shown deficits in executive tasks such as those requiring working memory and flexible thinking. Thus, it’s important to assess executive function, particularly cognitive flexibility and inhibitory control, to detect subtle issues that might not be captured by attention and memory tests alone.

These four domains are interrelated (for example, working memory and attention both contribute to executive function). However, each domain will be measured with specific tasks to pinpoint the cognitive profile of the patient. The battery’s focus on these domains aligns with documented cognitive impacts of long COVID, POTS, and CFS – primarily difficulties with attention/concentration, memory, mental speed, and higher-order executive processes. Other domains such as language or visuo-spatial abilities are less centrally affected in these conditions and are not the initial priority for this battery (though they could be added later if needed).

Task Selection and jsPsych Mapping by Domain

For each cognitive domain above, we have identified suitable tasks and how they can be implemented using jsPsych. We will reuse existing jsPsych plugins whenever possible, and note where custom development is needed. The goal is to build on proven paradigms (some drawn from open-source projects like the CAM battery or Experiment Factory tasks) and leverage jsPsych’s extensive plugin library for efficiency.

Note: jsPsych plugins define trial types and many general-purpose plugins can be configured to create classic cognitive tasks. For example, jsPsych includes specific plugins for a circular visual search array (for visual attention tasks) and an Implicit Association Test, among others. It also has generic plugins for presenting stimuli and collecting responses (e.g. showing an image and recording a button or key response) which we can use to construct various tasks. Below we map each domain to candidate tasks and the jsPsych implementation approach:

1. Attention (Sustained and Selective Attention)

  • Continuous Performance Task (CPT) / Go-NoGo: To measure sustained attention and inhibitory control, a simple CPT can be used (e.g., letters or symbols flash on screen one at a time, the child must tap a button for every item except a target item). This tests vigilance and the ability to refrain from responding to infrequent no-go targets. jsPsych implementation: Use the html-keyboard-response or html-button-response plugin in a loop (timeline). We will present stimuli sequentially with timed intervals and record responses. No specialized plugin is required as this is essentially a sequence of stimulus displays and capturing a key/tap press. We can randomize the sequence with jsPsych’s timeline variables. After the experiment, jsPsych’s data can compute metrics like omission errors (missed targets) and commission errors (false alarms) as well as reaction time variability as indicators of sustained attention performance. Plugin mapping: jsPsychHtmlKeyboardResponse or jsPsychHtmlButtonResponse for each stimulus event.
  • Selective Attention (Visual Search Task): To test focused attention in the presence of distractors, we will include a visual search task. For example, the child might see an array of shapes or letters and must find a specific target (e.g., find the red letter among blue letters, or find a “cat” among many “dogs”). This assesses processing of visual information under distraction. jsPsych implementation: We can use the dedicated visual-search-circle plugin to present multiple items on screen in randomized positions. The plugin can display an array of stimuli (images or letters) and detect which item the user clicks (touch input is treated as a click). If no suitable plugin existed, we could also implement this by drawing stimuli on an HTML canvas (jsPsychCanvasKeyboardResponse with a custom drawing function) or by absolutely positioning HTML elements and capturing click events. However, jsPsych’s built-in visual search plugin is designed for exactly these kinds of tasks. We will configure it with parameters for number of distractors, target identity, and response collection via touch/mouse. The outcome measures would be response time to find the target and accuracy (was the correct item clicked). This task could be gamified (e.g., “find the animal that’s different” with fun images).
  • Divided Attention (Dual Task): If needed, we can include a dual-task paradigm (e.g., simultaneously monitoring visual and auditory stimuli) to stress divided attention. However, this may be too complex for younger children. As an alternative, we might include a Flanker task which requires focusing on a target stimulus while ignoring flanking distractors – this tests selective attention and inhibition. jsPsych implementation of Flanker: We can present stimuli (arrows or fish pointing left/right, etc.) with image-keyboard-response or html-keyboard-response plugin. jsPsych does not have a built-in “flanker” plugin, but it’s easy to implement by showing a stimulus (for instance, a string like “<< < <<”) and collecting a left/right response. The CAM battery’s attention tasks included Stroop and Flanker tasks, showing these are appropriate and can be made child-friendly (e.g., using arrows or animal images facing left/right instead of letters). Correct/incorrect feedback can be given in practice trials. Outcome measures: accuracy and reaction time for congruent vs. incongruent trials (to measure interference).
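The CPT/Go-NoGo construction described above can be sketched with two small helper functions kept outside jsPsych; the function names and record fields (`is_target`, `responded`) are our own conventions. Each generated entry would become a timeline variable for a jsPsychHtmlButtonResponse trial with a fixed trial_duration.

```javascript
// Build a CPT sequence: respond to every letter EXCEPT the no-go target.
// goStimuli must not contain noGoStimulus.
function makeCptSequence(nTrials, goStimuli, noGoStimulus, noGoRate) {
  const seq = [];
  for (let i = 0; i < nTrials; i++) {
    const isNoGo = Math.random() < noGoRate;
    seq.push({
      stimulus: isNoGo
        ? noGoStimulus
        : goStimuli[Math.floor(Math.random() * goStimuli.length)],
      is_target: isNoGo   // "target" = the item the child must NOT respond to
    });
  }
  return seq;
}

// Score omissions (missed go trials) and commissions (responses on no-go trials)
// from an array of {is_target, responded} records, as jsPsych's data would provide.
function scoreCpt(records) {
  let omissions = 0, commissions = 0;
  for (const r of records) {
    if (r.is_target && r.responded) commissions++;
    if (!r.is_target && !r.responded) omissions++;
  }
  return { omissions, commissions };
}
```

Keeping generation and scoring as pure functions makes them easy to unit-test separately from the jsPsych timeline that presents the stimuli.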

(Summary: Attention tasks will mostly use existing general plugins like jsPsychVisualSearchCircle for visual search and jsPsychHtmlButtonResponse for sequential tasks, rather than requiring a new plugin. These tasks address the sustained attention deficits seen in long COVID and the selective attention issues reported in POTS.)

2. Working Memory

  • N-Back Task: A classic working memory task, where the participant must identify when the current stimulus matches the one from N steps earlier (typically 1-back or 2-back for children). For example, a sequence of letters or pictures is shown and the child presses a button whenever the current item is the same as the last one (1-back) or the one before last (2-back). This tests updating and monitoring of information in mind. jsPsych implementation: Use html-keyboard-response or html-button-response with a timeline that presents a random sequence of stimuli. We maintain an internal counter and use jsPsych’s flexible trial data or on_finish callback to check if each response is correct (match occurred when it should). Alternatively, we can pre-compute a sequence with target positions and store correct answers in a timeline variable. jsPsych doesn’t have a built-in n-back plugin, but the task logic can be handled with a few lines of JavaScript in the trial timeline. This would yield metrics like d’ (sensitivity) or percent correct on target vs non-target trials. If we want to avoid keyboard input (for tablet), we can display a “Yes” button for “match” and no button for “non-match” (instructing them to tap only when there is a match), or two buttons (“Match” / “No Match”) each trial for clarity.
  • Span Tasks (Memory Span): To directly measure short-term memory capacity, we could implement a digit span or visuospatial span task. For example, for Digit Span, the screen presents a sequence of digits one by one, then the child is asked to recall them in order (or reverse order for more challenge). On a tablet, recall could be done by tapping numbers on a displayed keypad in sequence. jsPsych implementation: We may use jsPsychHtmlKeyboardResponse to present the sequence (or even audio-button-response if we want spoken digits). For the response, jsPsych’s survey-multi-choice or a custom response screen can be used to let the child input the sequence. Another approach is using the free-sort or reconstruction plugin if we present all digits and ask to reorder them (but that might be advanced). A simpler method: after showing the sequence, present a set of candidate numbers on buttons (including the ones shown) and let the child tap them in order (record the sequence of taps). That would likely require a custom trial definition (but could be done within a single trial using JS events). Given the complexity and the broad age range, we might opt for a simpler working memory task like n-back or a Corsi Block (spatial span) task. Corsi Block could be done with an array of circles lighting up in a sequence, and the child reproducing the sequence by tapping the circles in order. Implementation would involve a bit of custom code or careful use of jsPsych timing, but not an entirely new plugin – possibly drawing the grid on a canvas and attaching pointer listeners to capture taps, or using successive trials for each tap input.
  • Complex Span (Working Memory under distraction): The CAM battery included a complex span task (where children had to remember items while doing an intervening task). This might be too involved for our initial battery, but is an option for future development. It would combine a simple processing task (like verifying an equation or counting objects) with the requirement to recall a series of items afterward. We will likely prioritize simpler span or n-back tasks first for feasibility.

Plugin summary: There is no dedicated jsPsych plugin for n-back or span tasks, but they can be built using sequences of standard trials and some custom logic. We might create simple helper functions to manage sequence presentation and scoring. If we find an open implementation (for example, the Experiment Factory had a digit span task, and the Adolphe et al. battery included a working memory task), we can adapt that code into our jsPsych framework.
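The pre-computed-sequence approach for n-back can be sketched as below. Pre-computing target positions keeps the per-trial jsPsych logic trivial: each entry becomes a timeline variable, and on_finish just compares the recorded response to `is_target`. The helper names are ours, and the generator assumes at least two distinct stimuli.

```javascript
// Generate an n-back sequence with known target positions.
// A target trial repeats the stimulus from n steps back; non-target trials
// explicitly avoid an accidental match. Requires stimuli.length >= 2.
function makeNBackSequence(nTrials, n, stimuli, targetRate) {
  const seq = [];
  for (let i = 0; i < nTrials; i++) {
    if (i >= n && Math.random() < targetRate) {
      seq.push({ stimulus: seq[i - n].stimulus, is_target: true });
    } else {
      let s;
      do {
        s = stimuli[Math.floor(Math.random() * stimuli.length)];
      } while (i >= n && s === seq[i - n].stimulus);
      seq.push({ stimulus: s, is_target: false });
    }
  }
  return seq;
}

// Percent correct over target and non-target trials, from
// {is_target, responded} records as logged by each trial's data.
function scoreNBack(records) {
  const correct = records.filter(r => r.responded === r.is_target).length;
  return (100 * correct) / records.length;
}
```

With the correct answer stored in each timeline variable, d’ or hit/false-alarm rates can be derived from the same records if a signal-detection summary is preferred over percent correct.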

3. Processing Speed

  • Reaction Time Task (Simple and Choice RT): The purest measure of processing speed is a simple reaction time test. For instance, the child watches a blank screen or fun image and must tap as quickly as possible when a target appears (like a “whack-a-mole” game with a single mole appearing). We can include both simple RT (tap when you see the stimulus) and choice RT (tap the left vs right button depending on whether a stimulus appears on the left or right side, etc.). jsPsych implementation: This is straightforward with html-button-response or image-button-response. We show a stimulus (or two stimuli for choice RT) and record the response time. jsPsych will automatically record reaction time from stimulus onset to button press. We can add random inter-stimulus intervals to prevent anticipatory responding. This task will produce metrics like mean reaction time and variability. Because long COVID and POTS patients often have slowed cognitive processing, a simple RT task can objectively quantify this by comparing to age-normative data.
  • Trail Making Test – Part A: While Trail Making is primarily an executive task when considering Part B, Part A (connecting numbers in sequence) is essentially a test of processing speed and visual scanning. We discuss TMT in detail in the next section, but note here that TMT Part A completion time is a good processing speed index. We plan to develop a custom plugin for TMT (see next section); once that is in place, we will automatically get a processing speed measure from Part A. For younger children who may not do Part B, Part A alone serves as a processing speed task.
  • Symbol/Digit Coding Task: Another standard processing speed measure (from neuropsychological batteries like WISC or CNS Vital Signs) is a coding or substitution task, where the child must quickly match symbols to numbers using a key. Implementing a full symbol-coding might be complex on a tablet (and might be confounded by motor speed), so we likely will not include this initially. However, if desired, a simplified version could present one problem at a time (e.g., show a symbol and multiple-choice options for its code, as quickly as possible). This would use image-multi-choice or similar, but due to time constraints we lean toward RT and TMT tasks for processing speed.

In summary, processing speed tasks are generally implementable with basic jsPsych plugins (no new development needed). We will ensure stimuli are fun (e.g., a cartoon character popping up instead of a generic stimulus) to keep young children engaged while they are essentially doing a reaction time drill. The emphasis is on measuring how quickly and consistently they can respond, which we can do through jsPsych’s timing data.
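The two pieces of custom logic the RT task needs, a jittered inter-stimulus interval and a mean/variability summary, can be sketched as pure helpers (names are our own); jsPsych's recorded rt values would be fed into the summary after the run.

```javascript
// Random inter-stimulus interval in [minMs, maxMs) to discourage
// anticipatory tapping before the target appears.
function jitteredIsi(minMs, maxMs) {
  return minMs + Math.random() * (maxMs - minMs);
}

// Mean and standard deviation over an array of reaction times (ms),
// the two headline metrics for the processing speed module.
function rtSummary(rts) {
  const mean = rts.reduce((a, b) => a + b, 0) / rts.length;
  const variance = rts.reduce((a, b) => a + (b - mean) ** 2, 0) / rts.length;
  return { mean: mean, sd: Math.sqrt(variance) };
}

// In jsPsych, jitteredIsi() would set the trial_duration of a blank
// fixation trial inserted before each stimulus trial, e.g.:
// { type: jsPsychHtmlButtonResponse, stimulus: '', choices: [],
//   trial_duration: jitteredIsi(500, 2000) }
```

Reaction time variability (the sd here) is worth reporting alongside the mean, since inconsistent responding is itself a marker of attentional lapses and fatigue.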

4. Executive Function (Inhibition, Cognitive Flexibility)

  • Stroop Task: The Stroop task measures inhibitory control (an aspect of executive function) by requiring the participant to respond to one attribute of a stimulus while suppressing another (e.g., name the font color of a color word that may spell a different color). In a pediatric, bilingual context, a traditional Stroop (with color words) might be tricky for younger kids or require reading ability. We could use a Child-Friendly Stroop, such as the Day-Night Stroop (say “day” for a picture of a moon and “night” for a sun) or a less verbal version (e.g., respond to the picture’s meaning vs. its background color). jsPsych implementation: Using jsPsychImageKeyboardResponse or jsPsychImageButtonResponse suits this well. For instance, present an image and two choice buttons (one for each response). Alternatively, present a word and have colored buttons to press. There’s no dedicated Stroop plugin, but the logic is straightforward to implement with conditional stimuli and standard response collection. We’ll include practice trials to ensure the child understands the rule. Outcome: Stroop interference effect (difference in reaction time or accuracy between congruent and incongruent trials).
  • Flanker Task: (Mentioned under attention too) – This can also be framed as an executive/inhibitory control task, since it requires suppressing responses to flanking stimuli. We already plan for a flanker in the attention section. Its metrics (Flanker effect in RT/accuracy) reflect executive control as well.
  • Task Switching: A key part of executive function is cognitive flexibility – the ability to switch between tasks or rules. The primary measure we’ll use for this is the Trail Making Test Part B (detailed next section), which is a set-shifting task (alternating between number and letter sequences). We might also incorporate a simpler switching task, such as a card-sorting paradigm or an odd/even – high/low task where the rule switches partway. But to keep the battery concise, TMT Part B will likely suffice as our main switching measure. If we did add a non-TMT switching task: for example, showing a number and if a cue indicates “judge parity” vs “judge magnitude”, requiring switching between those rules. jsPsych implementation: This can be done by presenting a cue and a number with html-button-response (buttons labeled “odd”/“even” or “high”/“low”), and using jsPsych’s timeline to change the cue periodically. However, given TMT’s presence, we might not need a separate task.
  • Planning/Problem-Solving: For completeness, tasks like Tower of Hanoi or mazes assess planning, but these are lengthy and complex to implement for unsupervised sessions (and not directly mentioned as an issue in long COVID/POTS). We will not include them in the initial battery.

In summary, executive function will be assessed primarily through inhibitory control tasks (Stroop, Flanker, Go/NoGo) and cognitive flexibility tasks (Trail Making Test Part B). These map onto tasks successfully used in prior pediatric batteries: for instance, the CAM battery used Stroop, Flanker, and Go/NoGo to tap inhibition. All those were delivered via web, presumably using paradigms that can be replicated in jsPsych. We’ll harness jsPsych’s general plugins (image/text display + keyboard/button response) to implement them. If any custom code is needed (e.g. to log specific performance metrics like switch cost in a task-switching paradigm), we can do that within jsPsych’s data handling callbacks.
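The interference metric shared by the Stroop and flanker tasks can be computed in one small post-processing helper. The record shape `{condition, correct, rt}` mirrors what we would store via each trial's data parameter; the field and function names are our convention, not a jsPsych API.

```javascript
// Mean RT for correct trials in one condition ('congruent' or 'incongruent').
function meanRt(records, condition) {
  const rts = records
    .filter(r => r.condition === condition && r.correct)
    .map(r => r.rt);
  return rts.reduce((a, b) => a + b, 0) / rts.length;
}

// Interference effect = mean RT on incongruent trials minus mean RT on
// congruent trials; error trials are excluded from both means.
function interferenceEffect(records) {
  return meanRt(records, 'incongruent') - meanRt(records, 'congruent');
}
```

The same helper serves both tasks because only the `condition` label differs; an accuracy-based interference score (congruent minus incongruent accuracy) could be added the same way if RTs prove too noisy in the youngest children.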

By leveraging jsPsych’s flexibility in this way, we avoid reinventing the wheel – many paradigms are standard and only require careful assembly of existing plugin trials. Where needed, we will create lightweight custom plugins (or extensions) for specific interactions (the main one being the Trail Making Test, next section). Overall, this mapping ensures that each cognitive domain is covered by at least one well-established task, implemented with the appropriate jsPsych tool:

  • Attention: visual-search-circle plugin, sequential stimulus trials for CPT.
  • Working Memory: custom sequence logic using existing display plugins.
  • Processing Speed: simple reaction tasks with button-response plugin.
  • Executive: combination of flanker (no special plugin) and TMT (custom plugin), plus possibly Stroop (image/text response).

We will also take advantage of open-source task repositories. For example, the Experiment Factory (ExpFactory) project provides implementations of classic psychology tasks (Stroop, n-back, etc.) in JavaScript. We can consult or adapt those implementations to ensure our tasks are validated and efficient. Additionally, prior open-source batteries (CAM, Adolphe et al. 2022, etc.) have tasks we can reference for stimuli and timing parameters. This reuse and mapping approach will accelerate development and ground our battery in validated methods.

Trail-Making Test (TMT) Plugin Development Plan

One key development is a new jsPsych-compatible plugin for the Trail Making Test, tailored for children and touchscreens. The Trail Making Test is a widely used measure of processing speed (Part A) and executive function (Part B, which adds set-shifting). It requires the participant to connect items in order on a page or screen. Our goal is to implement both Part A and Part B in a user-friendly, pediatric-appropriate way. Below is the plan, including features and technical design:

  • Plugin Design and Rendering: We will create a custom jsPsych plugin (e.g., jspsych-trailmaking) that displays a set of scattered “nodes” (circles containing numbers and/or letters). The plugin will likely use an HTML5 canvas or SVG to render the nodes and lines, as this allows drawing lines dynamically as the child connects the dots. At the start of the trial, the plugin will generate or load the layout of circles. For consistency, we might use a predetermined layout (like the standard TMT layouts) scaled to the screen size, or generate random positions with constraints (ensuring they are evenly distributed and not overlapping). The stimuli will be the circles labeled appropriately: Part A with numbers 1–N, Part B with numbers and letters. A children’s version uses fewer total points for easier visual search. For instance, adults use 25 points (1–25 or 1–13 + A–L), whereas the child version uses 15 points (1–15 for A, and 1–8 + A–H for B). We will likely adopt ~15 nodes for Part B for younger participants, possibly scaling up to 25 for older teens if needed or making it configurable. The plugin should allow specifying the node set (so we could even use a smaller set for a 5-year-old if necessary).
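Label generation for the configurable node set can be sketched as one helper that covers Part A, the classic 1-A-2-B alternation, and the shape-alternation variant discussed under pediatric-friendly features below. The output format (`{text, shape}` objects) and the function name are our assumptions; the real plugin would pair these labels with layout coordinates.

```javascript
// Build the ordered label list for a trail of n nodes.
// part: 'A' (numbers only) or 'B' (alternating sequence).
// variant (Part B only): 'classic' for 1-A-2-B..., 'shapes' for numbered
// circles/squares that alternate without requiring letter knowledge.
function makeTrailLabels(part, n, variant) {
  const labels = [];
  if (part === 'A') {
    for (let i = 1; i <= n; i++) labels.push({ text: String(i), shape: 'circle' });
  } else if (variant === 'shapes') {
    for (let i = 1; i <= n; i++) {
      labels.push({ text: String(i), shape: i % 2 === 1 ? 'circle' : 'square' });
    }
  } else {
    // classic alternation: 1, A, 2, B, 3, C, ...
    for (let i = 0; i < n; i++) {
      labels.push(i % 2 === 0
        ? { text: String(i / 2 + 1), shape: 'circle' }
        : { text: String.fromCharCode(65 + (i - 1) / 2), shape: 'circle' });
    }
  }
  return labels;
}
```

Because the whole sequence is a plain array, swapping a 15-node trail for an 8-node practice trail is just a different `n`, with no change to rendering or scoring code.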

  • Touch and Mouse Interaction: The plugin will be built to handle pointer input universally (so it works with mouse, stylus, or finger). We will capture touch events on the canvas. There are two possible interaction modes:

  • Drag/Draw Mode: The participant presses on the starting circle and drags a line connecting to the next circle, and so on in one continuous motion (like drawing the trail).

  • Tap Sequence Mode: The participant simply taps the circles in the correct order (the line can either auto-draw between the last tapped and current tapped circle for visual feedback).

Dragging can feel more like the original paper test, but it may be challenging for young kids to continuously drag without lifting their finger. The tap-sequence mode is more forgiving – the child can tap “1”, then “2”, etc., and the program will connect them. We will implement the plugin in tap-sequence mode for simplicity and robustness: when a circle is tapped, if it’s the correct next in sequence, the circle will highlight and a line will draw from the previous circle to this one; if it’s the wrong circle, we can provide a gentle indication (like a brief shake of that circle or a buzz) and not draw a line, prompting them to try again. The plugin will keep track of the next correct target internally.
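The tap-sequence decision just described reduces to a hit-test against the node layout. A minimal sketch, in which the node format `{x, y}`, the tap radius, and the three return values are our own conventions:

```javascript
// Classify a tap at (tapX, tapY) against the node layout.
// nextIndex is the index of the circle the child should tap next.
function classifyTap(tapX, tapY, nodes, nextIndex, radius) {
  for (let i = 0; i < nodes.length; i++) {
    const dx = tapX - nodes[i].x;
    const dy = tapY - nodes[i].y;
    if (Math.hypot(dx, dy) <= radius) {
      // tapping the expected circle advances; any other circle is an error
      return i === nextIndex ? 'correct' : 'error';
    }
  }
  return 'miss'; // tap on empty canvas: ignored, not counted as an error
}
```

On 'correct' the plugin would draw the connecting line and advance its internal target; on 'error' it would trigger the shake/buzz cue and increment the error count; 'miss' taps are simply discarded so imprecise touches are not penalized.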

  • Pediatric-Friendly Features:

  • Visual Design: The circles (nodes) will be large enough for small fingers and high-contrast for easy reading. We can use different colors for numbers vs letters in Part B, or perhaps different shapes, to assist those who might not be as familiar with the alphabet. In fact, there are established variants of TMT to reduce cultural or letter knowledge bias: the Color Trails Test uses alternating colored numbers instead of numbers and letters, and the Shape Trail Test (STT) uses alternating shapes (circles/squares) around numbers. We may incorporate an option in the plugin to use colored circles or different shapes as the distinguishing feature for the alternating sequence instead of letters, which can make Part B feasible for younger children who cannot recite the alphabet easily. For example, Part B could require connecting 1🔵, 2🔴, 3🔵, 4🔴, etc., alternating between blue and red numbered circles, rather than 1-A-2-B. This preserves the set-shifting aspect (alternating two sequences) without requiring letter knowledge or perfect color vision (if shape-based). We will choose a method (color or shape) that is intuitive and verify it in instructions (e.g., “connect the numbers, switching between circle and square”). Perhaps we’ll include both as options (with shape as default to avoid reliance on color vision).

  • Adaptive Difficulty: For very young children (ages 5–6), even connecting 1–15 might be challenging. We can introduce a practice mode with fewer nodes (say 1–8) just to teach the concept. The plugin could accept a parameter for the set size (N), so that a shorter practice trail can be presented first. Also, if a child is struggling (taking very long or making many errors), the task could terminate early for Part B to avoid frustration, and we would simply record it as incomplete or assign the maximum time score.
  • Feedback and Motivation: Since TMT is timed, typically you just measure completion time and errors. However, for a child-friendly twist, we might incorporate a fun element like guiding the child that they are helping a character travel from point to point. The feedback can be subtle during the task (perhaps each correct connection gives a “ping” sound or briefly highlights the line green; a wrong tap might give a gentle “try again” prompt). After finishing, we can show a “You did it!” message and maybe time taken. We will be careful that feedback doesn’t alter their performance (so as not to invalidate the measure), but gentle cues are acceptable.

  • Data and Scoring: The TMT plugin will record completion time (time from start until the last point is connected) as the primary outcome. It will also log errors, defined as taps on wrong circles (and possibly the distance drawn off-path if we did drag mode, but in tap mode, an error is just an incorrect tap). We might log each attempt at a connection for detailed analysis. jsPsych will allow us to store these easily (we’ll structure the plugin’s data output to include total time, number of errors, and perhaps the full tapped sequence). We will also measure time to complete the first half vs. the second half of the sequence to see if they slow down, although that’s more for research analysis. The plugin’s output can be a JSON containing these metrics. We’ll ensure that timing precision is sufficient (ms resolution is fine; jsPsych uses performance.now which is high resolution). Since this is a custom plugin, we will manually call jsPsych.finishTrial(data) when done to end the trial.

  • Error Handling and Exit Criteria: In an unsupervised setting, we must handle the case where a child is completely lost. We will set a reasonable time limit (e.g., 2 or 3 minutes) for each part, after which the trial auto-finishes to avoid endless frustration. If time runs out, the plugin can mark the part as incomplete or record the highest number reached. The instructions will explain that a child who gets stuck can press a “Help” button (which might highlight the next node or eventually skip the task). These safeguards ensure that even if a child cannot finish, the app doesn’t hang.

  • Integration with jsPsych: We will implement the plugin in accordance with jsPsych’s plugin architecture (defining trial() function, parameters, etc.). The plugin file (e.g., jspsych-trailmaking.js) will be included like other plugins. It will accept parameters for mode (A vs B), set of nodes (and their coordinates and labels), time limits, etc. For reuse, one could generate random layouts or supply a predefined layout. The plugin will handle rendering and input internally, then return data. We’ll test it thoroughly on both desktop and actual tablets to ensure touch responsiveness is good.
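A minimal shell for the plugin, following jsPsych v7’s class-based plugin API, might look like the sketch below. The parameter names (mode, nodes, time_limit) are our own choices, and ParameterType, which normally comes from the jspsych package, is stubbed here so the sketch stands alone:

```javascript
// Shell of the custom plugin in jsPsych v7's class-based style.
// ParameterType normally comes from the "jspsych" package; it is
// stubbed here so the sketch is self-contained. Parameter names
// (mode, nodes, time_limit) are our own choices.
const ParameterType = { INT: "int", STRING: "string", COMPLEX: "complex" };

class TrailMakingPlugin {
  static info = {
    name: "trail-making",
    parameters: {
      mode:       { type: ParameterType.STRING,  default: "A" },    // "A" or "B"
      nodes:      { type: ParameterType.COMPLEX, default: [] },     // [{x, y, label}, ...]
      time_limit: { type: ParameterType.INT,     default: 180000 }, // ms before auto-finish
    },
  };

  constructor(jsPsych) {
    this.jsPsych = jsPsych;
  }

  trial(display_element, trial) {
    const start = performance.now();
    const finish = (data) =>
      this.jsPsych.finishTrial({ mode: trial.mode, ...data });
    // ...render trial.nodes on a canvas inside display_element and
    // attach pointer handlers that call finish() on completion...
    // Auto-finish when the time limit elapses (exit criterion above).
    setTimeout(
      () => finish({ completed: false, total_time_ms: performance.now() - start }),
      trial.time_limit
    );
  }
}
```

The rendering and input code elided in the comment is where the touch-optimized canvas work would live; the surrounding shape is what jsPsych needs to run the trial in a timeline.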

  • Existing Work as Reference: We know of an open effort to implement TMT with jsPsych: a GitHub project that used jsPsych v6 and a modified psychophysics plugin to create a digital TMT. This project generated TMT stimuli images and likely tracked mouse movements. We will draw inspiration from it but update the approach for jsPsych v7 and our specific needs. The prior implementation demonstrates that capturing TMT performance in jsPsych is feasible. Our plugin will be more self-contained, drawing the circles and text dynamically rather than relying on pre-generated stimulus image files as that project did. Also, our focus on touch input and child-friendly adaptation (shapes/colors) goes beyond that implementation.

  • Testing and Calibration: We will pilot the TMT module with a few users of different ages to calibrate the layout (e.g., make sure circle size is adequate on a typical tablet, ensure that adjacent circles are not too close causing mis-taps, etc.). If we see common mistakes (like kids tapping out of order frequently due to not seeing a number), we might adjust visual cues (maybe lightly highlight the next number in practice trials). Calibration will also involve ensuring timing is accurate – e.g., if a child pauses on the screen, the timer should continue until completion unless they explicitly pause (we likely won’t include a pause in the middle of a timed TMT trial to preserve standard administration).

  • Outcome Utility: The Trail Making Test yields rich information: Part A time (psychomotor processing speed), Part B time (executive flexibility), and the difference or ratio between B and A as an index of switching cost. Errors (if a child goes to the wrong item out of sequence) indicate executive mistakes. In long COVID or POTS, we anticipate some kids may have slowed Part A (if processing speed is affected) and/or a large increase in Part B time (if multitasking or executive function is affected by fatigue). The data from this plugin will thus directly inform those hypotheses. It will also be valuable to have a fully open-source TMT tool, which can be shared with the community (ensuring we choose a license compatible with jsPsych, likely MIT, see licensing section).
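These derived indices are simple arithmetic on the plugin’s output; a small helper like the following (our own naming) could compute them post-hoc:

```javascript
// Derived TMT indices: the B-A difference and B/A ratio are the
// conventional switching-cost measures computed from part times.
function trailIndices(timeA_ms, timeB_ms) {
  return {
    difference_ms: timeB_ms - timeA_ms,
    ratio: timeB_ms / timeA_ms,
  };
}
```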

In summary, the TMT plugin is a central development item, providing a tablet-optimized, child-friendly digital Trail Making Test. It will allow both standard administration and adapted versions (with fewer points or non-letter alternation) for younger or impaired children. By designing it as a jsPsych plugin, we ensure it can seamlessly integrate into our battery’s timeline and that others can reuse it in their jsPsych experiments. The TMT’s known sensitivity to cognitive impairment makes it particularly well-suited to capture the executive function difficulties that might not show up in simpler tasks, and our implementation will preserve its core essence while adapting to our user group’s needs.

Survey Integration for Fatigue and Symptoms

In addition to performance tasks, self-reported symptoms and fatigue levels are crucial, especially for conditions like long COVID, POTS, and CFS. We will integrate standardized survey instruments into the jsPsych battery, leveraging jsPsych’s survey plugins to administer questionnaires such as the Modified Fatigue Impact Scale (MFIS) and other symptom rating scales.

  • Modified Fatigue Impact Scale (MFIS): The MFIS (21 items) evaluates the impact of fatigue on cognitive, physical, and psychosocial functioning – highly relevant for long COVID/POTS/CFS. We will implement MFIS in the battery, likely at the end or beginning of the session (to avoid biasing cognitive task performance). Using jsPsych’s survey-likert plugin, we can present MFIS items with their standard 5-point Likert scale (0=Never to 4=Almost Always). The survey-likert plugin allows multiple questions on one page with a common scale. However, 21 questions on one page might be overwhelming for younger participants, so we have two options:

  • Present MFIS in sections (e.g., 7 questions per page) to allow short breaks and better focus.

  • Or present each item one by one using survey-multi-choice (each as a single-choice question on its own page).

We will likely use the multi-question-per-page format for efficiency, but use jsPsych’s ability to customize question wording to be child-friendly (if needed). For example, we might slightly reword complex items for a 10-year-old’s comprehension (while retaining meaning). We will include an introductory instruction like “Please indicate how much of a problem each statement has been for you in the past week.” The output data from the plugin will include each item’s response, which we can sum or score according to MFIS guidelines.
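To illustrate the sectioned format, a helper along these lines (our own naming; item texts are placeholders, since the actual MFIS wording would be loaded from the language files) could build one survey-likert trial per page of seven items, with MFIS_Q1 through MFIS_Q21 as the question codes:

```javascript
// Paginated MFIS pages: one survey-likert trial per chunk of items.
// Item texts here are placeholders; question names MFIS_Q1..MFIS_Q21
// make each response identifiable in the output data.
const LIKERT_LABELS = ["Never", "Rarely", "Sometimes", "Often", "Almost Always"];

function chunk(items, size) {
  const pages = [];
  for (let i = 0; i < items.length; i += size) pages.push(items.slice(i, i + size));
  return pages;
}

function mfisTrials(itemTexts, perPage = 7) {
  return chunk(itemTexts, perPage).map((page, p) => ({
    type: "survey-likert", // in v7 code this would be the jsPsychSurveyLikert class
    questions: page.map((prompt, q) => ({
      prompt,
      name: `MFIS_Q${p * perPage + q + 1}`,
      labels: LIKERT_LABELS,
      required: true,
    })),
  }));
}
```

With 21 items and 7 per page, this yields three trials that can be pushed onto the timeline in order.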

  • Symptom Severity Scales: We may include additional surveys, such as:

  • A simple symptom checklist for current symptoms (headache, dizziness, etc.) or an autonomic symptom scale (for POTS, e.g., asking about lightheadedness, palpitations).

  • A brief cognitive difficulties scale (like asking the child or parent to rate attention, memory, thinking speed, perhaps drawn from a questionnaire or an ADHD checklist for context).
  • Mental health screening questions (depression/anxiety) if relevant, since mood can impact cognitive performance (parent-reported mood issues were high in long COVID samples). However, since our focus is cognitive, we might keep this minimal or optional.

Implementation for any of these would use jsPsych’s survey plugins: survey-multi-choice (for multiple-choice symptom frequency/severity questions) or survey-text (if asking any open-ended question, though likely we won’t in an unsupervised setting because open text from children is less useful and harder to analyze).

  • Placement and Flow: Surveys can be inserted at the beginning or end of the task battery timeline in jsPsych. A logical flow is to do symptom questionnaires first, when the participant is fresh, to get subjective data, then cognitive tasks, and perhaps a quick fatigue rating after tasks to see if the testing itself induced fatigue. We might also ask for a current fatigue level rating (e.g., 0–10 scale) both pre- and post-battery to gauge if the session tired the participant out – a dynamic measure of cognitive fatigability.
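A pre/post fatigue rating could be a single button-response trial, sketched here with placeholder wording (the phase field, our own convention, tags whether it ran before or after the battery):

```javascript
// A 0-10 fatigue rating as a single button-response trial; the
// prompt text is a placeholder and "phase" tags pre vs. post.
function fatigueRating(phase, promptText) {
  return {
    type: "html-button-response", // in v7 code: jsPsychHtmlButtonResponse
    stimulus: `<p>${promptText}</p>`,
    choices: Array.from({ length: 11 }, (_, i) => String(i)), // "0".."10"
    data: { measure: "fatigue_rating", phase },               // "pre" or "post"
  };
}
```

Comparing the "pre" and "post" responses in the output data then gives the fatigability measure directly.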

  • Using jsPsych for Surveys: The jsPsych library has robust support for surveys as seen with the built-in plugins. We will utilize features like:

  • Required responses (to ensure we don’t get missing data – or provide an option like “Prefer not to answer” if appropriate).

  • Validation (if numerical input needed, etc., though mostly Likert and multiple choice don’t need custom validation).
  • We can apply custom CSS to make the surveys touch-friendly: e.g., larger radio buttons for easier tapping by children, maybe emoji faces for anchors if that resonates (though MFIS likely should stick to textual choices).
  • If using survey-html-form for more complex layouts, we could craft HTML forms, but likely the stock plugins suffice.

Each survey plugin trial will output responses in a structured way (e.g., each question gets a key in the data). We will assign each question a short code (for instance, “MFIS_Q1”) so we know which responses correspond to which questions in the resulting data. jsPsych can combine data from all trials at the end, which will include the survey responses along with task performance.

  • Multilingual Surveys: Since we aim for EN/FR support, we will prepare French versions of all survey questions (MFIS has been used internationally, so a validated French translation may already exist; otherwise we will translate it). We’ll ensure the plugin displays the version corresponding to the selected language. This might involve loading a separate set of question text if language=FR. The same goes for any symptom items.

  • Example Integration: After the last cognitive task, we might have a section heading “Now we have some questions about how you feel:” and then a jsPsych survey timeline of these questionnaires. Or begin with “Before we start, please answer these questions about yourself,” to capture baseline fatigue and symptoms.

  • Data Management: Survey data will be stored just like task data in jsPsych’s data object. We will combine these when saving results (so each participant’s dataset includes both cognitive metrics and survey answers). If needed for scoring (like calculating MFIS total score), we can do that post-hoc or even implement a jsPsych on_finish function that calculates scores and gives feedback. For instance, it could display “Your fatigue score is ___ out of 84” if that’s something desired (though that may not be necessary to show to the participant).
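A scoring helper for such an on_finish step might look like the sketch below, assuming responses are keyed MFIS_Q1 through MFIS_Q21 (our own naming convention) with survey-likert’s 0–4 numeric coding:

```javascript
// MFIS total computed post-hoc from the merged responses, assuming
// our MFIS_Q1..MFIS_Q21 naming convention and 0-4 numeric coding.
// Returns null if any item is missing, so a partial sum is never
// mistaken for a valid total.
function mfisTotal(responses, nItems = 21) {
  let total = 0;
  for (let i = 1; i <= nItems; i += 1) {
    const v = responses[`MFIS_Q${i}`];
    if (typeof v !== "number") return null;
    total += v;
  }
  return total; // 0-84 for the full 21-item scale
}
```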

  • Considerations: We will be mindful of the time burden – MFIS has 21 items, plus any other scales. Filling these out can be tiring, especially for a fatigued individual. We might allow the possibility for a parent to assist younger children in reading questions. In an unsupervised home use, if a child is too young to read, a parent might read survey questions to them (we should mention this in instructions). We could also add an audio narration for survey questions as an optional feature (this is feasible with jsPsych by preloading audio of questions being read, and playing via an audio-play plugin or custom code when each page loads). That would improve accessibility for younger kids or those with reading difficulties.

Using jsPsych’s survey tools means we don’t have to develop new forms from scratch – it’s a matter of populating question text. The survey-likert and survey-multi-choice plugins are reliable and will ensure consistency in how data is recorded. This integration of subjective measures like MFIS complements the objective cognitive tasks, giving a fuller picture of the patient’s cognitive health. For example, we can later correlate a child’s MFIS cognitive fatigue score with their sustained attention task performance to see if subjective and objective measures align.

(A note on licensing for surveys: We will verify that using MFIS is permitted – MFIS is often freely used in clinical research, but it’s originally from the MS Council. If any special permission is needed, we’ll obtain it or choose a comparable open questionnaire. For now, we proceed with MFIS given its relevance.)

Deployment Architecture and Localization

Deploying the battery in a user-friendly and secure manner is as important as designing the tasks. Here we outline the architecture for hosting, data management, and supporting multiple languages:

  • Application Structure: The entire battery will be a web application, likely a single-page application that loads jsPsych and the relevant plugins, then runs through the timeline of tasks. We will organize the code such that each task module and survey is defined separately (for readability and modularity); a master initJsPsych({...}) call then creates the jsPsych instance and jsPsych.run(timeline) starts the experiment (in jsPsych v7, these replace the older jsPsych.init() API). We can use jsPsych’s built-in flow control to branch or randomize as needed. The UI will start with a welcome page (with language selection), then instructions, then tasks, then surveys, then an end screen.

  • Hosting Options: Since the battery is a static web app, one convenient option is to host it on a service like GitHub Pages or similar static hosting (Netlify, Vercel, etc.). This would allow easy access via URL and simplified updates (just pushing a new version). Such static hosting is free/low-cost and works well if data is saved to the client or sent to an external service. Alternatively, we could integrate with an experiment server like JATOS or use Firebase for handling data (if we prefer a database). However, given the open-source ethos, a simple static site plus a lightweight backend for data capture might be ideal. For example:

  • Host the HTML/JS/CSS on GitHub Pages (which gives a public URL for the app).

  • Use a back-end script (could be a tiny Node.js/Express app or even Google Forms/Sheets integration) to collect data submissions. jsPsych can be configured to send results via an AJAX POST request at the end of the experiment to a specified URL. We can write a simple server endpoint to accept that and store it in a database or CSV. If simplicity is key, another approach is to prompt the user to download their data file and email it in – but that’s less reliable and not user-friendly in unsupervised settings.

  • Data Storage and Security: We will ensure data is stored securely and in compliance with privacy rules for minors. If using a custom backend, data would be stored in a secured database (with no personal identifiers aside from perhaps an assigned participant code). If using a service like Firebase, data could be stored in Firestore with security rules to restrict access. Another option is using a platform like REDCap or Qualtrics which can host jsPsych experiments and automatically store data; but since we want an open-source deployment, we prefer not to rely on proprietary platforms. For our open-source release, we might provide two modes:

  • Local mode: Data is simply offered for download at the end of the session (jsPsych can prompt the user to save a CSV or JSON locally). This could be used in clinic settings where a practitioner directly collects the file.

  • Cloud mode: Data is sent to a configured server. We can give instructions in our documentation for how to set up a simple server script, or we might set up a default endpoint for our project if centrally coordinated.

We will use secure protocols (HTTPS for the site and any data submissions) to protect data in transit. Each session can generate a unique ID for the participant (which could be entered by the user if given a code, or randomly assigned) to match pre/post data if needed without using names.
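The two modes could share one client-side save routine, sketched below. localSave() and data.get().json() are jsPsych’s built-in data helpers; the endpoint URL is whatever the deployment configures, and the jsPsych instance and fetch function are passed in so the dispatch logic can be tested outside the browser:

```javascript
// One save routine for both modes. localSave() and data.get().json()
// are jsPsych's built-in data helpers; the endpoint URL is supplied
// by the deployment. Dependencies are injected for testability.
function saveResults(mode, jsPsych, endpoint, fetchFn = fetch) {
  if (mode === "local") {
    // Prompts the browser to download the dataset as a CSV file.
    jsPsych.data.get().localSave("csv", "results.csv");
    return null;
  }
  // Cloud mode: POST the full dataset as JSON (over HTTPS).
  return fetchFn(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: jsPsych.data.get().json(),
  });
}
```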

  • Scalability: Hosting on static sites means it can handle many users simultaneously as files are just served. If we funnel data into something like Google Sheets via a Google Apps Script, that could be a quick low-maintenance solution for moderate usage (though Sheets might cap at certain entries per minute). For a more scalable approach, a serverless function (AWS Lambda, etc.) to capture data could handle a large number of requests. These details can be refined based on expected usage volume.

  • Language Localization Implementation: As mentioned, we will have all user-facing text externalized. Concretely, we might structure the code like:

const TEXT = {
  EN: { welcome: "Welcome to the Brain Games!", instruct_attention: "In this game, you will ...", ..., question_MFIS_1: "I am easily fatigued", ... },
  FR: { welcome: "Bienvenue ...", instruct_attention: "Dans ce jeu, tu vas ...", ..., question_MFIS_1: "Je suis facilement fatigué", ... }
};
let lang = "EN"; // or set based on user choice
  • Then, when creating jsPsych trials, use the prompt or stimulus parameters with values from this TEXT dictionary. For example:
timeline.push({
  type: jsPsychInstructions,
  pages: [ TEXT[lang].instruct_attention_page1, TEXT[lang].instruct_attention_page2 ]
});
  • Similarly, survey questions would be built from TEXT[lang].question_MFIS_X. This way, switching the lang variable switches all the content. We’ll likely implement a simple language selection screen at the very beginning (maybe with flags or just “English / Français” buttons). Once selected, we set the lang variable and all subsequent trials use the appropriate text. jsPsych doesn’t provide automatic i18n, but our approach makes it manageable.
  • Platform Testing: The app will be tested on common browsers (Chrome, Safari, Firefox) and devices (iPad, Android tablet, desktop) to ensure compatibility. We will utilize responsive design – the canvas for TMT and layout for other tasks will scale with screen size. If a phone tries to access it, we might either warn that a larger screen is recommended or adapt the UI (though a phone is not ideal for some tasks like TMT due to small screen). But since tablets are specified, we assume at least ~10-inch screens, which is fine.
  • Dependencies: We’ll use the latest jsPsych (v7+) which can be included via CDN or bundling. If any extensions (like the touchscreen extension from 2024) become relevant, we might include them, but likely we can handle touch in our code. We might use some auxiliary libraries for visuals (e.g., if using Canvas, we might just use the raw Canvas API or a lightweight drawing library if needed). All dependencies will be documented and kept minimal.
  • Version Control & Continuous Deployment: Since this is open-source, we will maintain a repository (e.g., on GitHub or GitLab) for the code. That repository can be linked to the hosting (if GitHub Pages, the main branch can directly publish). We’ll include instructions for others to deploy their own instance if they want to self-host (for example, a clinic could fork the repo and run on their own server, especially if they want data stored locally).
  • User Interface & Navigation: We will implement a simple navigation flow with jsPsych’s fullscreen mode and progress bars. For example, we can use the fullscreen plugin at the start to make the app fullscreen (important on tablets to utilize the whole screen and avoid distractions). We can display a progress bar at the top of the screen indicating how many tasks are done (jsPsych provides a progress bar functionality for timeline if configured). We’ll segment the progress by module (e.g., “Task 3 of 5” etc.). This helps users know how much is left, which is good for children’s patience. If the battery is lengthy, we may consider allowing it to be done in two shorter sessions, but since each task is a few minutes, a single session of ~20-30 minutes including surveys is the aim.
  • Error Recovery: If the app or device crashes mid-way, data up to the crash may be lost unless we implement save points. One idea is to use localStorage to periodically save partial data, so if the page refreshes, it could resume or at least not lose everything. This can be complex, but we will at least document that in case of a crash the test might need to be redone. For clinic use, someone might supervise and ensure connectivity, etc. For home use, we might provide a “resume code” if that’s viable (maybe out of scope initially). Simpler: instruct users to ensure a stable environment and not to close the browser until they see the completion message.
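A save-point helper might look like the following sketch, with the storage backend injected (window.localStorage in the browser) so the logic runs anywhere; hooking it into initJsPsych’s on_trial_finish callback is one plausible trigger:

```javascript
// Save-point helper with the storage backend injected
// (window.localStorage in the browser), so the logic is testable
// anywhere. clear() should run on successful completion so a stale
// checkpoint is not mistaken for a crashed session.
function makeCheckpointer(storage, key = "battery_checkpoint") {
  return {
    save(jsonString) { storage.setItem(key, jsonString); },
    load() { return storage.getItem(key); }, // null if no checkpoint
    clear() { storage.removeItem(key); },
  };
}
// In jsPsych v7, a plausible trigger is initJsPsych's
// on_trial_finish callback, e.g.:
//   on_trial_finish: () => cp.save(jsPsych.data.get().json());
```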

In essence, the deployment will use a web-first architecture: static front-end with jsPsych, optional back-end for data. This makes it easy for anyone to spin up the battery by opening a URL, and aligns with the open-source goal (no proprietary app installation needed). By supporting bilingual text and being accessible on standard hardware, we maximize the reach of the tool.

Key Resources and Example Projects

During development, we will draw on several existing resources and prior implementations to guide us:

  • CAM (Computerized Attention Measure) Battery – Pinelli et al. 2025: This is an open-source web-based battery for preteens measuring attention. It includes tasks like Stroop, Flanker, Go/No-Go, sustained attention (visual and auditory), working memory, and visuo-motor coordination. The CAM project has made their battery code available on GitLab. We can review this repository for inspiration on how they structured the tasks and their gamification approach (they used a cartoon narrative and gamified instructions). The fact that CAM was implemented in a school setting and achieved good psychometric results suggests their task designs (timing, difficulty) are effective for kids. We can potentially reuse stimulus sets or ideas (for example, their selective attention visual search task with monkeys, or their complex span task logic). The CAM battery’s open license (as the paper is CC-BY) encourages reuse as long as credit is given.
  • Maxime Adolphe et al. 2022 – Open-Source Attention & Memory Battery: Adolphe and colleagues developed a battery of seven cognitive tasks (multiple object tracking, enumeration, go/no-go, load-induced blindness, task-switching, working memory, and memorability) using p5.js, and demonstrated high reliability in an online experiment. The key takeaway for us is the open-source spirit – they share their source code to allow others to expand it. We will locate their code (possibly on an open repository linked to the paper) to see if any task can be adapted or if we can integrate p5.js sketches into jsPsych (jsPsych has a canvas plugin that might allow embedding of such tasks). Their battery confirms that complex tasks like task-switching and multiple object tracking can run in browsers smoothly. We might not implement MOT or enumeration for now, but if needed, their approach could guide future extensions (e.g., adding a visual memory task).
  • Experiment Factory (ExpFactory): This is a collection of cognitive tasks implemented in a unified framework. Many tasks are HTML/JS (some using jsPsych or custom JS). For example, they have a Stroop task, n-back, Go/NoGo, etc. Pinelli et al. even referenced ExpFactory for their Stroop design. We will check the ExpFactory repository for any tasks that match our needs and see how they implemented them. This can save development time and ensure tasks are consistent with how they’ve been done in literature. We must check licenses (ExpFactory is open source, likely MIT). We’ll adapt the tasks to our style (ExpFactory might not use jsPsych for all tasks; if not, we’ll translate them into jsPsych trials).
  • jsPsych Documentation and Example Experiments: The official jsPsych documentation (jspsych.org) contains many example paradigms and best practices for building experiments. We will rely on it for technical guidance (e.g., how to preload images, how to randomize trials, plugin parameter details). For instance, jsPsych’s own demo experiments include a simple reaction time task, which we can mirror. The list of plugins and their documentation pages will be a constant reference to maximize use of existing functionality instead of writing custom code. Additionally, the jsPsych community (like the Google Group or Slack, if available) can be consulted if we run into issues (though being careful to preserve privacy of our specific content if needed).
  • Touchscreen Research (Strittmatter et al. 2024): The paper we reviewed on a jsPsych touchscreen extension demonstrates that jsPsych can be extended for better touchscreen support and that children can perform tasks on touch devices reliably. While our implementation will largely rely on standard pointer events, this resource is useful for validation and any pointers on calibrating touch response timing. If their extension is publicly available, we might integrate it to capture touch positions or swipes if needed (for example, if we wanted to analyze the exact trail drawn in TMT, capturing continuous touch coordinates would be beneficial – something a mouse-tracking or touch extension could facilitate). At minimum, this reference provides evidence that our approach is methodologically sound.
  • Trail Making Test References: We looked at studies on TMT adaptations (like the Shape Trail Test for children) to ensure our TMT plugin incorporates best practices for younger ages. These academic references reassure us that using shapes or colors as alternatives in Part B is an accepted method to eliminate letter knowledge requirements. We will also consider the Children’s Color Trails Test (CCTT), which is a published tool using colored circles. While we cannot use the exact proprietary test, we emulate the concept in our own open implementation. There is also the TRAILS-P (Trail Making for Preschoolers) study that used a storybook context – we likely won’t go as far as a storybook for TMT, but it’s good to know such creative adaptations exist.
  • Licensing of Code and Instruments: We will choose a permissive open-source license for our project, likely MIT or BSD, to be consistent with jsPsych’s MIT license. This allows others to reuse our code freely. We must ensure any external code we incorporate is compatible (CAM battery code is CC-BY, which is fine as we can credit them; ExpFactory tasks might be MIT; we will include attributions in our documentation). For instruments like MFIS, we found that it is available through clinical outcome assessment repositories – we will double-check that using it in a non-profit research context is allowed. If required, we’ll seek permission or use an alternative like the Pediatric Quality of Life Fatigue module (which might also have licensing). All tasks we implement (like Stroop, TMT, etc.) are standard cognitive tests in the public domain, so no issues there aside from not using any trademarked names in the tool’s title. We’ll include a section in our documentation listing all third-party resources and their licenses.
  • Community and Example Repos: We will provide references to example jsPsych experiment repositories (for instance, the official jsPsych GitHub has an experiments repo with classic paradigms). Also, any code we use or modify (like the jspsych-psychophysics plugin for precise stimulus presentation if we decide to use it for consistent drawing timings) will be acknowledged.

To illustrate with a concrete example of resource use: If we take the open jsPsych TMT by GEJ1 on GitHub, we will examine how they handled drawing and input. We might borrow some of their stimulus generation logic (they used MATLAB scripts to generate stimuli images, but we’ll do it in JS to avoid external dependencies). The repository is openly available, but we will confirm its license terms before reusing code rather than assuming it inherits jsPsych’s MIT license. Either way, it is an invaluable head start and a useful reference implementation for our plugin.

By standing on the shoulders of these projects, we ensure our battery is built on tested paradigms and we accelerate development. We also strengthen the open-source network effect: our battery, once complete, will itself become a resource for others. We will maintain clear documentation and perhaps a website demonstrating the tasks (so that researchers or clinicians can try a demo).

Licensing and Ethical Considerations

Our project embraces an open-source philosophy. We will license the code under an MIT License (or similarly permissive license), which is the same license jsPsych uses. This choice allows integration with jsPsych seamlessly and encourages broad reuse – anyone can copy, modify, or distribute our battery as long as they include the license. We will include the license file in our repository and header comments in code files as appropriate.

Third-Party Content: We will be careful that all content incorporated is either original, public domain, or compatible with our license:

  • jsPsych itself (MIT licensed) is fine to use.
  • The CAM battery code is released under CC-BY 4.0 (from their paper), which is compatible as long as we credit them. If we use any of their specific code or assets, we’ll note “Portions of this code from the Pinelli et al. 2025 CAM battery”.
  • Any images or audio we include (if any for rewards or instructions) will be either created by us or from free libraries (with attribution if needed). For example, if we use a cartoon character for narrative, we’ll either draw one or use an open-license graphic.
  • The MFIS questionnaire is owned by the National Multiple Sclerosis Society. We will check if it’s freely usable; if not, we might use an alternative like the PedsQL Multidimensional Fatigue Scale (which has a validated child report version) or the short 5-item MFIS (MFIS-5), which might be open via publication. If we include MFIS, we might need to include a note “MFIS © NMSS, used with permission” depending on their terms. This is a minor legal consideration outside code licensing but important for distribution.

Contribution and Community: By open-sourcing our battery, we also enable other developers to contribute (e.g., adding a new language translation, or extending a plugin). We’ll maintain our code in a public repository and encourage community feedback, possibly adopting a Contributor License Agreement (CLA) if needed for external contributions to ensure the project remains open and free.

Data and Privacy: While not exactly a licensing issue, when deploying in real settings, we need to ensure compliance with data protection laws (e.g., GDPR if in Europe, or Canadian PIPEDA, etc.). Our code will include consent dialogs for research use, and we won’t include any tracking or analytics beyond what’s needed for the tests. If someone else deploys our battery, they should also host it securely. We’ll provide documentation on how to configure data saving and advise on obtaining appropriate ethical approvals if used in research.

Clinical Use Disclaimer: We should state that this battery is not a diagnostic tool by itself, but rather an assessment aid. Any open-source cognitive test battery should include a disclaimer that it’s not medical advice, etc., to avoid liability. Clinicians can use it as part of their assessment, but interpretation should be done by professionals.

Maintaining Open-Source Status: To prevent misuse, we might explicitly prohibit using the code in closed-source proprietary products without abiding by the license. MIT license allows commercial use, so someone could technically incorporate it into a paid app – we accept that possibility as part of open source, but we hope they contribute back improvements. If this is a concern, an alternative license like AGPL could force share-alike, but that might hinder adoption. Likely MIT is fine given the nature of the project.

In conclusion, all these considerations ensure that the pediatric cognitive assessment battery is developed in line with its foundational goals: openness, accessibility, and scientific rigor. By addressing technical, practical, and legal aspects, the final implementation will be a comprehensive tool that can be readily adopted or adapted by others to improve cognitive assessment for children affected by long COVID, POTS, CFS, and beyond. The development team can proceed with a clear roadmap for building each component, confident that prior art and community resources are available to support each step. With careful implementation and testing, this battery can become a valuable open resource in pediatric neuropsychology.