Skip to content

Introduction

Deploying cognitive tests and patient-reported outcome (PRO) surveys on mobile devices for clinical research requires a platform that is mobile-friendly, offline-capable, and open-source. The ideal solution should support high-precision timing for cognitive tasks, provide a child-friendly user interface, integrate with data systems like REDCap, and pose minimal regulatory or licensing hurdles. Traditionally, jsPsych (an open-source JavaScript library for behavioral experiments) has been used for such web-based tasks. Here we examine several alternative open-source platforms and frameworks – including lab.js, PsychoJS/Pavlovia, MindProbe/JATOS, Open Lab, and approaches like Cordova/Capacitor wrappers or native app frameworks – and compare them to jsPsych on key criteria. We highlight each option’s support for offline use, timing accuracy, UI customizability, REDCap/API integration, and learning curve/maintenance, and summarize the strengths and trade-offs. A comparison table and recommendations are provided to guide decisions on whether switching or hybridizing with jsPsych is justified.

Key Requirements for Mobile Cognitive Testing

Any platform chosen must meet the following requirements:

  • Offline Functionality: The ability to run tests without continuous internet connectivity, caching all required resources locally and storing results for later sync. This is critical for reliability in clinics or home environments with inconsistent connectivity.
  • Timing Precision: High temporal accuracy for stimulus presentation and response collection (on the order of milliseconds). While browser-based tests inherently have some variability, frameworks should minimize timing jitter or offer mitigation strategies acceptable in clinical research.
  • Mobile-First, Child-Friendly UI: Interfaces must be responsive and touch-optimized for smartphones/tablets, with options to customize layouts, fonts, and graphics to be engaging and usable by children.
  • Data Integration: Facilities to securely transmit or sync captured data to back-end systems – ideally directly to REDCap via its API, or to a secure custom server – with minimal manual steps. Open APIs and flexible data export are needed for clinical trial workflows.
  • Open-Source Licensing: The platform should be free, open-source, and permissive in license (e.g. MIT, Apache) to avoid legal obstacles in regulated trial settings. This also encourages community validation and extensions.
  • Compatibility with jsPsych Assets: Since there is existing investment in jsPsych, it’s advantageous if the new solution can reuse or interface with jsPsych tasks or at least minimize the effort to port existing experiments.

With these criteria in mind, we evaluate the leading open-source options beyond jsPsych.

Overview of jsPsych (Baseline)

jsPsych is a popular MIT-licensed JavaScript library for building behavioral experiments in a web browser . It provides a rich set of plugins for common cognitive tasks and survey questions, and it measures response times with accuracy comparable to standard lab software (typical JS responses are ~10–40 ms slower but with similar variance to tools like E-Prime) . By default, jsPsych runs in any modern browser and can achieve reaction time precision on the order of a few milliseconds – studies show its timing variability (approx. 3–8 ms across setups) is only slightly higher than dedicated platforms like PsychoPy, and well below human physiological variability .

Offline use: jsPsych experiments consist of static HTML/JS/CSS files, so they can be run offline by simply opening the file in a browser (using the file:// protocol or a local server) . In fact, the jsPsych docs note this as the quickest way to test an experiment . However, due to browser security, certain features (like WebAudio-based sound timing and video preloading) are automatically disabled in offline “safe mode” to prevent errors . This means that while offline use is possible, multimedia stimuli might not achieve the same timing precision unless you work around these restrictions (e.g. by using a local webserver or a Cordova app which can bypass CORS limitations). In practice, many have packaged jsPsych studies into mobile apps or used service workers to enable offline progressive web apps.

UI and mobile: jsPsych’s plugin system makes it easy to implement surveys (Likert scales, text responses, etc.) and cognitive paradigms, but the default HTML elements may need custom CSS for a truly mobile-friendly, kid-friendly interface (e.g. larger buttons, touch-responsive controls). Fortunately, jsPsych is very flexible: one can inject custom HTML/CSS for stimuli or instructions, or even write custom plugins, meaning any interface achievable in a webpage is possible in jsPsych with enough coding. There is no graphical study builder for jsPsych (experiments are coded in JavaScript), so some web development proficiency is required to heavily customize the UI. That said, its large user community and examples help reduce the development burden.

Integration: jsPsych by itself does not lock you into a specific data backend – you can save data as JSON/CSV files or send it to a server. With some scripting, jsPsych can make HTTP POST requests (e.g. using fetch or AJAX) to REDCap’s API. Indeed, researchers have successfully configured jsPsych/PsychoJS to send data directly to REDCap by obtaining an API token and formatting the data as an API request . This requires a developer to write the integration code, as there’s no one-click REDCap plugin. Alternatively, jsPsych’s CLI “Builder” tool can output a JATOS-ready archive, meaning you could use JATOS (see below) to handle data collection on your own server . In sum, jsPsych offers full flexibility for data handling, but you must implement the pipeline.

Learning curve: For those with coding experience, jsPsych is relatively straightforward, and its documentation is thorough. But non-programmers may find the purely code-based approach challenging at first. Maintaining jsPsych experiments is manageable – the library is actively maintained, and upgrading to the latest version (v7) is smooth, though one must update custom code accordingly. Given that our team already has jsPsych-based tasks, sticking with jsPsych leverages our existing knowledge and codebase. The question is whether another tool might significantly ease offline deployment or provide other benefits to justify a switch.

lab.js – Visual Study Builder with Offline Support

lab.js is an open-source experiment builder that provides a graphical interface to construct studies via drag-and-drop, while still allowing advanced users to tweak HTML/CSS/JS as needed . It is free and open-source (MIT license) and was explicitly designed to make web experiments as powerful as lab software with easier usability . Notably, lab.js emphasizes broad compatibility and integration: “if it’s possible on a web page, you can do it” and the library can work offline, with external services, or on your own server .

  • Offline functionality: lab.js fully supports offline data collection scenarios. Experiments can be packaged as a single HTML file with embedded assets or a bundle, which can then be run locally in a browser or distributed as an app. The project website states that “lab.js supports your workflow, whether you’re collecting data offline, using external data collection services, or running studies on your own server.” . In practice, using lab.js offline is similar to jsPsych – you can open the study file without internet. Additionally, lab.js has a built-in preloader and caching mechanisms to ensure stimuli are available, and it even provides a one-click export for offline/PWA. This makes it straightforward to create a study in the GUI and then deploy it as a static package for offline use.
  • Timing precision: lab.js puts strong emphasis on timing accuracy. Published benchmarks indicate that lab.js achieves highly precise stimulus timing across browsers, typically deviating at most by one display frame (~16 ms) and often hitting exact frame timing in Chrome . The developers conducted extensive validation with external devices, and claim “presentation and response times are kept and measured with high accuracy and precision heretofore unmatched in browser-based studies.” . While that claim may be ambitious, independent comparisons (e.g. the “timing mega-study”) found lab.js’s performance roughly on par with jsPsych and other top platforms, all showing response time jitter on the order of a few milliseconds under good conditions. Thus, lab.js is suitable for reaction-time tasks; as with any web-based tool, careful testing on your target devices is prudent.
  • UI customization and mobile UX: lab.js allows rich customization. Through the GUI, you can configure stimuli, forms, loops, randomization, etc., and you can insert custom HTML elements or CSS styles for fine control. Because the output is a standard web page, you can incorporate images, videos, or even external JS libraries to enhance the UI. The platform supports touch events, and you can design the interface with a “responsive” layout to adapt to smaller screens. Many common paradigms have templates in lab.js’s gallery (e.g. Stroop, Flanker) which you can modify . This jump-starts development and ensures tasks are presented in a consistent, polished way. For surveys, lab.js has dedicated questionnaire components and can easily create Likert scales, multiple-choice, etc., similar to Qualtrics style forms. Overall, achieving a child-friendly interface is quite feasible – one can include large, colorful buttons, images as response options, auditory feedback, etc., all within the lab.js framework. The ability to visually preview the study while building it is a plus for UX refinement.
  • REDCap or API integration: lab.js is designed to “integrate well with external tools”, and will “happily send [its] results to an external data collection service.” . In practice, lab.js supports embedding experiments within survey platforms via an <iframe> and uses the postMessage API to deliver data to the parent page . For example, one could embed a lab.js experiment in a hidden field of a REDCap survey page or use a wrapper that catches the data and pushes it to REDCap. More directly, lab.js’s survey integration export can prepare a version of the study that automatically sends the final JSON or CSV results to a specified URL or callback function . This means a developer can set up a small script (in the hosting page or server) to receive lab.js data and use REDCap’s API to import it. While this still requires some configuration, lab.js provides guidance for integrating with Qualtrics, JATOS, OSF, and others out-of-the-box. REDCap integration would be analogous – achievable with moderate effort. The open nature of lab.js means you have full control to format and send data as needed.
  • Learning curve and maintenance: lab.js offers a gentler learning curve for non-coders compared to jsPsych. A researcher can use the GUI builder to set up an experiment without writing code, which is ideal for quick prototyping or for users less comfortable with programming . The documentation and community support (Slack channel, tutorials) are strong. However, mastering the tool’s advanced features (for example, writing complex logic or custom trial sequences) might require digging into the code or using the developer mode. The good news is that the builder generates a fully editable script – advanced users can directly modify the code for fine-tuning. Maintenance of lab.js studies is relatively low-effort: since it’s actively maintained (recently updated in 2025) and versioned, you can update the library if needed, though you’d want to test that existing studies still run the same. One consideration is migrating existing jsPsych tasks – there’s no automatic converter from jsPsych to lab.js, so tasks would need to be rebuilt in the lab.js builder or manually coded using lab.js’s library. If we have many complex existing experiments, this porting effort is non-trivial. But for new studies or a gradual transition (e.g. using jsPsych for some tasks and lab.js for new ones), lab.js is a strong candidate, especially given its seamless offline deployment and user-friendly design.

PsychoJS/Pavlovia – PsychoPy for the Web

PsychoJS is the JavaScript counterpart of the PsychoPy experiment builder. It enables running PsychoPy-designed experiments in a browser, maintaining much of PsychoPy’s precise timing and stimulus control features . PsychoJS is also open-source (MIT licensed) . Researchers often use PsychoPy’s GUI (“Builder”) to design an experiment and then export it to web-friendly PsychoJS automatically, which can then be hosted on Pavlovia (the official server) or elsewhere. This approach leverages PsychoPy’s rich feature set (originally for lab use) and brings it online.

  • Offline and deployment: PsychoJS experiments are essentially static web pages with a bunch of JS libraries (including the core PsychoJS library and your experiment script). Therefore, like jsPsych/lab.js, they can be run offline if you have the files locally. In PsychoPy Builder, there’s an option to “Export to HTML” which produces the necessary files; you can then run a local webserver or use a tool like Cordova to package them. On a desktop, PsychoPy’s Runner will even launch a temporary local server for testing the PsychoJS output . However, the typical use case is to upload the experiment to Pavlovia.org and let participants access it via URL – which does require internet. Pavlovia itself doesn’t support offline data collection (it’s an online platform). For offline mobile use, one would likely need to create a hybrid app of the PsychoJS experiment. This is technically feasible (it’s analogous to doing so with jsPsych), but not officially documented in detail. Some developers on the PsychoPy forums have done local/offline PsychoJS by running a local server or by embedding in an app . In summary, PsychoJS doesn’t natively provide an “offline mode” beyond local testing, but being open-source JS, it can be deployed in an offline manner with additional effort.
  • Timing accuracy: PsychoJS (and PsychoPy by extension) has a strong reputation for timing. PsychoJS uses the Pixi.js WebGL rendering engine for drawing stimuli on an HTML5 canvas , which allows it to synchronize screen refreshes and present stimuli with minimal dropped frames. The Timing Mega-Study found that PsychoPy (running locally) was among the best for timing precision (sub-millisecond jitter) and that its online version (PsychoJS on certain browsers) achieved very close to millisecond precision as well . In particular, PsychoJS showed ~3–4 ms response time SD in many conditions, which is as good as or better than jsPsych’s typical ~5–8 ms in the same study . PsychoJS is also careful about syncing stimulus onset to screen refresh (it offers functions analogous to PsychoPy’s win.flip() for controlling flips). For tasks requiring fine-grained timing (e.g. visual threshold tests, psychophysics), PsychoJS might have an edge due to these optimizations. It’s worth noting that all web experiments can suffer from variability due to device hardware and browser behavior, but PsychoJS is designed to mitigate this as much as possible. It also supports collecting precise reaction times relative to stimulus onset, and can use browser performance clocks. In practice, PsychoJS is certainly adequate for most clinical cognitive assessments where differences are on the order of tens or hundreds of milliseconds.
  • UI customization and child-friendly design: PsychoJS inherits the paradigm of PsychoPy – you typically define stimuli (text, images, shapes) and their layout in a Cartesian coordinate space, or use form components, rather than designing arbitrary HTML pages. This is great for traditional cognitive experiments (where you might draw stimuli at precise locations, present fixation crosses, etc.), but less straightforward for making fancy GUIs or multi-question pages. PsychoPy has a Form component that can present questionnaires (multiple questions on screen with response fields), but there have been some issues with its online implementation in past versions . Nevertheless, one can certainly create surveys in PsychoJS by either using the Form component or by treating each question as a separate routine. For a child-friendly interface, PsychoJS allows you to incorporate images and sounds easily (e.g. you could present a fun graphic when a child completes a task). However, customizing the look (fonts, button styles) might require adding CSS or JS in “code components”. PsychoJS isn’t as free-form as jsPsych or lab.js in terms of arbitrary HTML; it’s more akin to a controlled environment (especially when using Builder). If you code directly in PsychoJS (without Builder), you could in theory integrate HTML elements, but that is uncommon. The intended workflow is to let PsychoJS handle all rendering via the canvas for consistency. This yields a smooth experience but could be a bit limiting for complex UI layouts. That said, PsychoJS absolutely supports mobile: the team explicitly mentions that experiments can run on phones and tablets . In fact, “participants can run them on any device… desktops, laptops, or tablets. In some circumstances, they can even use their phone!” . Ensuring touch input works (e.g. capturing on-screen button presses) is possible by using the Mouse/Touch components or by binding keys to on-screen stimuli. 
We’d likely need to test our tasks on small screens and adjust text size and positions accordingly in PsychoPy’s Builder. PsychoJS can go full-screen in the browser to maximize usable area – an important feature for mobile use.
  • Integration with REDCap/servers: Out-of-the-box, if you use Pavlovia, data are stored on Pavlovia’s server (and can be downloaded as CSV or via their API). But since we prefer REDCap, we’d bypass Pavlovia’s storage. PsychoJS code can use JavaScript XMLHttpRequest or fetch to send data to external endpoints, similar to jsPsych. The PsychoPy team does not officially support custom server integrations (to encourage using Pavlovia), but they acknowledge it’s possible because “PsychoJS is open source… you can have it talk with something else” . Indeed, developers have made PsychoJS directly communicate with REDCap’s API . One approach is to include a small JS snippet at the end of the experiment that formats the data and POSTs it to REDCap’s API URL (with the appropriate token). This requires solving any cross-origin issues (e.g. hosting the experiment on a domain allowed by REDCap’s CORS policy, or using a proxy script). In our context, we might host the PsychoJS study on the same server that can relay data to REDCap. Alternatively, we could run the experiment inside a WebView in a mobile app where CORS is less restrictive and then call REDCap. In any case, it’s feasible but not plug-and-play. Pavlovia itself does not integrate with REDCap, so a custom approach is needed. The maintenance burden might increase if we diverge from the standard Pavlovia workflow, since we’ll need to ensure our custom data push works reliably.
  • Learning curve and maintenance: If our team is not already familiar with PsychoPy, there would be some learning curve to use the Builder interface and its concepts (routines, timelines, etc.). However, many psychologists find PsychoPy intuitive after some practice, and its community is large. For those already using PsychoPy for lab tests, moving to PsychoJS is easy – just design as usual and hit “export”. For a team coming from jsPsych, though, it might feel like a shift to a new paradigm. Debugging PsychoJS experiments can also be tricky if you only use Builder, since the generated code might not be immediately familiar. On the plus side, PsychoPy/PsychoJS’s rich feature set might cover some needs that would otherwise require custom coding in jsPsych (e.g. built-in staircase procedures, specialized visual stimuli, etc.). Maintenance of PsychoJS experiments usually means maintaining the PsychoPy source experiment; as PsychoPy and PsychoJS versions evolve, you might need to re-export or adjust components. PsychoPy is under active development (with frequent updates), which can be a double-edged sword – new features appear, but you must occasionally adapt old experiments to new version quirks. Pavlovia hosting has a cost (typically per participant or a subscription) if used; since we prefer open/free solutions, we’d likely self-host the experiments. That could mean setting up our own server (or using JATOS with PsychoJS compatibility). Overall, switching to PsychoJS primarily makes sense if we want to leverage PsychoPy’s capabilities or if our team is already fluent in it. Its strength is precise control and an integrated GUI design process, but it offers less native flexibility for web integrations and requires some workaround for offline use.

JATOS (and MindProbe) – Open-Source Experiment Server

Rather than a front-end library, JATOS (Just Another Tool for Online Studies) is an open-source server application for managing and deploying studies. It complements front-end frameworks like jsPsych, lab.js, or PsychoJS by handling data storage, user management, and study flow on the back-end . JATOS is released under Apache 2.0 and gives you full control by running on your own server . MindProbe is a free public JATOS server (sponsored by ESCoP) that anyone can use, which demonstrates JATOS’s capabilities without requiring your own infrastructure .

  • Offline capabilities: As a server-based solution, JATOS is primarily intended for online use, but it can be used in offline/local network scenarios. For example, a lab could run a JATOS server on a local machine or closed network, and devices could connect to it via Wi-Fi without internet. However, JATOS itself does not have a mobile app or offline client – it still serves studies through a browser. So if a device has absolutely no network connectivity, it wouldn’t be able to load the study from JATOS or send data to it in real-time. One workaround for true offline field use is that JATOS allows packaging a study for offline data collection, where data can be later uploaded. There’s a feature where you can run a study “offline” in a browser and then manually upload the result files to the JATOS server afterwards (this is described in their docs as using JATOS in batch mode offline). Still, this is a bit cumbersome and not as seamless as an app that caches and syncs. MindProbe, being a hosted server, requires internet connectivity; it wouldn’t help for offline. In summary, JATOS is excellent for self-hosting (enhancing data privacy and compliance) and can be part of an offline solution if you configure a local server, but on-device offline operation is not its primary use case.
  • Timing accuracy: JATOS itself does not determine timing – that’s handled by whatever front-end (jsPsych, lab.js, etc.) you use within it. JATOS simply delivers the files to the browser and collects the data. Thus, a jsPsych experiment running via JATOS has the same timing characteristics as it would elsewhere. One minor consideration is that JATOS can coordinate multi-part or multi-user studies (like sending messages between participants in real-time), but for our single-participant cognitive tests, this isn’t directly relevant to timing. The overhead JATOS introduces is negligible in terms of reaction time measurements, since those are recorded client-side. So JATOS does not compromise timing; it supports all the major frameworks which have been validated for precision .
  • UI and customization: Again, JATOS is back-end. The participant interface is still whatever you built with jsPsych/lab.js/etc. JATOS does provide a basic wrapper web page when serving a study, but you can fully customize the study pages themselves. It also has a nice admin UI for researchers to monitor sessions, but participants won’t see that. One UI aspect to note: JATOS can present a list of available studies or require login codes, etc., before the experiment loads. If we were to use JATOS in a trial, we might embed it in a tablet and have participants select a task, etc. We would have to design that user flow. But fundamentally, the look and feel of each task is determined by our front-end code (which could be styled for kids as needed).
  • Integration with REDCap: JATOS stores data in its own database and lets you download it (e.g. as CSV) after collection. It doesn’t directly push to external systems. To integrate with REDCap, one approach would be to use JATOS’s API or export functions to periodically export data and then import to REDCap (potentially using REDCap’s API or manual upload). Another approach: since JATOS studies can execute arbitrary JavaScript, you could potentially add a callback in the study that also sends data to REDCap’s API upon completion (just as you would in jsPsych). However, a more straightforward pattern is to treat JATOS as the primary data store and then do a batch transfer to REDCap for record-keeping. Considering regulatory environments, some prefer not to have patient data on multiple systems. In that case, one might skip JATOS and send directly to REDCap, or conversely, use JATOS exclusively and not use REDCap for the ePRO data at all. If REDCap integration is a must and in real-time, JATOS doesn’t simplify that – it still requires custom work or double-handling of data.
  • Learning curve and maintenance: Setting up JATOS requires deploying a web server (it’s a Java application). For technically-inclined team members, this is moderate effort – the documentation is clear, and many labs run JATOS on institutional servers or even locally on a PC for lab studies. Once running, adding studies is relatively easy: for jsPsych or lab.js experiments, you zip them in a JATOS package (both jsPsych and lab.js have tools to export in JATOS format ). The JATOS web UI then allows uploading that study, setting worker credentials, etc. For a clinical trial, the advantage is full control of data (no third-party cloud) . Maintenance involves keeping the server up to date and secure, and managing user accounts if needed. MindProbe offers a no-setup option (just create an account and upload your study to their server), but you’d be entrusting data to a public server (which may not be acceptable for patient data unless properly consented/encrypted). Using JATOS doesn’t replace the need to program the experiments in a front-end framework – rather, it’s an infrastructure choice. If our main challenge is offline mobile use, JATOS alone isn’t a full solution (since the device still needs to load the study). But if our challenge is data sovereignty and integration, JATOS is quite attractive. We could imagine a hybrid: package jsPsych or lab.js tasks in a Cordova app for offline use, and have that app sync with a JATOS server when online, then from JATOS export to REDCap. This is complex, but JATOS would serve as the intermediary data cache with a user-friendly admin interface.

In summary, JATOS/MindProbe shine in scenarios where you want an open-source, flexible server backend for studies, possibly in your own IT environment. It plays well with jsPsych and lab.js (lab.js can directly export JATOS studies , making the combo very smooth ). It’s less about improving the experiment experience and more about improving deployment and data management. If our issue with jsPsych is not wanting to run our own server or dealing with CORS, JATOS solves that by providing a ready-made server environment specifically for experiments. It doesn’t inherently solve UI or offline concerns on the client side, but it addresses integration and data control.

Open Lab – Platform for Hosting lab.js Experiments

Open Lab is another server-side solution, specifically designed to work with lab.js. It’s a web application (open-source) that allows researchers to upload lab.js experiment files and deploy studies without dealing with their own servers . Essentially, Open Lab is to lab.js what Pavlovia is to PsychoJS – a hosting and management platform, but with an emphasis on openness and ease of sharing.

  • Offline use: Open Lab itself is an online platform (hosted at open-lab.online ). It doesn’t provide an offline app; rather, it streamlines online deployment of lab.js studies. If offline capability is needed, one could still export the lab.js study and run it locally (bypassing Open Lab). In fact, lab.js + Open Lab gives flexibility: you can collect data online via Open Lab’s server, and for situations with no connectivity, you could run the same study offline on a laptop or device using the exported files. Later, data collected offline could be merged manually. However, Open Lab doesn’t offer a built-in sync for offline data collection. Its core purpose is making it easy for non-technical users to get a study running on the web securely. Therefore, in a mobile offline context, Open Lab by itself isn’t a full solution – it would require either connectivity or a plan to run without it.
  • Features and integration: Open Lab provides account management, participant management, and some degree of data security oversight for lab.js studies . It integrates with the Open Science Framework (OSF) and supports collaboration, versioning, and study sharing . For data, Open Lab will store the results from participants on its server, and researchers can download them. There’s no direct REDCap integration, but since the experiments are lab.js, one could embed them or send data out as discussed earlier. Open Lab’s advantage is that it reduces maintenance burden – you don’t need to maintain your own server (unlike JATOS) – and it’s free to use (it’s an academic project, not a paid service). In terms of UI, participants just see the lab.js experiment as usual, possibly preceded by an information page or login code prompt that Open Lab manages. It doesn’t alter the experiment UI.
  • Use case fit: Open Lab would be useful if we choose lab.js as our experiment framework and want an easy deployment for online portions of the study. For a trial scenario, one might use Open Lab for participants who do tasks at home online. If designing a mobile app for in-clinic use, though, Open Lab might not be involved (instead, we’d embed the lab.js tasks in the app). One interesting possibility is a hybrid approach: use lab.js for tasks, use Open Lab for general deployment and data collection in internet-connected settings, but also have an offline fallback via a local app. Because lab.js tasks are portable, we could do this without rewriting tasks. The question is whether maintaining two modes (app vs online) is worth it. If connectivity is generally available, Open Lab alone could suffice and provide a user-friendly experience (just send families a link or have them use a tablet with the site loaded). But if offline robustness is critical, Open Lab doesn’t eliminate the need for an app.
  • Maintenance: Since Open Lab is basically “lab.js in the cloud,” it inherits lab.js’s ease of use. Uploading a study is simpler than configuring JATOS or building a custom web app. It also handles user accounts and data encryption (important for clinical data). Being open-source, one could even host a private instance of Open Lab within a hospital’s network, ensuring data never leaves. That, however, is a deployment project on its own. Using the public Open Lab might or might not be acceptable for clinical trial data – we’d have to consider data agreements and whether the server meets required compliance. In any event, Open Lab reduces the technical maintenance of the web side; we’d focus just on creating the experiments in lab.js.

In summary, Open Lab is a convenient companion if lab.js is chosen. It doesn’t introduce new capabilities beyond what lab.js does, but it simplifies deployment and sharing. If our priority is minimizing IT overhead and we are comfortable with online data collection, Open Lab is attractive. But for strict offline needs, we would still need a local deployment method in addition.

Cordova/Capacitor – Wrapping Web Apps for Mobile

Using Apache Cordova or Ionic Capacitor (open-source frameworks for hybrid apps) is an approach rather than a content platform. The idea is to take our web-based experiment (whether built in jsPsych, lab.js, or PsychoJS) and package it as a native mobile application. This yields a standalone app that can run offline on Android/iOS and utilize native device features if needed. Many research teams have successfully used Cordova to deploy jsPsych experiments on tablets in the field, achieving robust offline functionality without reinventing the wheel.

  • Offline functionality: Wrapping a web experiment in Cordova/Capacitor gives full offline capability. All HTML, JS, and media assets are bundled with the app, so once installed on the device, no internet connection is required to run the study. Data can be stored locally (for example in the device’s file system or a SQLite database) and then synced to a server (like REDCap or JATOS) when a connection is available. Cordova apps can detect network status via events, enabling the app to queue data uploads for later. This approach essentially turns our experiment into a mobile app that children could open with one tap, with no URLs or browsers involved – a very user-friendly deployment. It’s worth noting that Cordova uses an embedded WebView under the hood, which on modern devices is equivalent to a Chrome or Safari browser engine. Thus, the timing and performance characteristics of our experiment remain similar, but we have more control (for example, we can fix the orientation, disable multi-touch, keep the screen awake, etc., through native plugins).
  • Timing and performance: Running in a Cordova WebView is roughly the same as running in the native browser, but sometimes with fewer distractions. We can configure the app to run fullscreen and perhaps at higher priority, so it’s less likely that background processes will interrupt. Still, the device hardware and OS ultimately determine timing variability. Cordova itself doesn’t add timing overhead (it’s just a container). Capacitor (the newer alternative by Ionic) similarly just hosts a WebView. One nuance: if our experiment relies on certain web APIs, we need to be sure the WebView supports them. Most do support WebGL, Web Audio, etc. For example, if PsychoJS is using WebGL via Pixi, the Cordova WebView will handle that as well as Chrome does. So we can expect that a jsPsych or lab.js experiment in a Cordova app has essentially identical precision to in-browser usage. The benefit is consistency – the environment is fixed (you can lock the app to use a specific WebView engine), whereas if you deploy via web you can’t control if someone uses an outdated browser.
  • UI customization: With a Cordova or Capacitor project, we have freedom to design additional native-like UI around the web content if desired. For instance, one could create a native menu or use an Ionic UI to navigate between multiple tasks or surveys within the app. Or we could keep it simple and just have the app launch straight into the experiment’s first screen. The main content is still HTML/JS, so any customization that was possible in the web frameworks remains possible. We might use this approach to add polish such as custom loading screens, an animated mascot for kids while tasks load, or native dialogs for certain prompts. Cordova supports plugins that can do things like text-to-speech, haptic feedback, or writing files – features that could enrich a child-friendly experience. These go beyond what pure web can do easily. That said, incorporating those requires additional development. If our goal is mostly to get offline capability, we could minimally wrap the existing experiment and not add new UI elements. Capacitor, in particular, makes it straightforward to invoke native functionality from JavaScript if needed, which could be useful (for example, to beep the device or access motion sensors for experimental purposes).
  • Integration with REDCap/API: In a Cordova app, we can leverage the device’s connectivity when available to call external APIs. This could be done in the same way as from a webpage (using fetch), but Cordova could also use native HTTP plugins if needed. One advantage is we can include stored credentials or tokens securely in the app (or obtain them securely) and ensure data is only sent when on Wi-Fi, etc. Essentially, the app could accumulate results for multiple sessions and then batch-upload to REDCap via its API. The logic for this needs to be coded, but it’s straightforward for a developer familiar with REST APIs. There are even JavaScript libraries for REDCap API (one listed on REDCap Tools is a lightweight JS client) that we could include. Additionally, because the app is under our control, we could implement encryption for data at rest and in transit to satisfy any security requirements. The integration effort is similar to doing it in a web page, but possibly more reliable since we can retry in case of failures and use background sync.
  • Learning curve and maintenance: Building a Cordova/Capacitor app means dealing with mobile app development processes: setting up development environments (Android Studio, Xcode for iOS), managing app signing, and eventually distributing the app (via app stores or enterprise deployment). This is a different skillset than web programming, but teams often manage it by having a developer familiar with the process or following community tutorials. Once the app is built, maintaining it involves updating it for OS updates and possibly submitting new versions to app stores. This can introduce some overhead, especially for iOS where app review is required. However, if the study is distributed privately (not through public app stores), maintenance is simpler (e.g., sideload or use mobile device management to install on trial devices). The hybrid app approach is quite powerful but does require more initial setup than a pure web solution. If changes to the experiment are needed, you’d have to rebuild the app and redeploy it, rather than just updating a web page. On the flip side, the app encapsulation avoids the issue of participants accidentally navigating away or dealing with browser UI.
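
The offline sync pattern described above – store results locally, then upload when connectivity returns – can be sketched as a small retry queue. This is an illustrative sketch, not Cordova API code: `ResultQueue` and its methods are hypothetical names, and `send` stands in for whatever function performs the real upload (e.g., a fetch to the REDCap API). In a Cordova app, `flush()` would typically be triggered by the `online` document event from the network-information plugin.

```javascript
// Minimal offline-first result queue (illustrative sketch).
// enqueue() is called whenever a task or survey finishes;
// flush() is called when connectivity returns. Anything whose
// upload fails stays queued for the next attempt.
class ResultQueue {
  constructor() {
    // In a real app this array would be persisted (file system or
    // SQLite) so queued results survive app restarts.
    this.pending = [];
  }

  enqueue(result) {
    this.pending.push(result);
  }

  // `send` performs the actual upload and should throw/reject on failure.
  // Returns the number of results still awaiting upload.
  async flush(send) {
    const stillPending = [];
    for (const result of this.pending) {
      try {
        await send(result);
      } catch (err) {
        stillPending.push(result); // keep for retry on the next flush
      }
    }
    this.pending = stillPending;
    return this.pending.length;
  }
}
```

In a Cordova shell, wiring this up would be roughly `document.addEventListener('online', () => queue.flush(uploadFn))`, with an additional flush attempt on app resume.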

In summary, Cordova/Capacitor wrappers are an excellent way to utilize our existing web-based experiment content in an offline, controlled mobile environment. They offer the best of both worlds: we keep using jsPsych or lab.js (no need to rewrite tasks in a new language) and simply package them for mobile distribution. The trade-off is additional development complexity and the need to maintain an app. This approach is justified if offline reliability and a polished participant experience are top priorities and if the team has or can acquire basic mobile build expertise. Given our context (clinical trial, children as users), providing an app that is tap-and-go could significantly improve compliance and ease-of-use, which may outweigh the costs of building it.

Native Frameworks (React Native, Flutter, etc.) – Rebuilding as a Native App

Another route is to build the PRO and cognitive testing app entirely with a native or cross-platform framework like React Native (JavaScript/JSX), Flutter (Dart), or native iOS/Android (Swift/Objective-C, Java/Kotlin). These frameworks are open-source and allow creation of highly customized mobile applications. Essentially, instead of using a browser environment, we’d be rewriting tasks using native UI components or a game engine. This approach can yield excellent performance and flexibility, but it represents a significant reinvestment of effort.

  • Offline and performance: A fully native app can be designed to run completely offline (just like the Cordova approach) and store data locally. In terms of timing, a well-written native app can achieve very high precision. For instance, a game engine could sync stimuli to the display’s refresh rate precisely, and native code can often poll input devices with minimal lag. However, modern devices and good web frameworks already get close to the hardware limits for timing, so the practical gain might be modest. If, for example, we needed to integrate with hardware (like a response button box via Bluetooth) or do intensive graphics (like AR/VR cognitive tests), then a native approach might be necessary. But for typical reaction time tasks (taps on the screen, showing images), a native implementation likely offers similar accuracy to web (on the order of a few milliseconds of jitter). The main performance advantage might be smoother graphics/animations and guaranteed frame rates on a variety of devices due to using lower-level APIs. We could also reduce overhead by not loading a full browser engine. Yet, frameworks like React Native still use a JS bridge and Flutter uses its rendering engine – they are not magically faster than optimized web JS, but they give more control over threading and optimization.
  • UI customization: This is where native frameworks shine. We can craft an interface exactly as we want, using the native widgets or custom drawing. For a child-friendly experience, a native app could have playful transitions, interactive feedback (vibration, sound), and more complex interactive elements (e.g., drag-and-drop games) that might be trickier in jsPsych. We could integrate multimedia or even mini-games between test trials to maintain engagement. React Native would allow us to write JavaScript but utilize native UI components; there are libraries for creating surveys or game-like experiences in RN. Flutter similarly can render beautiful custom UI and has a growing ecosystem of packages. The design would be limited only by our imagination and development time. Also, a native app could integrate both surveys and cognitive tests in one coherent app without feeling like “web pages” – everything can follow a consistent design language. For example, we could implement the MFIS questionnaire using native form controls that match the app’s theme, and implement cognitive tests using either a canvas (Flutter’s CustomPaint or RN’s Canvas) or even a small game engine (some have embedded Unity for tasks). This gives ultimate flexibility in tailoring to children (colors, fonts, avatars, etc.).
  • Data integration: As with Cordova, a fully native app can use REDCap’s API or any secure transmission to send data. You might use the platform’s HTTP libraries to post data. Additionally, a native app could potentially integrate REDCap more deeply – for instance, using a webview to show a REDCap survey or using a REDCap mobile SDK if such existed (REDCap actually has a mobile app, but it’s not really open for integration). In most cases, we’d treat our app as an independent data collector and just use the API. The difference is that we might be able to ensure data is stored securely on the device (encryption, etc.) and then automatically synced. We’d have to code the logic for data handling, but that’s manageable. If needed, a native app could also interface with device security features (like biometric unlock for accessing the app, if that were relevant to a study).
  • Learning curve and maintenance: Moving to native frameworks is the largest shift. It means essentially rewriting all our tasks in a new format. While React Native uses JavaScript, one would have to create components for each trial type or survey question, manage navigation, state management, etc. The wealth of jsPsych plugins or lab.js templates would no longer be directly usable – though one could take the logic and reimplement it. Maintenance of a native codebase might require mobile development expertise, especially for iOS quirks and Android fragmentation. Also, as with Cordova, distributing updates means rebuilding and deploying to devices or app stores. This approach likely has the steepest learning curve if the team is currently composed of web-minded developers or psychologists. It might only be justified if the web-based approach proves insufficient in some critical way (e.g., if web timing was too untrustworthy on certain devices, or if the UI demands were beyond what CSS/HTML can do, or if we needed integration with native sensors).
  • Hybridizing with jsPsych: One could consider a partial native approach: for example, use a WebView inside a native app for jsPsych tasks (which is basically Cordova), but also have some native screens. Or use React Native to write some parts of the app in RN components (like a dashboard, login, etc.) and for the cognitive test, either implement it natively or drop down to a WebView running jsPsych. In fact, React Native has a WebView component that could host a jsPsych experiment if needed. However, mixing approaches can add complexity. Another hybrid idea: if a particular cognitive test is very demanding timing-wise, one could write just that test in native code, while using web for other parts. For example, a continuous performance test that needs frame-perfect stimuli might be a small Unity or native module, whereas surveys remain web-based. This kind of hybridization is advanced but sometimes done in research apps.
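
The WebView hybridization described above hinges on a small message bridge: the jsPsych experiment inside the WebView posts its data as a string, and the native side validates it before storage. The snippet below sketches only the validation half; `parseTaskMessage` is a hypothetical helper, while `window.ReactNativeWebView.postMessage` and the `onMessage` prop are the actual react-native-webview bridge points.

```javascript
// Native-side validation for data posted from a jsPsych task hosted in a
// WebView (illustrative sketch). Inside the WebView, the experiment would
// end with something like:
//   on_finish: () => window.ReactNativeWebView.postMessage(jsPsych.data.get().json())
// and the WebView's onMessage handler would receive that string as
// event.nativeEvent.data, then call a helper like this one.
function parseTaskMessage(raw) {
  let trials;
  try {
    trials = JSON.parse(raw);
  } catch (err) {
    return { ok: false, error: 'payload was not valid JSON' };
  }
  if (!Array.isArray(trials) || trials.length === 0) {
    return { ok: false, error: 'expected a non-empty array of trial records' };
  }
  return { ok: true, trials };
}
```

Validating at the bridge keeps corrupt or truncated payloads out of local storage, which matters when results may sit on the device for days before syncing.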

Given our scenario, a fully native build is probably high cost unless we have strong justification. It offers maximal control and possibly the slickest user experience, but at the expense of losing all the existing jsPsych/js libraries we have. It might be warranted if, say, jsPsych or lab.js couldn’t deliver a particular feature (like precise audio-visual synchronization or complex interactive graphics for kids). But since those frameworks are quite capable for typical tasks, the native route seems like overkill for most PROs and cognitive tests. We should weigh this in the recommendations.

Comparison of Platforms and Frameworks

Below is a comparison summarizing how each option stacks up against the key criteria, with jsPsych as the baseline reference. (All options discussed are open-source; all support basic mobile use, so the comparison focuses on distinctions.)

jsPsych (baseline)
  • Offline functionality: Partial. Can run offline by opening files or bundling; needs a workaround for audio/video (safe mode). Packaging as a PWA or Cordova app enables full offline use.
  • Timing precision: High. Proven reaction-time precision under ~10 ms; uses the browser performance API. Slight constant lag (10–40 ms) versus lab software, but low variability. Adequate for most RT tasks.
  • UI customization & mobile UX: Flexible. Full control via HTML/CSS/JS – any layout, embedded media, etc. But there is no GUI, so styling must be coded; a mobile-friendly UI requires custom CSS (e.g., larger buttons for touch). Many plugins are available for standard tasks.
  • REDCap/API integration: Flexible. No built-in data store; the developer can send data via fetch/XHR to any endpoint (the REDCap API is feasible and used in practice). Requires custom scripting for secure transfer.
  • Learning curve & maintenance: Moderate. Requires JS coding. Well documented with a large community. Maintenance is straightforward, but updating multiple experiments can be labor-intensive. Existing code can be reused, and the large plugin library reduces the need to reinvent tasks.

lab.js
  • Offline functionality: Yes. Explicit support for offline use; studies can be exported as a static bundle for local use. No special limitations offline (aside from the same browser restrictions as jsPsych). Easy caching of stimuli.
  • Timing precision: High. Optimized for precise timing; lab validation shows frame accuracy in Chrome. Response-latency overestimation of ~1 frame (~16 ms), like most JS, with low noise. Essentially comparable to jsPsych; suitable for clinical timing needs.
  • UI customization & mobile UX: High. A visual builder and templates simplify design, and custom HTML/CSS can be injected for styling. Supports responsive layouts and touch; easily incorporates images, loops, and branching. Good for multi-page surveys and classic cognitive tasks. Child-friendly designs (colors, images) are achievable without coding.
  • REDCap/API integration: High. Built to send results to external services. Provides an export option for survey integration, posting data via postMessage or HTTP to a parent page or server. Can be embedded in REDCap (via an iframe + hidden-field technique) or call the API with moderate effort.
  • Learning curve & maintenance: Low/Moderate. A low-code option – easy for non-programmers via drag-and-drop, with extensive docs and Slack support. Maintenance is easy: experiments are shareable and version-controlled. However, migrating existing jsPsych code to lab.js would require rebuilding in the new system.

PsychoJS (PsychoPy online)
  • Offline functionality: Partial. Can run outside Pavlovia if files are hosted or packaged, but this is not a one-click feature. Typically runs online via Pavlovia (requires internet); for offline use, one must self-host or use an app wrapper.
  • Timing precision: Very high. Designed for neuro experiments: syncs stimuli to the screen refresh and uses WebGL rendering. In tests, PsychoJS (with certain browsers) achieves ~3–5 ms timing precision, matching or beating other web tools. Excellent for accurate stimulus timing.
  • UI customization & mobile UX: Moderate. With Builder, UI elements are somewhat limited (text, image stimuli, form dialogs) – good for controlled experiment screens, less so for fancy layouts. Can be made mobile-friendly (works on tablets/phones) but may require careful scaling of stimulus size and use of mouse/touch components. Custom JS/CSS can be added, but this is advanced. Not as straightforward for multi-question pages, though a form component exists (with some online issues).
  • REDCap/API integration: Flexible. Able to send data to custom REST endpoints (via JS code). Pavlovia’s standard route stores data on their server, but one can bypass it and POST to REDCap or another server. There is no native REDCap module, so a developer must implement the API calls; PsychoJS doesn’t hinder integration, it just doesn’t provide it out of the box.
  • Learning curve & maintenance: Moderate. Easy if already familiar with PsychoPy (just export to web). New users must learn PsychoPy Builder logic or code in PsychoJS (not widely documented for hand-coding). Pavlovia hosting requires an account and possibly costs; self-hosting PsychoJS means managing files/servers (or using JATOS). Maintenance can be complex if experiments rely on specific PsychoPy/PsychoJS versions. Porting jsPsych tasks to PsychoJS means rebuilding them in PsychoPy (significant effort).

JATOS (server platform)
  • Offline functionality: Limited (client side). Not a client framework – it’s a server. Supports an offline server (JATOS can run on a local machine without internet), but participants still need to load the study via a browser. True offline use would require a local server, or running the study in offline mode and uploading results later. There is no native mobile app, so offline use on the device is not inherent.
  • Timing precision: N/A (depends on task). JATOS doesn’t affect stimulus timing; it delivers studies made in jsPsych/lab.js/etc., which handle timing themselves. JATOS adds no extra delay in RT measurement. It can time overall experiment duration or schedule messages, but for in-trial timing it is neutral.
  • UI customization & mobile UX: N/A (client UI as per task). JATOS’s participant UI is basically the study content from your front end. It does provide a basic web interface for launching studies (e.g., an intro page or login code), which can be customized to some extent (logos, instructions), but the look and feel of tasks is determined by whatever framework is used (jsPsych, etc.).
  • REDCap/API integration: Moderate/High. JATOS excels at data management but doesn’t directly integrate with REDCap. You can download data from JATOS’s GUI or via its API and then import it into REDCap. Alternatively, the study could send data to REDCap during runtime, but usually you would use JATOS’s own result storage and do a batch export. So extra steps or scripting are needed for REDCap (e.g., a cron job to push JATOS data to the REDCap API). The benefit is that JATOS keeps data secure on your server (compliance-friendly).
  • Learning curve & maintenance: Moderate. Requires server setup and some IT know-how. Once running, adding studies (JATOS “.jzip” packages) is straightforward, especially with lab.js or jsPsych builder exports. Maintaining JATOS means managing server updates, user accounts, and data backups. It is a powerful solution but likely overkill if one only needs a few tasks – it shines when running many studies or coordinating multiple users. For a single-app clinical trial, it introduces an extra system to maintain. Using MindProbe (hosted JATOS) avoids server maintenance but entrusts data to an external server (check data agreements).

Open Lab (lab.js hosting)
  • Offline functionality: No (client side). Like JATOS, it is a hosting platform – users need internet to access the study on Open Lab’s site, and there is no offline mode for participants. (Researchers could self-host Open Lab’s code within a closed network, but that is a server project.) For purely offline usage, one would bypass Open Lab and run the lab.js study locally.
  • Timing precision: N/A (as above). Timing is governed by lab.js (or whichever library is used) – Open Lab just serves the experiment. It ensures the study loads quickly and securely online but doesn’t alter client-side performance.
  • UI customization & mobile UX: N/A (client UI). Participants see the lab.js experiment as designed. Open Lab manages the landing page, consent forms, and post-study confirmation if configured, but the task interface is unchanged. It adds convenience features (multi-language support, etc.), but visual customization lives mostly in the experiment design itself.
  • REDCap/API integration: Moderate. Open Lab integrates with lab.js and OSF, but not specifically with REDCap. Collected data can be downloaded as CSV/JSON from Open Lab’s interface. For REDCap integration, one could upload data manually or script a pipeline that fetches from Open Lab and sends to REDCap; there is no built-in bridge. Essentially, Open Lab centralizes experiment deployment, after which it is treated as another data source to import into REDCap.
  • Learning curve & maintenance: Low. Very easy for lab.js users – just upload and go, with no server to manage (if using the hosted service). Maintenance of studies on Open Lab is minimal, as it handles security updates and so on. However, relying on an external service means being subject to its uptime and data policies. If the institution requires data stored internally, deploying Open Lab in-house is non-trivial. Compared to JATOS, Open Lab is more turnkey but less flexible (it supports only lab.js experiments).

Cordova/Capacitor app
  • Offline functionality: Yes. Complete offline capability: all content is packaged in the app, and no internet is required to administer tasks. Data can be stored on the device and synced later (the app can detect a connection and upload). Ideal for poor-connectivity settings.
  • Timing precision: High. Runs in an optimized WebView; timing is similar to running in Chrome/Safari, with no network delays in stimulus delivery since assets are local. Can lock orientation and use full screen to reduce variability. Empirically, reaction-time precision remains in the few-millisecond range (the underlying jsPsych/lab.js still governs it).
  • UI customization & mobile UX: High. Leverages web-framework flexibility plus some native features. The app can enforce a consistent, simplified UI (no browser chrome), and native plugins can add touches such as vibration on response or text-to-speech encouragement. Multi-task navigation or a kid-friendly menu can be built with a JS framework (such as Ionic UI components). Essentially as customizable as a website, with added device integration if needed.
  • REDCap/API integration: High. The app can call the REDCap API directly using fetch or a native HTTP plugin, and can store an API token securely. Because the app knows when it is online, it can batch-send data in one go (e.g., an end-of-day sync), with no CORS issues to worry about. Implementation involves coding the data-submission logic, but tools and examples are available; testing and error handling are needed for reliability (e.g., queue data until a 200 OK arrives from the server).
  • Learning curve & maintenance: High. Initial setup requires mobile development knowledge (build tools, app signing), and each OS update may require an app update. Maintaining two platforms (Android/iOS) can be done from one codebase (Cordova apps are cross-platform), but both must be tested. Distribution to participants (especially on iOS) can be a hurdle if not going through the App Store (TestFlight or enterprise provisioning may be needed). On the development side, though, you mostly maintain your web code (jsPsych/lab.js), so the experiment logic doesn’t change – only the container does. If the web tasks are stable, the extra maintenance is mainly the app shell.

Native app (React Native / Flutter / etc.)
  • Offline functionality: Yes. Fully offline: all logic runs on the device, with data stored locally (e.g., in RN’s AsyncStorage or a local database) and sent to the server later when online. Nothing inherently requires internet except data syncing.
  • Timing precision: Very high. Potential for sub-millisecond timing if optimized, using high-resolution timers and direct rendering to GPU surfaces, though implementing this is complex. Realistically, for typical tasks a well-made native app will have very stable timing (easily under 5 ms of jitter), and input-event handling may be slightly more immediate than in a WebView. If extreme precision is needed (e.g., audio-visual sync within 5 ms), native offers better tools (e.g., audio callback APIs).
  • UI customization & mobile UX: Very high. Unlimited freedom: custom animations, device sensors (camera, accelerometer for novel cognitive tasks), and bespoke interfaces. React Native allows any UI component or library – one could build an engaging, game-like task environment. Flutter can similarly draw anything (including cute graphics and game elements at 60 fps). A native approach can be tailored to children (e.g., cartoon characters guiding them through tasks, interactive feedback) in ways limited only by development time.
  • REDCap/API integration: High. Direct integration via native networking libraries (one could even embed REDCap mobile-app functionality if such an SDK existed); typically, you would POST data to the REDCap API as with the other approaches. RN/Flutter can also use background services to sync data even when the app isn’t open (helpful for ensuring uploads). API tokens would be managed in app code or secure storage. Again a custom implementation, but completely doable, with perhaps more robust control (e.g., verifying server responses, custom error retries).
  • Learning curve & maintenance: Very high. Essentially developing a software product. Requires expertise in the chosen framework and possibly native modules (especially for RN, where precise timing or custom features may require dipping into native Android/iOS code). Debugging across device types, handling app crashes, and ensuring UX consistency is more involved than for the web. Any change in task logic requires an app update, and multiple OS versions may need support. The maintenance burden can be significant, akin to maintaining two codebases (though RN/Flutter unify much of it). Unless the team has strong mobile development resources, this route can slow iteration.

Notes on the comparison: All listed options are open-source (permissive licenses like MIT or Apache), suitable for academic and clinical use without licensing fees. Timing-precision ratings assume use of best practices (preloading stimuli, using appropriate APIs); in all cases, external timing validation is recommended for critical measures. Integration with REDCap assumes usage of REDCap’s API (which requires enabling the API and holding secure tokens) – none of the platforms natively “know” about REDCap, but all can work with it via HTTP requests. Learning curve considers the incremental effort to adopt and maintain each option for a team already familiar with web technologies.
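
Since every option ultimately talks to REDCap the same way – a form-encoded POST to the project’s `/api/` endpoint – the client-side payload can be sketched once and reused. This is a hedged illustration: the parameter names (`token`, `content`, `format`, `type`, `data`) follow REDCap’s record-import API, while `buildRedcapPayload`, the URL, and the field names are made up for the example.

```javascript
// Build the form body for a REDCap record import (illustrative sketch).
// `records` is an array of flat objects whose keys match REDCap field names.
function buildRedcapPayload(token, records) {
  const body = new URLSearchParams();
  body.set('token', token);      // project-specific API token – keep it secret
  body.set('content', 'record'); // record import endpoint
  body.set('format', 'json');    // records are JSON-encoded
  body.set('type', 'flat');      // one object per record
  body.set('data', JSON.stringify(records));
  return body;
}

// Hypothetical usage from any of the platforms above:
// fetch('https://redcap.example.org/api/', {
//   method: 'POST',
//   body: buildRedcapPayload(API_TOKEN, [{ record_id: '1', mfis_total: '42' }]),
// });
```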

Strengths, Trade-offs, and Recommendations

jsPsych (with potential Cordova) – Current Approach. Strengths: We already have investment and expertise here. jsPsych covers both cognitive tests and surveys (it has multiple survey plugins), and its community has produced many ready-made paradigms (which could save time). It is flexible enough to shape into a mobile UI and has adequate timing for clinical research. By itself, jsPsych doesn’t provide offline data sync, but combined with a wrapper like Cordova, it can meet offline requirements. The advantage of staying with jsPsych is minimal rewrite – we can incrementally improve what we have (e.g., adding a service worker or packaging an app) rather than starting over. Drawbacks: It lacks a GUI, so non-developers on the team might find it hard to tweak. Without a platform like JATOS or Pavlovia, we must handle a lot of “glue” (data upload, user management) ourselves. Also, styling the interface for children requires front-end development effort (HTML/CSS skills). If our jsPsych tasks are already working in the lab, a natural path is to invest in packaging and UI enhancements rather than abandoning the framework.

lab.js – User-Friendly and Compatible. Strengths: The visual builder lowers the barrier to creating and modifying tasks, which is valuable if clinical staff or non-programmers need to be involved in content creation. It has first-class support for offline and external-integration scenarios (with explicit documentation for both). lab.js also outputs standardized formats that work easily with JATOS and Open Lab, giving us deployment flexibility (online or offline). Timing performance is on par with jsPsych, and possibly more systematically validated across platforms. Another strength is maintainability: experiments built with lab.js can be versioned and shared, enabling transparency (helpful if our trial needs to share protocols or if multiple sites will run the tasks). Trade-offs: We would need to rebuild existing jsPsych tasks in lab.js – depending on complexity, this could be time-consuming (though templates could accelerate it). Also, while lab.js is powerful, very custom logic might occasionally require writing JS within it (which we can do, but then it’s not all GUI). The decision might come down to team skills: if we foresee a lot of iterative development and want to empower non-coders, lab.js is attractive; if our team is mostly developers, the differences between jsPsych and lab.js are smaller. lab.js shines especially if we use Open Lab or JATOS for deployment, as it integrates with those out of the box. In a hybrid scenario, one could even use jsPsych for some tasks and lab.js for others, but maintaining two frameworks isn’t ideal. Given lab.js’s capabilities and open-source nature, a switch could be justified if ease of use and built-in offline packaging are priorities.
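
The iframe + hidden-field embedding technique associated with lab.js works by having the embedded study post its results to the parent page, which stashes them in a hidden form field that the host survey (e.g., a REDCap instrument) saves on submission. The sketch below shows only the parent-page side; `serializeForHiddenField` and the element id are hypothetical, and the exact payload shape depends on the lab.js export used.

```javascript
// Parent-page half of the iframe + hidden-field technique (illustrative).
// Whatever the embedded study posts is normalized to a string suitable
// for a hidden <input> value.
function serializeForHiddenField(messageData) {
  if (typeof messageData === 'string') {
    return messageData; // already serialized (e.g., CSV or JSON text)
  }
  return JSON.stringify(messageData); // store structured results as JSON
}

// In the host page, this would be wired up roughly as:
// window.addEventListener('message', (event) => {
//   // Always verify event.origin before trusting the payload.
//   document.getElementById('labjs-results').value =
//     serializeForHiddenField(event.data);
// });
```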

PsychoJS/Pavlovia – Precision and the PsychoPy Ecosystem. Strengths: PsychoJS’s biggest selling point is the PsychoPy ecosystem behind it. If any of our cognitive tests are highly specialized (e.g., requiring precise frame-by-frame control or integration of hardware like eye trackers), PsychoPy/PsychoJS might offer support that jsPsych/lab.js would require custom coding for. The timing is top-notch (perhaps the best among web solutions for certain modalities), and the PsychoPy Builder interface is familiar to many experimental psychologists. The ability to design tasks graphically (like lab.js) but with more focus on low-level control is unique. Pavlovia hosting is convenient for online studies and is under active development with an eye toward scientific rigor. Trade-offs: For our use case, PsychoJS may be less convenient in terms of integration and UI. It is not inherently better at surveys or multi-device support – in fact, some survey features lag behind. If we were starting from scratch, needed guaranteed millisecond accuracy, and already knew PsychoPy, it would be a strong option. But with an existing jsPsych codebase, switching to PsychoJS means essentially rewriting tasks in PsychoPy (since there is no compatibility). We would also either need to pay for Pavlovia or host ourselves – neither is a showstopper, but something to consider. Moreover, customizing the interface for kids might ironically be harder in PsychoJS if we stay within the Builder’s paradigm (which is oriented toward controlled lab tasks). Recommendation on PsychoJS: Use it if we have specific tasks that demand its strengths (e.g., complex psychophysics, or leveraging existing PsychoPy paradigms from the literature). Otherwise, jsPsych or lab.js likely suffice and are easier to integrate with REDCap.

MindProbe/JATOS: Data Control and Flexibility. Strengths: Running our own JATOS server (or using MindProbe) could solve data management concerns elegantly. We’d have all data stored in real time on a server we control, with an interface to monitor sessions. This can be valuable in a trial – we could see which participants completed which tasks, send invites via JATOS, etc. JATOS’s compatibility with multiple experiment frameworks means we could even mix paradigms (some tasks in jsPsych, some in OpenSesame/OSWeb, for instance) under one roof. It also supports group experiments (less relevant for us) and complex study flows (like randomizing task order at the server level). Trade-offs: Using JATOS doesn’t inherently improve the participant experience; it adds complexity for developers/IT. Essentially, it’s best when you have many participants and need robust study management. In an internal clinical trial context, if we already have REDCap as the system of record, introducing JATOS might duplicate functionality (participant tracking, etc.). Another point: JATOS works best when participants access it via a web browser. If we go the mobile app route, we wouldn’t be using most of JATOS’s UI; we’d only be hitting its API endpoints. In that scenario, a simpler custom backend might suffice. However, if we want an open-source alternative to REDCap for the cognitive test data, JATOS is worth considering – it’s designed for experiment data and might handle high-frequency data or larger payloads better than REDCap. The recommendation would be to use JATOS if our team is comfortable running servers and if we prefer not to directly interface each client with REDCap. For offline-first usage, though, JATOS alone isn’t enough (we’d likely still need an app or local web instance).
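If tasks do run under JATOS, the glue code on the study side is small: jatos.js exposes `jatos.submitResultData()` for sending results to the server. A minimal sketch, assuming jatos.js is loaded by the study page; the field selection in `serializeResults` (keeping only `rt`, `response`, `task`) is our own illustrative choice, not something JATOS requires:

```javascript
// Reduce jsPsych-style trial objects to the fields the analysis needs,
// dropping internal bookkeeping, and serialize them for JATOS.
function serializeResults(trials) {
  return JSON.stringify(
    trials.map(({ rt, response, task }) => ({ rt, response, task }))
  );
}

function submitResults(trials) {
  const payload = serializeResults(trials);
  if (typeof jatos !== 'undefined') {
    // Running inside a JATOS study: send the data, then advance the component.
    jatos.submitResultData(payload, jatos.startNextComponent);
  } else {
    // Outside JATOS (e.g., local testing), just note that nothing was sent.
    console.warn('Not running under JATOS; results not submitted.');
  }
}
```

The guard on `typeof jatos` lets the same task file run in a plain browser during development without errors.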

Open Lab: Ease of Deployment for lab.js. Strengths: If we choose lab.js, Open Lab can remove a lot of friction in getting studies running on the web. In a scenario where connectivity is expected (e.g., participants at home with Wi-Fi), Open Lab would let us avoid infrastructure worries and quickly launch studies in a secure manner. It’s free and lets collaborators access data easily. Trade-offs: For clinical use, one must ensure any cloud platform meets privacy requirements. We’d have to verify where Open Lab’s servers are hosted, how data are encrypted, etc., or deploy our own instance. Compared to JATOS, Open Lab is more specialized (lab.js only) but more turnkey. It doesn’t add capabilities beyond what lab.js does, so its justification lies in convenience and possibly a nicer UX for launching studies. If offline use is a must, Open Lab has a limited role since it can’t operate without internet – except as a complement (online deployment for some participants, an offline app for others). Recommendation: Use Open Lab in conjunction with lab.js if a significant portion of data collection will be online. If all data collection is intended to happen via an app in clinic, Open Lab might not be needed; we could run lab.js directly in the app.

Cordova/Capacitor: Hybrid App Solution. Strengths: This approach directly addresses the offline requirement and leverages our existing web content. It allows a “have your cake and eat it too” scenario: we keep coding in jsPsych (or lab.js), which is relatively rapid and high-level, and we still deliver a native-like experience. It also gives us flexibility to incorporate multiple frameworks; for example, we could include some jsPsych tasks, and if one particular questionnaire is easier to do via a web form or another library, we could embed that too (since we control the app environment). Another advantage is distribution control – we can preload all media, which is good for ensuring stimulus delivery (no relying on the network to load an image mid-task). Trade-offs: The need to build and maintain an app is non-trivial. It introduces overhead in testing on multiple devices, dealing with app updates, etc. However, since frameworks like Capacitor are well-supported, many of these tasks (like building and updating) can be automated and are well documented. In a clinical trial, deploying an app might require coordinating with participants to install it, or provisioning tablets with it pre-installed. That logistical consideration is important – if participants are remote, getting them to install an app might be a barrier versus just visiting a link. If participants are on-site (e.g., in a clinic using provided devices), an app is fine. For kids, an app could be more engaging (with an icon they tap). Recommendation: The Cordova approach is justified if offline reliability is paramount or if we want to tightly control the testing environment on mobile devices. Given that children might use the app unsupervised, having an app that can store data until it successfully uploads can prevent data loss (e.g., if a family has spotty internet, the child can still do the tasks and the results sync later without fuss). If we have the resources to manage an app, this is a strong solution.
It might be especially good as a hybrid strategy: use the app in clinic or for those without good internet, and perhaps allow a web option for those who prefer not to install an app (the content is the same).
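The store-until-upload behavior described above can be kept independent of any particular storage plugin. Below is a minimal sketch with injected `storage` and `upload` functions; the names `queueResult` and `flushQueue` are ours, not from any library, and in the app `storage` would wrap something like Capacitor’s Preferences plugin while `upload` would call the sync backend:

```javascript
// Offline-first result queue (sketch). Storage and uploader are injected so
// the same logic works with a device storage plugin in the app or with
// in-memory mocks during testing.
function makeResultQueue(storage, upload) {
  const KEY = 'pendingResults';

  async function load() {
    return JSON.parse((await storage.get(KEY)) || '[]');
  }

  return {
    // Save a finished task's data locally, whether or not we are online.
    async queueResult(result) {
      const pending = await load();
      pending.push(result);
      await storage.set(KEY, JSON.stringify(pending));
    },
    // Try to upload everything queued; keep whatever fails for the next attempt.
    async flushQueue() {
      const pending = await load();
      const failed = [];
      for (const result of pending) {
        try {
          await upload(result);
        } catch (err) {
          failed.push(result); // network error: retry later
        }
      }
      await storage.set(KEY, JSON.stringify(failed));
      return pending.length - failed.length; // number successfully uploaded
    },
  };
}
```

The app would call `flushQueue()` on launch and whenever connectivity returns; because failed uploads stay queued, a child can complete tasks with no internet and lose nothing.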

Native Frameworks: Ground-Up Custom App. Strengths: Absolute freedom to design the experience, potentially leading to better engagement from child participants. We could optimize everything for the device – for example, ensure minimal latency by using native touch handling, or create an immersive game-like environment that seamlessly incorporates cognitive assessments (making it feel like a fun activity rather than a test). If the trial required integration with other mobile features (GPS, pedometer, etc.), a native app would handle that gracefully. Trade-offs: This approach essentially abandons the existing investment in web frameworks. Everything would need to be built anew, tested, and validated. It is the highest-cost path in development terms. Maintainability is a concern – any change requires app developer time, whereas in something like lab.js a researcher could tweak text or timing easily in the builder. If the timeline for the trial is short, developing a full native app could introduce delays. Moreover, we’d have to ensure the same level of scientific rigor (timing verification, etc.) in a custom app, which would lack the benefit of the broad community testing that jsPsych/PsychoJS/lab.js have undergone. Recommendation: Only pursue a fully native rebuild if the web-based approach proves inadequate and if you have access to skilled mobile developers and a clear rationale. For example, if during piloting we find that kids are not engaging with the web-style tasks, or if devices show inconsistent performance, one might consider a native game-based approach. Otherwise, leveraging existing platforms is more efficient and less risky.

When to Switch or Hybridize:

  • Stay with jsPsych (add Cordova) if our current setup mostly works and we just need offline support and some UI polish. This path minimizes disruption: we can wrap the jsPsych tasks in a mobile shell and apply child-friendly CSS and maybe some simple graphics. It addresses the core needs (offline, mobile) without a full rebuild. We’d invest effort in the app packaging and integration scripting (for REDCap), but not in rewriting every task. This is recommended when internal resources are limited and the timeline is tight, as it delivers a solution relatively quickly by building on what we have.
  • Adopt lab.js (with or without Open Lab) if we foresee ongoing creation of new tasks or adjustments and want to empower team members who aren’t programmers. The switch to lab.js could pay off in easier maintenance (drag-and-drop editing) and possibly fewer errors, thanks to its structured approach. Also, lab.js would integrate well if we later bring in JATOS or other services. We might also choose lab.js if our jsPsych codebase is not too large (making a rebuild feasible) and if we value the formal timing validation that lab.js emphasizes. If we do this, using Open Lab for any web-based deployment would smooth that process. In a hybrid scenario, we could use lab.js for the cognitive tests and keep REDCap for PRO questionnaires (embedding lab.js tasks in REDCap via an iframe as they suggest, though that requires some custom setup). However, lab.js itself can handle questionnaires like the MFIS, so it might be cleaner to do everything within lab.js for consistency in look-and-feel.
  • Hybrid jsPsych + JATOS might be justified if we want to stick to jsPsych but need a robust backend and possibly an online option. For instance, if the trial will have some sites using tablets offline and other sites where participants do it from home online, we could use JATOS: Tablets could run jsPsych in a browser pointed to a local JATOS (or an app that syncs to JATOS), while remote users access the same jsPsych study via the JATOS web link. JATOS would unify the data collection. This hybrid gives us offline and online under one umbrella, but it’s complex to set up initially.
  • Switch to PsychoJS if there’s a clear PsychoPy feature we need (e.g., precise audio timing, complex stimulus generation, or simply the team’s familiarity with PsychoPy). Also, if any collaborators have a library of PsychoPy tasks, using PsychoJS could leverage those. A partial switch could be: use PsychoJS for a subset of tasks that require it, and jsPsych for others – but mixing frameworks within one project is generally not recommended unless absolutely needed, due to increased maintenance.
  • Full native app if after trying web-based solutions we find they cannot meet our needs in user experience or reliability. This would be a last resort given the resource implications. Alternatively, if funding and timeline allow, one could develop a native app in parallel while rolling out a web-based solution as a stopgap, then compare performance. But that might be beyond scope.
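For the “Stay with jsPsych (add Cordova)” option above, the seam between a task and the app shell is a single callback. A hedged sketch: `makeFinishHandler` is a hypothetical helper of ours, which in the app would be wired to jsPsych 7 roughly as `initJsPsych({ on_finish: makeFinishHandler(() => jsPsych.data.get().json(), shellSave) })`, where `shellSave` is whatever save/sync function the Cordova/Capacitor shell exposes:

```javascript
// Factory that bridges a task's end-of-session event to the app shell.
// getResultsJson: returns the serialized trial data (e.g., from jsPsych).
// deliver: hands the data to the shell's local-storage/sync layer.
function makeFinishHandler(getResultsJson, deliver) {
  return () => {
    // Called once when the task finishes; the shell decides whether to
    // upload immediately or queue for later.
    deliver(getResultsJson());
  };
}
```

Keeping this seam as plain JavaScript means each task stays framework-only code, and the shell owns everything device-specific.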

Recommendation: Based on the analysis, a pragmatic recommendation is to extend our existing jsPsych setup with offline capabilities via Cordova/Capacitor, and optionally integrate a platform like JATOS or the REDCap API directly for data sync. This approach gives us a quick win: children get a mobile app that works without internet, while we reuse our validated jsPsych tasks (ensuring consistency with any prior data). We can enhance the UI within jsPsych by adding custom CSS and perhaps using images and audio to make it more engaging for kids. Timing will remain scientifically sound (backed by jsPsych’s proven record), and we won’t introduce new unknowns. For data, we can implement a straightforward module in the app that pushes results to REDCap whenever a connection is present – this keeps REDCap as the central repository, which is likely desirable for a clinical trial.
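That REDCap sync module can be small. REDCap’s REST API takes a form-encoded POST containing the project token and a JSON array of flat records; in the sketch below the endpoint URL, token handling, and record field names are placeholders for illustration:

```javascript
// Build the form-encoded body for a REDCap record import.
// 'content=record', 'format=json', and 'type=flat' are standard REDCap
// API parameters; records is an array of flat objects keyed by field name.
function buildRedcapPayload(token, records) {
  const params = new URLSearchParams();
  params.set('token', token);
  params.set('content', 'record');
  params.set('format', 'json');
  params.set('type', 'flat');
  params.set('data', JSON.stringify(records));
  return params;
}

// POST the records to the project's API endpoint (e.g. https://<host>/api/).
async function pushToRedcap(apiUrl, token, records) {
  const res = await fetch(apiUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: buildRedcapPayload(token, records).toString(),
  });
  if (!res.ok) throw new Error(`REDCap import failed: HTTP ${res.status}`);
  return res.json(); // on success REDCap reports how many records were saved
}
```

The app would call `pushToRedcap` from its sync routine whenever connectivity is detected, retrying queued results on failure; the API token should live in app configuration, never hard-coded in task files.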

However, we should remain open to lab.js as an alternative if down the line we need a more user-friendly authoring environment or if we encounter limitations in jsPsych’s survey elements. We could even pilot one or two tasks in lab.js to see how it compares (since lab.js is compatible with our plan of using a Cordova app or JATOS server anyway). If those pilots demonstrate significant advantages (e.g., much faster development of new instruments or better maintainability), then a gradual switch could be justified.

In conclusion, switching frameworks is only justified if it substantially reduces burden or risks. jsPsych vs lab.js vs PsychoJS each ultimately accomplish similar ends – high-quality web experiments. Given our specific needs (mobile-first, offline, child-friendly, REDCap integration), these can all be met within the web paradigm we know, with clever deployment strategies. A full switch to a new framework like lab.js may yield improvements in ease-of-use but at the cost of reimplementation. A hybrid approach (jsPsych + an app wrapper + custom integration code) is an efficient path to fulfill requirements now, while keeping the option to incorporate other frameworks if and when needed.

Ultimately, if our goal is to ensure maximum reliability and user engagement, an approach of “jsPsych inside a Cordova app” (or lab.js inside an app) appears to check all the boxes: offline operation, fine-grained timing control, customizable UI, secure data transfer, and low licensing friction – leveraging open-source at every layer. We should proceed with that as our primary solution, and monitor during pilot testing whether any shortcomings arise that might prompt introducing another framework or tool from the ones discussed. This way, we maintain continuity with what works and only change what we need to, ensuring a successful deployment in the clinical trial setting.