
TMB test set in jsPsych

In principle, yes: most of what TestMyBrain (TMB) calls “cognitive tests” are standard experimental paradigms (e.g., Stroop, Flanker, n-back, task switching, go/no-go, simple/choice RT, digit span variants, mental rotation, matrix reasoning–style items) and those can be implemented in jsPsych because jsPsych is essentially a framework for timed stimulus presentation, response capture, branching logic, and data logging.

That said, there are three important distinctions:

1) “Replicate the task” vs “replicate the test”

You can usually replicate the task structure in jsPsych (stimuli → timing → response → scoring).

What is much harder is replicating the test as a product, meaning:

  • Exact stimuli sets (items, images, word lists, difficulty calibration)
  • Adaptive logic (staircases / CAT-style adaptation)
  • Norm-referenced scoring (percentiles, age/education adjustments)
  • Quality controls (attention checks, device/latency controls, retest handling)
  • Psychometric equivalence (reliability/validity comparable to TMB’s published instruments)

So: you can rebuild “a Stroop-like test” easily; rebuilding “TMB’s Stroop with their stimuli, norms, and scoring” is not the same thing.
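To make the distinction concrete, here is a minimal sketch of how easily the *task structure* of a Stroop block can be generated. The word/color sets, condition labels, and response mapping are illustrative assumptions, not TMB's actual materials:

```javascript
// Sketch: generate a Stroop-like trial list (illustrative stimuli, not TMB's item set).
const WORDS = ['RED', 'GREEN', 'BLUE'];
const COLORS = ['red', 'green', 'blue'];

function makeStroopTrials(nPerCondition) {
  const trials = [];
  for (let i = 0; i < WORDS.length; i++) {
    for (let j = 0; j < COLORS.length; j++) {
      const condition = i === j ? 'congruent' : 'incongruent';
      for (let k = 0; k < nPerCondition; k++) {
        trials.push({
          word: WORDS[i],
          inkColor: COLORS[j],
          condition,
          correctResponse: COLORS[j][0], // respond to ink color, e.g. 'r'
        });
      }
    }
  }
  // Fisher-Yates shuffle so conditions are interleaved.
  for (let i = trials.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [trials[i], trials[j]] = [trials[j], trials[i]];
  }
  return trials;
}

const trials = makeStroopTrials(2);
console.log(trials.length); // 3 words × 3 colors × 2 = 18 trials
```

In jsPsych, each of these objects would typically become a `timeline_variables` entry feeding a keyboard-response trial. The hard part is everything this sketch does *not* contain: calibrated item difficulty, norms, and scoring rules.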

2) Technical feasibility in jsPsych

jsPsych can support nearly all cognitive task primitives:

  • RT tasks: keyboard/mouse/touch responses with millisecond timestamps
  • Randomization / counterbalancing: trial shuffles, block structures, Latin-square logic
  • Adaptive procedures: via custom code (e.g., 2-up/1-down, Bayesian staircases) and dynamic timelines
  • Complex stimuli: images, audio, video, canvas drawing (for custom rendering)
  • Branching: conditional timelines, loop nodes, performance-based progression
  • Scoring: compute accuracy/RT metrics per trial/block and write summaries
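The adaptive-procedure bullet is the one most often assumed to need special tooling, but it is just state plus an update rule. A minimal sketch of a two-correct-step-down staircase (often written 2-down/1-up, converging near ~70.7% accuracy); the starting level, step size, and bounds here are illustrative assumptions:

```javascript
// Sketch: 2-down/1-up staircase. Two correct responses in a row make the
// task harder (level decreases); any error makes it easier (level increases).
function makeStaircase({ start, step, min, max }) {
  let level = start;
  let consecutiveCorrect = 0;
  return {
    currentLevel: () => level,
    update(correct) {
      if (correct) {
        consecutiveCorrect += 1;
        if (consecutiveCorrect === 2) {   // two in a row -> step down (harder)
          level = Math.max(min, level - step);
          consecutiveCorrect = 0;
        }
      } else {                            // any error -> step up (easier)
        level = Math.min(max, level + step);
        consecutiveCorrect = 0;
      }
      return level;
    },
  };
}

const sc = makeStaircase({ start: 100, step: 10, min: 10, max: 200 });
sc.update(true);                 // 1 correct: no change yet
sc.update(true);                 // 2 in a row: harder
sc.update(false);                // error: easier again
console.log(sc.currentLevel()); // 100
```

In jsPsych this state would typically live outside the timeline, with each trial reading `currentLevel()` to set its difficulty and a loop node (or `on_finish` callback) calling `update()` with the trial's accuracy.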

Where it gets tricky (but still solvable):

  • Precise timing in uncontrolled browsers/devices (variable refresh rates, background tabs, Bluetooth keyboards, mobile throttling)
  • Audio latency (for auditory tasks)
  • Touchscreen RT variability
  • Participants’ hardware heterogeneity (keyboards, browsers, OS power settings)

For many research-grade web studies, this is acceptable if you implement best practices (full-screen, preloading, tab-focus checks, device logging, minimum FPS checks, excluding outliers, etc.).
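One of those best practices, RT-based exclusion, is easy to sketch. The cutoffs below (200 ms anticipation floor, 2500 ms lapse ceiling, 2.5 SD trim) are conventional assumptions from the web-experiment literature, not TMB's actual rules:

```javascript
// Sketch: simple RT cleaning — drop anticipations/lapses with hard bounds,
// then trim remaining outliers beyond an SD cutoff. All thresholds are assumptions.
function cleanRTs(rts, { minMs = 200, maxMs = 2500, sdCutoff = 2.5 } = {}) {
  const plausible = rts.filter(rt => rt >= minMs && rt <= maxMs);
  if (plausible.length < 2) return plausible;
  const mean = plausible.reduce((a, b) => a + b, 0) / plausible.length;
  const sd = Math.sqrt(
    plausible.reduce((a, rt) => a + (rt - mean) ** 2, 0) / (plausible.length - 1)
  );
  return plausible.filter(rt => Math.abs(rt - mean) <= sdCutoff * sd);
}

const raw = [120, 450, 480, 510, 470, 3000, 460];
console.log(cleanRTs(raw)); // 120 (anticipation) and 3000 (lapse) removed
```

In practice you would apply this per participant per condition, and log how many trials were excluded so the exclusion rate itself can serve as a data-quality flag.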

3) Intellectual property and licensing

Even if a paradigm is standard, TMB’s specific implementations may be protected (stimuli, item banks, branding, and potentially proprietary scoring/norms). If your goal is functionally similar cognitive measurement, you are fine. If your goal is a near-clone, you should treat that as a licensing question.

Practical path forward

If you tell me which TMB tests you care about (names or links), I can:

  1. Map each test to its underlying paradigm (e.g., Flanker, n-back, etc.)
  2. Identify the jsPsych plugins you’d use (or where you’d write a small custom plugin)
  3. Outline the timing/scoring logic and any adaptive components
  4. Flag the high-risk areas (timing sensitivity, proprietary stimuli/norms)

If you want a default “yes/no” without the list: expect that ~80–90% of typical TMB-style tasks are implementable in jsPsych without major constraints; the rest are implementable too, but require more custom rendering/adaptation and careful handling of timing and psychometrics.