
Understanding BrainGap

Learn the concepts behind adaptive assessment and how to integrate BrainGap into your application.

Core Concepts

Understand theta, psychometrics, and the adaptive loop that powers precision assessment.

Integration Guides

Step-by-step guides for common use cases like certification prep and technical hiring.

Looking for endpoint docs?
See the API Reference for request/response schemas, authentication, and code examples.

How BrainGap Works

BrainGap uses adaptive testing to measure ability precisely with fewer questions. Instead of giving everyone the same 50-question test, it selects the optimal question for each person based on their current estimated ability.

The Adaptive Loop

  1. Call /assess/next. BrainGap picks the question that maximizes information gain.
  2. User answers. Submit their response to /assess/respond.
  3. Ability updates. BrainGap recalculates the user's ability estimate (theta).
  4. Repeat until precision is high enough (typically 15-20 questions).

Why adaptive?
Questions that are too easy or too hard tell you almost nothing. BrainGap always asks questions at the edge of what the user knows, where each answer provides maximum signal.

The Item Lifecycle

Blueprint → AI Generation → Item Bank → /assess/next → User Response → Calibration

Every response improves the system. Item difficulty and discrimination parameters are recalibrated, and items that consistently perform poorly are automatically flagged for review.

Blueprints, Topics & Concepts

Before you can assess users, you define what you're testing. BrainGap organizes knowledge hierarchically:

Blueprint
The top-level knowledge domain you're assessing. Examples: "AWS Solutions Architect", "Python Fundamentals", "Company Onboarding".
Topic
A logical grouping within a blueprint. Example: "Networking" within AWS SAA.
Concept
The atomic unit of knowledge. Each concept gets its own ability score. Example: "VPC Subnet Design".

When you assess a user against a blueprint, BrainGap measures their ability on each underlying concept. The /gaps endpoint shows which concepts need work.
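The hierarchy above can be sketched as a plain data structure. The field names here are illustrative only, not the actual Blueprints API schema:

```javascript
// Illustrative shape only -- see the Blueprints API for the real schema.
const blueprint = {
  slug: "aws-solutions-architect",
  name: "AWS Solutions Architect",
  topics: [
    {
      name: "Networking",
      concepts: [
        { name: "VPC Subnet Design" }, // atomic unit: gets its own theta
        { name: "VPC Peering" },
      ],
    },
  ],
};

// Each concept is assessed independently; /gaps ranks them by need.
const conceptCount = blueprint.topics.flatMap((t) => t.concepts).length; // 2
```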

Creating blueprints
Use AI decomposition to auto-generate topics and concepts from a description, or define them manually for precise control. See the Blueprints API.

Measuring Ability (Theta)

Every assessment produces a theta (θ): a standardized ability score centered at 0, with most people falling between -2 and +2.

  • θ = 0: average ability (50th percentile)
  • θ = +1: one standard deviation above average (~84th percentile)
  • θ = -1: one standard deviation below average (~16th percentile)
  • θ = +2: high ability (~98th percentile)
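Since theta is on a standard-normal scale, the percentiles above follow directly from the normal CDF. A self-contained sketch (the erf approximation is Abramowitz & Stegun 7.1.26; JavaScript has no built-in erf):

```javascript
// Convert a theta score to an approximate percentile, assuming theta is
// standard-normally distributed (mean 0, SD 1), as the table above implies.
function erf(x) {
  // Abramowitz & Stegun 7.1.26 polynomial approximation (|error| < 1.5e-7)
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const y = 1 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
  return sign * y;
}

function thetaToPercentile(theta) {
  const cdf = 0.5 * (1 + erf(theta / Math.SQRT2)); // standard normal CDF
  return Math.round(cdf * 100);
}

// thetaToPercentile(0) → 50, thetaToPercentile(1) → 84,
// thetaToPercentile(-1) → 16, thetaToPercentile(2) → 98
```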

Standard Error (SE)

Every theta comes with a standard error indicating precision. Lower SE = more confidence in the estimate. BrainGap tells you when to stop testing based on SE.

Example Response
{
  "theta": 0.85,         // Above average ability
  "standardError": 0.32, // 95% CI ≈ θ ± 1.96×SE
  "proficiencyScore": 0.80 // P(correct) on avg item
}
  • SE < 0.30: high precision. Good for certification decisions.
  • SE < 0.40: moderate precision. Fine for screening or placement.
  • SE < 0.50: low precision. Only use for formative feedback.
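The confidence interval in the example response and the precision tiers above can be computed directly from a theta/SE pair. A minimal sketch (the tier names are our own labels, not API values):

```javascript
// 95% confidence interval and precision tier from a theta/SE pair.
// Thresholds mirror the table above; tier names are illustrative.
function interpret(theta, standardError) {
  const margin = 1.96 * standardError; // 95% CI half-width
  let tier;
  if (standardError < 0.30) tier = "high";          // certification decisions
  else if (standardError < 0.40) tier = "moderate"; // screening or placement
  else tier = "low";                                // formative feedback only
  return { ci95: [theta - margin, theta + margin], tier };
}

const r = interpret(0.85, 0.32); // the example response above
// r.tier === "moderate"; r.ci95 ≈ [0.22, 1.48]
```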
Going Deeper

Optional: understand the psychometric foundations and compliance details.

The Science: IRT & Item Selection

TL;DR: BrainGap uses the same psychometric model as the GRE and GMAT. It picks questions that maximize information at the current ability estimate. You don't need to understand the math to use the API.

BrainGap uses the Two-Parameter Logistic (2PL) IRT model:

P(correct) = 1 / (1 + e^(-a(θ - b)))

  • θ: examinee ability (what we're measuring)
  • b: item difficulty (how hard the question is)
  • a: item discrimination (how well it separates high/low ability)

Maximum Fisher Information (MFI)

When we say BrainGap selects the "optimal" question, we mean the one with maximum Fisher Information at the current θ estimate. Information is maximized when item difficulty matches examinee ability (b ≈ θ).

Why this matters: A hard question (b = 2) tells you nothing about a low-ability test-taker (θ = -1). They'll get it wrong anyway. MFI picks questions that actually discriminate.
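The 2PL model and MFI selection above can be sketched in a few lines. This is a simplified illustration of the technique, not BrainGap's internal implementation:

```javascript
// 2PL model: probability of a correct response given ability theta,
// item difficulty b, and discrimination a.
function pCorrect(theta, a, b) {
  return 1 / (1 + Math.exp(-a * (theta - b)));
}

// Fisher information of a 2PL item at theta: I(θ) = a² · P · (1 - P).
// Maximized when P = 0.5, i.e. when b ≈ θ.
function fisherInfo(theta, a, b) {
  const p = pCorrect(theta, a, b);
  return a * a * p * (1 - p);
}

// MFI selection: pick the item with maximum information at current theta.
function selectItem(theta, items) {
  return items.reduce((best, item) =>
    fisherInfo(theta, item.a, item.b) > fisherInfo(theta, best.a, best.b)
      ? item
      : best);
}

// A hard item (b = 2) carries almost no information at theta = -1:
const items = [{ id: "easy", a: 1, b: -1 }, { id: "hard", a: 1, b: 2 }];
// selectItem(-1, items).id === "easy"
```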

Privacy & Compliance

BrainGap is designed for anonymous-first assessment. User IDs are opaque strings you control, and we never require PII.

HIPAA Ready

No protected health information stored. Assessment data contains only opaque IDs and scores. Suitable for healthcare training and credentialing.

GDPR Compliant

Full data deletion via DELETE /users/{id}. Data portability supported. No tracking cookies or fingerprinting.

FERPA Compatible

Educational records stay in your system. BrainGap only sees anonymous assessment interactions. You control the ID mapping.

SOC 2 Type II

Enterprise-grade security controls. Encrypted at rest and in transit. Audit logs available for all API access.

Anonymous by design

BrainGap never needs to know who your users are. The userId parameter is any string you choose:

Examples
// Use your internal IDs
"user_abc123"

// Use hashed emails
"sha256:a1b2c3..."

// Use session tokens for fully anonymous use
"session_xyz789"

// Use UUIDs
"550e8400-e29b-41d4-a716-446655440000"
Data residency
All data stored in US-East by default. EU data residency available on Enterprise plans. Contact us for details.

Building a Certification Prep App

Here's how to integrate BrainGap for AWS, Azure, or any certification prep.

1. User starts studying

When a user selects a certification, store their user ID and the blueprint slug.

Your backend
// User picks AWS SAA
const userId = "user_abc123";  // Your user's ID
const blueprint = "aws-solutions-architect";

2. Run assessment sessions

Loop: get question → show to user → submit answer. Stop when precision is sufficient or user quits.

Assessment loop
while (session.isActive) {
  // Get optimal next question
  const { interaction, meta } = await braingap.assessNext(userId, blueprint);

  // Check if we should stop
  if (meta.suggestedStopReason) break;

  // Show question to user, get their answer
  const answer = await showQuestionUI(interaction);

  // Submit and get score
  const result = await braingap.respond(interaction.id, answer);
  showFeedback(result.score, result.explanation);
}

3. Show progress dashboard

Use /gaps to show what to study and /mastery for detailed scores.

Progress dashboard
// Get prioritized study list
const gaps = await braingap.getGaps(userId, blueprint);

// Show: "72% ready for exam"
// Show: "Focus on: VPC Peering, IAM Policies, Lambda Concurrency"
displayReadiness(gaps.summary.overallReadiness);
displayTopGaps(gaps.gaps.slice(0, 5));
Tip: Use suggestedStopReason
BrainGap tells you when to stop: "precision_achieved" (enough data), "pool_exhausted" (no more questions), or "time_limit" (if using attempts).
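A small dispatcher for those stop reasons might look like this (the reason strings come from the tip above; the messages are illustrative):

```javascript
// Map BrainGap's suggestedStopReason to a user-facing message.
function handleStop(reason) {
  switch (reason) {
    case "precision_achieved":
      return "We have a confident estimate. See your results!";
    case "pool_exhausted":
      return "You've seen every available question for this blueprint.";
    case "time_limit":
      return "Time's up. Your attempt has been scored.";
    default:
      return null; // no stop suggested; keep the loop going
  }
}
```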

Technical Hiring Assessments

Use BrainGap to screen candidates efficiently before interviews.

Key differences from learning apps

  • Use Attempts to enforce time limits and item counts
  • Don't show correct answers, only the score
  • Use cohort endpoints to compare candidates
  • Theta gives you a standardized score across all candidates
Create timed assessment
// Candidate starts assessment
const attempt = await braingap.createAttempt({
  blueprint: "senior-backend-engineer",
  maxItems: 25,
  timeLimitMinutes: 45
});

// Run assessment...

// Get final results
const results = await braingap.getAttempt(attempt.attemptId);

// results.theta = 1.2 means "well above average"
// results.standardError = 0.28 means "high confidence"
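One common way to combine theta and SE in a screening decision is a conservative cutoff: require the lower bound of the 95% confidence interval to clear the bar. This is standard psychometric practice, not a BrainGap endpoint:

```javascript
// Pass a candidate only if we're ~95% confident their true ability
// exceeds the cutoff: the lower CI bound must clear the bar.
function passesScreen(theta, standardError, cutoff) {
  const lowerBound = theta - 1.96 * standardError;
  return lowerBound > cutoff;
}

// For the result above (theta 1.2, SE 0.28) against a cutoff of 0.5:
// lower bound = 1.2 - 1.96 × 0.28 ≈ 0.65 → passes
```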

Compare candidates

Cohort comparison
const comparison = await braingap.compareCohorts({
  cohortA: ["candidate_1", "candidate_2"],  // Interviewed
  cohortB: ["candidate_3", "candidate_4"],  // New applicants
  blueprint: "senior-backend-engineer"
});

// See where cohortB excels or struggles vs cohortA

Questions? hello@braingap.io