Understanding BrainGap
Learn the concepts that power precision assessment (theta, psychometrics, and the adaptive loop), how to integrate BrainGap into your application, and step-by-step guides for common use cases like certification prep and technical hiring.
How BrainGap Works
BrainGap uses adaptive testing to measure ability precisely with fewer questions. Instead of giving everyone the same 50-question test, it selects the optimal question for each person based on their current estimated ability.
The Adaptive Loop
1. Call /assess/next. BrainGap picks the question that maximizes information gain.
2. User answers. Submit their response to /assess/respond.
3. Ability updates. BrainGap recalculates the user's ability estimate (theta).
4. Repeat until precision is high enough (typically 15-20 questions).
The Item Lifecycle
Every response improves the system. Item difficulty and discrimination parameters are recalibrated, and items that consistently perform poorly are automatically flagged for review.
Blueprints, Topics & Concepts
Before you can assess users, you define what you're testing. BrainGap organizes knowledge hierarchically: a blueprint (for example, a certification or a job role) contains topics, and each topic contains the individual concepts being tested.
When you assess a user against a blueprint, BrainGap measures their ability on each underlying concept. The /gaps endpoint shows which concepts need work.
Measuring Ability (Theta)
Every assessment produces a theta (θ): a standardized ability score centered at 0, with most people falling between -2 and +2.
| Theta | Interpretation |
| --- | --- |
| θ = 0 | Average ability (50th percentile) |
| θ = +1 | One standard deviation above average (~84th percentile) |
| θ = -1 | One standard deviation below average (~16th percentile) |
| θ = +2 | High ability (~98th percentile) |
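The percentiles above follow from theta behaving like a standard normal score. For reference, here is a small helper (not part of the BrainGap SDK) that maps theta to an approximate percentile using a standard polynomial approximation of the error function:

```javascript
// Map a theta score to an approximate percentile, assuming theta is
// distributed as a standard normal (mean 0, standard deviation 1).
function thetaToPercentile(theta) {
  // Abramowitz & Stegun style polynomial approximation of erf(x).
  const erf = (x) => {
    const sign = x < 0 ? -1 : 1;
    x = Math.abs(x);
    const t = 1 / (1 + 0.3275911 * x);
    const poly =
      ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
        0.284496736) * t + 0.254829592) * t;
    return sign * (1 - poly * Math.exp(-x * x));
  };
  // Standard normal CDF via erf, scaled to a 0-100 percentile.
  return 50 * (1 + erf(theta / Math.SQRT2));
}

console.log(thetaToPercentile(0).toFixed(1)); // "50.0"
console.log(thetaToPercentile(1).toFixed(1)); // "84.1"
```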
Standard Error (SE)
Every theta comes with a standard error indicating precision. Lower SE = more confidence in the estimate. BrainGap tells you when to stop testing based on SE.
```json
{
  "theta": 0.85,            // Above average ability
  "standardError": 0.32,    // 95% CI ≈ θ ± 1.96×SE
  "proficiencyScore": 0.80  // P(correct) on avg item
}
```
| Standard error | Interpretation |
| --- | --- |
| SE < 0.30 | High precision. Good for certification decisions. |
| SE < 0.40 | Moderate precision. Fine for screening or placement. |
| SE < 0.50 | Low precision. Only use for formative feedback. |
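The SE-to-confidence-interval arithmetic and the thresholds above can be expressed as plain helpers. This is a sketch; the function names are illustrative, not BrainGap SDK methods:

```javascript
// Build a 95% confidence interval around a theta estimate.
// A 95% CI spans ±1.96 standard errors under a normal approximation.
function confidenceInterval(theta, standardError, z = 1.96) {
  return {
    lower: theta - z * standardError,
    upper: theta + z * standardError,
  };
}

// Classify precision using the thresholds in the table above.
function precisionLabel(standardError) {
  if (standardError < 0.30) return "high";     // certification decisions
  if (standardError < 0.40) return "moderate"; // screening / placement
  if (standardError < 0.50) return "low";      // formative feedback only
  return "insufficient";
}

const ci = confidenceInterval(0.85, 0.32);
console.log(ci.lower.toFixed(2), ci.upper.toFixed(2)); // "0.22 1.48"
console.log(precisionLabel(0.32)); // "moderate"
```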
Optional: understand the psychometric foundations and compliance details.
The Science: IRT & Item Selection
BrainGap uses the Two-Parameter Logistic (2PL) IRT model:
| Symbol | Meaning |
| --- | --- |
| θ | Examinee ability (what we're measuring) |
| b | Item difficulty (how hard the question is) |
| a | Item discrimination (how well it separates high/low ability) |
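Concretely, the 2PL model expresses the probability of a correct response as a logistic function of the gap between ability and difficulty:

```latex
P(\text{correct} \mid \theta) = \frac{1}{1 + e^{-a(\theta - b)}}
```

When θ = b, the probability of a correct response is exactly 0.5; a larger a makes the curve steeper around that point, so the item discriminates more sharply between abilities just above and just below b.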
Maximum Fisher Information (MFI)
When we say BrainGap selects the "optimal" question, we mean the one with maximum Fisher Information at the current θ estimate. Information is maximized when item difficulty matches examinee ability (b ≈ θ).
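For the 2PL model, an item's Fisher information at ability θ is a² · P(θ) · (1 − P(θ)), which peaks where b ≈ θ. Here is a minimal sketch of MFI selection over a toy item bank; the item shape and helper names are illustrative, not the BrainGap API:

```javascript
// 2PL response probability at ability theta.
const prob2pl = (theta, a, b) => 1 / (1 + Math.exp(-a * (theta - b)));

// Fisher information of a 2PL item at theta: a^2 * P * (1 - P).
const fisherInfo = (theta, { a, b }) => {
  const p = prob2pl(theta, a, b);
  return a * a * p * (1 - p);
};

// Maximum Fisher Information: pick the item most informative at theta.
function selectItem(theta, items) {
  return items.reduce((best, item) =>
    fisherInfo(theta, item) > fisherInfo(theta, best) ? item : best
  );
}

const bank = [
  { id: "easy",   a: 1.2, b: -1.5 },
  { id: "medium", a: 1.0, b:  0.0 },
  { id: "hard",   a: 1.1, b:  1.6 },
];
console.log(selectItem(0.1, bank).id); // "medium": b closest to theta
```

Because information collapses when P is near 0 or 1, questions far too easy or too hard for the examinee contribute almost nothing, which is why matching b to θ lets the test converge in fewer items.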
Privacy & Compliance
BrainGap is designed for anonymous-first assessment. User IDs are opaque strings you control, and we never require PII.
- No protected health information stored. Assessment data contains only opaque IDs and scores. Suitable for healthcare training and credentialing.
- Full data deletion via DELETE /users/{id}. Data portability supported. No tracking cookies or fingerprinting.
- Educational records stay in your system. BrainGap only sees anonymous assessment interactions. You control the ID mapping.
- Enterprise-grade security controls. Encrypted at rest and in transit. Audit logs available for all API access.
Anonymous by design
BrainGap never needs to know who your users are. The userId parameter is any string you choose:
```javascript
// Use your internal IDs
"user_abc123"

// Use hashed emails
"sha256:a1b2c3..."

// Use session tokens for fully anonymous
"session_xyz789"

// Use UUIDs
"550e8400-e29b-41d4-a716-446655440000"
```
Building a Certification Prep App
Here's how to integrate BrainGap for AWS, Azure, or any certification prep.
1. User starts studying
When a user selects a certification, store their user ID and the blueprint slug.
```javascript
// User picks AWS SAA
const userId = "user_abc123"; // Your user's ID
const blueprint = "aws-solutions-architect";
```
2. Run assessment sessions
Loop: get question → show to user → submit answer. Stop when precision is sufficient or user quits.
```javascript
while (session.isActive) {
  // Get optimal next question
  const { interaction, meta } = await braingap.assessNext(userId, blueprint);

  // Check if we should stop
  if (meta.suggestedStopReason) break;

  // Show question to user, get their answer
  const answer = await showQuestionUI(interaction);

  // Submit and get score
  const result = await braingap.respond(interaction.id, answer);
  showFeedback(result.score, result.explanation);
}
```
3. Show progress dashboard
Use /gaps to show what to study and /mastery for detailed scores.
```javascript
// Get prioritized study list
const gaps = await braingap.getGaps(userId, blueprint);

// Show: "72% ready for exam"
displayReadiness(gaps.summary.overallReadiness);

// Show: "Focus on: VPC Peering, IAM Policies, Lambda Concurrency"
displayTopGaps(gaps.gaps.slice(0, 5));
```
Technical Hiring Assessments
Use BrainGap to screen candidates efficiently before interviews.
Key differences from learning apps
- Use Attempts to enforce time limits and item counts
- Don't show correct answers, just the score
- Use cohort endpoints to compare candidates
- Theta gives you a standardized score across all candidates
```javascript
// Candidate starts assessment
const attempt = await braingap.createAttempt({
  blueprint: "senior-backend-engineer",
  maxItems: 25,
  timeLimitMinutes: 45
});

// Run assessment...

// Get final results
const results = await braingap.getAttempt(attempt.attemptId);
// results.theta = 1.2 means "well above average"
// results.standardError = 0.28 means "high confidence"
```
Compare candidates
```javascript
const comparison = await braingap.compareCohorts({
  cohortA: ["candidate_1", "candidate_2"], // Interviewed
  cohortB: ["candidate_3", "candidate_4"], // New applicants
  blueprint: "senior-backend-engineer"
});
// See where cohortB excels or struggles vs cohortA
```
Questions? hello@braingap.io