MIRT Analysis Dashboard
Multidimensional Item Response Theory analysis tools
webR Status
R runtime for mirt analysis (optional; visualizations work without it)
Note: webR download is ~30MB and may take a moment
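If calibration data were available, a 2PL fit of the kind this dashboard describes could be run through the R runtime with mirt roughly as sketched below. `responses` is a hypothetical N × 5 matrix of 0/1 scored answers, not an object the dashboard defines, and the webR bridge itself is not shown.

```r
# Sketch: calibrating a unidimensional 2PL model with the mirt package.
# `responses` is a hypothetical N x 5 matrix of dichotomous (0/1) scores.
library(mirt)

mod <- mirt(responses, model = 1, itemtype = "2PL", verbose = FALSE)

# Report parameters in the classical IRT metric
# (a = discrimination, b = difficulty), as in the table below
coef(mod, IRTpars = TRUE, simplify = TRUE)$items
```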
Calibration Data
0 total sessions • Using default parameters
Item Parameters (2PL Model)
| Item | a (Discrimination) | b (Difficulty) | Max Info (at θ = b) | Sample Size | Source |
|---|---|---|---|---|---|
| Item 1 | 0.80 | -1.50 | 0.160 | 0 | Default |
| Item 2 | 1.20 | -0.50 | 0.360 | 0 | Default |
| Item 3 | 1.80 | 0.00 | 0.810 | 0 | Default |
| Item 4 | 1.50 | 0.50 | 0.563 | 0 | Default |
| Item 5 | 2.00 | 1.20 | 1.000 | 0 | Default |
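The Max Info column can be reproduced from the discrimination values alone: for a 2PL item, information peaks at θ = b, where P = 0.5, so the maximum is a²/4. A quick check, assuming the logistic metric without the D = 1.7 scaling constant (the table values are consistent with that choice):

```r
# Quick check of the "Max Info" column: 2PL item information
# I(theta) = a^2 * P(theta) * (1 - P(theta)) peaks at theta = b,
# where P = 0.5, giving a maximum of a^2 / 4.
a <- c(0.80, 1.20, 1.80, 1.50, 2.00)
a^2 / 4
#> [1] 0.1600 0.3600 0.8100 0.5625 1.0000
```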
Item Characteristic Curves (ICC)
Probability of correct response as a function of ability (θ)
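A minimal sketch of the 2PL curve behind these plots, assuming the logistic form without the D scaling constant; `icc_2pl` is an illustrative helper, not a mirt function.

```r
# 2PL item characteristic curve: P(theta) = 1 / (1 + exp(-a * (theta - b)))
icc_2pl <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))

theta <- seq(-4, 4, by = 0.1)
# Example: Item 3 (a = 1.80, b = 0.00) from the table above
p_item3 <- icc_2pl(theta, a = 1.80, b = 0.00)
icc_2pl(0.00, a = 1.80, b = 0.00)  # exactly 0.5 at theta = b
```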
Item Information Functions (IIF)
Information provided by each item at different ability levels
Items with higher discrimination (a) have taller, narrower information peaks
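Under the same assumptions, the item information function is a² · P(θ) · (1 − P(θ)); `iif_2pl` below is an illustrative helper (mirt exposes comparable functionality for fitted models, e.g. `iteminfo()`).

```r
# 2PL item information: I(theta) = a^2 * P(theta) * (1 - P(theta)).
# Higher a concentrates more information near theta = b.
iif_2pl <- function(theta, a, b) {
  p <- 1 / (1 + exp(-a * (theta - b)))
  a^2 * p * (1 - p)
}

theta <- seq(-4, 4, by = 0.1)
iif_2pl(1.20, a = 2.00, b = 1.20)  # Item 5 at its own difficulty: 1.00 (the Max Info column)
```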
Test Information Function (TIF)
Total information, summed across all items, determines measurement precision
SE(θ) = 1/√I(θ) — Higher information means lower standard error
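Summing item information over items gives the test information function, and the standard error follows from the formula above. A sketch using the five default items from the table; the data frame and helper names are illustrative.

```r
# Test information = sum of item information; SE(theta) = 1 / sqrt(I(theta))
items <- data.frame(a = c(0.80, 1.20, 1.80, 1.50, 2.00),
                    b = c(-1.50, -0.50, 0.00, 0.50, 1.20))

tif <- function(theta, items) {
  rowSums(sapply(seq_len(nrow(items)), function(i) {
    p <- 1 / (1 + exp(-items$a[i] * (theta - items$b[i])))
    items$a[i]^2 * p * (1 - p)
  }))
}

theta <- seq(-4, 4, by = 0.1)
se <- 1 / sqrt(tif(theta, items))  # standard error of the theta estimate at each grid point
```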
Understanding ICCs
- Steepness (a): Higher discrimination = steeper curve = better differentiation near b (the slope at θ = b is a/4; see the check below)
- Location (b): The ability level where P(θ) = 0.5, i.e. the item's difficulty
- Dots: Mark each item's difficulty parameter on the 0.5 line
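A numeric check of the two parameter effects described above, using items from the default table; the helper name is illustrative.

```r
# ICC reading guide, verified numerically (logistic 2PL, no D constant)
icc_2pl <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))

# Location: P(theta) = 0.5 exactly at theta = b (Item 4: a = 1.50, b = 0.50)
icc_2pl(0.50, a = 1.50, b = 0.50)   # 0.5

# Steepness: the slope at theta = b equals a / 4, so Item 5 (a = 2.00)
# rises about 2.5x faster there than Item 1 (a = 0.80)
eps <- 1e-6
(icc_2pl(1.20 + eps, 2.00, 1.20) - icc_2pl(1.20 - eps, 2.00, 1.20)) / (2 * eps)  # ~0.5 = a/4
```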
CAT Item Selection
- Maximum Information: Select the unadministered item with the highest I(θ) at the current θ estimate (sketched below)
- Adaptive: As θ updates, different items become optimal
- Efficiency: Fewer items needed for precise measurement
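A minimal sketch of maximum-information selection over the five default items; `select_next_item` and the administered-item bookkeeping are illustrative names, not the dashboard's actual CAT engine.

```r
# Maximum-information item selection: at the current ability estimate,
# pick the unadministered item whose information is highest.
items <- data.frame(a = c(0.80, 1.20, 1.80, 1.50, 2.00),
                    b = c(-1.50, -0.50, 0.00, 0.50, 1.20))

info_2pl <- function(theta, a, b) {
  p <- 1 / (1 + exp(-a * (theta - b)))
  a^2 * p * (1 - p)
}

select_next_item <- function(theta_hat, items, administered = integer(0)) {
  info <- info_2pl(theta_hat, items$a, items$b)  # information of every item at theta_hat
  info[administered] <- -Inf                     # never re-administer an item
  which.max(info)
}

select_next_item(0.0, items)                    # Item 3 is most informative at theta = 0
select_next_item(1.0, items, administered = 3)  # as the estimate rises, Item 5 becomes optimal
```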