ILEARN, YOU LEARN
INFO, TIPS, AND TRICKS FOR THE NEW ILEARN ASSESSMENT
What is a COMPUTER-ADAPTIVE test (CAT)?
- In a CAT, each test question is selected by a computer algorithm based on the student’s performance on previously administered items.
- A CAT is tailored to the student’s ability based on whether the student responded successfully to the preceding item or set of items.
- Targeting test information to the student’s ability increases the precision of resulting test scores, especially for very low- and high-ability students.
- Each time a student answers a question, his or her response helps to determine the next question or set of questions that will be presented to the student.
NWEA is a computer-adaptive test!
What is a STANDARDS-BASED computer-adaptive test?
- The goal of a standards-based, computer-adaptive test is to enact a complex blueprint that ensures breadth of coverage of the state’s content standards, as well as the depth of knowledge defined within those standards.
- Within the constraint of matching the blueprint, items are selected to maximize test information at the student’s estimated ability level.
- The difficulty of the test will adjust to each student’s skills, providing a better measure of what the student knows and can do.
- Adaptive tests measure the same content for all students on the basis of the test blueprint.
NWEA is NOT a standards-based, computer-adaptive test because NWEA can test below or above grade-level standards. ILEARN can ONLY test at grade level.
Does this mean that each student will take a totally different ILEARN assessment (similar to NWEA)?
The algorithm is designed to meet a complex set of content constraints and, within these constraints, vary item DIFFICULTY (NOT complexity) to adapt to the student’s current performance. (For example, 2.7 + 2.4 is a more DIFFICULT problem than 2 + 2, but it is not a more COMPLEX problem.)
ILEARN will only test grade-level standards, but the difficulty of the questions may look different for each student.
Let's run through what an actual test will look like...
Students are administered the first item on the test.
- The first time the system encounters a student, it assigns the state mean as the student’s initial ability estimate.
Test Continues –
Students are administered additional items.
- For subsequent questions, the algorithm first identifies a subset of items that best satisfy the blueprint requirements.
- From that subset, the algorithm then identifies a further subset with item difficulties that maximize test information to student ability.
- Finally, it randomly selects from the best items, providing a measure of exposure control.
Test Ends –
The maximum number of items has been administered.
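The three-step selection cycle described above can be sketched in a few lines of code. This is only an illustration, not the actual ILEARN algorithm: the item pool, field names, shortlist size, and the use of distance-from-ability as a stand-in for test information are all assumptions made for the example.

```python
import random

# Hypothetical sketch of one adaptive item-selection cycle.
def select_next_item(pool, blueprint_needs, ability):
    # Step 1: keep only items that satisfy an unmet blueprint requirement.
    eligible = [item for item in pool if item["standard"] in blueprint_needs]
    # Step 2: of those, keep the items whose difficulty is closest to the
    # student's current ability estimate (these carry the most information).
    eligible.sort(key=lambda item: abs(item["difficulty"] - ability))
    best = eligible[:5]  # shortlist of the most informative items
    # Step 3: randomly pick from the shortlist, providing exposure control.
    return random.choice(best)

# Toy item pool (made up for this example).
pool = [
    {"id": 1, "standard": "A", "difficulty": -1.0},
    {"id": 2, "standard": "A", "difficulty": 0.2},
    {"id": 3, "standard": "B", "difficulty": 0.5},
    {"id": 4, "standard": "B", "difficulty": 2.0},
]
item = select_next_item(pool, blueprint_needs={"B"}, ability=0.3)
print(item["standard"])  # always a blueprint-required standard: B
```

Notice that the blueprint filter runs first, so content coverage is never sacrificed for difficulty matching; randomizing the final pick keeps individual items from being over-exposed.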
If every student receives a different set of questions, how will they receive a comparable score?
- Student performance is reported as a scale score, which is based on both the difficulty of the administered test items and the student’s pattern of correct and incorrect responses to those items.
- CAT item selection is designed to maximize test information near each student’s ability. Very high- and very low-performing students may both respond correctly to about 50% of test items, but the high achiever is being administered much more difficult items, so he or she will receive a higher scale score.
- Students can and will correctly answer items above their ability, and may incorrectly answer items below their ability. However, the probability of such response patterns decreases as the item difficulty moves away from the student’s ability.
- The performance level is determined from the scale score. For ILEARN, educators will set the cut scores associated with proficiency levels when standard-setting takes place.
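The relationship between ability, item difficulty, and the chance of a correct answer can be illustrated with a simple logistic (Rasch-style) model. This model is an assumption chosen for illustration; the actual scoring model used for ILEARN is not specified here.

```python
import math

# Illustrative Rasch-style model: the probability that a student of a
# given ability answers an item of a given difficulty correctly.
def p_correct(ability, difficulty):
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When item difficulty matches ability, success is about a coin flip:
print(round(p_correct(0.0, 0.0), 2))   # 0.5
# The chance drops for items well above the student's ability...
print(round(p_correct(0.0, 2.0), 2))   # 0.12
# ...and rises for items well below it.
print(round(p_correct(0.0, -2.0), 2))  # 0.88
```

This is why a student answering roughly half the items correctly is expected: the algorithm keeps serving items near the point where success is about 50%, and the difficulty of those items, not the raw percent correct, drives the scale score.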
DID YOU KNOW?
Will some students experience a longer test than other students?
No. Test blueprints define the number of items that all students will receive for the summative assessment at each grade level. However, since the assessment is untimed, students may not all finish at the same time.