Of That

Brandt Redd on Education, Technology, Energy, and Trust

14 September 2018

Quality Assessment Part 5: Blueprints and Computerized-Adaptive Testing

This is part 5 of a 10-part series on building high-quality assessments.

[Image: Arrows in a tree formation]

Molly is a 6th grade student who is already behind in math. Near the end of the school year she takes her state's annual achievement tests in mathematics and English Language Arts. Already anxious when she sits down to the test, her fears are confirmed by the first question, which asks her to divide 3/5 by 7/8. Though her class spent several days on this during the year, she doesn't recall how to divide one fraction by another. As she progresses through the test, she is able to answer a few questions but resorts to guessing on all too many. After twenty minutes of this she gives up and just guesses on the rest of the answers. When her test results are returned a month later she gets the same rating as in the three previous years: "Needs Improvement." Perpetually behind, she decides that she is "just not good at math."

Molly is fictional but she represents thousands of students across the U.S. and around the world.

Let's try another scenario. In this case, Molly is given a Computerized-Adaptive Test (CAT). When she gets the first question wrong, the testing engine picks an easier question, which she knows how to answer. Gaining confidence, she applies herself to the next question, which she also knows how to answer. The system presents easier and harder questions as it works to pinpoint her skill level within a spectrum extending back to 4th grade and ahead to 8th grade. When her score report arrives, she has a scale score of 2505, which is below the 6th grade standard of 2552. The report shows her previous year's score of 2423, which was well below standard for Grade 5. The summary says that, while Molly is still behind, she has achieved significantly more than a year's progress in the past year of school; much like this example of a California report.

Computerized-Adaptive Testing

A fixed-form Item Response Theory test presents a set of questions at a variety of skill levels centered on the standard for proficiency for the grade or course. Such tests result in a scale score, which indicates the student's proficiency level, and a standard error, which indicates how precise that scale score is. A simplified explanation is that the student's actual skill level should be within the range of the scale score plus or minus the standard error. Because a fixed-form test is optimized for students near the mean, the standard error grows the further a student's skill is from the target proficiency for that test.

Computerized Adaptive Tests (CAT) start with a large pool of assessment items. Smarter Balanced uses a pool of 1,200-1,800 items for a 40-item test. Each question is calibrated according to its difficulty within the range of the test. The test administration starts with a question near the middle of the range. From then on, the adaptive algorithm tracks the student's performance on prior items and selects the questions most likely to pinpoint the student's skill level and increase confidence in that estimate.
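To make the selection step concrete, here is a minimal sketch (in Python) of one way an engine could rank candidate items: compute the Fisher information of each unused item at the current skill estimate, using the three-parameter IRT model described in Part 4, and present the item that is most informative. The pool, parameter values, and function names are invented for illustration; this is not the Smarter Balanced algorithm.

    import math

    def prob_correct(theta, a, b, c):
        """3PL item characteristic function: the probability that a student at
        skill level theta answers this item correctly (see Part 4)."""
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b, c):
        """Fisher information of the item at skill level theta. Items tell us the
        most when their difficulty (b) is near the student's skill level."""
        p = prob_correct(theta, a, b, c)
        q = 1.0 - p
        return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

    def pick_most_informative(theta_estimate, pool, used_ids):
        """Choose the unused item that is most informative at the current estimate."""
        candidates = [item for item in pool if item["id"] not in used_ids]
        return max(candidates, key=lambda it: item_information(
            theta_estimate, it["a"], it["b"], it["c"]))

    # Illustrative three-item pool: an easy, a medium, and a hard item.
    pool = [
        {"id": "easy",   "a": 1.0, "b": -1.5, "c": 0.2},
        {"id": "medium", "a": 1.0, "b":  0.0, "c": 0.2},
        {"id": "hard",   "a": 1.0, "b":  1.5, "c": 0.2},
    ]
    # For a student currently estimated well below the center, the easy item wins.
    print(pick_most_informative(theta_estimate=-1.2, pool=pool, used_ids=set())["id"])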

A stage-adaptive or multistage test is similar except that groups of questions are selected together.

CAT tests have three important advantages over fixed-form tests:

  • The test can measure student skill across a wider range while maintaining a small standard error.
  • Fewer questions are required to assess the student's skill level.
  • Students may have a more rewarding experience as the testing engine offers more questions near their skill level.

When you combine more accurate results with a broader measured range and then use the same test family over time, you can reliably measure student growth.

Test Blueprints

As I described in Part 2 and Part 3 of this series, each assessment item is designed to measure one or two specific skills. A test blueprint indicates what skills are to be measured in a particular test and how many items of which types should be used to measure each skill.

As an example, here's the blueprint for the Smarter Balanced Interim Assessment Block (IAB) for "Grade 3 Brief Writes":

Block 3: Brief Writes

  Claim     Target                                   Items   Total Items
  Writing   1a. Write Brief Texts (Narrative)          4         6
            3a. Write Brief Texts (Informational)      1
            6a. Write Brief Texts (Opinion)            1

This blueprint, for a relatively short fixed-form test, indicates a total of six items spread across one claim and three targets. For more examples, you can check out the Smarter Balanced Test Blueprints. The Summative Tests, which are used to measure achievement at the end of each year, have the most items and represent the broadest range of skills to be measured.

When developing a fixed-form test, the test producer will select a set of items that meets the requirements of the blueprint and represents an appropriate mix of difficulty levels.

For CAT tests it's more complicated. The test producer must select a much larger pool of items than will be presented to the student. A minimum is five to ten items in the pool for each item to be presented to the student. For summative tests, Smarter Balanced uses a ratio averaging around 25 to 1. These items should represent the skills to be measured in approximately the same ratios as they appear in the blueprint. And they should represent difficulty levels across the range of skill to be measured. (Difficulty level is represented by the IRT b parameter of each item.)
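Here is a rough sketch (again in Python, with invented field names and thresholds) of the kind of sanity check a test producer might run on a pool: tally the pool by blueprint target to check the five-to-ten-to-one rule of thumb, and bucket the b parameters to confirm that difficulty spans the measured range.

    from collections import Counter

    # Hypothetical pool entries: alignment target plus the IRT b (difficulty) parameter.
    pool = [
        {"id": "i001", "target": "1a. Write Brief Texts (Narrative)",     "b": -1.4},
        {"id": "i002", "target": "1a. Write Brief Texts (Narrative)",     "b":  0.2},
        {"id": "i003", "target": "3a. Write Brief Texts (Informational)", "b":  1.1},
        {"id": "i004", "target": "6a. Write Brief Texts (Opinion)",       "b": -0.3},
    ]

    # Items presented per target (target names borrowed from the Brief Writes example above).
    blueprint_counts = {
        "1a. Write Brief Texts (Narrative)":     4,
        "3a. Write Brief Texts (Informational)": 1,
        "6a. Write Brief Texts (Opinion)":       1,
    }

    MIN_POOL_RATIO = 5   # at least five pool items per item to be presented

    pool_by_target = Counter(item["target"] for item in pool)
    for target, presented in blueprint_counts.items():
        needed = presented * MIN_POOL_RATIO
        have = pool_by_target.get(target, 0)
        if have < needed:
            print(f"Pool too thin for '{target}': {have} items, want at least {needed}")

    # Bucket difficulty into one-unit bands to see whether the b values span the range.
    difficulty_bands = Counter(round(item["b"]) for item in pool)
    print("Items per difficulty band:", dict(sorted(difficulty_bands.items())))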

As the student progresses through the test, the CAT Algorithm selects the next item to be presented. In doing so, it takes into account three factors: 1. Information it has determined about the student's skill level so far, 2. How much of the blueprint has been covered so far and what it has yet to cover, and 3. The pool of items it has to select from. From those criteria it selects an item that will advance coverage of the blueprint and will improve measurement of the student's skill level.
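As one simplified illustration of how those three factors might combine (and emphatically not the actual Smarter Balanced algorithm), the sketch below serves the blueprint target that is furthest behind its quota and, within that target, picks the unused item whose difficulty best matches the current skill estimate.

    from collections import Counter

    def pick_next_item(theta_estimate, pool, administered, blueprint_counts):
        """Pick the next item so that blueprint coverage and measurement both advance.

        pool: catalog entries, each with "id", "target", and IRT "b" (difficulty).
        administered: the items already presented to this student.
        blueprint_counts: items the blueprint requires per target.
        """
        given = Counter(item["target"] for item in administered)
        remaining = {t: need - given.get(t, 0) for t, need in blueprint_counts.items()}
        neediest = max(remaining, key=remaining.get)   # target furthest behind its quota

        used = {item["id"] for item in administered}
        candidates = [item for item in pool
                      if item["target"] == neediest and item["id"] not in used]
        if not candidates:   # that target's pool is exhausted; fall back to any unused item
            candidates = [item for item in pool if item["id"] not in used]

        # Among eligible items, take the one whose difficulty is closest to the estimate.
        return min(candidates, key=lambda item: abs(item["b"] - theta_estimate))

A production engine would also weigh measurement information rather than raw difficulty distance, manage item exposure, and handle many more constraints than a single count per target.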

Data Form

To present a CAT assessment the test engine needs three sets of data:

  • The Test Blueprint
  • A Catalog of all items in the pool. The entry for each item must specify its alignment to the test blueprint (which is equivalent to its alignment to standards), and its IRT Parameters.
  • The Test Items themselves.

Part 3 of this series describes formats for the items. The item metadata should include the alignment and IRT information. The manifest portion of IMS Content Packaging is one format for storing and transmitting item metadata.

To date, there is no standard or commonly-used data format for test blueprints. Smarter Balanced has published open specifications for its Assessment Packages. Of those, the Test Administration Package format includes the test blueprint and the item catalog. IMS CASE is designed for representing achievement standards, but it may also be applicable to test blueprints.

IMS Global has formed an "IMS CAT Task Force" which is working on interoperable standards for Computerized Adaptive Testing. They anticipate releasing specifications later in 2018.

Quality Factors

A CAT Simulation is used to measure the quality of a Computerized Adaptive Test. These simulations use a set of a few thousand simulated students each assigned a particular skill level. The system then simulates each student taking the test. For each item, the item characteristic function is used to determine whether a student at that skill level is likely to answer correctly. The adaptive algorithm uses those results to determine which item to present next.
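Here is a bare-bones sketch of that loop in Python. It uses the three-parameter item characteristic function from Part 4 to decide, by random draw, whether each simulated student answers each item correctly; the student skill levels and item parameters are invented for illustration.

    import math
    import random

    def prob_correct(theta, a, b, c):
        """3PL item characteristic function: the chance that a student at skill
        level theta answers the item correctly."""
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    def simulate_student(true_theta, items, rng):
        """Simulate one student's scored responses (1 = correct, 0 = incorrect)."""
        return [1 if rng.random() < prob_correct(true_theta, it["a"], it["b"], it["c"]) else 0
                for it in items]

    rng = random.Random(42)
    # A handful of simulated students at assigned skill levels, plus a toy item set.
    true_thetas = [-1.5, 0.0, 1.2]
    items = [{"a": 1.0, "b": b, "c": 0.2} for b in (-2, -1, 0, 1, 2)]

    for theta in true_thetas:
        print(theta, simulate_student(theta, items, rng))

In a real simulator the adaptive algorithm chooses each next item based on the responses so far, and the resulting score estimates are then compared with the assigned skill levels.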

The results of the simulation show how well the CAT measures skill by comparing each simulated student's test score against their assigned skill level. They are also used to ensure that the item pool has sufficient coverage, that the CAT algorithm satisfies the blueprint, and to find out which items get the most exposure. This feedback is used to tune the item pool and the configuration of the CAT algorithm to achieve optimal results across the simulated population of students.

To build a high-quality CAT assessment:

  • Build a large item pool with items of difficulty levels spanning the range to be measured.
  • Design a test blueprint that focuses on the skills to be measured and correlates with the overall score and the subscores to be reported.
  • Ensure that the adaptive algorithm effectively covers the blueprint and also focuses in on each student's skill level.
  • Perform CAT simulations to tune the effectiveness of the item pool, blueprint, and CAT algorithm.

Wrapup

Computerized adaptive testing offers significant benefits to students by delivering more accurate measures with a shorter, more satisfying test. CAT is best suited to larger tests with 35 or more questions spread across a broad blueprint. Shorter tests, focused on mastery of one or two specific skills, may be better served by conventional fixed-form tests.

01 September 2018

Quality Assessment Part 4: Item Response Theory, Field Testing, and Metadata

This is part 4 of a 10-part series on building high-quality assessments.

[Image: Drafting tools - triangle, compass, ruler]

Consider a math quiz with the following two items:

Item A:

x = 5 - 2
What is the value of x?

Item B:

x² - 6x + 9 = 0
What is the value of x?

George gets item A correct but gets the wrong answer for item B. Sally has the wrong answer for A but answers B correctly. Using traditional scoring, George and Sally each get 50%.

A more sophisticated quiz might assign 2 points to item A and 6 points to item B (recognizing that B is harder than A). Under such a scoring system, George would get 25% and Sally would get 75%.

But the score is still short on meaning. George scored 25% of what? Sally scored 75% of what?

An even more sophisticated model should acknowledge that knowing how to solve quadratics (item B) is evidence that the student can also perform subtraction (item A). Such a model would position George somewhere between first grade (single-digit subtraction) and High School (solving quadratics). That same model would indicate that Sally either guessed correctly on item B or made a mistake on item A that's not representative of her skill. Due to the conflicting evidence, we are less sure about Sally's skill level than George's. For both students, more items would be required to gain greater confidence in their skill levels.

Item Response Theory

Item Response Theory or IRT is a statistical method for describing how student performance on assessment items relates to their skill in the area the item was designed to measure.

The "three parameter logistic model" (3PL) for IRT describes the probability that a student of a certain skill level will answer the item correctly. Student proficiency is represented by θ (theta) and the three item parameters are a, b, and c. They represent the following factors:

  • a = Discrimination. This value indicates how well the item discriminates between proficient students and those who have not yet learned this skill.
  • b = Difficulty. This value indicates how difficult an item is for the student to answer correctly.
  • c = Guessing. The probability that a student might guess the correct response. For a four-option multiple-choice question, this would be 0.25 because the student has a one-in-four chance of guessing the right answer.

From these parameters we can create an item characteristic curve. The formula is as follows:

formula: p = c + (1 - c) / (1 + e^(-a(θ - b)))

This is much easier to understand in graph form. So I loaded it into the Desmos graphing calculator.
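If you would rather compute a few points than drag sliders, the same curve is easy to evaluate in Python. The parameter values below (a = 1, b = 0, c = 0.25) are just example settings.

    import math

    def p_correct(theta, a=1.0, b=0.0, c=0.25):
        """3PL item characteristic curve: p = c + (1 - c) / (1 + e^(-a(theta - b)))."""
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    # Probability of a correct answer at a few skill levels for this example item.
    for theta in (-3, -1, 0, 1, 3):
        print(f"theta = {theta:+d}   p = {p_correct(theta):.2f}")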

The vertical (y) axis indicates the probability that a student will answer the item correctly. The horizontal (x) axis is student proficiency (represented by θ in the equation). You can move the sliders to change the a, b, and c parameters and see how different items would be represented in an item characteristic curve.

In addition to this "three-parameter" model, there are other IRT models but they all follow this same basic premise: The function represents the probability that a student of given skill (represented by θ, theta) will answer the question correctly. At least one parameter of the function represents the difficulty of the question. For items scored on multi-point scale, there are difficulty parameters (typically d1, d2, etc.) representing the difficulty thresholds for each point value.

Scale Scores

The difficulty parameter b, and the student skill value θ, are on the same logistic scale and center on the skill level being measured. For example, if an item is written for grade 5 math, a b parameter of 0 means that the average 5th grade student should answer the question correctly about 50% of the time (setting aside the guessing parameter).

Most assessments convert from this theta score into a scale score which is a consistent score reported to educators, students, and parents. For Smarter Balanced, the scale score ranges from 2000 to 3000 and represents skill levels from Kindergarten to High School Graduation. Theta scores are converted to scale scores using a polynomial function.
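As an illustration of that last step, here is a sketch of a polynomial conversion in Python. The coefficients are invented for this example; each testing program publishes its own conversion tables or coefficients.

    def theta_to_scale(theta, coefficients):
        """Evaluate scale = c0 + c1*theta + c2*theta^2 + ... for the given coefficients."""
        return sum(c * theta ** power for power, c in enumerate(coefficients))

    # Invented, roughly linear conversion onto a 2000-3000 style scale (not real values).
    example_coefficients = [2500.0, 85.0]

    print(round(theta_to_scale(-0.5, example_coefficients)))   # a student below the center
    print(round(theta_to_scale(1.0, example_coefficients)))    # a student above the center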

Field Testing

So how do we come up with the a, b, and c parameters for a particular item? Based on the item type and potential responses we can predict c (guessing) fairly well, but our experience at Smarter Balanced has shown that authors are not very good at predicting b (difficulty) or a (discrimination). To get an objective measure of these values we use a field test.

In Spring 2014 Smarter Balanced held a field test in which 4.2 million students completed a test - typically in either English Language Arts or Mathematics. Some students took both. For the participating schools and students, this was a practice run that gave them experience administering and taking the tests. Since the items were not yet calibrated, we could not reliably score the tests. For Smarter Balanced it offered critical data on more than 19,000 test items. For each item we gained more than 10,000 scored responses from students representing the target grades across all demographics.

Psychometricians used these data to calculate the parameters (a, b, and c) for each item in the field test. The process of calculating IRT parameters from field test data is called calibration. Once items were calibrated, we examined the parameters and the data to determine which items would be approved for use in tests. For example, if a is too low then the question likely has a flaw: it may not measure the right skill or the answer key may be incorrect. Likewise, if the b parameter is different across demographic groups then the item may be sensitive to gender, cultural, or ethnic bias. Items from the field test that met statistical standards were approved and became the initial bank of items from which Smarter Balanced produces tests.
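Once the parameters are in hand, this sort of screening can be expressed as simple rules. The thresholds and the crude group comparison below are invented for illustration; real programs rely on formal differential item functioning (DIF) analyses and criteria set by psychometricians.

    # Illustrative thresholds; actual criteria are set by the program's psychometricians.
    MIN_DISCRIMINATION = 0.5   # an a value below this suggests a flaw (wrong key, poor alignment)
    MAX_GROUP_B_GAP = 0.5      # a large difficulty gap across groups suggests possible bias

    def screen_item(item):
        """Return reasons (if any) to hold a field-tested item out of the operational bank."""
        problems = []
        if item["a"] < MIN_DISCRIMINATION:
            problems.append("low discrimination: check the answer key and alignment")
        b_values = item["b_by_group"].values()   # b estimated separately per demographic group
        if max(b_values) - min(b_values) > MAX_GROUP_B_GAP:
            problems.append("difficulty varies across demographic groups: review for bias")
        return problems

    field_tested_item = {"id": "i123", "a": 0.35,
                         "b_by_group": {"group_1": 0.1, "group_2": 0.8}}
    print(screen_item(field_tested_item))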

Each year Smarter Balanced does an embedded field test. Each test that a student takes includes a few new "field test" items. These items do not contribute to the student's test score. Rather, the students' scored responses are used to calibrate the items. This way, the item bank is constantly renewed. Other testing programs, such as the ACT and SAT, follow the same practice of embedding field test questions in regular tests.

To understand more about IRT, I recommend A Simple Guide to IRT and Rasch Modeling by Ho Yu.

Item Metadata

The IRT parameters, alignment to standards, and other critical information are collected as metadata about each item. In most cases, metadata is represented as a set of name-value pairs. There are many formats for representing metadata and also many dictionaries of field definitions. Smarter Balanced uses the metadata structure from IMS Content Packaging and draws field definitions from The Learning Resource Metadata Initiative (LRMI), from Schema.org, and from Common Education Data Standards (CEDS).

Here are some of the most critical metadata elements for assessment items, with links to their definitions in those standards (a sample record follows the list):

  • Identifier: A number that uniquely identifies this item.
  • PrimaryStandard: An identifier of the principal skill the item is intended to measure. The skill would be described in an Achievement Standard or Content Specification.
  • SecondaryStandard: Optional identifiers of additional Achievement Standards or Content Specifications that the item measures.
  • InteractionType: The type of interaction (multiple choice, matching, short answer, essay, etc.).
  • IRT Parameters: The a, b, and c parameters or another parameter set for the Item Response Theory function.
  • History: A record of when and how the item has been used, which helps estimate how much it has been exposed.
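For illustration, here is how such a record might look as a simple set of name-value pairs (shown as a Python dictionary). The identifiers and values are invented; a real package would carry this information in one of the formats named above, such as an IMS Content Packaging manifest.

    # A hypothetical metadata record; field names follow the list above, values are invented.
    item_metadata = {
        "Identifier": "200-12345",
        "PrimaryStandard": "CCSS.MATH.CONTENT.6.NS.A.1",     # divide fractions by fractions
        "SecondaryStandard": ["CCSS.MATH.PRACTICE.MP1"],
        "InteractionType": "multipleChoice",
        "IRTParameters": {"model": "3PL", "a": 1.1, "b": 0.3, "c": 0.25},
        "History": [{"administration": "2017-18 interim", "responses": 1842}],
    }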

Quality Factors

States, schools, assessment consortia, and assessment companies all maintain banks of assessment items from which they construct their assessments. There are a number of efforts underway to pool resources from multiple entities into large, joint item banks. The value of items in any such bank is multiplied tenfold if the items have consistent and reliable metadata regarding alignment to standards and IRT parameters.

Here are factors to consider related to IRT Calibration and Metadata:

  • Are all items field-tested and calibrated before they are used in an operational test?
  • Is alignment to standards and content specifications an integral part of item writing?
  • Are the identifiers used to record alignment consistent across the entire item bank?
  • Is field testing an integral part of the assessment design?
  • Are IRT parameters consistent and comparable across the entire bank?
  • When sharing items or an item bank across multiple organizations, do all participants agree to contribute data (field testing and operational use) back to the bank?

Wrapup

Field testing can be expensive, inconvenient, or both. But without actual data from student performance we have no objective evidence that a particular assessment item measures what it's intended to measure at the expected level of difficulty.

The challenges around field testing, combined with the lack of training in IRT and related psychometrics, have kept these measures from being used in anything other than large-scale, high-stakes tests. Nevertheless, it's concerning to me that final exams and midterms of great consequence are rarely, if ever, calibrated and validated. Greater collaboration among institutions, among curriculum developers, or both could achieve sufficient scale for calibrated tests to become more common.