Of That

Brandt Redd on Education, Technology, Energy, and Trust

23 August 2018

Quality Assessment Part 3: Items and Item Specifications

This is part 3 of a 10-part series on building high-quality assessments.

[Image: Transparent cylindrical vessel with wires leading to an electric spark inside.]

Some years ago I remember reading my middle school science textbook. The book was attempting to describe the difference between a mixture and a compound. It explained that water is a compound of two parts hydrogen and one part oxygen. However, if you mix two parts hydrogen and one part oxygen in a container, you will simply have a container with a mixture of the two gases; they will not spontaneously combine to form water.

So far, so good. Next, the book said that if you introduced an electric spark into the mixed gases you would, "start to see drops of water appear on the inside surface of the container as the gasses react to form water." This was accompanied by an image of a container with wires and an electric spark.

I suppose the book was technically correct; that is what would happen if the container were strong enough to contain the violent explosion. But, even as a middle school student, I wondered how the dangerously misleading passage got written and how it survived the review process.

Writing and reviewing assessments requires at least as much rigor as writing textbooks. An error on an assessment item affects the evaluation of every student who takes the test.

Items

In the parlance of the assessment industry, test questions are called items. The latter term is intended to include more complex interactions than just answering questions.

Stimuli and Performance Tasks

Oftentimes, an item is based on a stimulus or passage that sets up the question. It may be an article, short story, or description of a math or science problem. The stimulus is usually associated with three to five items. When presented by computer, the stimulus and the associated items usually appear together on a split screen so that the student can refer to the stimulus while responding to the items.

Sometimes, item authors will write the stimulus; this is frequently the case for mathematics stimuli as they set up a story problem. But the best items draw on professionally-written passages. To facilitate this, the Copyright Clearance Center has set up the Student Assessment License as a means to license copyrighted materials for use in student assessment.

A performance task is a larger-scale activity intended to allow the student to demonstrate a set of related skills. Typically, it begins with a stimulus followed by a set of ordered items. The items build on each other, usually finishing with an essay that asks the student to draw conclusions from the available information. For Smarter Balanced, this pattern (stimulus, multiple items, essay) is consistent across English Language Arts and Mathematics.

Prompt or Stem

The prompt, sometimes called a stem, is the request for the student to do something. A prompt might be as simple as, "What is the sum of 24 and 62?" Or it might be as complex as, "Write an essay comparing the views of the philosophers Voltaire and Kant regarding enlightenment. Include quotes from each that relate to your argument." Regardless, the prompt must provide the required information, clearly describe what the student is to do, and specify how they are to express their response.

Interaction or Response Types

The response is a student's answer to the prompt. Two general categories of items are selected response and constructed response. Selected response items require the student to select one or more alternatives from a set of pre-composed responses. Multiple choice is the most common selected response type, but others include multi-select (in which more than one response may be correct), matching, and true/false.

Multiple choice items are particularly popular due to the ease of recording and scoring student responses. For multiple choice items, alternatives are the responses that a student may select from, distractors are the incorrect responses, and the answer is the correct response.

The most common constructed response item types are short answer and essay. In each case, the student is expected to write their answer. The difference is the length of the answer; short answer is usually a word or phrase while essay is a composition of multiple sentences or paragraphs. A variation of short answer may have a student enter a mathematical formula. Constructed responses may also have students plot information on a graph or arrange objects into a particular configuration.

Technology-Enhanced items are another commonly used category. These items are delivered by computer and include simulations, composition tools, and other creative interactions. However, all technology-enhanced items can still be categorized as either selected response or constructed response.

Scoring Methods

There are two general ways of scoring items: deterministic scoring and probabilistic scoring.

Deterministic scoring is indicated when a student's response may be unequivocally determined to be correct or incorrect. When a response is scored on multiple factors there may be partial credit for the factors the student addressed correctly. Deterministic scoring is most often associated with selected response items, but many constructed response items may also be deterministically scored when the factors of correctness are sufficiently precise, such as a numeric answer or a single word for a fill-in-the-blank question. When answers are collected by computer or are easily entered into a computer, deterministic scoring is almost always done by computer.
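To make this concrete, here is a minimal sketch of deterministic scoring with partial credit for a hypothetical multi-select item. The item structure, function name, and equal-weight scoring policy are illustrative assumptions, not any particular vendor's scoring engine.

```python
# Minimal sketch of deterministic scoring for a multi-select item.
# The item structure and scoring policy are hypothetical, for illustration only.

def score_multi_select(correct_options: set[str], selected_options: set[str],
                       total_options: int) -> float:
    """Return a score between 0.0 and 1.0.

    Each option the student judges correctly (selected when it is part of the
    key, or left unselected when it is a distractor) earns equal partial credit.
    """
    all_options = {f"choice_{i}" for i in range(1, total_options + 1)}
    correct_judgments = 0
    for option in all_options:
        should_select = option in correct_options
        did_select = option in selected_options
        if should_select == did_select:
            correct_judgments += 1
    return correct_judgments / total_options


if __name__ == "__main__":
    # Item with five alternatives; choices 2 and 4 form the answer key.
    key = {"choice_2", "choice_4"}
    response = {"choice_2", "choice_5"}  # one hit, one miss, one wrong selection
    print(score_multi_select(key, response, total_options=5))  # 0.6
```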

Probabilistic scoring is indicated when the quality of a student's answer must be judged on a scale. This is most often associated with essay type questions but may also apply to other constructed response forms. When handled well, a probabilistic score may include a confidence level — how confident is the scoring person or system that the score is correct.

Probabilistic scoring may be done by humans (e.g. judging the quality of an essay) or by computer. When done by computer, Artificial Intelligence techniques are frequently used with different degrees of reliability depending on the question type and the quality of the AI.

Answer Keys and Rubrics

The answer key is the information needed to score a selected-response item. For multiple choice questions, it's simply the letter of the correct answer. A machine scoring key or machine rubric is an answer key coded in such a way that a computer can perform the scoring.

The rubric is a scoring guide used to evaluate the quality of student responses. For constructed response items, the rubric will indicate which factors should be evaluated in the response and what scores should be assigned to each factor. Selected response items may also have a rubric which, in addition to indicating which response is correct, explains why that response is correct and why each distractor is incorrect.
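As an illustration, the sketch below pairs a machine-scorable answer key with a human-readable rubric for a single multiple-choice item. The item, field names, and scoring function are hypothetical; real interoperability formats (such as QTI, discussed below) encode the same information differently.

```python
# Hypothetical answer key plus rubric for one multiple-choice item.
# Field names are illustrative, not any particular interoperability format.
item_rubric = {
    "item_id": "MATH-G7-0042",
    "interaction": "multiple_choice",
    "answer_key": "B",                  # the machine-scorable part
    "rationales": {                     # the human-readable rubric part
        "A": "Incorrect: the student added the denominators.",
        "B": "Correct: fractions were converted to a common denominator before adding.",
        "C": "Incorrect: the student multiplied instead of adding.",
        "D": "Incorrect: the student added numerators and denominators separately.",
    },
}

def machine_score(response: str) -> int:
    """Deterministic machine scoring from the answer key: 1 point or 0 points."""
    return 1 if response == item_rubric["answer_key"] else 0

print(machine_score("B"), machine_score("C"))  # 1 0
```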

Item Specifications

An item specification describes the skills to be measured and the interaction type to be used. It serves as both a template and a guide for item authors.

The skills should be expressed as references to the Content Specification and associated Competency Standards (see Part 2 of this series). A consistent identifier scheme for the Content Specification and Standards greatly facilitates this. However, to assist item authors, the item specification often quotes the relevant parts of the content specification and standards verbatim.

If the item requires a stimulus, the specification should describe the nature of the stimulus. For ELA, that would include the type of passage (article, short-story, essay, etc.), the length, and the reading difficulty or text complexity level. In mathematics, the stimulus might include a diagram for Geometry, a graph for data analysis, or a story problem.

The task model describes the structure of the prompt and the interaction type the student will use to compose their response. For a multiple-choice item, the task model would indicate the type of question to be posed, sometimes with sample text. That would be followed by the number of multiple choice options to be presented, the structure of the correct answer, and guidelines for composing appropriate distractors. Task models for constructed response would include the types of information to be provided and how the student should express their response.

The item specification concludes with guidelines about how the item will be scored including how to compose the rubric and scoring key. The rubric and scoring key focus on what evidence is required to demonstrate the student's skill and how that evidence is detected.

Smarter Balanced item specifications include references to the Depth of Knowledge that should be measured by the item, and guidelines on how to make the items accessible to students with disabilities. Smarter Balanced also publishes specifications for full performance tasks.

Data Form for Item Specifications

Like Content Specifications, Item Specifications have traditionally been published in document form. When offered online, they are typically in PDF format. And, as with Content Specifications, there are great benefits to be achieved by publishing item specifications in a structured data form. Doing so allows the item specification to be integrated into the item authoring system, presenting a template for the item with pre-filled content-specification alignment metadata, a pre-selected interaction type, and guidelines about the stimulus and prompt alongside the places where the author is to fill in the information.

Smarter Balanced has selected the IMS CASE format for publishing item specifications in structured form. This is the same data format we used for the content specifications.
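As a rough illustration of why the structured form matters, the sketch below shows a hypothetical, simplified item specification and how an authoring system could pre-fill an item template from it. The field names are invented for illustration; they are not the actual CASE data model.

```python
# Hypothetical, simplified representation of a structured item specification.
# Real Smarter Balanced specifications are published in IMS CASE; the field
# names below are illustrative only.
item_spec = {
    "spec_id": "ELA-RL-G5-TM1",
    "alignment": ["CCSS.ELA-LITERACY.RL.5.2"],   # content-spec / standard identifiers
    "interaction_type": "multiple_choice",
    "stimulus": {"type": "literary_passage", "word_count_range": [450, 700]},
    "prompt_guidelines": "Ask which sentence best states the theme of the passage.",
    "option_count": 4,
    "distractor_guidelines": "Plausible details from the passage that are not the theme.",
}

def new_item_template(spec: dict) -> dict:
    """Pre-fill an item authoring template from the structured specification."""
    return {
        "alignment": spec["alignment"],            # pre-filled metadata
        "interaction_type": spec["interaction_type"],
        "stimulus": None,                          # author supplies or selects a passage
        "prompt": "",                              # author writes, guided by the spec
        "options": [""] * spec["option_count"],
        "answer_key": None,
    }

template = new_item_template(item_spec)
print(template["interaction_type"], len(template["options"]))  # multiple_choice 4
```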

Data Form for Items

The only standardized format for assessment items in general use is IMS Question and Test Interoperability (QTI). It's a large standard with many features. Some organizations have chosen to implement a custom subset of QTI features known as a "profile." The soon-to-be-released QTI 3.0 aims to reduce divergence among profiles.
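For a flavor of what QTI items look like, here is a rough sketch of a QTI 2.1-style multiple-choice item and a few lines of Python that read its answer key. The element names and namespace follow QTI 2.1 as I understand it; treat the details as assumptions, since real items vary by version and profile.

```python
# Rough sketch: reading the answer key out of a QTI 2.1-style choice item.
# Assumes the common single-response pattern; real items and profiles vary.
import xml.etree.ElementTree as ET

QTI_NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}

QTI_ITEM = """<?xml version="1.0" encoding="UTF-8"?>
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="sample-item" title="Sum" adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse><value>ChoiceB</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" maxChoices="1">
      <prompt>What is the sum of 24 and 62?</prompt>
      <simpleChoice identifier="ChoiceA">76</simpleChoice>
      <simpleChoice identifier="ChoiceB">86</simpleChoice>
      <simpleChoice identifier="ChoiceC">96</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>"""

root = ET.fromstring(QTI_ITEM)
key = root.find(".//qti:correctResponse/qti:value", QTI_NS).text
print(key)  # ChoiceB
```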

A few organizations, including Smarter Balanced and CoreSpring, have been collaborating on the Portable Interactions and Elements (PIE) concept. This is a framework for packaging custom interaction types using Web Components. If successful, this will simplify the player software and support publishing of custom interaction types.

Quality Factors

A good item specification will likely be much longer than the items it describes. As a result, producing an item specification also requires a lot more work than writing any single item. But, since each item specification will result in dozens or hundreds of items, the effort of writing good item specifications pays huge dividends in the quality of the resulting assessment. The following factors contribute to quality item specifications and items:

  • Start with high-quality standards and content specifications.
  • Create task models that are authentic to the skills being measured. The task that the student is asked to perform should be as similar as possible to how they would manifest the measured skill in the real world.
  • Choose or write high-quality stimuli. For language arts items, the stimulus should demand the skills being measured. For non-language-arts items, the stimulus should be clear and concise so as to reduce sensitivity to student reading skill level.
  • Choose or create interaction types that are inherently accessible to students with disabilities.
  • Ensure that the correct answer is clear and unambiguous to a person who possesses the skills being measured.
  • Train item authors in the process of item writing. Sensitize them to common pitfalls such as using terms that may not be familiar to students of diverse ethnic backgrounds.
  • Use copy editors to ensure that language is consistent and parallel in structure and that expectations are clear.
  • Develop a review, feedback, and revision process for items before they are accepted.
  • Write specific quality criteria for reviewing items. Set up a review process in which reviewers apply the quality criteria and evaluate the match to the item specification.

Wrapup

Most tests and quizzes we take, whether in K-12 or college, are composed one question at a time based on the skills taught in the previous unit or course. Item specifications are rarely developed or consulted under these conditions, and even the learning objectives may be somewhat vague. Furthermore, there is little third-party review of such assessments. Considering the effort students go through to prepare for and take an exam, not to mention the consequences associated with their performance on those exams, it seems like institutions should do a better job.

Starting from an item specification is both easier and produces better results than writing an item from scratch. The challenge is producing the item specifications themselves, which is quite demanding. Just as achievement standards are developed at state or multi-state scale, so also could item specifications be jointly developed and shared broadly. As shown in the links above, Smarter Balanced has published its item specifications and many other organizations do the same. Developing and sharing item specifications will result in better quality assessments at all levels from daily quizzes to annual achievement tests.
