Of That

Brandt Redd on Education, Technology, Energy, and Trust

05 October 2018

Quality Assessment Part 6: Achievement Levels and Standard Setting

Two mountains, one with a flag on top.

This is part 6 of a series on building high-quality assessments.

If you have a child in U.S. public school, chances are that they took a state achievement test this past spring and sometime this summer you received a report on how they performed on that test. That report probably looks something like this sample of a California Student Score Report. It shows that "Matthew" achieved a score of 2503 in English Language Arts/Literacy and 2530 in Mathematics. Both scores are described as "Standard Met (Level 3)". Notably, in prior years Matthew was in the "Standard Nearly Met" category so his performance has improved.

The California School Dashboard offers reports of school performance according to multiple factors. For example, the Detailed Report for Castle View Elementary includes a graph of "Assessment Performance Results: Distance from Level 3".

Line graph showing performance of Lake Matthews Elementary on the English and Math tests for 2015, 2016, and 2017. In all three years, they score between 14 and 21 points above proficiency in math and between 22 and 40 points above proficiency in English.

To prepare this graph, they take the average difference between students' scale scores and the Level 3 standard for proficiency in the grade in which they were tested. For each grade and subject, California and Smarter Balanced use four achievement levels, each assigned to a range of scores. Here are the achievement levels for 5th grade Math (see this page for all ranges).

Level     Range                Descriptor
Level 1   Less than 2455       Standard Not Met
Level 2   2455 to 2527         Standard Nearly Met
Level 3   2528 to 2578         Standard Met
Level 4   Greater than 2578    Standard Exceeded

So, for Matthew and his fellow 5th graders, the Math standard for proficiency, or "Level 3" score, is 2528. Students at Lake Matthews Elementary, on average, exceeded the Math standard by 14.4 points on the 2017 tests.
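
To make the arithmetic concrete, here is a minimal sketch, in Python, of how a reporting system might map a Grade 5 Math scale score to an achievement level and compute its distance from Level 3 using the cut scores in the table above. This is an illustration, not code from any California or Smarter Balanced system.

```python
# Minimal sketch: map a Grade 5 Math scale score to an achievement level
# using the cut scores from the table above. Illustrative only.
GRADE5_MATH_LEVELS = [
    (2455, "Level 1", "Standard Not Met"),      # scores below 2455
    (2528, "Level 2", "Standard Nearly Met"),   # 2455-2527
    (2579, "Level 3", "Standard Met"),          # 2528-2578
    (None, "Level 4", "Standard Exceeded"),     # 2579 and above
]
LEVEL3_CUT = 2528

def achievement_level(scale_score):
    """Return (level, descriptor) for a scale score."""
    for upper_cut, level, descriptor in GRADE5_MATH_LEVELS:
        if upper_cut is None or scale_score < upper_cut:
            return level, descriptor

# Matthew's Math score of 2530 lands in "Standard Met (Level 3)",
# 2 points above the Level 3 cut score.
print(achievement_level(2530), 2530 - LEVEL3_CUT)
```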

Clearly, there are serious consequences associated with the assignment of scores to achievement levels. A difference of 10-20 points can make the difference between a school, or student, meeting or failing to meet the standard. Changes in proficiency rates can affect allocation of federal Title 1 funds, the careers of school staff, and even the value of homes in local neighborhoods.

More importantly to me, achievement levels must be carefully set if they are to provide reliable guidance to students, parents, and educators.

Standard Setting

Standard Setting is the process of assigning test score ranges to achievement levels. A score value that separates one achievement level from another is called a cut score. The most important cut score is the one that distinguishes between proficient (meeting the standard) and not proficient (not meeting the standard). For the California Math test, and for Smarter Balanced, that's the "Level 3" score but different tests may have different achievement levels.

When Smarter Balanced performed its standard setting exercise in October of 2014, it used the Bookmark Method. Smarter Balanced had conducted a field test the previous spring (described in Part 4 of this series). From those field test results, they calculated a difficulty level for each test item and converted that into a scale score. For each grade, a selection of approximately 70 items was sorted from easiest to most difficult. This sorted list of items is called an Ordered Item Booklet (OIB), though in the Smarter Balanced case the items were presented online. A panel of experts, composed mostly of teachers, went through the OIB starting at the beginning (easiest item) and set a bookmark at the item they believed represented proficiency for that grade. A proficient student should be able to answer all preceding items correctly but might have trouble with the items that follow the bookmark.

There were multiple iterations of this process for each grade, and the grade-to-grade consistency of the results was also reviewed. Panelists were given statistics on how many students in the field tests would be considered proficient at each proposed cut score. Following multiple review passes, the group settled on the recommended cut scores for each grade. The Smarter Balanced Standard Setting Report describes the process in great detail.
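
To make the mechanics concrete, here is a toy sketch of the bookmark arithmetic in Python. It assumes, as is common in bookmark standard setting (though conventions vary), that each item in the OIB carries a scale location, the scale score at which a student has roughly a two-thirds chance of answering it correctly, and that panelists' bookmark positions are aggregated with a median. The numbers are invented; this is not the Smarter Balanced procedure in detail.

```python
# Toy sketch of the bookmark method's arithmetic. Item locations and
# panelist bookmarks are invented; conventions vary across programs.
from statistics import median

# Ordered Item Booklet: items sorted easiest to hardest, each with a
# hypothetical scale location (the score at which a student has ~2/3
# chance of answering correctly).
oib_locations = [2391, 2412, 2436, 2455, 2470, 2489, 2503, 2521, 2528, 2544, 2560]

# Each panelist's bookmark: the 1-based position of the item they judge
# to represent proficiency.
panelist_bookmarks = [8, 9, 9, 10, 8, 9]

def recommended_cut_score(locations, bookmarks):
    """Aggregate bookmarks (median here) and read off the bookmarked
    item's scale location as the recommended cut score."""
    position = int(median(bookmarks))
    return locations[position - 1]

print(recommended_cut_score(oib_locations, panelist_bookmarks))  # 2528 in this toy example
```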

Data Form

For each subject and grade, the standard setting process results in cut scores representing the division between achievement levels. The cut scores for Grade 5 math, from the table above, are 2455, 2528, and 2579. Psychometricians also calculate the Highest Obtainable Scale Score (HOSS) and Lowest Obtainable Scale Score (LOSS) for the test.

I am not aware of any existing data format standard for achievement levels. Smarter Balanced publishes its achievement levels and cut scores on its web site. The Smarter Balanced test administration package format includes cut scores, HOSS, and LOSS, but not achievement level descriptors.

A data dictionary for publishing achievement levels would include the following elements:

Element                        Definition
Cut Score                      The lowest *scale score* included in a particular achievement level.
LOSS                           The lowest obtainable *scale score* that a student can achieve on the test.
HOSS                           The highest obtainable *scale score* that a student can achieve on the test.
Achievement Level Descriptor   A description of what an achievement level means. For example, "Met Standard" or "Exceeded Standard".
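
In the absence of a standard format, here is one hypothetical shape such a publication could take: a small JSON-style structure covering the elements in the table above. The field names and the LOSS/HOSS values are my own invention, not from any specification.

```python
# Hypothetical machine-readable publication of achievement levels for one
# subject and grade. Field names and LOSS/HOSS values are illustrative.
grade5_math_levels = {
    "subject": "Math",
    "grade": 5,
    "loss": 2219,   # Lowest Obtainable Scale Score (hypothetical value)
    "hoss": 2700,   # Highest Obtainable Scale Score (hypothetical value)
    "levels": [
        # cutScore = lowest scale score included in the level;
        # Level 1 starts at the LOSS, so it has no separate cut score.
        {"level": 1, "cutScore": None, "descriptor": "Standard Not Met"},
        {"level": 2, "cutScore": 2455, "descriptor": "Standard Nearly Met"},
        {"level": 3, "cutScore": 2528, "descriptor": "Standard Met"},
        {"level": 4, "cutScore": 2579, "descriptor": "Standard Exceeded"},
    ],
}
```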

Quality Factors

The stakes are high for standard setting. Reliable cut scores for achievement levels ensure that students, parents, teachers, administrators, and policy makers receive appropriate guidance for high-stakes decisions. If the cut scores are wrong, many decisions may be ill-informed. Quality is achieved by following a good process:

  • Begin with a foundation of high quality achievement standards, test items that accurately measure the standards, and a reliable field test.
  • Form a standard-setting panel composed of experts and grade-level teachers.
  • Ensure that the panelists are familiar with the achievement standards that the assessment targets.
  • Inform the panel with statistics regarding actual student performance on the test items.
  • Follow a proven standard-setting process.
  • Publish the achievement levels and cut scores in convenient human-readable and machine-readable forms.

Wrapup

Student achievement rates affect policies at state and national levels, direct budgets, impact staffing decisions, influence real estate values, and much more. Setting achievement level cut scores too high may set unreasonable expectations for students. Setting them too low may offer an inappropriate sense of complacency. Regardless, achievement levels are set on a scale calibrated to achievement standards. If the standards for the skills to be learned are not well-designed, or if the tests don't really measure the standards, then no amount of work on the achievement level cut scores can compensate.

14 September 2018

Quality Assessment Part 5: Blueprints and Computerized-Adaptive Testing

Arrows in a tree formation.

This is part 5 of a series on building high-quality assessments.

Molly is a 6th grade student who is already behind in math. Near the end of the school year she takes her state's annual achievement tests in mathematics and English Language Arts. Already anxious when she sits down to the test, her fears are confirmed by the first question where she is asked to divide 3/5 by 7/8. Though they spent several days on this during the year, she doesn't recall how to divide one fraction by another. As she progresses through the test, she is able to answer a few questions but resorts to guessing on all too many. After twenty minutes of this she gives up and just guesses on the rest of the questions. When her test results are returned a month later she gets the same rating as three previous years, "Needs Improvement." Perpetually behind, she decides that she is "just not good at math."

Molly is fictional but she represents thousands of students across the U.S. and around the world.

Let's try another scenario. In this case, Molly is given a Computerized-Adaptive Test (CAT). When she gets the first question wrong, the testing engine picks an easier question which she knows how to answer. Gaining confidence she applies herself to the next question which she also knows how to answer. The system presents easier and harder questions as it works to pinpoint her skill level within a spectrum extending back to 4th grade and ahead to 8th grade. When her score report comes she has a scale score of 2505 which is below the 6th grade standard of 2552. The report shows her previous year's score of 2423 which was well below standard for Grade 5. The summary says that, while Molly is still behind, she has achieved significantly more than a year's progress in the past year of school; much like this example of a California report.

Computerized-Adaptive Testing

A fixed-form Item Response Theory test presents a set of questions at a variety of skill levels centered on the standard for proficiency for the grade or course. Such tests result in a scale score, which indicates the student's proficiency level, and a standard error, which indicates the confidence in that scale score. A simplified explanation is that the student's actual skill level should be within the range of the scale score plus or minus the standard error. Because a fixed-form test is optimized for students near the mean, the standard error grows the further a student is from the target proficiency for that test.

Computerized Adaptive Tests (CAT) start with a large pool of assessment items. Smarter Balanced uses a pool of 1,200-1,800 items for a 40-item test. Each question is calibrated according to its difficulty within the range of the test. The test administration starts with a question near the middle of the range. From then on, the adaptive algorithm tracks the student's performance on prior items and then selects the questions most likely to discover, and increase confidence in, the student's skill level.

A stage-adaptive or multistage test is similar except that groups of questions are selected together.

CAT tests have three important advantages over fixed-form:

  • The test can measure student skill across a wider range while maintaining a small standard error.
  • Fewer questions are required to assess the student's skill level.
  • Students may have a more rewarding experience as the testing engine offers more questions near their skill level.

When you combine more accurate results with a broader measured range and then use the same test family over time, you can reliably measure student growth.

Test Blueprints

As I described in Part 2 and Part 3 of this series, each assessment item is designed to measure one or two specific skills. A test blueprint indicates what skills are to be measured in a particular test and how many items of which types should be used to measure each skill.

As an example, here's the blueprint for the Smarter Balanced Interim Assessment Block (IAB) for "Grade 3 Brief Writes":

Block 3: Brief Writes
Claim      Target                                    Items   Total Items
Writing    1a. Write Brief Texts (Narrative)         4       6
           3a. Write Brief Texts (Informational)     1
           6a. Write Brief Texts (Opinion)           1

This blueprint, for a relatively short fixed-form test, indicates a total of six items spread across one claim and three targets. For more examples, you can check out the Smarter Balanced Test Blueprints. The Summative Tests, which are used to measure achievement at the end of each year, have the most items and represent the broadest range of skills to be measured.

When developing a fixed-form test, the test producer will select a set of items that meets the requirements of the blueprint and represents an appropriate mix of difficulty levels.

For CAT tests it's more complicated. The test producer must select a much larger pool of items than will be presented to the student. A minimum is five to ten items in the pool for each item to be presented to the student. For summative tests, Smarter Balanced uses a ratio averaging around 25 to 1. These items should represent the skills to be measured in approximately the same ratios as they are represented in the blueprint. And they should represent difficulty levels across the range of skill to be measured. (Difficulty level is represented by the IRT b parameter of each item.)
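
To put that in data terms, here is a small hypothetical check, in Python, that an item pool roughly mirrors the blueprint's proportions and spans a range of difficulty. The targets, pool, and ratio threshold are invented for illustration.

```python
# Hypothetical check that a CAT item pool roughly mirrors the blueprint's
# proportions and spans a range of difficulty (IRT b). Illustrative only.
from collections import Counter

blueprint = {"TargetA": 4, "TargetB": 1, "TargetC": 1}   # items presented per target
MIN_POOL_RATIO = 5                                       # 5-10 pool items per presented item

# Each pool item: (target, IRT b parameter). A real pool would be far larger.
pool = (
    [("TargetA", b / 10) for b in range(-20, 21, 2)] +
    [("TargetB", b / 10) for b in range(-15, 16, 5)] +
    [("TargetC", b / 10) for b in range(-15, 16, 5)]
)

counts = Counter(target for target, _ in pool)
for target, presented in blueprint.items():
    ratio = counts[target] / presented
    print(f"{target}: {counts[target]} pool items for {presented} slots "
          f"(ratio {ratio:.1f}, ok={ratio >= MIN_POOL_RATIO})")

difficulties = [b for _, b in pool]
print("difficulty range:", min(difficulties), "to", max(difficulties))
```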

As the student progresses through the test, the CAT Algorithm selects the next item to be presented. In doing so, it takes into account three factors: 1. Information it has determined about the student's skill level so far, 2. How much of the blueprint has been covered so far and what it has yet to cover, and 3. The pool of items it has to select from. From those criteria it selects an item that will advance coverage of the blueprint and will improve measurement of the student's skill level.
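
Here is a deliberately simplified sketch of one such selection step in Python. It assumes the three-parameter IRT model described in Part 4 of this series and picks, among items on targets the blueprint still needs, the one that is most informative at the current ability estimate. Real CAT algorithms also manage item exposure, content constraints, and measurement precision; the item pool here is invented.

```python
# Simplified CAT next-item selection under a 3PL model. Not the Smarter
# Balanced algorithm; the item pool is hypothetical.
import math

def p_correct(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_correct(theta, a, b, c)
    return (a ** 2) * ((p - c) ** 2 / (1 - c) ** 2) * ((1 - p) / p)

# Hypothetical pool: (item_id, blueprint target, a, b, c)
pool = [
    ("item01", "TargetA", 1.1, -0.8, 0.20),
    ("item02", "TargetA", 0.9,  0.4, 0.25),
    ("item03", "TargetB", 1.3,  0.1, 0.20),
    ("item04", "TargetC", 0.8,  1.2, 0.25),
]

def next_item(theta_estimate, targets_still_needed, administered):
    """Choose the unadministered item on a still-needed target that is
    most informative at the current ability estimate."""
    candidates = [it for it in pool
                  if it[1] in targets_still_needed and it[0] not in administered]
    return max(candidates,
               key=lambda it: information(theta_estimate, it[2], it[3], it[4]))

print(next_item(0.2, {"TargetA", "TargetB"}, {"item01"}))
```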

Data Form

To present a CAT assessment the test engine needs three sets of data:

  • The Test Blueprint
  • A Catalog of all items in the pool. The entry for each item must specify its alignment to the test blueprint (which is equivalent to its alignment to standards), and its IRT Parameters.
  • The Test Items themselves.

Part 3 of this series describes formats for the items. The item metadata should include the alignment and IRT information. The manifest portion of IMS Content Packaging is one format for storing and transmitting item metadata.

To date, there is no standard or commonly-used data format for test blueprints. Smarter Balanced has published open specifications for its Assessment Packages. Of those, the Test Administration Package format includes the test blueprint and the item catalog. IMS CASE is designed for representing achievement standards but it may also be applicable to test blueprints.

IMS Global has formed an "IMS CAT Task Force" which is working on interoperable standards for Computerized Adaptive Testing. They anticipate releasing specifications later in 2018.

Quality Factors

A CAT Simulation is used to measure the quality of a Computerized Adaptive Test. These simulations use a set of a few thousand simulated students each assigned a particular skill level. The system then simulates each student taking the test. For each item, the item characteristic function is used to determine whether a student at that skill level is likely to answer correctly. The adaptive algorithm uses those results to determine which item to present next.

The simulation results show how well the CAT measured each simulated student's skill level by comparing test scores against the assigned skill levels. They are also used to ensure that the item pool has sufficient coverage, that the CAT algorithm satisfies the blueprint, and to find out which items get the most exposure. This feedback is used to tune the item pool and the configuration of the CAT algorithm to achieve optimal results across the simulated population of students.
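
A bare-bones version of that simulation loop, in Python, might look like the sketch below. The items and the ability-update rule are intentionally crude and hypothetical; a real CAT simulation would use proper ability estimation and would also track blueprint coverage, exposure rates, and standard errors.

```python
# Bare-bones CAT simulation sketch: simulated students with known abilities
# answer items probabilistically via the item characteristic function.
# Items and the crude ability update are hypothetical.
import math, random

def p_correct(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

pool = [(1.0, b / 4, 0.0) for b in range(-8, 9)]   # (a, b, c); no guessing, for simplicity

def simulate_student(true_theta, rng, test_length=10):
    theta_hat, administered = 0.0, []
    for _ in range(test_length):
        # pick the unused item whose difficulty is closest to the current estimate
        item = min((it for it in pool if it not in administered),
                   key=lambda it: abs(it[1] - theta_hat))
        administered.append(item)
        correct = rng.random() < p_correct(true_theta, *item)
        # crude update: step up or down by a shrinking amount (not a real estimator)
        theta_hat += (0.5 if correct else -0.5) / math.sqrt(len(administered))
    return theta_hat

rng = random.Random(42)
for true_theta in (-1.5, 0.0, 1.5):
    estimates = [simulate_student(true_theta, rng) for _ in range(500)]
    print(true_theta, round(sum(estimates) / len(estimates), 2))  # means should move toward the true abilities
```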

To build a high-quality CAT assessment:

  • Build a large item pool with items of difficulty levels spanning the range to be measured.
  • Design a test blueprint that focuses on the skills to be measured and correlates with the overall score and the subscores to be reported.
  • Ensure that the adaptive algorithm effectively covers the blueprint and also homes in on each student's skill level.
  • Perform CAT simulations to tune the effectiveness of the item pool, blueprint, and CAT algorithm.

Wrapup

Computerized adaptive testing offers significant benefits to students by delivering more accurate measures with a shorter, more satisfying test. CAT is best suited to larger tests with 35 or more questions spread across a broad blueprint. Shorter tests, focused on mastery of one or two specific skills, may be better served by conventional fixed-form tests.

01 September 2018

Quality Assessment Part 4: Item Response Theory, Field Testing, and Metadata

Drafting tools - triangle, compass, ruler.

This is part 4 of a multi-part series on building high-quality assessments.

Consider a math quiz with the following two items:

Item A:

x = 5 - 2 What is the value of x?

Item B:

x² - 6x + 9 = 0 What is the value of x?

George gets item A correct but gets the wrong answer for item B. Sally has the wrong answer for A but answers B correctly. Using traditional scoring, George and Sally each get 50%.

A more sophisticated quiz might assign 2 points to item A and 6 points to item B (recognizing that B is harder than A). Under such a scoring system, George would get 25% and Sally would get 75%.

But the score is still short on meaning. George scored 25% of what? Sally scored 75% of what?

An even more sophisticated model should acknowledge that knowing how to solve quadratics (item B) is evidence that the student can also perform subtraction (item A). Such a model would position George somewhere between first grade (single-digit subtraction) and High School (solving quadratics). That same model would indicate that Sally either guessed correctly on item B or made a mistake on item A that's not representative of her skill. Due to the conflicting evidence, we are less sure about Sally's skill level than George's. For both students, more items would be required to gain greater confidence in their skill levels.

Item Response Theory

Item Response Theory or IRT is a statistical method for describing how student performance on assessment items relates to their skill in the area the item was designed to measure.

The "three parameter logistic model" (3PL) for IRT describes the probability that a student of a certain skill level will answer the item correctly. Student proficiency is represented by θ (theta) and the three item parameters are a, b, and c. They represent the following factors:

  • a = Discrimination. This value indicates how well the item discriminates between proficient students and those who have not yet learned this skill.
  • b = Difficulty. This value indicates how difficult an item is for the student to answer correctly.
  • c = Guessing. The probability that a student might guess the correct response. For a four-option multiple-choice question, this would be 0.25 because the student has a one-in-four chance of guessing the right answer.

From these parameters we can create an item characteristic curve. The formula is as follows:

formula: p = c + (1 - c) / (1 + e^(-a(θ - b)))

This is much easier to understand in graph form. So I loaded it into the Desmos graphing calculator.

The vertical (y) axis indicates the probability that a student will answer the item correctly. The horizontal (x) axis is student proficiency (represented by θ in the equation). You can move the sliders to change the a, b, and c parameters and see how different items would be represented in an item characteristic curve.

In addition to this "three-parameter" model, there are other IRT models, but they all follow the same basic premise: The function represents the probability that a student of given skill (represented by θ, theta) will answer the question correctly. At least one parameter of the function represents the difficulty of the question. For items scored on a multi-point scale, there are difficulty parameters (typically d1, d2, etc.) representing the difficulty thresholds for each point value.
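
For readers who prefer code to formulas, here is the three-parameter function above as a short Python sketch; the example parameter values are arbitrary.

```python
# The 3PL item characteristic function from the formula above.
import math

def p_correct(theta, a, b, c):
    """Probability that a student of ability theta answers the item correctly."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# An item of average difficulty (b = 0), moderate discrimination (a = 1),
# and a 25% guessing floor (a four-option multiple-choice item):
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.0, b=0.0, c=0.25), 3))
```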

Scale Scores

The difficulty parameter b and the student skill value θ are on the same logistic scale, centered on the skill level being measured. For example, if an item is written for grade 5 math, a b parameter of 0 means that the average 5th grade student should be able to answer the question correctly 50% of the time.

Most assessments convert from this theta score into a scale score which is a consistent score reported to educators, students, and parents. For Smarter Balanced, the scale score ranges from 2000 to 3000 and represents skill levels from Kindergarten to High School Graduation. Theta scores are converted to scale scores using a polynomial function.
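
As a sketch of that last step, a conversion might look like the following; the coefficients here are invented for illustration, since each program publishes its own transformation in its technical documentation.

```python
# Hypothetical theta-to-scale-score conversion. Coefficients are invented;
# real programs publish their own transformation.
def scale_score(theta, coefficients=(2500.0, 90.0)):
    """Evaluate a polynomial in theta: c0 + c1*theta + c2*theta**2 + ..."""
    return round(sum(c * theta ** i for i, c in enumerate(coefficients)))

print(scale_score(-0.5), scale_score(0.0), scale_score(0.3))
```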

Field Testing

So how do we come up with the a, b, and c parameters for a particular item? Based on the item type and potential responses we can predict c (guessing) fairly well, but our experience at Smarter Balanced has shown that authors are not very good at predicting b (difficulty) or a (discrimination). To get an objective measure of these values we use a field test.

In Spring 2014 Smarter Balanced held a field test in which 4.2 million students completed a test - typically in either English Language Arts or Mathematics. Some students took both. For the participating schools and students, this was a practice test - gaining experience in administering and taking tests. Since the items were not yet calibrated, we could not reliably score the tests. For Smarter Balanced it offered critical data on more than 19,000 test items. For each item we gained more than 10,000 scored responses from students representing the target grades across all demographics.

Psychometricians used these data from students taking the test to calculate the parameters (a, b, and c) for each item in the field test. The process of calculating IRT parameters from field test data is called calibration. Once items were calibrated we examined the parameters and the data to determine which items should be approved for use in tests. For example, if a is too low then the question likely has a flaw. It may not measure the right skill or the answer key may be incorrect. Likewise, if the b parameter is different across demographic groups then the item may be sensitive to gender, cultural, or ethnic bias. Items from the field test that met statistical standards were approved and became the initial bank of items from which Smarter Balanced produces tests.
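
To give a flavor of what calibration involves computationally, here is a toy sketch that fits the a, b, and c parameters of a single item by maximum likelihood, pretending that each student's ability is already known. Real calibration, such as the marginal maximum likelihood methods psychometricians actually use, estimates abilities and item parameters jointly across thousands of students and items; this only illustrates the core idea.

```python
# Toy calibration sketch: fit 3PL parameters (a, b, c) for one item by
# maximum likelihood, pretending respondent abilities are already known.
# Real calibration estimates abilities and item parameters jointly.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_a, true_b, true_c = 1.2, 0.4, 0.2

thetas = rng.normal(0.0, 1.0, size=5000)                    # simulated abilities
p_true = true_c + (1 - true_c) / (1 + np.exp(-true_a * (thetas - true_b)))
responses = (rng.random(5000) < p_true).astype(float)       # simulated 0/1 scores

def neg_log_likelihood(params):
    a, b, c = params
    p = c + (1 - c) / (1 + np.exp(-a * (thetas - b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0, 0.25], method="L-BFGS-B",
               bounds=[(0.1, 3.0), (-3.0, 3.0), (0.0, 0.5)])
print(fit.x)   # should land near (1.2, 0.4, 0.2), within sampling noise
```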

Each year Smarter Balanced does an embedded field test. Each test that a student takes has a few new "field test" items included. These items do not contribute to the student's test score. Rather, the students' scored responses are used to calibrate the items. This way the test item bank is being constantly renewed. Other organizations like ACT and SAT follow the same practice of embedding field test questions in regular tests.

To understand more about IRT, I recommend A Simple Guide to IRT and Rasch by Ho Yu.

Item Metadata

The IRT parameters, alignment to standards, and other critical information are collected as metadata about each item. In most cases, metadata is represented as a set of name-value pairs. There are many formats for representing metadata and also many dictionaries of field definitions. Smarter Balanced uses the metadata structure from IMS Content Packaging and draws field definitions from The Learning Resource Metadata Initiative (LRMI), from Schema.org, and from Common Education Data Standards (CEDS).

Here are some of the most critical metadata elements for assessment items with links to their definitions in those standards:

  • Identifier: A number that uniquely identifies this item.
  • PrimaryStandard: An identifier of the principal skill the item is intended to measure. The skill would be described in an Achievement Standard or Content Specification.
  • SecondaryStandard: Optional identifiers of additional Achievement Standards or Content Specifications that the item measures.
  • InteractionType: The type of interaction (multiple choice, matching, short answer, essay, etc.).
  • IRT Parameters: The a, b, and c parameters or another parameter set for the Item Response Theory function.
  • History: A record of when and how the item has been used to estimate how much it has been exposed.
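
As an illustration, a metadata record covering the elements above might look like this. The field names, identifier, and values are invented; the primary standard uses the Smarter Balanced content specification identifier format described in Part 2 of this series.

```python
# Hypothetical item metadata record covering the elements listed above.
# Identifiers and values are invented for illustration.
item_metadata = {
    "Identifier": "200-12345",
    "PrimaryStandard": "M.G5.C1OA.TA.5.OA.A.1",    # Smarter Balanced-style identifier (see Part 2)
    "SecondaryStandard": [],                        # optional additional alignments
    "InteractionType": "Multiple Choice",
    "IRTParameters": {"model": "3PL", "a": 1.1, "b": -0.35, "c": 0.25},
    "History": [
        {"administration": "2017-18 summative", "exposures": 10412},
        {"administration": "2018-19 interim",   "exposures": 3275},
    ],
}
```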

Quality Factors

States, schools, assessment consortia, and assessment companies all maintain banks of assessment items from which they construct their assessments. There are a number of efforts underway to pool resources from multiple entities into large, joint item banks. The value of items in any such bank is multiplied tenfold if the items have consistent and reliable metadata regarding alignment to standards and IRT parameters.

Here are factors to consider related to IRT Calibration and Metadata:

  • Are all items field-tested and calibrated before they are used in an operational test?
  • Is alignment to standards and content specifications an integral part of item writing?
  • Are the identifiers used to record alignment consistent across the entire item bank?
  • Is field testing an integral part of the assessment design?
  • Are IRT parameters consistent and comparable across the entire bank?
  • When sharing items or an item bank across multiple organizations, do all participants agree to contribute data (field testing and operational use) back to the bank?

Wrapup

Field testing can be expensive, inconvenient, or both. But without actual data from student performance we have no objective evidence that a particular assessment item measures what it's intended to measure at the expected level of difficulty.

The challenges around field testing, combined with the lack of training in IRT and related psychometrics, have kept these measures from being used in anything other than large-scale, high-stakes tests. Nevertheless, it's concerning to me that final exams and midterms of great consequence are rarely, if ever, calibrated and validated. Greater collaboration among institutions, among curriculum developers, or both could achieve sufficient scale for calibrated tests to become more common.

23 August 2018

Quality Assessment Part 3: Items and Item Specifications

This is part 3 of a multi-part series on building high-quality assessments.
Transparent cylindrical vessel with wires leading to an electric spark inside.

Some years ago I remember reading my middle school science textbook. The book was attempting to describe the difference between a mixture and a compound. It explained that water is a compound of two parts hydrogen and one part oxygen. However, if you mix two parts hydrogen and one part oxygen in a container, you will simply have a container with a mixture of the two gasses; they will not spontaneously combine to form water.

So far, so good. Next, the book said that if you introduced an electric spark in the mixed gasses you would, "start to see drops of water appear on the inside surface of the container as the gasses react to form water." This was accompanied by an image of a container with wires and an electric spark.

I suppose the book was technically correct; that is what would happen if the container was strong enough to contain the violent explosion. But, even as a middle school student, I wondered how the dangerously misleading passage got written and how it survived the review process.

The writing and review of assessments requires the same or better rigor than writing textbooks. An error on an assessment item affects the evaluation of all students who take the test.

Items

In the parlance of the assessment industry, test questions are called items. The latter term is intended to include more complex interactions than just answering questions.

Stimuli and Performance Tasks

Oftentimes, an item is based on a stimulus or passage that sets up the question. It may be an article, short story, or description of a math or science problem. The stimulus is usually associated with three to five items. When presented by computer, the stimulus and the associated items are usually presented on one split screen so that the student can refer to the stimulus while responding to the items.

Sometimes, item authors will write the stimulus; this is frequently the case for mathematics stimuli as they set up a story problem. But the best items draw on professionally-written passages. To facilitate this, the Copyright Clearance Center has set up the Student Assessment License as a means to license copyrighted materials for use in student assessment.

A performance task is a larger-scale activity intended to allow the student to demonstrate a set of related skills. Typically, it begins with a stimulus followed by a set of ordered items. The items build on each other usually finishing with an essay that asks the student to draw conclusions from the available information. For Smarter Balanced this pattern (stimulus, multiple items, essay) is consistent across English Language Arts and Mathematics.

Prompt or Stem

The prompt, sometimes called a stem, is the request for the student to do something. A prompt might be as simple as, "What is the sum of 24 and 62?" Or it might be as complex as, "Write an essay comparing the views of the philosophers Voltaire and Kant regarding enlightenment. Include quotes from each that relate to your argument." Regardless, the prompt must provide the required information and clearly describe what the student is to do and how they are to express their response.

Interaction or Response Types

The response is a student's answer to the prompt. Two general categories of items are selected response and constructed response. Selected response items require the student to select one or more alternatives from a set of pre-composed responses. Multiple choice is the most common selected response type but others include multi-select (in which more than one response may be correct), matching, true/false, and others.

Multiple choice items are particularly popular due to the ease of recording and scoring student responses. For multiple choice items, alternatives are the responses that a student may select from, distractors are the incorrect responses, and the answer is the correct response.

The most common constructed response item types are short answer and essay. In each case, the student is expected to write their answer. The difference is the length of the answer; short answer is usually a word or phrase while essay is a composition of multiple sentences or paragraphs. A variation of short answer may have a student enter a mathematical formula. Constructed responses may also have students plot information on a graph or arrange objects into a particular configuration.

Technology-Enhanced items are another commonly-used category. These items are delivered by computer and include simulations, composition tools, and other creative interactions. However, all technology-enhanced items can still be categorized as either selected response or constructed response.

Scoring Methods

There are two general ways of scoring items, deterministic scoring and probabilistic scoring.

Deterministic scoring is indicated when a student's response may be unequivocally determined to be correct or incorrect. When a response is scored on multiple factors there may be partial credit for the factors the student addressed correctly. Deterministic scoring is most often associated with selected response items but many constructed response items may also be deterministically scored when the factors of correctness are sufficiently precise; such as a numeric answer or a single word for a fill-in-the-blank question. When answers are collected by computer or are easily entered into a computer, deterministic scoring is almost always done by computer.
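
As a small illustration of deterministic scoring with partial credit, here is a hypothetical two-factor scoring function; the answer key and point values are invented.

```python
# Hypothetical deterministic scoring of a two-part response with partial
# credit: one point per correctly answered factor. Key values are invented.
def score_response(response):
    key = {"quotient": "24", "remainder": "3"}
    return sum(1 for part, correct in key.items()
               if response.get(part, "").strip() == correct)

print(score_response({"quotient": "24", "remainder": "5"}))   # partial credit: 1 of 2
print(score_response({"quotient": "24", "remainder": "3"}))   # full credit: 2 of 2
```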

Probabilistic scoring is indicated when the quality of a student's answer must be judged on a scale. This is most often associated with essay type questions but may also apply to other constructed response forms. When handled well, a probabilistic score may include a confidence level — how confident is the scoring person or system that the score is correct.

Probabilistic scoring may be done by humans (e.g. judging the quality of an essay) or by computer. When done by computer, Artificial Intelligence techniques are frequently used with different degrees of reliability depending on the question type and the quality of the AI.

Answer Keys and Rubrics

The answer key is the information needed to score a selected-response item. For multiple choice questions, it's simply the letter of the correct answer. A machine scoring key or machine rubric is an answer key coded in such a way that a computer can perform the scoring.

The rubric is a scoring guide used to evaluate the quality of student responses. For constructed response items the rubric will indicate which factors should be evaluated in the response and what scores should be assigned to each factor. Selected response items may also have a rubric which, in addition to indicating which response is correct, would also give an explanation about why that response is correct and why each distractor is incorrect.

Item Specifications

An item specification describes the skills to be measured and the interaction type to be used. It serves as both a template and a guide for item authors.

The skills should be expressed as references to the Content Specification and associated Competency Standards (see Part 2 of this series). A consistent identifier scheme for the Content Specification and Standards greatly facilitates this. However, to assist item authors, the specification often quotes relevant parts of the specification and standards verbatim.

If the item requires a stimulus, the specification should describe the nature of the stimulus. For ELA, that would include the type of passage (article, short-story, essay, etc.), the length, and the reading difficulty or text complexity level. In mathematics, the stimulus might include a diagram for Geometry, a graph for data analysis, or a story problem.

The task model describes the structure of the prompt and the interaction type the student will use to compose their response. For a multiple choice item, the task model would indicate the type of question to be posed, sometimes with sample text. That would be followed by the number of multiple choice options to be presented, the structure for the correct answer, and guidelines for composing appropriate distractors. Task models for constructed response would include the types of information to be provided and how the student should express their response.

The item specification concludes with guidelines about how the item will be scored including how to compose the rubric and scoring key. The rubric and scoring key focus on what evidence is required to demonstrate the student's skill and how that evidence is detected.

Smarter Balanced content specifications include references to the Depth of Knowledge that should be measured by the item, and guidelines on how to make the items accessible to students with disabilities. Smarter Balanced also publishes specifications for full performance tasks.

Data Form for Item Specifications

Like Content Specifications, Item Specifications have traditionally been published in document form. When offered online they are typically in PDF format. And, like Content Specifications, there are great benefits to be achieved by publishing item specifications in a structured data form. Doing so can integrate the item specification into the item authoring system — presenting a template for the item with pre-filled content-specification alignment metadata, a pre-selected interaction type, and guidelines about the stimulus and prompt alongside the places where the author is to fill in the information.

Smarter Balanced has selected the IMS CASE format for publishing item specifications in structured form. This is the same data format we used for the content specifications.

Data Form for Items

The only standardized format for assessment items in general use is IMS Question and Test Interoperability (QTI). It's a large standard with many features. Some organizations have chosen to implement a custom subset of QTI features known as a "profile." The soon-to-be-released QTI 3.0 aims to reduce divergence among profiles.

A few organizations, including Smarter Balanced and CoreSpring have been collaborating on the Portable Interactions and Elements (PIE) concept. This is a framework for packaging custom interaction types using Web Components. If successful, this will simplify the player software and support publishing of custom interaction types.

Quality Factors

A good item specification will likely be much longer than the items it describes. As a result, producing an item specification also consumes a lot more work than writing any single item. But, since each item specification will result in dozens or hundreds of items, the effort of writing good item specifications pays huge dividends in terms of the quality of the resulting assessment.

  • Start with good quality standards and content specifications
  • Create task models that are authentic to the skills being measured. The task that the student is asked to perform should be as similar as possible to how they would manifest the measured skill in the real world.
  • Choose or write high-quality stimuli. For language arts items, the stimulus should demand the skills being measured. For non-language-arts items, the stimulus should be clear and concise so as to reduce sensitivity to student reading skill level.
  • Choose or create interaction types that are inherently accessible to students with disabilities.
  • Ensure that the correct answer is clear and unambiguous to a person who possesses the skills being measured.
  • Train item authors in the process of item writing. Sensitize them to common pitfalls such as using terms that may not be familiar to students of diverse ethnic backgrounds.
  • Use copy editors to ensure that language use is consistent, parallel in structure, and that expectations are clear.
  • Develop a review, feedback, and revision process for items before they are accepted.
  • Write specific quality criteria for reviewing items. Set up a review process in which reviewers apply the quality criteria and evaluate the match to the item specification.

Wrapup

Most tests and quizzes we take, whether in K-12 or college, are composed one question at a time based on the skills taught in the previous unit or course. Item specifications are rarely developed or consulted in these conditions and even the learning objectives may be somewhat vague. Furthermore, there is little third-party review of such assessments. Considering the effort students go through to prepare for and take an exam, not to mention the consequences associated with their performance on those exams, it seems like institutions should do a better job.

Starting from an item specification is both easier and produces better results than writing an item from scratch. The challenge is producing the item specifications themselves, which is quite demanding. Just as achievement standards are developed at state or multi-state scale, so also could item specifications be jointly developed and shared broadly. As shown in the links above, Smarter Balanced has published its item specifications and many other organizations do the same. Developing and sharing item specifications will result in better quality assessments at all levels from daily quizzes to annual achievement tests.

11 August 2018

Quality Assessment Part 2: Standards and Content Specifications

Mountain with Flag on Summit

In Part 1 of this series I introduced the factors that distinguish a high quality assessment from other assessments. The balance of this series will discuss the process of constructing an assessment and the factors that make them high quality. Today I'm writing about Achievement Standards and the Content Specification.

Some years ago my sister was in middle school and I had just finished my freshman year at college. My sister's English teacher kept assigning word search puzzles and she hated them. The family had just purchased an Apple II clone and so I wrote a program to solve word searches for my sister. I'm not sure what skills her teacher was trying to develop with the puzzles; but I developed programming skills and my sister learned to operate a computer. Both skill sets have served us later in life.

Alignment to Standards

The first step in building any assessment, from a quiz to a major exam, should be to determine what you are trying to measure. In the case of academic assessments, we measure skills, also known as competencies. State standards are descriptions of specific competencies that a student should have achieved by the end of the year. They are organized by subject and grade. State summative tests indicate student achievement by measuring how close each student is to the state standards — typically at the close of the school year. Interim tests can be used during the school year to measure progress and to offer more detailed focus on specific skill areas.

At the higher education level, Colleges and Universities set Learning Objectives for each course. A common practice is to use the term "competencies" as a generic reference to state standards and college learning objectives, and I'll follow that pattern here.

The Smarter Balanced Assessment Consortium, where I have been working, measures student performance relative to the Common Core State Standards. Choosing standards that have been adopted by multiple states enables us to write one assessment that meets the needs of all our member states and territories.

The Content Specification

The content specification is a restatement of competencies organized in a way that facilitates assessment. Related skills are clustered together so that performance measures on related tasks may be aggregated. For example, Smarter Balanced collects skill measures associated with "Reading Literary Texts" and "Reading Informational Texts" together into a general evaluation of "Reading". In contrast, a curriculum might cluster "Reading Literary Texts" with "Creative Writing" because synergies occur when you teach those skills together.

The Smarter Balanced content specification follows a hierarchy of Subject, Grade, Claim, and Target. In Mathematics, the four claims are:

  1. Concepts and Procedures
  2. Problem Solving
  3. Communicating Reasoning
  4. Modeling and Data Analysis

In English Language Arts, the four claims are:

  1. Reading
  2. Writing
  3. Speaking & Listening
  4. Research & Inquiry

These same four claims are repeated in each grade but the expected skill level increases. That increase in skill is represented by the targets assigned to the claims at each grade level. In English Reading (Claim 1), the complexity of the text presented to the student increases and the information the student is expected to draw from the text is increasingly demanding. Likewise, in Math Claim 1 (Concepts and Procedures) the targets progress from simple arithmetic in lower grades to Geometry and Trigonometry in High School.

Data Form

Typical practice is for states to publish their standards as documents. When published online, they are typically PDF files. Such documents are human readable but they lack the structure needed for data systems to facilitate access. In many cases they also lack identifiers that are required when referencing standards or content specifications.

Most departments within colleges and universities will develop a set of learning objectives for each course. Often times a state college system will develop statewide objectives. While these objectives are used internally for course design, there's little consistency in publishing the objectives. Some institutions publish all of their objectives while others keep them as internal documents. The Temple University College of Liberal Arts offers an example of publicly published learning objectives in HTML form.

In August 2017, IMS Global published the Competencies & Academic Standards Exchange (CASE) data standard. It is a vendor-independent format for publishing achievement standards suitable for course learning objectives, state standards, content specifications, and many other competency frameworks.

Public Consulting Group, in partnership with several organizations, built OpenSALT, an open source "Standards Alignment Tool," as a reference implementation of CASE.

Here's an example. Smarter Balanced originally published its content specifications in PDF form. The latest versions, from July of 2017, are available on the Development and Design page of their website. These documents have complete information but they do not offer any computer-readable structure.

"Boring" PDF form of Smarter Balanced Content Specifications:

In Spring 2018, Smarter Balanced published the same specifications, in CASE format, using the OpenSALT tool. The structure of the format lets you navigate the hierarchy of the specifications. The CASE format also supports cross-references between publications. In this case, Smarter Balanced also published a rendering of the Common Core State Standards in CASE format to facilitate references from the content specifications to the corresponding Common Core standards.

"Cool" CASE form of Smarter Balanced Content Specifications and CCSS:

I hope you agree that the Standards and Content Specifications are significantly easier to navigate in their structured form. Smarter Balanced is presently working on a "Content Specification Explorer" which will offer a friendlier user interface on the structured CASE data.

Identifiers

Regardless of how they are published, use of standards is greatly facilitated if an identifier is assigned to each competency. There are two general categories of identifiers: Opaque identifiers carry no meaning - they are just a number. Often they are "Universally Unique IDs" (UUIDs), which are generated using an algorithm to assure that the identifier is not used anywhere else in the world. Any meaning of the identifier is by virtue of the record to which it is assigned. "Human Readable" identifiers are constructed to have a meaningful structure to a human reader. There are good justifications for each approach.

The Common Core State Standards assigned both types of identifier to each standard. Smarter Balanced has followed a similar practice in the identifiers for our Content Specification.

Common Core State Standards Example:

  • Opaque Identifier: DB7A9168437744809096645140085C00
  • Human Readable Identifier: CCSS.Math.Content.5.OA.A.1
  • URL: http://corestandards.org/Math/Content/5/OA/A/1/
  • Statement: Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.

Smarter Balanced Content Specification Target Example:

You'll notice that the Smarter Balanced Content Specification target is a copy of the corresponding Common Core State Standard. The CASE representation includes an "Exact Match Of" cross-reference from the content specification to the corresponding standard to show that's the case.

Smarter Balanced has published a specification for its human-readable Content Specification Identifiers. Here's the interpretation of "M.G5.C1OA.TA.5.OA.A.1":

  • M Math
  • G5 Grade 5
  • C1 Claim 1
  • OA Domain OA (Operations & Algebraic Thinking)
  • TA Target A
  • 5.OA.A.1 CCSS Standard 5.OA.A.1
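
Because the identifier has a documented structure, software can unpack it. Here is a small parser sketch following the interpretation above; the returned field names are my own.

```python
# Sketch: unpack a Smarter Balanced human-readable Content Specification
# identifier per the interpretation above. Returned field names are mine.
def parse_target_id(identifier):
    subject, grade, claim_domain, target, *ccss = identifier.split(".")
    return {
        "subject": subject,               # "M"  -> Math
        "grade": grade,                   # "G5" -> Grade 5
        "claim": claim_domain[:2],        # "C1" -> Claim 1
        "domain": claim_domain[2:],       # "OA" -> Operations & Algebraic Thinking
        "target": target,                 # "TA" -> Target A
        "ccss_standard": ".".join(ccss),  # "5.OA.A.1"
    }

print(parse_target_id("M.G5.C1OA.TA.5.OA.A.1"))
```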

Quality Factors

The design of any educational activity should begin with a set of learning objectives. State Standards offer a template for curricula, lesson plans, assessments, supplemental materials, games and more. At the higher education level, Colleges and Universities set learning objectives for each course that serve a similar purpose. The quality of the achievement standards will have a fundamental impact on the quality of the related learning activities.

Factors to consider when selecting or building standards or learning objectives include the following:

  • Are the competencies relevant to the discipline being taught?
  • Are the competencies parallel in construction, describing skills at a similar grain size?
  • Are the skills ordered in a natural learning progression?
  • Are related skills, such as reading and writing, taught together in a coordinated fashion?
  • Is the amount of material covered by the competencies appropriate for the amount of time that will be allocated for learning?

The Development Process and the Standards-Setting Criteria used by the authors of the Common Core State Standards offer some insight into how they sought to develop high quality standards.

Factors to consider when developing an assessment content specification include the following:

  • Does the specification reference an existing standard or competency set?
  • Are the competencies described in such a way that they can be measured?
  • Is the grain size (the amount of knowledge involved) for each competency optimal for construction of test questions?
  • Are the competencies organized so that related skills are clustered together?
  • Does the content standard factor in dependencies between competencies? For example, performing long division is evidence that an individual is also competent at multiplication.
  • Is the organization of the competencies, typically into a hierarchy, consistent and easy to navigate?
  • Does the competency set lend itself to reporting skills at multiple levels? For example, Smarter Balanced reports an overall ELA score and then subscores for each claim: Reading, Writing, Speaking & Listening, and Research & Inquiry.

Wrapup

Compared with curricula, standards and content specifications are relatively short documents. The Common Core State Standards total 160 pages, much less than the textbook for a single grade. But standards have a disproportionate impact on all learning activities within the state, college, or class where they are used. Careful attention to the selection or construction of standards is a high-impact effort.

02 August 2018

Quality Assessment Part 1: Quality Factors

Flask

As I wrap up my service at the Smarter Balanced Assessment Consortium I am reflecting on what we've accomplished over the last 5+ years. We've assembled a full suite of assessments; we built an open source platform for assessment delivery; and multiple organizations have endorsed Smarter Balanced as more rigorous and better aligned to state standards than prior state assessments.

So, what are the characteristics of a high-quality assessment? How do you go about constructing such an assessment? And what distinguishes an assessment like Smarter Balanced from a typical quiz or exam that you might have in class?

That will be the subject of this series of posts. Starting from the achievement standards that guide construction of both curriculum and assessment I will walk through the process Smarter Balanced and other organizations use to create standardized assessments and then indicate the extra effort required to make them both standardized and high quality.

But, to start with, we must define what quality means — at least in the context of an assessment.

Goal of a Quality Assessment

Nearly a year ago the Smarter Balanced member states released test scores for 2017. In most states the results were flat — with little or no improvement from 2016. It was a bit disappointing but what surprised me at the time was the criticism directed at the test. "The test must be flawed," certain critics said, "because it didn't show improvement."

This seemed like a strange criticism to direct at the measurement instrument. If you stick your hand in an oven and it doesn't feel warm do you wonder why your hand is numb or do you check the oven to see if it is working? Both are possibilities but I expect you would check the oven first.

The more I thought about it, however, the more I realized that the critics have a point. Our purpose in deploying assessments is to improve student learning, not just to passively measure learning. The assessment is a critical part of the educational feedback loop.

Smarter Balanced commissioned an independent study and confirmed that the testing instrument is working properly. Nevertheless, there are more things that the assessment system can do to support better learning.

Features of a Quality Assessment

So, we define a quality assessment as one that consistently contributes to better student learning. What are the features of an assessment that does this?

  • Valid: The test must measure the skills it is intended to measure. That requires us to start with a taxonomy of skills — typically called achievement standards or state standards. The quality of the standards also matters, of course, but that's the subject of a different blog post. A valid test should be relatively insensitive to skills or characteristics it is not intended to measure. For example, it should be free of ethnic or cultural bias.
  • Reliable: The test should consistently return the same results for students of the same skill level. Since repeated tests may not be composed of the same questions, the measures must be calibrated to ensure they return consistent results. And the test must accurately measure growth of a student when multiple tests are given over an interval of time.
  • Timely: Assessment results must be provided in time to guide future learning activities. Summative assessments, big tests near the end of the school year, are useful but they must be augmented with interim assessments and formative activities that happen at strategic times during the school year.
  • Informative: If an assessment is to support improved learning, the information it offers must be useful for guiding the next steps in a student's learning journey.
  • Rewarding: Test anxiety has been the downfall of many well-intentioned assessment programs. Not only does anxiety interfere with the reliability of results but inappropriate consequences to teachers can encourage poor instructional practice. By its nature, the testing process is demanding of students. Upon completion, their effort should be rewarded with a feeling that they've achieved something important.

Watch This Space

In the coming weeks, I will describe the processes that go into constructing quality assessments. Because I'm a technology person, I'll include discussions of how data and technology standards support the work.

09 June 2018

A Brief History of Copyright

In the early 2000s I began writing a book titled Frictionless Media. The subject was business models for digital and online media. My thesis was that digital media is naturally frictionless — naturally easy to copy and transmit. Prior media formats had natural friction, they required specialized equipment and significant expense to copy. Traditional media business models are based on that natural friction. In order to preserve business models, publishers have attempted to introduce artificial friction through mechanisms like Digital Rights Management. They would be better off adapting their business models to leverage that frictionlessness to their advantage. My ideas were inspired by experience at Folio Corporation where we had invented a sophisticated Digital Rights Management system for textual publications. We found that the fewer restrictions publishers imposed on their publications the more successful they were.

I didn’t finish the manuscript before the industry caught up with me. Before long, most of my arguments were being made by dozens of pundits. Nevertheless, the second chapter, "A Brief History of Copyright," remains as relevant as ever. In 2018 I updated it to include recent developments such as Creative Commons.

31 July 2017

Why Assessment?

Most departments or ministries of education state the purposes of assessment. I'm particularly fond of New Zealand's statement:

The primary purpose of assessment is to improve students’ learning and teachers’ teaching as both respond to the information it provides. Assessment for learning is an ongoing process that arises out of the interaction between teaching and learning.

I like this because it captures the feedback process and acknowledges that both students and educators should respond to that feedback. It also encompasses the various goals of assessment — to inform individual student learning, to measure the effectiveness of the learning system, and to serve as evidence of student skills.

Today I'm writing about the purposes of assessment and the value of standardized assessment.

Inform Individual Student Learning

The most important use of assessment is to improve individual learning. When used properly, assessments improve individual learning in three ways. 1) Exercising and demonstrating skills reinforces student understanding and helps retention. 2) Student attention to the assessment results can increase motivation and direct their choice of learning activities. 3) Educator attention to assessment results can direct their assignment of learning activities or inform interventions.

All of these involve educational feedback loops. However these impacts are only achieved if the right assessment is used for the right purpose. For example, many high-stakes assessments were developed primarily to comply with regulations, such as The No Child Left Behind Act (replaced by ESSA). The reports required by these regulations focus on the percentage of students at each institution that achieve standards for grade-level competency. A test focused on that type of report centers on the threshold of competency. It can indicate with great reliability whether a student is above or below the threshold but may not be reliable for other insight. Such a threshold test is a poor choice for informing learning activities, diagnosing areas of weakness, or measuring growth.

More advanced tests include questions at a variety of skill levels centered on the expected competency level. These tests indicate the student's competency on a continuous scale. Accordingly, they can indicate how far ahead or behind the student is, not just whether they are above or below a certain threshold. By comparing scores from successive tests, you can measure student growth over a period of time. Advanced tests also include questions designed to measure greater depths of knowledge. Such tests offer more reliable detail about student skills in specific areas.

One objection to advanced tests is that it takes more questions and more time to measure skills at this level of detail. The use of computer adaptive testing can shorten the test while maintaining reliability.
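To make this concrete, here is a minimal sketch of how an adaptive test might choose its next question, assuming a pool of items whose difficulties have already been placed on the score scale. The starting point, step sizes, and item pool below are invented for illustration; operational adaptive tests such as Smarter Balanced use full item response theory models, content blueprints, and more sophisticated stopping rules.

    # A minimal, illustrative adaptive-testing loop (not an operational algorithm).
    # Assumption: each item has a difficulty value on the same scale as the score.

    def run_adaptive_test(item_difficulties, answer_fn, num_items=20):
        """Administer num_items questions, adapting difficulty to the student.

        item_difficulties: difficulty values on the score scale.
        answer_fn: callable(difficulty) -> True if the student answers correctly.
        """
        ability = sorted(item_difficulties)[len(item_difficulties) // 2]  # start mid-pool
        step = 40.0                      # initial adjustment (hypothetical value)
        unused = set(range(len(item_difficulties)))

        for _ in range(min(num_items, len(item_difficulties))):
            # Choose the unused item whose difficulty is closest to the current estimate.
            next_item = min(unused, key=lambda i: abs(item_difficulties[i] - ability))
            unused.remove(next_item)

            correct = answer_fn(item_difficulties[next_item])
            ability += step if correct else -step
            step = max(step * 0.8, 5.0)  # shrink adjustments as evidence accumulates

        return ability                   # final estimate on the score scale

    # Hypothetical usage: a student who reliably answers items at or below 2550.
    pool = [2300 + 10 * i for i in range(60)]
    estimate = run_adaptive_test(pool, lambda difficulty: difficulty <= 2550)

Because each question is chosen near the student's current estimate, the test spends its questions where they are most informative, which is why a shorter adaptive test can match the reliability of a longer fixed-form test.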

Measure the Effectiveness of the Learning System

When standardized assessments are used to measure the effectiveness of the learning system, individual student results are aggregated to indicate the fraction that are achieving competency levels. Typically results are compared with previous years to see if schools are improving. This Delaware Press Release is a typical example of the public statements made each year.

Assessments like these are based on the premise that if you measure performance and report on it, then performance will improve. Unfortunately, education has proven to be a stubborn counterexample to this premise. Sixteen years after the No Child Left Behind Act mandated standardized testing and established remedies for underperforming schools, there has been limited progress. This leads some to call for abandoning standardized assessment altogether. But if we don't measure performance, we will never know whether we are succeeding.

These are our contemporary challenges: discovering the factors that contribute to better learning and investing the resources needed to improve those factors. Continuing to measure performance will support gathering evidence of principles of effective teaching and learning.

Less frequently applied but equal in importance is using assessments to evaluate parts of the learning system. For example, assessment data can be used to compare different curricula or textbooks, for continuous improvement of online learning systems, and to evaluate the effectiveness of professional development programs.

Provide Evidence of Student Achievement

Since 1905, the primary measure of student achievement in the U.S. has been the Carnegie Unit. This measure uses the time a student spends in the classroom as a proxy for how much they have learned. A century later, in 2005, New Hampshire began converting to a competency-based system in which student skill is measured directly rather than by proxy. Other states have programs that allow competency measures as an alternative to seat time. Such measures depend on high-quality assessments that are aligned to specific and relevant standards of achievement.

Standardized and High Quality Assessment

It has become common, in recent years, to complain about achievement standards and the associated standardized assessments. A typical protest might be, "My child is not standardized." To be sure, our goal should not be to achieve sameness among children and this is not the purpose of achievement standards. Rather, we recognize that people need to achieve a basic competency level in language arts and mathematics in order to function and achieve in our society. The standards are intended to reflect that basic competency with the hope that students and educators will build a wide variety of skill and achievement on that core foundation.

All of these uses — informing learning activities, measuring program effectiveness, and providing evidence of achievement — depend on the quality of the assessment. An assessment will provide poor guidance if it is sensitive to the wrong factors, is unreliable, or is tuned to the wrong skill level. I've written before that personalized learning is currently the most promising approach to improving learning. Choosing high quality assessments to inform personalization is essential to the success of such programs.

That should be our demand — that states, districts, and schools give us evidence of the quality of the assessments they use.

30 June 2017

Reducing Income Inequality when Productivity Exceeds Consumption

Prototype apple harvester by Abundant Robotics.

Among the last bastions of labor demand is agricultural harvesting. Every fall, groups of migrant workers follow the maturation of fruits and vegetables across different climates. But even those jobs are going away. In one case, Abundant Robotics is developing robots that use image recognition to locate ripe apples and delicate manipulators to harvest fruit without bruising it.

In my last post I described how creatively destructive innovations like these have increased productivity in the United States until it exceeds basic needs by nearly four times. If it weren't for distribution problems, we could eliminate poverty altogether.

The problem is that when production exceeds consumption, prices fall and the economy goes into recession. As I wrote previously, the U.S. and most of the world economy have relied on a combination of advertising, increased standards of living, and fiscal policy to stimulate demand sufficiently to keep up with productivity. But these stimulus methods have the side effect of increasing wage disparity.

The impact is mostly on the unskilled labor force as these jobs are the easiest to automate or export. Even though the income pie keeps growing, the slice owned by the lowest quintile shrinks. Free trade exacerbates the spread between skilled and unskilled labor as do attempts to stimulate consumption through government spending, low interest rates, and tax cuts. (Please see that previous post for details on all of this.)

Conventional Solutions and Their Limits

This is a gnarly problem. Potential remedies tend to have side effects that can make the situation worse, or at least limit the positive impact. In this post I'll name some of the conventional solutions and then attempt to frame up the kind of innovation that will be required to properly solve the problem.

Infrastructure Investment

Investments in infrastructure are a good short-term stimulus. Constructing or upgrading roads, bridges, energy, and communications infrastructure employs a lot of people of all skill levels. As most infrastructure is government-funded, it's a convenient fiscal stimulus. The trouble is that, in the long run, these infrastructure improvements result in greater overall productivity, thereby reducing labor demand.

Progressive Taxes

A common method of wealth redistribution is a progressive tax. Presumably the wealthy can afford to pay a larger fraction of their income than those with lower incomes. Progressive taxes are effective tools but, in the U.S., we have pretty much reached the limit of how much a progressive system can help low-income households. Those earning less than the income tax threshold already pay no federal income tax. For a single parent with one child, the marginal tax threshold for 2016 was approximately $21,000 before accounting for federal and state health care benefits, which amount to approximately $9,400 more.

Since the lowest tax bracket is already zero, we can consider increasing the tax rate in upper tax brackets. While that may increase the amount paid by the wealthy, it doesn't directly benefit the poor. Indeed, the resulting economic slowdown may worsen the situation.

Refundable Tax Credits

Through tax credits you can reduce the lowest effective income tax rate to negative values — paying money rather than collecting it. In the U.S., the Earned Income Tax Credit (EITC) serves this role, with progressively larger benefits for the number of children in the household. Because the EITC is designed as a percentage of income, the benefit increases as the individual earns more money before being phased out at higher income levels. This is intended to incentivize workers to find and improve their employment even while drawing government benefits.
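As a rough sketch of the mechanism, the phase-in, plateau, and phase-out shape can be expressed in a few lines of Python. The rates and dollar amounts below are invented for illustration; they are not the actual EITC parameters for any year or family size.

    # Illustrative only: a stylized refundable credit with the same general shape
    # as the EITC. All parameter values are hypothetical.

    def stylized_credit(earned_income,
                        phase_in_rate=0.34,       # credit earned per dollar of income
                        max_credit=3400.0,        # plateau value
                        phase_out_start=18000.0,  # income where the credit starts shrinking
                        phase_out_rate=0.16):     # reduction per dollar above that point
        credit = min(earned_income * phase_in_rate, max_credit)
        if earned_income > phase_out_start:
            credit -= (earned_income - phase_out_start) * phase_out_rate
        return max(credit, 0.0)

    # The credit grows with earnings at low incomes (preserving the incentive to work),
    # flattens, then phases out at higher incomes.
    for income in (5000, 10000, 20000, 30000, 40000):
        print(income, round(stylized_credit(income), 2))

Because the benefit rises with earned income before it falls, the structure rewards additional work at the low end rather than penalizing it.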

The downside of tax credits is that they are tied to the very complicated process of filing income tax returns. The Government Accountability Office and IRS indicate that between 15% and 25% of eligible households don't collect credits to which they are entitled. Nevertheless, this is a promising approach because tax credits are unrelated to productivity and they have a direct impact on income inequality.

Increased Minimum Wage

Recently there have been increased calls for a $15 minimum wage. Many cities and some states have already passed laws to increase wages to that level. Critics of the minimum wage point out that increased wages can lead to increased prices — thereby reducing the benefit to low-income workers. And indexing minimum wage to the cost of living, as certain states and municipalities have done, may eliminate entry level jobs and accelerate inflation.

Many are watching to see how experiments in Seattle, New York City, and San Francisco will work out. Two recent studies of Seattle's minimum wage offer early indicators. One, from the University of Washington, indicates an increase of 3% in total wages in low-paying jobs (the desired outcome) but a reduction of 9% in total hours worked (not so desirable). A study by UC Berkeley confirms the increase in total wages and indicates no reduction in overall employment, though it does show a decline in employment at limited-service restaurants (fast food). Hours worked were not among the data in the Berkeley study.

These are preliminary results but they tend to confirm the expected pattern. An increased minimum wage incentivizes greater automation and thereby reduces total labor demand. For example, fast food restaurants are deploying self-serve kiosks in place of human cashiers. So, while those who remain employed benefit from higher wages, many jobs may be eliminated.

Idling Portions of the Workforce

A recession naturally reduces the number of workers until production is a closer match to demand. Unfortunately, those who lose their jobs are predominantly low-income workers, people who are least able to tolerate job loss. Better options preserve household income while reducing hours worked. These include mandatory vacation time, a shortened work week, and more holidays. When couples elect to have only one spouse be employed that also reduces the workforce.

Education

Bar graph showing a decline in jobs requiring high school or less and an increase in jobs requiring a postsecondary degree.

Increasing educational achievement is the intervention most dear to my heart. Not only does a better education enable better wages, but it also results in better health, greater happiness, more active citizenship, and reduced violence. Those with higher educational attainment make better financial decisions. All of this results in a better quality of life.

As more and more routine jobs are automated, the opportunities for those without a postsecondary degree or certificate will continue to diminish. A recent study at Georgetown University indicates that, by 2020, 65% of jobs in the U.S. will require postsecondary education. That's up from only 28% in 1973.

Certainly greater educational achievement is a part of the solution as it moves more people into higher-wage jobs. But those jobs pay higher wages precisely because they are more productive. So, as more people achieve higher levels of education, productivity will continue to increase.

Working With Market Forces

Nobel Laureate Milton Friedman observed that the communication function of markets is as important as the commerce function. When demand rises then prices rise. Rising prices signal suppliers to produce more goods. On the other hand, excess goods result in falling prices signaling manufacturers to reduce production. No command economy has achieved the signaling efficiency of the market.

The same signaling occurs in labor markets. Higher wages for more-skilled jobs encourage people to seek the education and training necessary to qualify for those jobs. But the counterpoint is our contemporary concern: when there is excess labor available, especially for low-skill jobs, wages may fall below the point where workers can earn an adequate living. Some workers will retrain, and many programs will help pay for that. But retraining takes time, interest, and an affinity for the new field.

With worker productivity in the U.S. soon to crest four times basic needs, the natural market signal is for less production – but that would mean unemployment. To date, we have interfered with that signal by artificially propping up demand. Massive advertising, high consumer debt, low interest rates, the housing bubble, planned obsolescence: all are symptoms of the interference. Moreover, the overconsumption caused by this stimulus is also damaging to the environment.

As productivity continues to increase, we will have to allow the signal to get through – it will get through eventually anyway. The challenge is finding ways to match production to demand while sustaining employment and wages, especially for the most vulnerable.

Framing the Problem

The innovation we need is this:

A way to distribute abundant resources more equitably; while preserving incentives to learn, work, and make a difference; and allowing market signals to balance production and consumption.

We can't look to the past because the challenge of abundance is different from any faced by previous generations. It's going to require a generous sharing of ideas, some experimentation, and development of greater trust between parties.

I'm optimistic that there is a solution because never before in history has society had such plentiful resources as today.

31 March 2017

Cut Taxes or Increase Spending - Is the Debate Obsolete?

Photo of the U.S. Capitol with a full moon overhead.

As the Trump administration turns its attention to a tax reform plan, debate swirls about the best way to stimulate the economy. Traditionally, Democrats have advocated for increased government spending while Republicans have fought for reduced taxes. Both methods succeed in stimulating the economy and both have their roots in the theories of prominent economists. But it may be that both strategies, and the theories that support them, are obsolete in a day when production is many times basic needs.

Government spending advocates cite the work of John Maynard Keynes. Prior to Keynes, neoclassical economists theorized that free markets should naturally balance the economy toward full employment. Keynes observed that economies tended to swing between boom and bust cycles and advocated government intervention through fiscal policy (government taxing and spending) and monetary policy (central bank regulation of the money supply) to moderate the swings. Keynesian theory was influential in addressing the Great Depression and remained dominant following World War II into the 1970s.

Among the expectations of early Keynesian economics was that high inflation and high unemployment should not co-exist. Economist Milton Friedman challenged that notion and was proven right when "stagflation" emerged in the 1970s. Friedman theorized that stagflation and related poor economic conditions result from excessive or ill-informed government intervention. The solution, he said, was to free the market through reduced regulation and lower taxes. This school of thought is generally known as "supply-side" or "monetarist". President Reagan successfully employed that approach early in his presidency, launching a sustained period of economic growth that continued through the Bush and Clinton administrations.

Today, Keynesian economics is associated with greater regulation, increased government spending, and with an overall trust in government interventions. Meanwhile, monetarist economics is associated with free markets, reduced taxes, and with an overall trust in the market's ability to self-balance. In fact, both schools of thought are much more nuanced than these broad strokes. On the Keynesian side, it matters a lot where the government spends its money. On the monetarist side, it matters a great deal which taxes are reduced and how regulations are tuned. Earnest theorists on both sides have a healthy respect for the other theory.

But the nuance is quickly lost in the morass of political debate. Indeed, I fear that most political Keynesians choose that theory because it justifies their existing desire to increase government spending. And most monetarists choose supply-side theory for its justification of reduced taxes and regulation. In each case I think they first choose their preferred intervention and then select a theory to justify it.

Through the latter half of the 20th century, U.S. government economic focus was pretty much what Keynes described - moderating the boom and bust cycle toward more stable continuous growth. During slow cycles this meant adding economic stimulus through increased spending and reduced taxes. When inflation started to get out of hand, government would slow things by increasing taxes, reducing spending, and raising interest rates. Reagan met the stagflation challenge (high inflation and high unemployment) with an unusual combination of reduced taxes (to stimulate hiring) and increased interest rates (to slow inflation). Nevertheless, Reaganomics still used the same tools, just in different ways.

Our contemporary challenge is a new one. Since roughly 2001, the economy has required continuous stimulation to maintain growth. Radical new stimuli such as Quantitative Easing and zero interest rates have been used. Previously, experts avoided those stimuli because of their potential to provoke high inflation, yet inflation remains at historically low levels, and it seems that, without continuous stimulation, the economy will slow to a crawl.

Production Compared to Basic Needs

Output per hour of all persons 1947 to 2010

The new economic challenge is due principally to the rapid increase in workforce productivity. According to the U.S. Bureau of Labor Statistics, individual worker productivity has more than quadrupled since World War II. Overall productivity per person in 2012 was 412% of the 1947 level.

Productivity growth becomes even more striking when compared with basic needs. In 2014, U.S. per capita GDP was $54,539. Basic needs per capita that same year were approximately $13,908. So per-capita production exceeds basic needs by nearly four times. And while the basic needs side of this equation includes the whole population, the productivity side only accounts for those employed; it doesn't include unemployed workers or people choosing not to seek paid employment. So productive capacity compared with basic needs would be even higher.

If it weren't for problems of distribution, this would be a great thing! For the first time in history, society has sufficient capacity to provide comfortable housing, plenty of food, health care, entertainment, and leisure time for all. The challenge is that, in a market economy, productivity increases disproportionately benefit those who are already at the higher end of the wage scale.

Disproportionate Benefits

Creative destruction is the term economists use to describe the transformation of an industry by innovation. It is usually associated with the elimination of jobs due to new technology, but any innovation that increases individual productivity qualifies. Some examples: The backhoe replaces the jobs of several ditch diggers with that of a more-skilled heavy equipment operator. Computerized catalogs reduce the demand for librarians. Industrial robots replace factory workers. The common feature of such innovations is that they substantially increase the productivity of individual workers. Frequently, these innovations also result in jobs moving upscale — requiring more skill or training and with correspondingly higher pay.

Creatively destructive innovations have led to enormous productivity increases in recent decades thereby reducing the demand for labor. As with any market, when supply increases or demand declines the value also declines. In this case the value of routine jobs has declined dramatically. Here's how economist Dr. David Autor describes it.

And so the things that are most susceptible to computerization or to automation with computers are things where we have explicit procedures for accomplishing them. Right? They’re what my colleagues and I often call “routine tasks.” I don't mean routine in the sense of mundane. I mean routine in the sense of being codifiable. And so the things that were first automated with computers were military applications like encryption. And then banking and census-taking and insurance, and then things like word processing and office clerical operations. But what you didn’t see computers doing a lot of — and still don't, in fact — are tasks that demand flexibility and don't follow well-understood procedures. I don’t know how to tell someone how do you write a persuasive essay, or come up with a great new hypothesis, or develop an exciting product that no one has seen before. ... What we’ve been very good at doing with computers is substituting them for routine, codifiable tasks. The tasks done by workers on production lines, the tasks done by clerical workers, the tasks done by librarians, the tasks done by kind of para-professionals, like legal assistants who go into the stacks for you. And so we see a big decline in clerical workers. We see a decline in production workers. We see a decline even in lower-level management positions because they’re all kind of information processing tasks that have been codified.

Recent creative destruction has predominantly affected lower-middle-class jobs and manufacturing jobs. While increased productivity has made our nation more wealthy as a whole, large sectors of the labor force have been left behind. This may be the biggest factor behind the slow recovery from the 2008 recession. Automation substituted for jobs that were eliminated during the recession; those jobs are not coming back.

The decline in U.S. manufacturing employment has been balanced, in part, by growth in the service sector. This makes sense; growth in productivity has resulted in greater overall wealth. On average, people in the U.S. have more money to spend on eating out, recreation, vacations, and health care. But again, the benefits are not evenly distributed. As workers displaced from manufacturing have moved into the service sector, wages in that area have stagnated.

Disproportionate Impact of Globalization

Economists have consistently advocated for free trade. The math is incontrovertible; when regions or countries with different costs of production trade goods and services, all communities benefit as each is able to specialize and all benefit from the overall productivity increase.

Only recently have economists begun to study how free trade impacts sectors of the economy rather than the economy as a whole. Unsurprisingly, the impact on the U.S. has disproportionately affected manufacturing and routine labor. Here's another quote from Dr. Autor:

When we import those labor-intensive goods, we’re going to reduce demand for blue-collar workers, who are not doing skill-intensive production.  Now we benefit because we get lower prices on the goods we consume and we sell the things that we're good at making at a higher price to the world. So that raises GDP but simultaneously it tends to make high-skilled and highly educated labor better off, raise their wages, and it tends to make low-skilled manually intensive laborers worse off because there is less demand for their services — so there's going to be fewer of them employed or they're going to be employed at lower wages. So the net effect you can show analytically is going to be positive. But the redistributional consequences are, many of us would view that as adverse because we would rather redistribute from rich to poor than poor to rich. And trade is kind of working in the redistributing from poor to rich direction in the United States. The scale of benefits and harms are rather incommensurate. ...

We would conservatively estimate that more than a million manufacturing jobs in the U.S. were directly eliminated between 2000 and 2007 as a result of China's accelerating trade penetration in the United States. Now that doesn't mean a million jobs total. Maybe some of those workers moved into other sectors. But we've looked at that and as best we can find in that period, you do not see that kind of reallocation. So we estimate that as much as 40 percent of the drop in U.S. manufacturing between 2000 and 2007 is attributable to the trade shock that occurred in that period, which is really following China's ascension to the WTO in 2001.

Manufacturing Output Versus Employment

During the campaign, Donald Trump and Bernie Sanders both advocated rethinking free trade. Perhaps we can use tariffs or government incentives to bring manufacturing back to the U.S. As it turns out, that's already happening even without incentives. As labor costs increase in Asia, the offshoring advantage is diluted. Many manufacturers are, indeed, opening new U.S. plants. The trouble is that returning manufacturing doesn't result in substantial job or wage growth. These are highly automated plants, employing a fraction of the workers whose jobs were eliminated when manufacturing went overseas. For example, Standard Textile just opened a new plant in Union, SC to make towels for Marriott International. Due to automation, the plant only created 150 new jobs. A generation ago the same plant would have employed more than 1,000 people. And many of the new jobs are more highly skilled — designing, operating, and maintaining automated machinery.

Creative destruction and globalization are working together here. Both increase overall GDP, both increase individual worker productivity, both increase total wealth, and both disproportionately benefit skilled upper-middle-class workers over blue collar and middle-management workers. Any benefit from manufacturing returning to the U.S. will be blunted by the increase in automation reducing labor needs and shifting what remains to more skilled jobs.

Demand-Side Economics

So far, we have looked at the supply side of labor. The massive increase in productivity over the last six decades has been driven by innovative technology with global trade as an accelerant. As noted before, when the supply of labor exceeds demand then the value decreases. When supply exceeds demand across the economy as a whole then you get a recession.

From the end of World War II through the rest of the 20th century, we succeeded in driving demand to keep up with supply. Advertising grew tremendously as an important demand driver. Television programs established new norms: two cars per family, a large home in the suburbs, annual luxury vacations, and designer clothing labels, to name a few. Home appliances like air conditioners and dishwashers went from luxuries to necessities.

Government has participated in driving demand. Housing programs made home ownership much more accessible. So much so that it resulted in the 2007 real estate bubble. Likewise, the Federal Reserve has kept interest rates down ensuring that consumer credit remains accessible and people can buy ahead of income.

In the 21st Century we seem to have reached the limits of demand stimuli to compensate for ever-increasing productivity. Smaller cars like the Mini Cooper or Fiat 500 have become stylish. Even the wealthy are choosing to reduce consumption — buying smaller homes or moving into the city. The result is that it takes increasingly strong stimuli to keep the economy moving. In response to the 2008 recession, the government spent unprecedented amounts of money, borrowing directly from the Federal Reserve to do so. Despite this pressure, interest and inflation rates remain at historically low values.

Increase Spending or Cut Taxes?

And so we return to the contemporary debate: Should government increase spending or cut taxes to stimulate the economy? When government cuts taxes, individuals and companies have more disposable income. Presumably they will spend some of that income and save part. When government increases spending, it chooses directly where that money will be spent. Both theories depend on "trickle-down" effects even though that has traditionally been associated with tax cuts. In each case, the direct beneficiary of government policy employs more people and purchases more goods and services; those employees and suppliers also do more business and the impact "trickles" through the economy. The primary differentiator is whether you have greater trust in government (increase spending) or the market (cut taxes) to determine who is at the top of the trickle-down pyramid.

The question is really obsolete. Regardless of which stimulus you choose, demand stimuli are increasingly unable to keep up with increased productive capacity. As a country, we already produce nearly four times basic needs and the multiplier will continue to grow. Meanwhile, the twin pressures of Creative Destruction and Globalization will continue to drive the greater benefit of demand stimulus to those who already earn higher wages. Under either strategy, wage disparity will continue to worsen despite attempts by policymakers to direct tax breaks or government spending toward lower income households.

It seems that we will need a greater economic innovation than either of these 20th century solutions. In my next blog post I will write about some promising ideas. More effective education for all students is, of course, an essential component but insufficient by itself.


Estimating Basic Needs Per Capita: The Self-Sufficiency Standard is a measure of the income necessary to meet basic needs without assistance. Values are reported per household. National averages aren't published, so we have to make an approximation starting with samples of two cities. The cost of living index for Milwaukee, WI is 101.9% of the national average. Rochester, NY is exactly 100.0%. The average U.S. household size in 2014 was 2.54. We round up to 3 - two adults and one child. For Milwaukee, the 2016 Self-Sufficiency Standard for that household is $43,112 annually. For Rochester, the 2010 Self-Sufficiency Standard for the same family is $40,334. Per-capita values are $14,371 and $13,445 respectively. Averaging the two gives approximately $13,908 for U.S. basic needs per capita in 2014. To be sure, there's a lot of variability across region, household size, medical needs, and so forth. I also mixed figures from 2010 through 2016. Nevertheless, this is a good enough working figure for comparing to per capita production in the same timeframe.
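For readers who want to check the arithmetic, the estimate in this note reduces to a short calculation. The figures below are the ones cited above; only the script itself is new.

    # Reproduces the basic-needs estimate using the figures cited in the note above.
    household_size = 3                     # 2014 average of 2.54, rounded up

    milwaukee_standard = 43112.0           # 2016 Self-Sufficiency Standard, 2 adults + 1 child
    rochester_standard = 40334.0           # 2010 Self-Sufficiency Standard, same household

    per_capita = [milwaukee_standard / household_size,   # about 14,371
                  rochester_standard / household_size]   # about 13,445
    basic_needs_per_capita = sum(per_capita) / len(per_capita)

    gdp_per_capita_2014 = 54539.0
    print(round(basic_needs_per_capita))                           # 13908 (approximately)
    print(round(gdp_per_capita_2014 / basic_needs_per_capita, 2))  # about 3.92, "nearly four times"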