Of That

Brandt Redd on Education, Technology, Energy, and Trust

01 November 2019

Themes Manifest as iNACOL Becomes Aurora

Arrows representing systems integration.

In 2010 I took on the responsibility of forming an Education Technology Strategy for the Bill & Melinda Gates Foundation. That same year, I also attended the iNACOL Virtual Schools Symposium (VSS). A year later, I presented at the symposium and I've been pleased to present or contribute in some way most years since.

As my colleagues and I at the Gates Foundation worked on a theory of technology and education, something quickly became clear. Technology doesn't drive educational improvement; it's simply an enabler. In the early part of this decade there were numerous 1:1 student:computer initiatives. Most failed to show measurable improvement and many turned into fiascos as teachers were tasked with finding something useful to do with their new computers or tablets.

At the foundation we turned to personalized learning, a theory that was based on promising evidence and one that has gained more support since then. With that as a basis, we looked to where technology could help. The result was support for key projects including Common Education Data Standards, the Learning Resource Metadata Initiative, and Profiles of Next-Generation Learning.

The great folks at iNACOL observed the same patterns and so they pivoted. VSS became, simply, the iNACOL Symposium and their emphasis shifted to personalized and competency-based education with online and blended learning as enablers. This year, they completed the transition, renaming the whole organization to The Aurora Institute. In their words:

[Our] organization has evolved significantly to become a leading nonprofit organization with a deep reach into practitioners creating next-generation learning models. Our focus has grown to examine systems change and education innovation, facilitating the future of learning through personalized learning and student-centered approaches to next-generation learning.

Serving Educators and Students

A theme that spontaneously emerged at the symposium this year is that we must do for the educators what we want for the students. It was first expressed by Dr. Brooke Stafford-Brizard in her opening keynote. As she advocated that we care for the mental health of the children she said, "Across all of our partners who have successfully integrated whole child practice, there isn’t one who didn’t start with their adults." She proceeded to show examples where school mental health programs were designed to support both staff and students.

With that as precedent, the principle kept reappearing throughout the symposium.

  • If we expect personalized instruction for the students we must offer personalized professional development for their teachers.
  • Establish the competency set we expect of educators and provide opportunities to master those competencies.
  • Actionable feedback to educators is critical to the success of any learning innovation just as actionable feedback to students is critical to their learning.
  • Create an environment of trust and safety among the staff of your institution - then project that to the students.
  • Growth mindset is as important to educators as it is for the students they teach.

Continuous Improvement

Both themes — technology as enabler, and caring for the educators — are simply signposts on a path of continuous improvement. We must follow the evidence and go where it leads us.

07 March 2019

A Support System for High-Performing Schools

Arrows representing systems integration.

Charter schools operated by Charter Management Organizations (CMOs) tend to outperform other charter schools and public schools. The National Study of Charter Management Organization Effectiveness from 2011 was the first rigorous study of CMO effectiveness and it showed that CMO-operated schools were better than other options. A 2017 study by Stanford University's Center for Research on Education Outcomes found that students enrolled in CMO-operated schools in New York City substantially outperformed their peers in conventional public schools and independent charter schools.

This improvement is to be expected. A basic premise of CMO operations is to study what works, and carry successful practices to other schools in the network.

Some conventional public schools are following a similar pattern. Their solution providers don't necessarily manage the school, like a CMO would. Instead, providers offer an integrated set of services backed by an evidence-driven theory of effective teaching. Here is the ecosystem I expect to emerge in the next few years:

  • Component and Curriculum Suppliers
  • Educational Solution Providers
  • Schools (and other learning institutions)

This same basic model applies to primary, secondary, and higher education though large universities and big districts have the capacity to be their own solution providers. Let's look at the components:

Schools, Districts, and other Learning Institutions

The school is where the teaching and learning occurs. It's where the supply chain of standards, curriculum, educational training, assessments, learning science, and everything else finally meets the student.

Many schools are implementing the same kinds of programs as charters: online curricula, blended learning, teacher dashboards, etc. But the complexity of integration grows exponentially with the number of components to combine. Building an integrated whole is beyond the capacity of most schools and all but the largest districts. The same pattern exists in higher education. Large universities can deliver an integrated solution but community colleges have a harder time.

Component and Curriculum Suppliers

On the supply side, there's a rich, complex, and rapidly growing market of component and curriculum suppliers. They include conventional textbook publishers, online curriculum developers, assessment providers, Learning Management Systems (LMS), Student Information Systems (SIS), and more.

Beyond these well-defined categories there's a host of other components, each designed to address a particular need in the educational economy. For example, Learnosity builds tools for creating and embedding high-quality assessments. Gooru offers a learning map, helping students know where they are in their learning progression. EdConnective offers live, virtual coaching for teachers. In 2018, education technology investment grew to a record $5.23 billion in the U.S. and a breathtaking $16.34 billion worldwide. We can expect many more components and materials to be produced from that level of investment.

Many of these components are raw, requiring significant integration effort before they can become part of a complete learning solution. Despite this, developers of these components attempt to sell them directly to schools, districts, and states.

Educational Solution Providers

Summit Public Schools is a CMO that consistently achieves high rankings. Summit Learning also offers their online curriculum to public schools. But, separating the curriculum from the balance of the solution hasn't been so successful. In November 2018, Brooklyn students held a walkout and parents created a website to protest "Mass Customized Learning." It's not that the materials were bad; they were well-proven in other contexts. But, separated from the balance of the Summit program the student experience suffered.

An important new category in the education supply chain is Educational Solution Providers. CMOs belong to this category but solution providers to conventional schools don't take over management like a CMO would. Rather, they provide an integrated set of services that includes training and coaching for staff and leadership.

The best solution providers start with an evidence-based learning theory. They then assemble a comprehensive solution based on the theory and selected from the rich menu provided by the component market. A complete solution includes:

  • Training and Coaching Services
  • Professional Development
  • Curriculum (conventional or online)
  • Assessment (ideally curriculum-embedded)
  • Secure Student Data Systems with Educator Dashboards
  • Effectiveness Measures
  • Continuous Improvement

An important job for solution providers is to integrate the components so that they work seamlessly together in support of their learning theory. Training and professional development should embody the same theory that is being expressed to the students. LMS, SIS, dashboards, and all other online systems should function together as one solution even if the provider is sourcing the components from an array of suppliers. In order to do this, the solution provider must have their own curriculum experts for the content side and a talented technology staff focused on systems integration.

Players in this nascent category include The Achievement Network, CLI Solutions Group, and The National Institute for Excellence in Teaching. I think we can expect new entrants in the next few years. Successful CMOs may also cross over to providing services to conventional public schools.

Wrapup

The educational component and curriculum market is rich and rapidly growing with record levels of investment. But, schools don't have the capacity to integrate these components effectively and they need a guiding theory to underpin the selection of components and how they are to be integrated. The emerging category of Educational Solution Provider fills an important role in the ecosystem.

Are you aware of other existing or emerging solution providers? Please let me know in the comments!

11 February 2019

Public-Private Partnership for Public Works

SR 99 tunnel cross section visualization

On February 28, 2001 I was at Microsoft Headquarters in Redmond, Washington when the Nisqually Earthquake hit. I was using Microsoft's scalability lab to perform tests on Agilix software. I remember standing in the doorway and asking someone down the hall, "Is this really an earthquake?" It obviously was, but, not having experienced one before, my mind was still disbelieving.

Nine years later we moved to Seattle where I developed an education technology strategy at the Bill & Melinda Gates Foundation. At the time, politicians were still trying to figure out what should replace the Alaskan Way Viaduct which had been damaged in the earthquake, and which engineers predicted could collapse should another earthquake occur.

Last week, the Washington SR 99 tunnel replaced the viaduct, 18 years after the earthquake threatened its predecessor. Ironically, the tunnel opening was accompanied by a snowstorm that paralyzed the Northwest, making the tunnel one of the few clear roads in the area.

Funding of Public Works

Grand Central Terminal

A few years back I visited New York's Grand Central Terminal and wondered at the great investments made in public works in the early 20th century. The terminal building is beautiful, functional, and built to last. It's been going for more than a century and will probably continue for a century or two more. I wondered why it is so hard to find contemporary investments in public works of such grandeur. However, upon doing some research I found that Grand Central was funded entirely by private investors. Even today, the building is privately owned though the railroad it serves has now been merged into the MTA, a public benefit corporation.

When we visited Seoul, Korea in 2015 we spent five days getting around on the excellent Seoul Metropolitan Subway. It is fast, efficient, clean, and among the largest subway systems in the world with more than 200 miles of track. It features wireless internet throughout, and most platforms are protected by automated doors, greatly improving safety. Yet, the whole network has been built since 1971. The subway is built and operated by Seoul Metro, Korail, and Metro 9. Seoul Metro and Korail are Korean public corporations; these are corporations where the government owns a controlling interest. Metro 9 is a private venture.

This past December we visited Brisbane, Australia. Brisbane traffic has been mediated through the construction of several bypass tunnels including the Airport Link. The tunnels have been built in relatively short time through public-private partnerships.

As I researched these projects I saw a consistent pattern. The most successful public works projects seem to involve some form of cooperation between government and private enterprise. Funding is more easily obtained and project management is better when a private organization participates and stands to benefit from the long-term success of the project. But government support is also needed to represent the public interest, to streamline access to land and permits, and to ensure that profit-taking isn't excessive. Consider the U.S. Transcontinental Railroad. It was built in six years by three companies with a combination of government land grants, private funding, and some government subsidy bonds.

Less-Successful Examples

Less-successful operations seem to be entirely publicly sponsored and managed. Private companies contract to do the work but they aren't invested beyond project completion. For example, the Boston Big Dig was the "most expensive highway project in the US, and was plagued by cost overruns, delays, leaks, design flaws, charges of poor execution and use of substandard materials, criminal arrests, and one death." While the project was built by private contractors, public agencies were exclusively responsible for sponsorship, oversight, funding, and success.

Similarly, the Florida High Speed Corridor was commissioned by a state constitutional amendment, theoretically obligating the state to build the rail system. While still in the planning stages, the project got bogged down in cost overruns, environmental studies, lawsuits, and declining public support. Ultimately, the project was canceled in 2011. In 2018, however, Brightline launched service between Miami, Fort Lauderdale, and West Palm Beach with an extension to Orlando being planned. Brightline is privately funded and operated.

Education

The same principles seem to apply in education. In the U.S., the biggest challenge to traditional public education is charter schools. Studies, including this one from the Center on Reinventing Public Education, show that charter schools managed by Charter Management Organizations (CMOs) perform better than conventional public schools or independently-managed charter schools. Most CMOs are not-for-profit but they still represent a private, non-government entity. Based on the success of CMOs, some school districts are also considering outside management or support firms. In higher education there is a long tradition of government funding for a mix of public and private universities. Like the successful public works, the greatest successes seem to occur when public and private interests are combined and aligned toward a common goal. In these successes, government represents the public interest. The worst outcomes seem to occur when government fails to represent public interests and is either corrupted to serve private needs or excessively focused on politics and party issues.

Organizing for Success

I haven't done a comprehensive search of public works projects. My selection of examples is simply based on projects I happen to be aware of. Nevertheless, it seems that the greatest potential for success is achieved when public and private interests are aligned in a partnership that leverages the strengths of both models and ensures that both groups benefit. Public-private partnerships, state-owned enterprises, and public benefit corporations are different ways of achieving these ends.

The SR 99 tunnel in Seattle was bored by Bertha which, at the time, was the largest-ever tunnel boring machine. Early in the process, the machine broke down and it took two years to dig a recovery pit and make repairs. At the time, two state senators sponsored a bill to cancel the project. Despite this setback, and significant cost overruns, the project was ultimately a success. So, we can add persistence to see things through as another key to success.

Though the contract with Seattle Tunnel Partners will conclude when the tunnel project is complete, the organization has achieved a high degree of cooperation with the Washington Department of Transportation. Public-private cooperation and alignment of interests are behind many of the most successful public projects. And the private interest is often the source of the persistence needed to see things through.

10 January 2019

Quality Assessment Part 9: Frontiers

This is the final segment of a 9-part series on building high-quality assessments.

Mountains

A 2015 survey of US adults indicated that 34% of those surveyed felt that standardized tests were merely fair at measuring students' achievement; 46% thought that the way schools use standardized tests had gotten worse; and only 20% were confident that tests have done more good than harm. The same year, the National Education Association surveyed 1,500 members (teachers) and found that 70% did not feel that their state test is "developmentally appropriate."

In the preceding eight parts of this series I described all of the effort that goes into building and deploying a high-quality assessment. Most of these principles are implemented to some degree in the states represented by these surveys. What these opinion polls tell us is that regardless of their quality, these assessments aren't giving valuable insight to two important constituencies: parents and teachers.

The NEA article describes a hypothetical "Most Useful Standardized Test" which, among other things, would "provide feedback to students that helps them learn, and assist educators in setting learning goals." This brings up a central issue in contemporary testing. The annual testing mandated by the Every Student Succeeds Act (ESSA) is focused on school accountability. This was also true of its predecessor, No Child Left Behind (NCLB). Both acts are based on the theory of measuring school performance, reporting that performance, and incentivizing better school performance. States and testing consortia also strive to facilitate better performance by reporting individual results to teachers and parents. But facilitation remains a secondary goal of large-scale standardized testing.

The frontiers in assessment I discuss here shift the focus to directly supporting student learning with accountability being a secondary goal.

  • Curriculum-Embedded Assessment
  • Dynamically-Generated Assessments
  • Abundant Assessment

Curriculum-Embedded Assessment

The first model involves embedding assessment directly in the curriculum. Of course, nearly all curricula have embedded assessments of some sort. Math textbooks have daily exercises to apply the principles just taught. English and social studies texts include chapter-end quizzes and study questions. Online curricula intersperse the expository materials with questions, exercises, and quizzes. Some curricula even include pre-built exams. But these existing assessments lack the quality assurance and calibration of a high-quality assessment.

In a true Curriculum-Embedded Assessment, some of the items that appear in the exercises and quizzes would be developed with the same rigor as items on a high-stakes exam. They would be aligned to standards, field tested, and calibrated before appearing in the curriculum. In addition to contributing to the score on the exercise or quiz, the scores of these calibrated items would be aggregated into an overall record of the student's mastery of each skill in the standard.

Since the exercises and quizzes would not be administered in as controlled an environment as a high-stakes exam, the scores would not individually be as reliable as in a high-stakes environment. But by accumulating many more data points, and doing so continuously through the student's learning experience, it's possible to assemble an evaluation that is as reliable or more reliable than a year-end assessment.
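To make that aggregation concrete, here's a minimal sketch in Python. The data shapes, skill codes, and the simple proportion-correct summary are all illustrative assumptions; a production system would compute an IRT-based estimate from each item's calibration data rather than a raw percentage.

```python
# Hypothetical sketch: fold results from calibrated embedded items into a
# running per-skill mastery record. Uncalibrated items still count toward
# the quiz score but are ignored here.
from collections import defaultdict

def aggregate_mastery(item_results):
    """item_results: list of dicts like
       {"skill": "5.NF.B.7", "calibrated": True, "correct": True}"""
    totals = defaultdict(lambda: {"attempts": 0, "correct": 0})
    for r in item_results:
        if not r["calibrated"]:
            continue  # only calibrated items feed the mastery record
        tally = totals[r["skill"]]
        tally["attempts"] += 1
        tally["correct"] += int(r["correct"])
    return {s: v["correct"] / v["attempts"] for s, v in totals.items()}

results = [
    {"skill": "5.NF.B.7", "calibrated": True, "correct": True},
    {"skill": "5.NF.B.7", "calibrated": True, "correct": False},
    {"skill": "5.NBT.A.3", "calibrated": True, "correct": True},
    {"skill": "5.NBT.A.3", "calibrated": False, "correct": True},  # ignored
]
print(aggregate_mastery(results))  # {'5.NF.B.7': 0.5, '5.NBT.A.3': 1.0}
```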

Curriculum-Embedded Assessment has several advantages over either a conventional achievement test or the existing exercises and quizzes:

  • Student achievement relative to competency is continuously updated. This can offer much better guidance to students, educators, and parents than existing programs.
  • Student progress and growth can be continuously measured across weeks and months, not just years.
  • Performance relative to each competency can be reliably reported. This information can be used to support personalized learning.
  • Data from calibrated items can be correlated to data from the rest of the items on the exercise or quiz. Over time, these data can be used to calibrate and align the other items, thereby growing the pool of reliable and calibrated assessment items.
  • As Curriculum-Embedded Assessment is proven to offer data as reliable as year-end standardized tests, the standardized tests can be eliminated or reduced in frequency.

Dynamically-Generated Assessments

As described in my post on test blueprints, high-quality assessments begin with a bank of reviewed, field-tested, and calibrated items. Then, a test producer selects from that bank a set of items that match the blueprint of skills to be measured. For Computer-Adaptive Tests, the test is presented to a simulated set of students to determine how well it can measure student skill in the expected range.

In order to provide more frequent and fine-grained measures of student skills, educators prefer shorter interim tests to be used more frequently during the school year. Due to demand from districts and states, the Smarter Balanced Assessment Consortium will more than double the number of interim tests it offers over the next two years. Most of the new tests will be focused on just one or two targets (competencies) and have four to six questions. They will be short enough to be given in a few minutes at the beginning or end of a class period.

But what if you could generate custom tests on-demand to meet specific needs of a student or set of students? A teacher would design a simple blueprint — the skills to be measured and the degree of confidence required on each. Then the system could automatically generate the assessment, the scoring key, and the achievement levels based on the items in the bank and their associated calibration data.

Dynamically-generated assessments like these could target needs specific to a student, cluster of students, or class. With a sufficiently rich item bank, multiple assessments could be generated on the same blueprint thereby allowing multiple tries. And it should reduce the cost of producing all of those short, fine-grained assessments.
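Here's a rough sketch of what such a generator might look like. The item bank, skill codes, and blueprint format are hypothetical; a real system would also balance difficulty (IRT b values) and manage item exposure.

```python
# Hypothetical sketch of dynamic test generation: for each skill in the
# blueprint, draw the requested number of calibrated items at random, so
# repeated calls on the same blueprint yield different but comparable forms.
import random

item_bank = [
    {"id": "itm-001", "skill": "fractions.divide", "irt_b": -0.4},
    {"id": "itm-002", "skill": "fractions.divide", "irt_b": 0.3},
    {"id": "itm-003", "skill": "fractions.divide", "irt_b": 1.1},
    {"id": "itm-004", "skill": "ratios.unit_rate", "irt_b": -0.1},
    {"id": "itm-005", "skill": "ratios.unit_rate", "irt_b": 0.8},
]

blueprint = {"fractions.divide": 2, "ratios.unit_rate": 1}  # items per skill

def generate_test(blueprint, bank, rng=random):
    form = []
    for skill, count in blueprint.items():
        candidates = [item for item in bank if item["skill"] == skill]
        if len(candidates) < count:
            raise ValueError(f"Item bank too small for skill {skill}")
        form.extend(rng.sample(candidates, count))
    return form

print([item["id"] for item in generate_test(blueprint, item_bank)])
```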

Abundant Assessment

Ideally, school should be a place where students are safe to make mistakes. We generally learn more from mistakes than from successes because failure affords us the opportunity to correct misconceptions and gain knowledge whereas success merely confirms existing understanding.

Unfortunately, school isn't like that. Whether primary, secondary, or college, school tends to punish failures. At the college level, a failed assignment is generally unchangeable, and a failed class or low grade goes on the permanent record. Consider a student who studies hard all semester, gets reasonable grades on homework, but then blows the final exam. Perhaps they were sick on exam day, or perhaps the questions were confusing and different from what they expected, or perhaps the pressure of the exam just messed them up. Their only option is to repeat the whole class — and even then their permanent record will show the class repetition.

Why is this? Why do schools amplify the consequences of such small events? It's because assessments are expensive. They cost a lot to develop, to administer, and to score. In economic terms, assessments are scarce. For schools to offer easy recovery from failure they would have to develop multiple forms for every quiz and exam. They would have to incur the cost of scoring and reporting multiple times. And they would have to select the latest score and ignore all others. To date, such options have been cost-prohibitive.

"Abundant Assessment" is the prospect making assessment inexpensive — "abundant" in economic terms. In such a framework, students would be afforded many tries until they succeed or are satisfied with their performance. Negative consequences to failure would be eliminated and the opportunity to learn from failure would be amplified.

This could be achieved by a combination of collaboration and technology. Presently, most quizzes and exams are written by teachers or professors for their class only. If their efforts were pooled into a common item bank, then you could rapidly achieve a collection large enough to generate multiple exams on each topic area. Technological solutions would provide dynamically-generated assessments (as described in the previous section), online test administration, and automated scoring. All of this would dramatically reduce the labor involved in producing, administering, scoring, and reporting exams and quizzes.

Abundant assessment dramatically changes the cost structure of a school, college, or university. When it is no longer costly to administer assessments then you can encourage students to try early and repeat if they don't achieve the desired score. Each assessment, whether an exercise, quiz, or exam can be a learning experience with students encouraged to learn quickly from errors.

Wrapup

These three frontiers are synergistic. I can imagine a student, let's call her Jane, studying in a blended learning environment. Encountering a topic with which she is already familiar, Jane jumps ahead to the topic quiz. But the questions involve concepts she hasn't yet mastered and she fails. Nevertheless, this is a learning experience. Indeed, it could be reframed as a formative assessment as she now goes back and studies the material knowing what will be demanded of her in the assessment. After studying, and working a number of the exercises, Jane returns to the topic assessment and is presented with a new quiz, equally rigorous, on the same subject. This time she passes.

Outside the frame of Jane's daily work, the data from her assessments and those of her classmates are being accumulated. When the time comes, at the end of the year, to report on school performance, the staff are able to produce reliable evidence of student and school performance without the need for day-long standardized testing.

Most importantly, throughout this experience Jane feels confident and safe. At no point is she nervous that a mistake will have any long-term consequence. Rather, she knows that she can simply persist until she understands the subject matter.

06 November 2018

Quality Assessment Part 8: Test Reports

This is part 8 of a 9-part series on building high-quality assessments.

Bicycle

Since pretty much the first Tour de France, cyclists have assumed that narrow tires and higher pressures would make for a faster bike. As tire technology improved to handle higher pressures in tighter spaces, the consensus standard became 23mm width and 115 psi. And that standard held for decades, despite the science that says otherwise.

Doing the math indicates that a wider tire will have a shorter footprint, and a shorter footprint loses less energy to bumps in the road. The math was confirmed in laboratory tests and the automotive industry has applied this information for a long time. But tradition held in the Tour de France and other bicycle races until a couple of teams began experimenting with wider tires. In 2012, Velonews published a laboratory comparison of tire widths and by 2018 the average moved up to 25 mm with some riders going as wide as 30mm.

While laboratory tests still confirm that higher pressure results in lower rolling resistance, high pressure also results in a rougher ride and greater fatigue for the rider. So teams are also experimenting with lower pressures adapted to the terrain being ridden and they find that the optimum pressure isn't necessarily the highest that the tire material can withstand.

You can build the best and most accurate student assessment ever. You can administer it properly with the right conditions. But if no one pays attention to the results, or if the reports don't influence educational decisions, then all of that effort will be for naught. Even worse, correct data may be interpreted in misleading ways. Like the tire width data, the information may be there but it still must be applied.

Reporting Test Results

Assuming you have reliable test results (the subjects of the preceding parts in this series), there are four key elements that must be applied before student learning will improve:

  • Delivery: Students, Parents, and Educators must be able to access the test data.
  • Explanation: They must be able to interpret the data — understand what it means.
  • Application: The student, and those advising the student, must be able to make informed decisions about learning activities based on assessment results.
  • Integration: Educators should correlate the test results with other information they have about the student.

Delivery

Most online assessment systems are paired with online reporting systems. Administrators are able to see reports for districts, schools, and grades sifting and sorting the data according to demographic groups. This may be used to hold institutions accountable and to direct Title 1 funds. Parents and other interested parties can access public reports like this one for California containing similar information.

Proper interpretation of individual student reports has greater potential to improve learning than the school, district, and state-level reports. Teachers have access to reports for students in their classes and parents receive reports for their children at least once a year. But teachers may not be trained to apply the data, or parents may not know how to interpret the test results.

Part of delivery is designing reports so that the information is clear and the correct interpretation is the most natural. To experts in the field, well-versed in statistical methods, the obvious design may not be the best one.

The best reports are designed using a lot of consumer feedback. The designers use focus groups and usability tests to find out what works best. In a typical trial, a parent or educator would be given a sample report and asked to interpret it. The degree to which they match the desired interpretation is an evaluation of the quality of the report.

Explanation

Even the best-designed reports will likely benefit from an interpretation guide. A good example is the Online Reporting Guide deployed by four western states. The individual student reports in these states are delivered to parents on paper. But the online guide provides interpretation and guidance to parents that would be hard to achieve in paper form.

Online reports should be rich with explanations, links, tooltips, and other tools to help users understand what each element means and how it should be interpreted. Graphs and charts should be well-labeled and designed as a natural representation of the underlying data.

An important advantage of online reporting is that it can facilitate exploration of the data. For example, a teacher might be viewing an online report of an interim test. She sees that a cluster of students all got a lower score. Clicking on the scores reveals a more detailed chart that shows how the students performed on each question. She might see that the students in the cluster all missed the same question. From there, she could examine the students' responses to that question to gain insight into their misunderstanding. When done properly, such an analysis would take only a few minutes and could inform a future review period.

Application

Ultimately, all of this effort should result in good decisions being made by the student and by others on their behalf. Closing the feedback loop in this way consistently results in improved student learning.

In part 2 of this series I wrote that assessment design starts with a set of defined skills, also known as competencies or learning objectives. This alignment can facilitate guided application of test results. When test questions are aligned to the same skills as the curriculum, students and educators can easily locate the learning resources that are best suited to student needs.
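As a small illustration of that guided application, the sketch below assumes hypothetical skill codes, a hand-built mapping from skills to curriculum resources, and a made-up proficiency threshold; the point is only that skill-aligned results can be matched mechanically to skill-aligned resources.

```python
# Hypothetical sketch: recommend resources for skills where a student
# scored below a chosen threshold. Skill codes, scores, and the resource
# mapping are illustrative only.
resources_by_skill = {
    "ELA.W.3.1": ["Unit 4: Opinion Writing", "Brief Writes practice set"],
    "ELA.RI.3.2": ["Unit 2: Main Idea lessons"],
}

student_skill_scores = {"ELA.W.3.1": 0.45, "ELA.RI.3.2": 0.82}  # proportion correct

def recommend(skill_scores, resources, threshold=0.6):
    return {
        skill: resources.get(skill, [])
        for skill, score in skill_scores.items()
        if score < threshold
    }

print(recommend(student_skill_scores, resources_by_skill))
# {'ELA.W.3.1': ['Unit 4: Opinion Writing', 'Brief Writes practice set']}
```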

Integration

The best schools and teachers use multiple measures of student performance to inform their educational decisions. In an ideal scenario, all measures, test results, homework, attendance, projects, etc., would be integrated into a single dashboard. Organizations like The Ed-Fi Alliance are pursuing this but it's proving to be quite a challenge.

An intermediate goal is for the measures to be reported in consistent ways. For example, measures related to student skill should be correlated to the state standards. This will help teachers find correlations (or lack thereof) between the different measures.

Quality Factors

  • Make the reports, or the reporting system, available and convenient for students, parents, and educators to use.
  • Ensure that reports are easy to understand and that they naturally lead to the right interpretations. Use focus groups and usability testing to refine the reports.
  • Actively connect between test results and learning resources.
  • Support integration of multiple measures.

Wrapup

Every educational program, activity, or material should be considered in terms of its impact on student learning. Effective reporting that informs educational decisions makes the considerable investment in developing and administering a test worthwhile.

16 October 2018

Quality Assessment Part 7: Securing the Test

This is part 7 of a 9-part series on building high-quality assessments.

A Shield

Each spring, millions of students in the United States take their annual achievement tests. Despite proctoring, some fraction of those students carry in a phone or some other sort of camera, take pictures of test questions, and post them on social media. Concurrently, testing companies hire a few hundred people to scan social media sites for inappropriately shared test content and send takedown notices to site operators.

Proctoring, secure browsers, and scanning social media sites are parts of a multifaceted effort to secure tests from inappropriate access. If students have prior access to test content, the theory goes, then they will memorize answers to questions rather than study the principles of the subject. The high-stakes nature of the tests creates incentive for cheating.

Secure Browsers

Most computer-administered tests today are given over the world-wide web. But if students were given unfettered access to the web, or even to their local computer, they could look up answers online, share screen-captures of test questions, access an unauthorized calculator, share answers using chats, or even videoconference with someone who can help with the test. To prevent this, test delivery providers use a secure browser, also known as a lockdown browser. Such a browser is configured so it will only access the designated testing website and it takes over the computer - preventing access to other applications for the duration of the test. It also checks to ensure that no unauthorized applications are already running, such as screen grabbers or conferencing software.

Secure browsers are inherently difficult to build and maintain. That's because operating systems are designed to support multiple concurrent applications and to support convenient switching among applications. In one case, the operating system vendor added a dictionary feature — users could tap any word on the screen and get a dictionary definition of that word. This, of course, interfered with vocabulary-related questions on the test. In this, and many other cases, testing companies have had to work directly with operating system manufacturers to get special features required to enable secure browsing.

Secure browsers must communicate with testing servers. The server must detect that a secure browser is in use before delivering a test and it also supplies the secure browser with lists of authorized applications that can be run concurrently (such as assistive technology). To date, most testing services develop their own secure browsers. So, if a school or district uses tests from multiple vendors, they must install multiple secure browsers.

To encourage a more universal solution, Smarter Balanced commissioned a Universal Secure Browser Protocol that would allow browsers and servers from different companies to work effectively together. They also commissioned and host a Browser Implementation Readiness Test (BIRT) that can be used to verify that a browser implements the required protocols as well as the basic HTML 5 requirements. So far, Microsoft has implemented their Take a Test feature in Windows 10 that satisfies secure browser requirements, and Smarter Balanced has released into open source a set of secure browsers for Windows, MacOS, iOS (iPad), Chrome OS (ChromeBook), Android, and Linux. Nevertheless, most testing companies continue to develop their own solutions.

Large Item Pools - An Alternative Approach

Could there be an alternative to all of this security effort? Deploying secure browsers on thousands of computers is expensive and inconvenient. Proctoring and social media policing cost a lot of time and money. And conspiracy theorists ask if the testing companies have something to hide in their tests.

Computerized-adaptive testing opens one possibility. If the pool of questions is big enough, the probability that a student encounters a question they have previously studied will be small enough that it won't significantly impact the test result. With a large enough pool, you could publish all questions for public review and still maintain a valid and rigorous test. I once asked a psychometrician how large the pool would have to be for this. He estimated about 200 questions in the pool for each one that appears on the test. Smarter Balanced presently uses a 20-to-one ratio. Another benefit of such a large item pool is that students can retake the test and still get a valid result.
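A quick back-of-the-envelope calculation shows why the pool ratio matters. This assumes items are drawn uniformly from the pool (adaptive selection changes the odds per student), so treat it as a rough bound rather than an operational exposure analysis.

```python
# If a test draws k items from a pool of N = ratio * k items, the chance
# that one specific previously exposed item shows up on a given student's
# test is roughly k/N = 1/ratio.
def exposure_probability(test_length, pool_ratio):
    pool_size = test_length * pool_ratio
    return test_length / pool_size  # simplifies to 1 / pool_ratio

for ratio in (20, 200):
    print(f"{ratio}:1 pool -> {exposure_probability(40, ratio):.1%} "
          "chance a given exposed item appears on a 40-item test")
# 20:1 -> 5.0%; 200:1 -> 0.5%
```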

Even with a large item pool, you would still need to use a secure browser and proctoring to prevent students from getting help from social media. That is, unless we can change incentives to the point that students are more interested in an accurate evaluation than they are in getting a top score.

Quality Factors

The goal of test security is to maintain the validity of test results; ensuring that students do not have access to questions in advance of the test and that they cannot obtain unauthorized assistance during the test. The following practices contribute to a valid and reliable test:

  • For computerized-adaptive tests, maintain a large item pool, thereby reducing the impact of any item exposure and potentially allowing for retakes.
  • For fixed-form tests, develop multiple forms. As with a large item pool, multiple forms let you switch forms in the event that an item is exposed and also allow for retakes.
  • For online tests, use secure browser technology to prevent unauthorized use of the computer during the test.
  • Monitor social media for people posting test content.
  • Have trained proctors monitor testing conditions.
  • Consider social changes, related to how test results are used, that would better align student motivation toward valid test results.

Wrapup

The purpose of Test Security is to ensure that test results are a valid measure of student skill and that they are comparable to other students' results on the same test. Current best practices include securing the browser, effective proctoring, and monitoring social media. Potential alternatives include larger test item banks and better alignment of student and institutional motivations.

05 October 2018

Quality Assessment Part 6: Achievement Levels and Standard Setting

This is part 6 of a 9-part series on building high-quality assessments.

Two mountains, one with a flag on top.

If you have a child in U.S. public school, chances are that they took a state achievement test this past spring and sometime this summer you received a report on how they performed on that test. That report probably looks something like this sample of a California Student Score Report. It shows that "Matthew" achieved a score of 2503 in English Language Arts/Literacy and 2530 in Mathematics. Both scores are described as "Standard Met (Level 3)". Notably, in prior years Matthew was in the "Standard Nearly Met" category so his performance has improved.

The California School Dashboard offers reports of school performance according to multiple factors. For example, the Detailed Report for Lake Matthews Elementary includes a graph of "Assessment Performance Results: Distance from Level 3".

Line graph showing performance of Lake Matthews Elementary on the English and Math tests for 2015, 2016, and 2017. In all three years, they score between 14 and 21 points above proficiency in math and between 22 and 40 points above proficiency in English.

To prepare this graph, they take the average difference between students' scale scores and the Level 3 standard for proficiency in the grade in which they were tested. For each grade and subject, California and Smarter Balanced use four achievement levels, each assigned to a range of scores. Here are the achievement levels for 5th grade Math (see this page for all ranges).

Level     Range                 Descriptor
Level 1   Less than 2455        Standard Not Met
Level 2   2455 to 2527          Standard Nearly Met
Level 3   2528 to 2578          Standard Met
Level 4   Greater than 2578     Standard Exceeded

So, for Matthew and his fellow 5th graders, the Math standard for proficiency, or "Level 3" score, is 2528. Students at Lake Matthews Elementary, on average, exceeded the Math standard by 14.4 points on the 2017 tests.
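For illustration, here is how those published cut scores translate into code. The score values come from the table above; the helper for the dashboard's "distance from Level 3" average is my own simplification of the computation described earlier.

```python
# Map a Grade 5 Math scale score to its achievement level, using the cut
# scores from the table above, and compute an average "distance from
# Level 3" for a group of students.
LEVEL3_CUT = 2528  # Grade 5 Math standard for proficiency

def achievement_level(scale_score):
    if scale_score < 2455:
        return "Level 1 (Standard Not Met)"
    if scale_score < 2528:
        return "Level 2 (Standard Nearly Met)"
    if scale_score <= 2578:
        return "Level 3 (Standard Met)"
    return "Level 4 (Standard Exceeded)"

def distance_from_level3(scale_scores):
    return sum(s - LEVEL3_CUT for s in scale_scores) / len(scale_scores)

print(achievement_level(2530))                   # Matthew's math score -> Level 3
print(distance_from_level3([2530, 2555, 2500]))  # average distance from 2528
```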

Clearly, there are serious consequences associated with the assignment of scores to achievement levels. A difference of 10-20 points can make the difference between a school, or student, meeting or failing to meet the standard. Changes in proficiency rates can affect allocation of federal Title 1 funds, the careers of school staff, and even the value of homes in local neighborhoods.

More importantly to me, achievement levels must be carefully set if they are to provide reliable guidance to students, parents, and educators.

Standard Setting

Standard Setting is the process of assigning test score ranges to achievement levels. A score value that separates one achievement level from another is called a cut score. The most important cut score is the one that distinguishes between proficient (meeting the standard) and not proficient (not meeting the standard). For the California Math test, and for Smarter Balanced, that's the "Level 3" score but different tests may have different achievement levels.

When Smarter Balanced performed its standard setting exercise in October of 2014, it used the Bookmark Method. Smarter Balanced had conducted a field test the previous spring (described in Part 4 of this series). From those field test results, they calculated a difficulty level for each test item and converted that into a scale score. For each grade, a selection of approximately 70 items was sorted from easiest to most difficult. This sorted list of items is called an Ordered Item Booklet (OIB) though, in the Smarter Balanced case, the items were presented online. A panel of experts, composed mostly of teachers, went through the OIB starting at the beginning (easiest item), and set a bookmark at the item they believed represented proficiency for that grade. A proficient student should be able to answer all preceding items correctly but might have trouble with the items that follow the bookmark.

There were multiple iterations of this process on each grade, and then the correlation from grade-to-grade was also reviewed. Panelists were given statistics on how many students in the field tests would be considered proficient at each proposed skill level. Following multiple review passes the group settled on the recommended cut scores for each grade. The Smarter Balanced Standard Setting Report describes the process in great detail.

Data Form

For each subject and grade, the standard setting process results in cut scores representing the division between achievement levels. The cut scores for Grade 5 Math, from the table above, are 2455, 2528, and 2579. Psychometricians also calculate the Highest Obtainable Scale Score (HOSS) and Lowest Obtainable Scale Score (LOSS) for the test.

I am not aware of any existing data format standard for achievement levels. Smarter Balanced publishes its achievement levels and cut scores on its web site. The Smarter Balanced test administration package format includes cut scores, HOSS, and LOSS, but not achievement level descriptors.

A data dictionary for publishing achievement levels would include the following elements:

Element                        Definition
Cut Score                      The lowest scale score included in a particular achievement level.
LOSS                           The lowest obtainable scale score that a student can achieve on the test.
HOSS                           The highest obtainable scale score that a student can achieve on the test.
Achievement Level Descriptor   A description of what an achievement level means. For example, "Met Standard" or "Exceeded Standard".

Quality Factors

The stakes are high for standard setting. Reliable cut scores for achievement levels ensure that students, parents, teachers, administrators, and policy makers receive appropriate guidance for high-stakes decisions. If the cut scores are wrong, many decisions may be ill-informed. Quality is achieved by following a good process:

  • Begin with a foundation of high quality achievement standards, test items that accurately measure the standards, and a reliable field test.
  • Form a standard-setting panel composed of experts and grade-level teachers.
  • Ensure that the panelists are familiar with the achievement standards that the assessment targets.
  • Inform the panel with statistics regarding actual student performance on the test items.
  • Follow a proven standard-setting process.
  • Publish the achievement levels and cut scores in convenient human-readable and machine-readable forms.

Wrapup

Student achievement rates affect policies at state and national levels, direct budgets, impact staffing decisions, influence real estate values, and much more. Setting achievement level cut scores too high may set unreasonable expectations for students. Setting them too low may offer an inappropriate sense of complacency. Regardless, achievement levels are set on a scale calibrated to achievement standards. If the standards for the skills to be learned are not well-designed, or if the tests don't really measure the standards, then no amount of work on the achievement level cut scores can compensate.

14 September 2018

Quality Assessment Part 5: Blueprints and Computerized-Adaptive Testing

This is part 5 of a 9-part series on building high-quality assessments.

Arrows in a tree formation.

Molly is a 6th grade student who is already behind in math. Near the end of the school year she takes her state's annual achievement tests in mathematics and English Language Arts. Already anxious when she sits down to the test, her fears are confirmed by the first question where she is asked to divide 3/5 by 7/8. Though they spent several days on this during the year, she doesn't recall how to divide one fraction by another. As she progresses through the test, she is able to answer a few questions but resorts to guessing on all too many. After twenty minutes of this she gives up and just guesses on the rest of the answers. When her test results are returned a month later she gets the same rating as three previous years, "Needs Improvement." Perpetually behind, she decides that she is, "Just not good at math."

Molly is fictional but she represents thousands of students across the U.S. and around the world.

Let's try another scenario. In this case, Molly is given a Computerized-Adaptive Test (CAT). When she gets the first question wrong, the testing engine picks an easier question which she knows how to answer. Gaining confidence, she applies herself to the next question, which she also knows how to answer. The system presents easier and harder questions as it works to pinpoint her skill level within a spectrum extending back to 4th grade and ahead to 8th grade. When her score report comes she has a scale score of 2505 which is below the 6th grade standard of 2552. The report shows her previous year's score of 2423 which was well below standard for Grade 5. The summary says that, while Molly is still behind, she has achieved significantly more than a year's progress in the past year of school; much like this example of a California report.

Computerized-Adaptive Testing

A fixed-form Item Response Theory test presents a set of questions at a variety of skill levels centered on the standard for proficiency for the grade or course. Such tests result in a scale score, which indicates the student's proficiency level, and a standard error which indicates a confidence level of the scale score. A simplified explanation is that the student's actual skill level should be within the range of the scale score plus or minus the standard error. Because a fixed-form test is optimized for the mean, the standard error is greater the further the student is from the target proficiency for that test.

Computerized Adaptive Tests (CAT) start with a large pool of assessment items. Smarter Balanced uses a pool of 1,200-1,800 items for a 40-item test. Each question is calibrated according to its difficulty within the range of the test. The test administration starts with a question near the middle of the range. From then on, the adaptive algorithm tracks the student's performance on prior items and then selects questions most likely to discover and increase confidence in the student's skill level.

A stage-adaptive or multistage test is similar except that groups of questions are selected together.

CAT tests have three important advantages over fixed-form:

  • The test can measure student skill across a wider range while maintaining a small standard error.
  • Fewer questions are required to assess the student's skill level.
  • Students may have a more rewarding experience as the testing engine offers more questions near their skill level.

When you combine more accurate results with a broader measured range and then use the same test family over time, you can reliably measure student growth over a period of time.

Test Blueprints

As I described in Part 2 and Part 3 of this series, each assessment item is designed to measure one or two specific skills. A test blueprint indicates what skills are to be measured in a particular test and how many items of which types should be used to measure each skill.

As an example, here's the blueprint for the Smarter Balanced Interim Assessment Block (IAB) for "Grade 3 Brief Writes":

Block 3: Brief Writes

Claim     Target                                    Items   Total Items
Writing   1a. Write Brief Texts (Narrative)         4       6
          3a. Write Brief Texts (Informational)     1
          6a. Write Brief Texts (Opinion)           1

This blueprint, for a relatively short fixed-form test, indicates a total of six items spread across one claim and three targets. For more examples, you can check out the Smarter Balanced Test Blueprints. The Summative Tests, which are used to measure achievement at the end of each year, have the most items and represent the broadest range of skills to be measured.

When developing a fixed-form test, the test producer will select a set of items that meets the requirements of the blueprint and represents an appropriate mix of difficulty levels.

For CAT tests it's more complicated. The test producer must select a much larger pool of items than will be presented to the student. A minimum is five to ten items in the pool for each item to be presented to the student. For summative tests, Smarter Balanced uses a ratio averaging around 25 to 1. These items should represent the skills to be measured in approximately the same ratios as they are represented in the blueprint. And they should represent difficulty levels across the range of skill to be measured. (Difficulty level is represented by the IRT b parameter of each item.)

As the student progresses through the test, the CAT Algorithm selects the next item to be presented. In doing so, it takes into account three factors: 1. Information it has determined about the student's skill level so far, 2. How much of the blueprint has been covered so far and what it has yet to cover, and 3. The pool of items it has to select from. From those criteria it selects an item that will advance coverage of the blueprint and will improve measurement of the student's skill level.
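A heavily simplified sketch of that selection step appears below. It picks the unused item, among targets the blueprint still needs, whose difficulty is closest to the current ability estimate. Operational engines instead maximize item information and apply exposure controls, and the item and blueprint structures shown here are hypothetical.

```python
# Hypothetical sketch of a CAT next-item selection step: filter to items
# not yet used whose target still has remaining blueprint coverage, then
# choose the one whose difficulty (IRT b) is closest to the current
# ability estimate (theta_hat).
def select_next_item(theta_hat, pool, used_ids, blueprint_remaining):
    candidates = [
        item for item in pool
        if item["id"] not in used_ids
        and blueprint_remaining.get(item["target"], 0) > 0
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda item: abs(item["b"] - theta_hat))

pool = [
    {"id": "A", "target": "ratios", "b": -1.0},
    {"id": "B", "target": "ratios", "b": 0.2},
    {"id": "C", "target": "expressions", "b": 0.5},
]
next_item = select_next_item(0.3, pool, used_ids={"A"},
                             blueprint_remaining={"ratios": 1, "expressions": 2})
print(next_item["id"])  # "B" — closest difficulty to the current estimate
```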

Data Form

To present a CAT assessment the test engine needs three sets of data:

  • The Test Blueprint
  • A Catalog of all items in the pool. The entry for each item must specify its alignment to the test blueprint (which is equivalent to its alignment to standards), and its IRT parameters.
  • The Test Items themselves.

Part 3 of this series describes formats for the items. The item metadata should include the alignment and IRT information. The manifest portion of IMS Content Packaging is one format for storing and transmitting item metadata.

To date, there is no standard or commonly-used data format for test blueprints. Smarter Balanced has published open specifications for its Assessment Packages. Of those, the Test Administration Package format includes the test blueprint and the item catalog. IMS CASE is designed for representing achievement standards but it may also be applicable to test blueprints.
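Absent a standard, here is one hypothetical shape such a package could take, covering the three data sets listed above. None of the field names correspond to an actual specification; they simply show blueprint rows (taken from the Brief Writes example above), a catalog entry with alignment and IRT parameters, and a reference to the item content.

```python
# Hypothetical, minimal package structure for a CAT engine: blueprint,
# item catalog (alignment + IRT parameters), and item content references.
test_package = {
    "blueprint": [
        {"claim": "Writing", "target": "1a. Write Brief Texts (Narrative)", "items": 4},
        {"claim": "Writing", "target": "3a. Write Brief Texts (Informational)", "items": 1},
        {"claim": "Writing", "target": "6a. Write Brief Texts (Opinion)", "items": 1},
    ],
    "catalog": [
        {"item_id": "200-1234", "claim": "Writing", "target": "1a",
         "irt": {"model": "3PL", "a": 0.9, "b": 0.1, "c": 0.2},
         "content_ref": "items/200-1234.xml"},
    ],
}
```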

IMS Global has formed an "IMS CAT Task Force" which is working on interoperable standards for Computerized Adaptive Testing. They anticipate releasing specifications later in 2018.

Quality Factors

A CAT Simulation is used to measure the quality of a Computerized Adaptive Test. These simulations use a set of a few thousand simulated students each assigned a particular skill level. The system then simulates each student taking the test. For each item, the item characteristic function is used to determine whether a student at that skill level is likely to answer correctly. The adaptive algorithm uses those results to determine which item to present next.

The results of the simulation show how well the CAT measured the skill levels of the simulated students by comparing the resulting test scores against the assigned skill levels. Results of a CAT simulation are used to ensure that the item pool has sufficient coverage, that the CAT algorithm satisfies the blueprint, and to find out which items get the most exposure. This feedback is used to tune the item pool and the configuration of the CAT algorithm to achieve optimal results across the simulated population of students.
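The core of such a simulation is the response-generation step. The sketch below assumes a three-parameter logistic model and made-up item parameters; a full simulation would feed these responses into the adaptive algorithm and then compare the resulting scores to the assigned skill levels.

```python
# Sketch of the response-simulation step: the 3PL item characteristic
# function gives the probability of a correct answer for a student of a
# given skill level (theta), and a random draw decides the response.
import math, random

def p_correct(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def simulate_response(theta, item, rng=random):
    return rng.random() < p_correct(theta, item["a"], item["b"], item["c"])

items = [{"a": 1.0, "b": -0.5, "c": 0.2}, {"a": 1.2, "b": 0.4, "c": 0.25}]
simulated_students = [-1.0, 0.0, 1.5]  # assigned (true) skill levels

for theta in simulated_students:
    responses = [simulate_response(theta, item) for item in items]
    print(theta, responses)
```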

To build a high-quality CAT assessment:

  • Build a large item pool with items of difficulty levels spanning the range to be measured.
  • Design a test blueprint that focuses on the skills to be measured and correlates with the overall score and the subscores to be reported.
  • Ensure that the adaptive algorithm effectively covers the blueprint and also focuses in on each student's skill level.
  • Perform CAT simulations to tune the effectiveness of the item pool, blueprint, and CAT algorithm.

Wrapup

Computerized adaptive testing offers significant benefits to students by delivering more accurate measures with a shorter, more satisfying test. CAT is best suited to larger tests with 35 or more questions spread across a broad blueprint. Shorter tests, focused on mastery of one or two specific skills, may be better served by conventional fixed-form tests.

01 September 2018

Quality Assessment Part 4: Item Response Theory, Field Testing, and Metadata

This is part 4 of a 9-part series on building high-quality assessments.

Drafting tools - triangle, compass, ruler.

Consider a math quiz with the following two items:

Item A:

x = 5 - 2 What is the value of x?

Item B:

x² - 6x + 9 = 0 What is the value of x?

George gets item A correct but gets the wrong answer for item B. Sally has the wrong answer for A but answers B correctly. Using traditional scoring, George and Sally each get 50%.

A more sophisticated quiz might assign 2 points to item A and 6 points to item B (recognizing that B is harder than A). Under such a scoring system, George would get 25% and Sally would get 75%.

But the score is still short on meaning. George scored 25% of what? Sally scored 75% of what?

An even more sophisticated model should acknowledge that knowing how to solve quadratics (item B) is evidence that the student can also perform subtraction (item A). Such a model would position George somewhere between first grade (single-digit subtraction) and High School (solving quadratics). That same model would indicate that Sally either guessed correctly on item B or made a mistake on item A that's not representative of her skill. Due to the conflicting evidence, we are less sure about Sally's skill level than George's. For both students, more items would be required to gain greater confidence in their skill levels.

Item Response Theory

Item Response Theory or IRT is a statistical method for describing how student performance on assessment items relates to their skill in the area the item was designed to measure.

The "three parameter logistic model" (3PL) for IRT describes the probability that a student of a certain skill level will answer the item correctly. Student proficiency is represented by θ (theta) and the three item parameters are a, b, and c. They represent the following factors:

  • a = Discrimination. This value indicates how well the item discriminates between proficient students and those who have not yet learned this skill.
  • b = Difficulty. This value indicates how difficult an item is for the student to answer correctly.
  • c = Guessing. The probability that a student might guess the correct response. For a four-option multiple-choice question, this would be 0.25 because the student has a one-in-four chance of guessing the right answer.

From these parameters we can create an item characteristic curve. The formula is as follows:

formula: p = c + (1 - c) / (1 + e^(-a(θ - b)))

This is much easier to understand in graph form. So I loaded it into the Desmos graphing calculator.

The vertical (y) axis indicates the probability that a student will answer the item correctly. The horizontal (x) axis is student proficiency (represented by θ in the equation). You can move the sliders to change the a, b, and c parameters and see how different items would be represented in an item characteristic curve.

In addition to this "three-parameter" model, there are other IRT models, but they all follow the same basic premise: the function represents the probability that a student of a given skill level (represented by θ, theta) will answer the question correctly. At least one parameter of the function represents the difficulty of the question. For items scored on a multi-point scale, there are difficulty parameters (typically d1, d2, etc.) representing the difficulty thresholds for each point value.
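For readers who prefer code to sliders, here is the same three-parameter function as a small Python sketch; the parameter values are made up for illustration.

    import math

    def p_correct(theta, a, b, c):
        # 3PL item characteristic function: probability that a student
        # of ability theta answers this item correctly.
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    # An illustrative item: moderate discrimination, slightly harder than
    # average, four-option multiple choice (so c = 0.25).
    for theta in (-2, -1, 0, 1, 2):
        print(theta, round(p_correct(theta, a=1.0, b=0.5, c=0.25), 3))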

Scale Scores

The difficulty parameter b and the student skill value θ are on the same logistic scale, centered on the skill level being measured. For example, if an item is written for grade 5 math, a b parameter of 0 means that the average 5th grade student should be able to answer the question correctly 50% of the time.

Most assessments convert from this theta score into a scale score which is a consistent score reported to educators, students, and parents. For Smarter Balanced, the scale score ranges from 2000 to 3000 and represents skill levels from Kindergarten to High School Graduation. Theta scores are converted to scale scores using a polynomial function.
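The conversion itself is mechanically simple. As a purely illustrative sketch (the coefficients below are invented; they are not the Smarter Balanced values):

    def scale_score(theta, coefficients=(2500.0, 220.0)):
        # Convert a theta score to a reporting scale score by evaluating a
        # polynomial; these default (linear) coefficients are hypothetical.
        return sum(c * theta ** i for i, c in enumerate(coefficients))

    print(scale_score(0.0))   # 2500.0 with the made-up defaults
    print(scale_score(1.5))   # 2830.0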

Field Testing

So how do we come up with the a, b, and c parameters for a particular item? Based on the item type and potential responses we can predict c (guessing) fairly well, but our experience at Smarter Balanced has shown that authors are not very good at predicting b (difficulty) or a (discrimination). To get an objective measure of these values we use a field test.

In Spring 2014 Smarter Balanced held a field test in which 4.2 million students completed a test - typically in either English Language Arts or Mathematics. Some students took both. For the participating schools and students, this was a practice test - gaining experience in administering and taking tests. Since the items were not yet calibrated, we could not reliably score the tests. For Smarter Balanced it offered critical data on more than 19,000 test items. For each item we gained more than 10,000 scored responses from students representing the target grades across all demographics.

Psychometricians used these field test data to calculate the parameters (a, b, and c) for each item. The process of calculating IRT parameters from field test data is called calibration. Once items were calibrated, we examined the parameters and the data to determine which items should be approved for use in tests. For example, if a is too low then the question likely has a flaw: it may not measure the right skill, or the answer key may be incorrect. Likewise, if the b parameter differs across demographic groups, then the item may be sensitive to gender, cultural, or ethnic bias. Items from the field test that met statistical standards were approved and became the initial bank of items from which Smarter Balanced produces tests.
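Calibration is an exercise in statistical estimation. As a heavily simplified sketch of the idea (real calibration jointly estimates student abilities and item parameters using specialized psychometric software), the following fits a and b for a single item by maximum likelihood, assuming provisional ability estimates for the responding students are already available and holding the guessing parameter fixed. Only scipy is assumed as a dependency.

    import math
    from scipy.optimize import minimize

    def calibrate_item(thetas, corrects, c=0.25):
        # Estimate a (discrimination) and b (difficulty) for one item from
        # field-test responses, holding the guessing parameter c fixed.
        #   thetas:   provisional ability estimates, one per student
        #   corrects: 1 if that student answered correctly, else 0
        def neg_log_lik(params):
            a, b = params
            total = 0.0
            for theta, y in zip(thetas, corrects):
                z = max(min(-a * (theta - b), 50.0), -50.0)   # keep exp in a safe range
                p = c + (1 - c) / (1 + math.exp(z))
                p = min(max(p, 1e-9), 1 - 1e-9)               # guard against log(0)
                total += math.log(p) if y else math.log(1 - p)
            return -total
        result = minimize(neg_log_lik, x0=[1.0, 0.0], method="Nelder-Mead")
        return result.x   # estimated [a, b]

In practice, programs use dedicated IRT software and marginal maximum likelihood rather than a toy estimator like this, but the underlying question is the same: which parameter values make the observed responses most likely?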

Each year Smarter Balanced does an embedded field test. Each test that a student takes includes a few new "field test" items. These items do not contribute to the student's test score. Rather, the students' scored responses are used to calibrate the items. This way the test item bank is constantly renewed. Other testing programs, such as the ACT and SAT, follow the same practice of embedding field test questions in regular tests.

To understand more about IRT, I recommend A Simple Guide to IRT and Rasch by Ho Yu.

Item Metadata

The IRT parameters, alignment to standards, and other critical information are collected as metadata about each item. In most cases, metadata is represented as a set of name-value pairs. There are many formats for representing metadata and also many dictionaries of field definitions. Smarter Balanced uses the metadata structure from IMS Content Packaging and draws field definitions from The Learning Resource Metadata Initiative (LRMI), from Schema.org, and from Common Education Data Standards (CEDS).

Here are some of the most critical metadata elements for assessment items with links to their definitions in those standards:

  • Identifier: A number that uniquely identifies this item.
  • PrimaryStandard: An identifier of the principal skill the item is intended to measure. The skill would be described in an Achievement Standard or Content Specification.
  • SecondaryStandard: Optional identifiers of additional Achievement Standards or Content Specifications that the item measures.
  • InteractionType: The type of interaction (multiple choice, matching, short answer, essay, etc.).
  • IRT Parameters: The a, b, and c parameters or another parameter set for the Item Response Theory function.
  • History: A record of when and how the item has been used to estimate how much it has been exposed.
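Put together, a minimal metadata record might look like the sketch below. The field names mirror the list above, but the exact element names and vocabularies come from whichever standard (IMS Content Packaging, LRMI, CEDS) the item bank adopts, so treat this as illustrative only.

    # A hypothetical metadata record for one assessment item.
    item_metadata = {
        "Identifier": "item-200-13457",
        "PrimaryStandard": "CCSS.MATH.CONTENT.5.NBT.B.7",    # example standard identifier
        "SecondaryStandard": ["CCSS.MATH.PRACTICE.MP1"],
        "InteractionType": "multipleChoice",
        "IRTParameters": {"model": "3PL", "a": 0.92, "b": -0.31, "c": 0.25},
        "History": [{"administration": "2017-18", "use": "embedded field test"}],
    }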

Quality Factors

States, schools, assessment consortia, and assessment companies all maintain banks of assessment items from which they construct their assessments. There are a number of efforts underway to pool resources from multiple entities into large, joint item banks. The value of items in any such bank is multiplied tenfold if the items have consistent and reliable metadata regarding alignment to standards and IRT parameters.

Here are factors to consider related to IRT Calibration and Metadata:

  • Are all items field-tested and calibrated before they are used in an operational test?
  • Is alignment to standards and content specifications an integral part of item writing?
  • Are the identifiers used to record alignment consistent across the entire item bank?
  • Is field testing an integral part of the assessment design?
  • Are IRT parameters consistent and comparable across the entire bank?
  • When sharing items or an item bank across multiple organizations, do all participants agree to contribute data (field testing and operational use) back to the bank?

Wrapup

Field testing can be expensive, inconvenient, or both. But without actual data from student performance we have no objective evidence that a particular assessment item measures what it's intended to measure at the expected level of difficulty.

The challenges around field testing, combined with the lack of training in IRT and related psychometrics, have kept these measures from being used in anything other than large-scale, high-stakes tests. Nevertheless, it's concerning to me that final exams and midterms of great consequence are rarely, if ever, calibrated and validated. Greater collaboration among institutions, among curriculum developers, or both could achieve sufficient scale for calibrated tests to become more common.

23 August 2018

Quality Assessment Part 3: Items and Item Specifications

This is part 3 of a 9-part series on building high-quality assessments.

Transparent cylindrical vessel with wires leading to an electric spark inside.

Some years ago I remember reading my middle school science textbook. The book was attempting to describe the difference between a mixture and a compound. It explained that water is a compound of two parts hydrogen and one part oxygen. However, if you mix two parts hydrogen and one part oxygen in a container, you will simply have a container with a mixture of the two gasses; they will not spontaneously combine to form water.

So far, so good. Next, the book said that if you introduced an electric spark in the mixed gasses you would, "start to see drops of water appear on the inside surface of the container as the gasses react to form water." This was accompanied by an image of a container with wires and an electric spark.

I suppose the book was technically correct; that is what would happen if the container was strong enough to contain the violent explosion. But, even as a middle school student, I wondered how the dangerously misleading passage got written and how it survived the review process.

The writing and review of assessments requires at least as much rigor as writing textbooks. An error on an assessment item affects the evaluation of all students who take the test.

Items

In the parlance of the assessment industry, test questions are called items. The latter term is intended to include more complex interactions than just answering questions.

Stimuli and Performance Tasks

Oftentimes, an item is based on a stimulus or passage that sets up the question. It may be an article, short story, or description of a math or science problem. The stimulus is usually associated with three to five items. When presented by computer, the stimulus and the associated items usually appear together on a split screen so that the student can refer to the stimulus while responding to the items.

Sometimes, item authors will write the stimulus; this is frequently the case for mathematics stimuli as they set up a story problem. But the best items draw on professionally-written passages. To facilitate this, the Copyright Clearance Center has set up the Student Assessment License as a means to license copyrighted materials for use in student assessment.

A performance task is a larger-scale activity intended to allow the student to demonstrate a set of related skills. Typically, it begins with a stimulus followed by a set of ordered items. The items build on each other usually finishing with an essay that asks the student to draw conclusions from the available information. For Smarter Balanced this pattern (stimulus, multiple items, essay) is consistent across English Language Arts and Mathematics.

Prompt or Stem

The prompt, sometimes called a stem, is the request for the student to do something. A prompt might be as simple as, "What is the sum of 24 and 62?" Or it might be as complex as, "Write an essay comparing the views of the philosophers Voltaire and Kant regarding enlightenment. Include quotes from each that relate to your argument." Regardless, the prompt must provide the required information and clearly describe both what the student is to do and how they are to express their response.

Interaction or Response Types

The response is a student's answer to the prompt. Two general categories of items are selected response and constructed response. Selected response items require the student to select one or more alternatives from a set of pre-composed responses. Multiple choice is the most common selected response type but others include multi-select (in which more than one response may be correct), matching, true/false, and others.

Multiple choice items are particularly popular due to the ease of recording and scoring student responses. For multiple choice items, alternatives are the responses that a student may select from, distractors are the incorrect responses, and the answer is the correct response.

The most common constructed response item types are short answer and essay. In each case, the student is expected to write their answer. The difference is the length of the answer; short answer is usually a word or phrase while essay is a composition of multiple sentences or paragraphs. A variation of short answer may have a student enter a mathematical formula. Constructed responses may also have students plot information on a graph or arrange objects into a particular configuration.

Technology-Enhanced items are another commonly-used category. These items are delivered by computer and include simulations, composition tools, and other creative interactions. However, all technology-enhanced items can still be categorized as either selected response or constructed response.

Scoring Methods

There are two general ways of scoring items, deterministic scoring and probabilistic scoring.

Deterministic scoring is indicated when a student's response may be unequivocally determined to be correct or incorrect. When a response is scored on multiple factors there may be partial credit for the factors the student addressed correctly. Deterministic scoring is most often associated with selected response items but many constructed response items may also be deterministically scored when the factors of correctness are sufficiently precise; such as a numeric answer or a single word for a fill-in-the-blank question. When answers are collected by computer or are easily entered into a computer, deterministic scoring is almost always done by computer.
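As a small illustration of deterministic scoring with partial credit, consider a multi-select item scored one point per correct selection with a penalty for each incorrect selection; the policy here is invented for the example, and real programs define their own.

    def score_multiselect(selected, answer_key, penalty_per_wrong=1):
        # Deterministic partial-credit scoring for a multi-select item:
        # one point per correct selection, minus a penalty per incorrect
        # selection, floored at zero.
        correct = len(set(selected) & set(answer_key))
        wrong = len(set(selected) - set(answer_key))
        return max(0, correct - penalty_per_wrong * wrong)

    print(score_multiselect({"A", "C"}, answer_key={"A", "C"}))   # 2
    print(score_multiselect({"A", "D"}, answer_key={"A", "C"}))   # 1 right, 1 wrong -> 0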

Probabilistic scoring is indicated when the quality of a student's answer must be judged on a scale. This is most often associated with essay type questions but may also apply to other constructed response forms. When handled well, a probabilistic score may include a confidence level — how confident is the scoring person or system that the score is correct.

Probabilistic scoring may be done by humans (e.g. judging the quality of an essay) or by computer. When done by computer, Artificial Intelligence techniques are frequently used with different degrees of reliability depending on the question type and the quality of the AI.

Answer Keys and Rubrics

The answer key is the information needed to score a selected-response item. For multiple choice questions, it's simply the letter of the correct answer. A machine scoring key or machine rubric is an answer key coded in such a way that a computer can perform the scoring.

The rubric is a scoring guide used to evaluate the quality of student responses. For constructed response items the rubric will indicate which factors should be evaluated in the response and what scores should be assigned to each factor. Selected response items may also have a rubric which, in addition to indicating which response is correct, would also give an explanation about why that response is correct and why each distractor is incorrect.

Item Specifications

An item specification describes the skills to be measured and the interaction type to be used. It serves as both a template and a guide for item authors.

The skills should be expressed as references to the Content Specification and associated Competency Standards (see Part 2 of this series). A consistent identifier scheme for the Content Specification and Standards greatly facilitates this. However, to assist item authors, the specification often quotes relevant parts of the specification and standards verbatim.

If the item requires a stimulus, the specification should describe the nature of the stimulus. For ELA, that would include the type of passage (article, short-story, essay, etc.), the length, and the reading difficulty or text complexity level. In mathematics, the stimulus might include a diagram for Geometry, a graph for data analysis, or a story problem.

The task model describes the structure of the prompt and the interaction type the student will use to compose their response. For a multiple choice item, the task model would indicate the type of question to be posed, sometimes with sample text. That would be followed by the number of multiple choice options to be presented, the structure for the correct answer, and guidelines for composing appropriate distractors. Task models for constructed response would include the types of information to be provided and how the student should express their response.

The item specification concludes with guidelines about how the item will be scored including how to compose the rubric and scoring key. The rubric and scoring key focus on what evidence is required to demonstrate the student's skill and how that evidence is detected.

Smarter Balanced content specifications include references to the Depth of Knowledge that should be measured by the item, and guidelines on how to make the items accessible to students with disabilities. Smarter Balanced also publishes specifications for full performance tasks.

Data Form for Item Specifications

Like Content Specifications, Item Specifications have traditionally been published in document form. When offered online they are typically in PDF format. As with content specifications, there are great benefits to be achieved by publishing item specs in a structured data form. Doing so can integrate the item specification into the item authoring system, presenting a template for the item with pre-filled content-specification alignment metadata, a pre-selected interaction type, and guidelines about the stimulus and prompt alongside the places where the author is to fill in the information.
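A structured item specification might look something like the sketch below. The field names are invented for illustration and deliberately do not follow the IMS CASE data model mentioned next; the point is simply that every element an author needs becomes machine-readable.

    # A hypothetical item specification rendered as structured data.
    item_specification = {
        "content_alignment": "MATH.G5.C1.TA",    # content-specification target
        "depth_of_knowledge": 2,
        "stimulus": "Story problem involving addition of two decimals to hundredths.",
        "task_model": {
            "interaction_type": "multipleChoice",
            "option_count": 4,
            "prompt_guidelines": "Ask for the total; keep the reading load low.",
            "distractor_guidelines": "Base distractors on place-value misconceptions.",
        },
        "scoring": {
            "method": "deterministic",
            "rubric_notes": "Key is the correct sum; explain why each distractor is wrong.",
        },
    }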

Smarter Balanced has selected the IMS CASE format for publishing item specifications in structured form. This is the same data format we used for the content specifications.

Data Form for Items

The only standardized format for assessment items in general use is IMS Question and Test Interoperability (QTI). It's a large standard with many features. Some organizations have chosen to implement a custom subset of QTI features known as a "profile." The soon-to-be-released QTI 3.0 aims to reduce divergence among profiles.

A few organizations, including Smarter Balanced and CoreSpring, have been collaborating on the Portable Interactions and Elements (PIE) concept. This is a framework for packaging custom interaction types using Web Components. If successful, this will simplify the player software and support publishing of custom interaction types.

Quality Factors

A good item specification will likely be much longer than the items it describes. As a result, producing an item specification also takes considerably more work than writing any single item. But, since each item specification will result in dozens or hundreds of items, the effort of writing good item specifications pays huge dividends in terms of the quality of the resulting assessment.

  • Start with good quality standards and content specifications.
  • Create task models that are authentic to the skills being measured. The task that the student is asked to perform should be as similar as possible to how they would manifest the measured skill in the real world.
  • Choose or write high-quality stimuli. For language arts items, the stimulus should demand the skills being measured. For non-language-arts items, the stimulus should be clear and concise so as to reduce sensitivity to student reading skill level.
  • Choose or create interaction types that are inherently accessible to students with disabilities.
  • Ensure that the correct answer is clear and unambiguous to a person who possesses the skills being measured.
  • Train item authors in the process of item writing. Sensitize them to common pitfalls such as using terms that may not be familiar to students of diverse ethnic backgrounds.
  • Use copy editors to ensure that language use is consistent, parallel in structure, and that expectations are clear.
  • Develop a review, feedback, and revision process for items before they are accepted.
  • Write specific quality criteria for reviewing items. Set up a review process in which reviewers apply the quality criteria and evaluate the match to the item specification.

Wrapup

Most tests and quizzes we take, whether in K-12 or college, are composed one question at a time based on the skills taught in the previous unit or course. Item specifications are rarely developed or consulted in these conditions and even the learning objectives may be somewhat vague. Furthermore, there is little third-party review of such assessments. Considering the effort students go through to prepare for and take an exam, not to mention the consequences associated with their performance on those exams, it seems like institutions should do a better job.

Starting from an item specification is easier than writing an item from scratch, and it produces better results. The challenge is producing the item specifications themselves, which is quite demanding. Just as achievement standards are developed at state or multi-state scale, so also could item specifications be jointly developed and shared broadly. As shown in the links above, Smarter Balanced has published its item specifications and many other organizations do the same. Developing and sharing item specifications will result in better quality assessments at all levels from daily quizzes to annual achievement tests.