Of That

Brandt Redd on Education, Technology, Energy, and Trust

16 October 2018

Quality Assessment Part 7: Securing the Test

This is part 7 of a 10-part series on building high-quality assessments.

[Image: A shield]

Each spring, millions of students in the United States take their annual achievement tests. Despite proctoring, some fraction of those students carry in a phone or some other sort of camera, take pictures of test questions, and post them on social media. Concurrently, testing companies hire a few hundred people to scan social media sites for inappropriately shared test content and send takedown notices to site operators.

Proctoring, secure browsers, and scanning social media sites are parts of a multifaceted effort to secure tests from inappropriate access. If students have prior access to test content, the theory goes, then they will memorize answers to questions rather than study the principles of the subject. The high-stakes nature of the tests creates incentive for cheating.

Secure Browsers

Most computer-administered tests today are given over the world-wide web. But if students were given unfettered access to the web, or even to their local computer, they could look up answers online, share screen-captures of test questions, access an unauthorized calculator, share answers using chats, or even videoconference with someone who can help with the test. To prevent this, test delivery providers use a secure browser, also known as a lockdown browser. Such a browser is configured so it will only access the designated testing website and it takes over the computer - preventing access to other applications for the duration of the test. It also checks to ensure that no unauthorized applications are already running, such as screen grabbers or conferencing software.
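As a rough illustration of that environment check (a minimal sketch, not any vendor's actual implementation), a lockdown browser might scan the running processes and refuse to start a test while known capture or conferencing tools are open. The process names and the use of the psutil library below are illustrative assumptions.

    import sys
    import psutil  # third-party library: pip install psutil

    # Hypothetical names of applications that must not run during a test.
    BLOCKED_PROCESSES = {"obs", "zoom", "teams", "snippingtool"}

    def unauthorized_processes():
        """Return names of running processes that are not allowed during a test."""
        found = set()
        for proc in psutil.process_iter(["name"]):
            name = (proc.info.get("name") or "").lower()
            if any(blocked in name for blocked in BLOCKED_PROCESSES):
                found.add(name)
        return found

    if __name__ == "__main__":
        offenders = unauthorized_processes()
        if offenders:
            print("Cannot start the test; close these applications first:",
                  ", ".join(sorted(offenders)))
            sys.exit(1)
        print("Environment check passed.")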

Secure browsers are inherently difficult to build and maintain. That's because operating systems are designed to support multiple concurrent applications and to support convenient switching among applications. In one case, the operating system vendor added a dictionary feature — users could tap any word on the screen and get a dictionary definition of that word. This, of course, interfered with vocabulary-related questions on the test. In this, and many other cases, testing companies have had to work directly with operating system manufacturers to develop special features required to enable secure browsing.

Secure browsers must communicate with testing servers. The server must detect that a secure browser is in use before delivering a test and it also supplies the secure browser with lists of authorized applications that can be run concurrently (such as assistive technology). To date, most testing services develop their own secure browsers. So, if a school or district uses tests from multiple vendors, they must install multiple secure browsers.
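The sketch below shows the general shape of that server-side check, under stated assumptions: the header name, browser registry, and Flask framing are hypothetical, and the actual Universal Secure Browser Protocol defines its own handshake. The idea is simply that the server refuses to deliver a test unless a recognized secure browser identifies itself, and then returns the allowlist of applications permitted to run concurrently.

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    # Hypothetical registry of secure-browser identifiers the server accepts.
    RECOGNIZED_SECURE_BROWSERS = {"ExampleSecureBrowser/2.0"}

    # Hypothetical allowlist of applications permitted alongside the test,
    # such as assistive technology.
    AUTHORIZED_APPLICATIONS = ["ExampleScreenReader", "ExampleMagnifier"]

    @app.route("/test-session", methods=["POST"])
    def start_test_session():
        browser_id = request.headers.get("X-Secure-Browser-Id", "")
        if browser_id not in RECOGNIZED_SECURE_BROWSERS:
            abort(403, description="A recognized secure browser is required to start a test.")
        return jsonify({
            "session": "started",
            "authorizedApplications": AUTHORIZED_APPLICATIONS,
        })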

To encourage a more universal solution, Smarter Balanced commissioned a Universal Secure Browser Protocol that allows browsers and servers from different companies to work together effectively. They also commissioned, and host, a Browser Implementation Readiness Test (BIRT) that can be used to verify that a browser implements the required protocols as well as basic HTML 5 requirements. So far, Microsoft has implemented its Take a Test feature in Windows 10, which satisfies the secure browser requirements, and Smarter Balanced has released into open source a set of secure browsers for Windows, MacOS, iOS (iPad), Chrome OS (ChromeBook), Android, and Linux. Nevertheless, most testing companies continue to develop their own solutions.

Large Item Pools - An Alternative Approach

Could there be an alternative to all of this security effort? Deploying secure browsers on thousands of computers is expensive and inconvenient. Proctoring and social media policing cost a lot of time and money. And conspiracy theorists ask if the testing companies have something to hide in their tests.

Computerized-adaptive testing opens one possibility. If the pool of questions is big enough, the probability that a student encounters a question they have previously studied will be small enough that it won't significantly impact the test result. With a large enough pool, you could publish all questions for public review and still maintain a valid and rigorous test. I once asked a psychometrician how large the pool would have to be for this. He estimated about 200 questions in the pool for each one that appears on the test. Smarter Balanced presently uses a 20-to-one ratio. Another benefit of such a large item pool is that students can retake the test and still get a valid result.
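A quick back-of-the-envelope calculation shows why the pool ratio matters. Assuming items are drawn uniformly at random from the pool (a simplification), a student who has previously seen m items out of a pool of N can expect about k * m / N of them on a k-item test. The test length and number of leaked items below are made-up figures for illustration.

    def expected_familiar_items(pool_size: int, test_length: int, items_seen: int) -> float:
        """Expected number of previously seen items on a test drawn uniformly from the pool."""
        return test_length * items_seen / pool_size

    # Example: a 40-item test where a student has somehow studied 50 leaked items.
    # At a 20:1 pool ratio (800 items) the expectation is 2.5 familiar items;
    # at a 200:1 ratio (8,000 items) it drops to 0.25.
    for ratio in (20, 200):
        pool_size = 40 * ratio
        print(f"{ratio}:1 pool ->", expected_familiar_items(pool_size, 40, 50),
              "expected familiar items")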

Even with a large item pool, you would still need to use a secure browser and proctoring to prevent students from getting help from social media. That is, unless we can change incentives to the point that students are more interested in an accurate evaluation than they are in getting a top score.

Quality Factors

The goal of test security is to maintain the validity of test results by ensuring that students do not have access to questions in advance of the test and that they cannot obtain unauthorized assistance during the test. The following practices contribute to a valid and reliable test:

  • For computerized-adaptive tests, have a large item pool, thereby reducing the impact of any item exposure and potentially allowing for retakes.
  • For fixed-form tests, develop multiple forms. As with a large item pool, multiple forms let you switch forms in the event that an item is exposed and also allow for retakes.
  • For online tests, use secure browser technology to prevent unauthorized use of the computer during the test.
  • Monitor social media for people posting test content.
  • Have trained proctors monitor testing conditions.
  • Consider social changes, related to how test results are used, that would better align student motivation toward valid test results.

Wrapup

The purpose of Test Security is to ensure that test results are a valid measure of student skill and that they are comparable to other students' results on the same test. Current best practices include securing the browser, effective proctoring, and monitoring social media. Potential alternatives include larger test item banks and better alignment of student and institutional motivations.
