Of That

Brandt Redd on Education, Technology, Energy, and Trust

06 November 2018

Quality Assessment Part 8: Test Reports

This is part 8 of a 10-part series on building high-quality assessments.


Since pretty much the first Tour de France, cyclists have assumed that narrow tires and higher pressure would make for a faster bike. As tire technology improved to handle higher pressures in narrower casings, the consensus standard became a 23 mm width at 115 psi. That standard held for decades, despite science that said otherwise.

Doing the math indicates that a wider tire has a shorter footprint, and a shorter footprint loses less energy to bumps in the road. The math was confirmed in laboratory tests, and the automotive industry has applied this knowledge for a long time. But tradition held in the Tour de France and other bicycle races until a couple of teams began experimenting with wider tires. In 2012, Velonews published a laboratory comparison of tire widths, and by 2018 the average width had moved up to 25 mm, with some riders going as wide as 30 mm.

While laboratory tests still confirm that higher pressure results in lower rolling resistance, high pressure also makes for a rougher ride and greater rider fatigue. So teams are also experimenting with lower pressures adapted to the terrain being ridden, and they are finding that the optimum pressure isn't necessarily the highest the tire material can withstand.

You can build the best and most accurate student assessment ever. You can administer it properly with the right conditions. But if no one pays attention to the results, or if the reports don't influence educational decisions, then all of that effort will be for naught. Even worse, correct data may be interpreted in misleading ways. Like the tire width data, the information may be there but it still must be applied.

Reporting Test Results

Assuming you have reliable test results (the subjects of the preceding parts in this series), there are four key elements that must be applied before student learning will improve:

  • Delivery: Students, parents, and educators must be able to access the test data.
  • Explanation: They must be able to interpret the data — understand what it means.
  • Application: The student, and those advising the student, must be able to make informed decisions about learning activities based on assessment results.
  • Integration: Educators should correlate the test results with other information they have about the student.


Most online assessment systems are paired with online reporting systems. Administrators are able to see reports for districts, schools, and grades, sifting and sorting the data by demographic group. These reports may be used to hold institutions accountable and to direct Title I funds. Parents and other interested parties can access public reports like this one for California containing similar information.

Proper interpretation of individual student reports has greater potential to improve learning than the school, district, and state-level reports. Teachers have access to reports for students in their classes and parents receive reports for their children at least once a year. But teachers may not be trained to apply the data, or parents may not know how to interpret the test results.

Part of delivery is designing reports so that the information is clear and the correct interpretation is the most natural one. The design that seems obvious to experts in the field, well-versed in statistical methods, may not be the best one for parents and teachers.

The best reports are designed using a lot of consumer feedback. The designers use focus groups and usability tests to find out what works best. In a typical trial, a parent or educator would be given a sample report and asked to interpret it. The degree to which they match the desired interpretation is an evaluation of the quality of the report.


Even the best-designed reports will likely benefit from an interpretation guide. A good example is the Online Reporting Guide deployed by four western states. The individual student reports in these states are delivered to parents on paper. But the online guide provides interpretation and guidance to parents that would be hard to achieve in paper form.

Online reports should be rich with explanations, links, tooltips, and other tools to help users understand what each element means and how it should be interpreted. Graphs and charts should be well-labeled and designed as a natural representation of the underlying data.

An important advantage of online reporting is that it can facilitate exploration of the data. For example, a teacher viewing an online report of an interim test might notice that a cluster of students all got lower scores. Clicking on the scores reveals a more detailed chart showing how those students performed on each question. The teacher might see that the students in the cluster all missed the same question. From there, they could examine the students' responses to that question to gain insight into their misunderstanding. Done properly, such an analysis takes only a few minutes and could inform a future review period.
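The drill-down described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration — the student names, scores, and threshold are invented, and real reporting systems work against far richer data — but it shows the two steps: flag a low-scoring cluster, then find the question(s) that everyone in the cluster missed.

```python
# Hypothetical interim-test results: 1 = correct, 0 = incorrect per question.
# All names and data are invented for illustration.
responses = {
    "Ana":  [1, 1, 0, 1, 1],
    "Ben":  [1, 0, 0, 1, 0],
    "Cara": [1, 0, 1, 0, 1],
    "Dev":  [1, 1, 1, 1, 1],
    "Elle": [1, 0, 0, 1, 0],
}

def low_scorers(responses, threshold=0.7):
    """Students whose fraction of correct answers falls below the threshold."""
    return [s for s, r in responses.items() if sum(r) / len(r) < threshold]

def commonly_missed(responses, students):
    """Question indices that every student in the group answered incorrectly."""
    num_questions = len(next(iter(responses.values())))
    return [q for q in range(num_questions)
            if all(responses[s][q] == 0 for s in students)]

cluster = low_scorers(responses)
missed = commonly_missed(responses, cluster)
print("Cluster:", cluster)          # the low-scoring group
print("Commonly missed:", missed)   # question(s) to revisit in review
```

In this toy data, three students form the low-scoring cluster and all of them missed the same question, which would be the natural target for a review period.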


Ultimately, all of this effort should result in good decisions made by the student and by others on their behalf. Closing the feedback loop in this way consistently results in improved student learning.

In part 2 of this series, I wrote that assessment design starts with a set of defined skills, also known as competencies or learning objectives. This alignment can facilitate guided application of test results. When test questions are aligned to the same skills as the curriculum, then students and educators can easily locate the learning resources that are best suited to student needs.


The best schools and teachers use multiple measures of student performance to inform their educational decisions. In an ideal scenario, all measures (test results, homework, attendance, projects, etc.) would be integrated into a single dashboard. Organizations like the Ed-Fi Alliance are pursuing this, but it's proving to be quite a challenge.

An intermediate goal is for the measures to be reported in consistent ways. For example, measures related to student skill should be correlated to the state standards. This will help teachers find correlations (or lack thereof) between the different measures.

Quality Factors

  • Make the reports, or the reporting system, available and convenient for students, parents, and educators to use.
  • Ensure that reports are easy to understand and that they naturally lead to the right interpretations. Use focus groups and usability testing to refine the reports.
  • Actively connect test results to learning resources.
  • Support integration of multiple measures.


Every educational program, activity, or material should be considered in terms of its impact on student learning. Effective reporting that informs educational decisions makes the considerable investment in developing and administering a test worthwhile.