Of That

Brandt Redd on Education, Technology, Energy, and Trust

18 December 2012

Game Design and the Zone of Proximal Development

It's an experience many parents have had: From time to time my kids invite me to play video games with them. We pick a multiplayer game like Mario Kart or Halo and they proceed to beat me silly. I keep trying to mount a moderately credible performance, with little success. And I wonder, "How long would I need to play this game to achieve some degree of proficiency?"

Part of the problem is that parents like me jump right into multiplayer mode, competing against our kids while simultaneously trying to learn the game mechanics. Most of these games have a single-player "campaign" mode. The campaign is designed both to be consistently entertaining and to make you better at playing the game, step by step. The two primary rewards of the campaign are progressive discovery of the story (narrative) and progressive mastery of new skills.

The Flow Channel in Game Design

In his book, The Art of Game Design, Jesse Schell describes "flow," a concept he adopted from psychologist Mihaly Csikszentmihalyi. An individual is in a "flow state" when he or she is entirely involved in the task. "The rest of the world seems to fall away and we have no intrusive thoughts." It's a state of sustained focus and enjoyment.

Figure 1: Flow Channel
In Figure 1 we see four player states on a graph of player skill vs. game challenge. The model applies to a variety of games, from physical sports to puzzles to first-person shooters. In this example a player starts at state P1, where the player's skill is balanced with the challenge of playing. With practice, the player increases in skill and advances to state P2, where the game has become easy enough that the player is bored. P3 represents the opposite condition, where the game is too difficult – perhaps the opponent is far more skilled – and the player becomes anxious about their performance. Both states can be rebalanced: the game can become more challenging (P2 to P4) or the player can gain skill (P3 to P4). But if the player remains in anxiety or boredom for very long they'll abandon the game, because both lead to frustration.
Figure 2: Growth Path in the Flow Channel
One goal of game design is to keep the player in the "flow channel," where the player experiences the flow state continuously while both skill and difficulty gradually increase. But the path is usually not a straight line centered in the channel; it's more of a zig-zag with alternating rewards of easier wins and greater challenges. These match up with Dan Pink's "mastery" and "purpose" motivators.
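For the programmers in the audience, here's a toy sketch of how a game loop might nudge difficulty to keep a player inside the channel. The 0-1 scales, band width and step size are invented for illustration; real games tune this kind of dynamic difficulty adjustment with telemetry and playtesting.

```python
def adjust_challenge(skill, challenge, band=0.15, step=0.05):
    """Nudge game difficulty back toward the flow channel.

    skill and challenge are abstract 0-1 quantities; band and step
    are made-up tuning values for illustration only.
    """
    if challenge < skill * (1 - band):     # P2: too easy, heading to boredom
        return challenge + step            # raise difficulty (P2 -> P4)
    if challenge > skill * (1 + band):     # P3: too hard, heading to anxiety
        return challenge - step            # ease off until skill catches up (P3 -> P4)
    return challenge                       # already in the channel

skill, challenge = 0.50, 0.70              # an anxious player
print(adjust_challenge(skill, challenge))  # 0.65: easing back toward flow
```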

The Zone of Proximal Development

Russian psychologist Lev Vygotsky defined the Zone of Proximal Development (ZPD) as the area between the tasks that a learner can do unaided and the tasks the learner cannot do at all. Tasks in the ZPD, then, are those that the learner can do with assistance – with scaffolding. Vygotsky claimed that all learning occurs within the ZPD.
Figure 3: Zone of Proximal Development (ZPD)

There are many ways to put this into practice. For example, the Lexile Framework and other text complexity measures allow the matching of reading materials to the student's reading ability. Texts that are close to students' abilities increase confidence while more challenging texts increase skill levels. Texts that are too easy (boring) or too hard (anxiety producing) can be avoided. Mathematics is a structured subject where concepts build upon each other. Therefore, concepts in a student's ZPD are those that build on concepts the student already understands.
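As a concrete (if simplified) illustration of matching within the ZPD, here's how a reading application might filter a library against a student's measure. The 100L-below to 50L-above band is a commonly cited Lexile targeting rule of thumb, and the titles and measures are merely examples.

```python
def texts_in_zpd(reader_lexile, texts, below=100, above=50):
    """Keep texts inside an assumed ZPD band around the reader's measure."""
    return [t for t in texts
            if reader_lexile - below <= t["lexile"] <= reader_lexile + above]

library = [{"title": "Charlotte's Web", "lexile": 680},
           {"title": "Hatchet", "lexile": 1020}]

# A 700L reader keeps the 680L text (challenging but within reach)
# and skips the 1020L one (anxiety producing, outside the ZPD).
print(texts_in_zpd(700, library))
```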

Integration

The Flow Channel offers a model to maximize player engagement and enjoyment. The Zone of Proximal Development is a model for optimizing learning productivity. Similarity between the two isn't surprising. Csikszentmihalyi studied and built on Vygotsky's work.

One of the most important things that educators can learn from Flow is that boredom contributes just as much to frustration as anxiety does. Conventional schooling does a lot of redundant work to ensure that most students "get" each concept. The boredom that results from such redundancy means that students rarely experience Flow in their schoolwork. It's also inefficient because students spend a lot of time below their ZPD in which case they aren't learning. Staying within the Flow Channel/ZPD can ensure that effective learning occurs and simultaneously keep the student motivated and rewarded.

It brings whole new meaning to, "Go with the Flow!"

05 December 2012

As We May Teach

"My education was very similar to that of my parents. Theirs didn't differ a lot from my grandparents'. My children's schooling has been enhanced by media, word processing and the internet but the experience isn't fundamentally different from my own. They still go to a classroom, sit at relatively small desks and try to pay attention to a teacher in front of a board. The transformation of primary and secondary education in the United States is beginning now and will be well underway within a decade."

Four years ago I wrote the essay "As We May Teach" for the Meridian School board of trustees, on which I was serving at the time. I recently re-read the essay and it's just as relevant today as it was then, so I've posted it here.

The examples I used remain valid, but we now have many more. Since then, "Blended Learning" has emerged as the term for the Cirrus High School experience I describe in the essay. Rather than attempt to list the numerous new examples, I recommend you check my colleague Scott Benson's "Running List of Blended Learning Resources."

In the essay I reference Clayton Christensen's prediction that 5% of high school teaching would be online by 2012 and 50% by 2018. The 2012 edition of iNACOL's "Keeping Pace with K-12 Online and Blended Learning" report estimates that 5% of US K-12 students are taking at least one online course. So, four years after the prediction, we seem to be on track.

I've elaborated on several themes from the essay on this blog:

Unique to this essay is the application of Business Process Automation to the education space. It's a useful lens that's compatible with other approaches to educational improvement.

30 November 2012

Learning from Data - An Automotive Example

Monday saw me driving 800 miles home from a family Thanksgiving celebration. Due to my wife's change in plans, my only companion for the drive was our small dog (who had a narrow escape the day before). I needed something to keep my attention. So I decided to perform an experiment in data collection. I learned a lot even from a small data sample.

The vehicle I was driving was a 2010 Subaru Forester. Some friends have the same vehicle and have been pleased with getting around 27 MPG on the highway. We typically get only 23-24 MPG on the highway and I had been wondering why.

Among the features of this car is an average gas mileage display that's tied to the trip odometer. So, sampling the gas mileage is as simple as setting the cruise control, resetting the trip odometer, driving a set distance and reading out the result. As I was crossing the relatively flat plains of Idaho (speed limit 75) this seemed to be a good opportunity to gather some data.

Over a period of several hours, I took a bunch of samples following the above method and using my GPS to track altitude changes. I abandoned samples where the altitude change was more than a few hundred feet. The result is 27 good samples. I've posted the raw data here in case you want to play with them. Most of the samples are for 20 mile segments but some are as long as 40 and some are as short as 5 miles.

As you can imagine, the lower-speed samples got a bit tedious. But I was curious enough that I even took a side trip on a remote road (off the freeway) to get samples below 45 MPH. There's considerable variability in those results as you can see in the plot below. Halfway through the trip I refilled with fuel. I switched from regular (87 octane) to premium (92 octane) to see how that might affect mileage.

2010 Subaru Forester Fuel Economy vs. Speed
The results certainly aren't what I expected. EPA city and highway ratings had always led me to expect relatively flat miles per gallon, with city being much poorer due to stop-and-go driving. Instead, I got a nearly linear downward slope. Using Excel's curve-fitting feature, I found that a polynomial curve fit better than a line. The formula is embedded in the graph above.

More data points would be required to really validate this curve, but it fits within the margin of error of my samples, so we can make some cost estimates using the formula. Notable is that peak economy is between 40 and 45 MPH – much slower than I had expected. From this I was able to calculate the cost of each hour saved on this long drive.

Here's a table of results for an 800-mile drive in a 2010 Subaru Forester with an average fuel cost of $3.759 per gallon. Time saved and additional cost are relative to a baseline speed of 55 MPH. Note that distance cancels out of the Cost Per Hour Saved column, so those numbers hold regardless of the length of the trip.

Speed  MPG   Fuel Cost  Time (hrs)  Hrs Saved  Addl Cost  Cost Per Hr Saved
55     33.1  $90.83     14.55       --         --         --
60     32.1  $93.71     13.33       1.21       $2.89      $2.38
65     30.7  $97.88     12.31       2.24       $7.05      $3.15
70     29.0  $103.65    11.43       3.12       $12.82     $4.11
75     27.0  $111.55    10.67       3.88       $20.72     $5.34
80     24.6  $122.45    10.00       4.55       $31.62     $6.96
84     22.4  $134.31    9.52        5.02       $43.48     $8.66
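If you want to check or extend the table, the arithmetic is simple enough to script. This sketch recomputes the last column from the measured speed/MPG pairs; only the distance and fuel price come from this particular trip.

```python
DISTANCE = 800   # miles
PRICE = 3.759    # dollars per gallon

# (speed MPH, measured MPG) pairs from the table above
samples = [(55, 33.1), (60, 32.1), (65, 30.7), (70, 29.0),
           (75, 27.0), (80, 24.6), (84, 22.4)]

base_hours = DISTANCE / samples[0][0]           # time at the 55 MPH baseline
base_cost = DISTANCE / samples[0][1] * PRICE    # fuel cost at the baseline

for speed, mpg in samples[1:]:
    hours_saved = base_hours - DISTANCE / speed
    extra_cost = DISTANCE / mpg * PRICE - base_cost
    # DISTANCE cancels out of this ratio, which is why cost per hour
    # saved holds for any trip length.
    print(f"{speed} MPH: ${extra_cost / hours_saved:.2f} per hour saved")
```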

Here are a few things I've learned from this:

  • My friend with the other Forester drives slower on the highway than I do.
  • I had not known how sensitive vehicle gas mileage is to speed.
  • Everything I have read led me to expect no benefit from higher octane fuel once the vehicle's requirements have been met. In the case of the Forester, higher octane actually reduced fuel economy. This observation is confirmed by the official EPA ratings.
  • I would love to see tables like the one above before purchasing my next car.
I learned a lot from this tiny data sample and my future driving habits will be changed accordingly. Now imagine what we could learn if there were a large public database of fuel economy data. Car manufacturers could optimize for specific driving patterns. Consumers would be better informed about fuel economies to expect. There are fleet tracking devices like this one that are already reporting that data but it's locked up in private databases. If anonymous fuel economy data (speed, distance, altitude and MPG) were released there's a lot we could learn about fuel economy under a variety of conditions.

I can't wrap this post up without relating it to education. A relatively small data sample taught me a lot and will impact my future driving behavior. In the same way, it doesn't take a lot of data fed back to students and teachers before they see opportunities to improve. And when we grow from little data to big data, revolutionary changes are on the horizon.

19 November 2012

A Post-LMS Framework for Personalized Learning

In the last few weeks I presented at iNACOL VSS and attended Educause. I’ve also met with the Shared Learning Collaborative team and the CEDS Stakeholders group. Educause included meetings with the Next Generation Learning Challenges organizers and grantees. All in all it’s been a concentrated opportunity to meet with vendors, standards developers and visionaries in the personalized, blended and online learning spaces.

There’s a new pattern emerging on how the technical components of a personalized learning system fit together and it’s a departure from the past. This model seems to apply both in K-12 and postsecondary education.

This new framework is being driven by three trends:
  • Innovative creators of courseware and learning systems need greater control over the learning environment than can be achieved in a Learning Management System (LMS).
  • Student Information Systems (SIS) and Portals are taking over responsibility for student/teacher communications, gradebooks and consolidating analytics into student and teacher dashboards.
  • Students and Teachers are seeking a coherent and seamless experience without separate credentials and logins for each of the systems they use.

A New Framework

The figure below shows the interaction of three systems. Each may be hosted by a different provider, but they're integrated in such a way that the student can browse among them seamlessly.
The Student Information System (SIS) is generally integrated into the school’s portal. This is the site a student browses for consolidated access to all school information. It’s provided and managed by the school. The portal links to courses in which the student is enrolled.

In this new model, courses are an integrated experience delivered by learning systems custom-adapted to the subject matter. At a basic level, a course is a sequence of learning and assessment activities such as exposition (video, audio, text), virtual labs, exercises, quizzes, exams and so forth. Key to personalization is that the selection and order of activities is adapted according to individual student needs.
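As a toy illustration of that kind of adaptive selection, here's a sketch that picks the next activity whose prerequisites the student has already mastered. Real learning systems use far richer student models; the activity names and structure here are hypothetical.

```python
def next_activity(course, mastered):
    """Return the first unmastered activity whose prerequisites are met."""
    for activity in course:
        if activity["id"] not in mastered and set(activity["requires"]) <= mastered:
            return activity
    return None   # nothing available: review or remediation needed

course = [
    {"id": "intro-video",    "requires": []},
    {"id": "fractions-quiz", "requires": ["intro-video"]},
    {"id": "unit-exam",      "requires": ["fractions-quiz"]},
]

print(next_activity(course, mastered={"intro-video"}))   # -> fractions-quiz
```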

Traditionally, the same learning system that hosts the course also hosts the activities. This is reasonably simple with conventional media types such as text and video. It gets more complicated with interactive media and assessments. The most innovative learning activities may be separately hosted because they are supported by custom services. These could include interactive labs, intelligent tutoring systems, virtual worlds and games.

Conspicuously absent in this new model is the Learning Management System (LMS). For the last decade or so, the framework has been that schools select and deploy an LMS – ideally with single sign-on and data integration with their SIS – but all too often as an independent system. The idea was that courseware publishers and instructional designers would install the course materials into the LMS using content packaging formats like SCORM and Common Cartridge. But this hasn't happened very much – especially with the most innovative courses. Cutting-edge learning systems like DreamBox, ALEKS or Read 180 can't be packaged up and installed into an LMS. The environment is too constraining.

While LMSs are capable of much more, most actual LMS use is in support of teacher-student and student-student communications, not for delivery of instruction. And that communication function is being taken over by the SIS and portal. Contemporary SIS systems have expanded beyond enrollment and course-level data to include full gradebook functionality. Meanwhile, portals are including teacher and student dashboards, online forums, chatrooms and other communication features.

So the new model is composed of Portal/SIS, Learning Systems and Activities often supplied by different organizations. And it’s not just three systems that need to be integrated. A single school will likely have many learning systems. A single student is likely to use different learning systems for different subjects. And a single course may integrate activities from a variety of sources.

Student ID

In order for the student and teacher experiences to be coherent there needs to be a clean handoff between these systems. In the diagram I've shown this as Student ID flowing to the right and Student Data flowing both ways. Student ID may include authentication, authorization and/or provisioning.
  • Authentication, often provided by Single Sign-On (SSO), is the real-time indication of who the student is.
  • Authorization is a real-time assertion that the student should be granted access to a system or resource.
  • Provisioning is the transfer of teacher and student enrollment data so that a learning system or activity can grant access and coordinate a cohort of teachers and students. This may be on-demand (coordinated with authentication or authorization) or it may be a periodic batch update.
Depending on features of each component, these work together in different ways. For example, an SIS may transfer provisioning data to a learning system. Then, at runtime the SIS uses an SSO protocol to authenticate the student to the learning system. At this point the learning system knows the identity of the student and the provisioning of the classes, therefore it can internally decide whether to authorize access.

On the other hand, the learning system may use an authorization protocol to grant student access to a learning activity without authentication or provisioning. In this case, the activity provider doesn't know the identity of the student, it only knows that a trusted agent (the school) has indicated that the student should be granted access.

Student ID protocols can transfer three levels of information depending on the needs of the systems:
  • Personally Identifiable Information (PII): This might include the student's name, grade, enrollment information and so forth. It's sensitive information governed by FERPA regulations.
  • Persistent Identifier: This is just enough information that a learning system or activity can identify repeat visitors. The system doesn't have any personal information about the student but knows this is the same student as in a previous visit.
  • Authorization Ticket: This is just a trusted indication that a student should be able to access content. The learning system or activity is not assisted in coordinating repeat visits. (A sketch of this level follows below.)
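To make that third level concrete, here's a minimal sketch of what an authorization ticket could look like: the school signs a short-lived grant with a secret it shares with the activity provider, and the provider verifies the signature without ever learning who the student is. The field names and format are illustrative, not any particular protocol.

```python
import base64, hashlib, hmac, json, time

SHARED_SECRET = b"school-and-provider-secret"   # provisioned out of band

def issue_ticket(resource, ttl=300, persistent_id=None):
    """School side: sign a short-lived access grant.

    Include persistent_id only when the activity needs to recognize
    repeat visitors; omit it for a pure authorization ticket."""
    claims = {"resource": resource, "expires": int(time.time()) + ttl}
    if persistent_id:
        claims["student"] = persistent_id       # opaque ID, not PII
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_ticket(ticket):
    """Activity side: check the signature and expiry, nothing else."""
    body, sig = ticket.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["expires"] > time.time() else None

ticket = issue_ticket("algebra-lab-3")   # no student identity included at all
print(verify_ticket(ticket))             # -> the signed claims
```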

Student Data

Most of the student data flows upstream as student activities and performance are reported to the Learning System and the SIS/Portal. Traditionally that data has been simple scores and grades, but systems are beginning to collect richer information like frequency of access, time on task and clickstream data. These are used in analytics such as student and teacher dashboards. The same data can also be reported downstream for use by adaptive learning systems and custom activities.

Protocols

The difficulty is that there isn't much consistency in the protocols used for Student ID or Student Data. To their credit, the builders of SISs, Learning Systems and Tools all have APIs for integration with other systems. But in most cases APIs are custom to the application. And upstream systems aren't necessarily prepared to collect the rich data that downstream systems are prepared to share.

Here's a survey of what is available or under development:

SAML and OAuth are two commonly-used protocols for authentication and authorization. The SSO subset of SAML has become common due to its use by Google Apps. OAuth is an authorization protocol that can optionally carry personal information or a persistent ID according to needs. Shibboleth is an open source reference implementation of SAML.

IMS Learning Tools Interoperability (LTI) supports the interaction of Learning Systems and Activities. It incorporates OAuth for the authorization step. LTI v1.0 (also known as Basic LTI or BLTI) coordinates the authorization of the activity (called a Learning Tool) seamlessly embedding it in the Learning System. Later versions of LTI support reporting of simple student performance data. Most mainstream LMSs support LTI 1.0 or better.
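To give a flavor of what's under the hood of an LTI 1.0 launch, here's a simplified sketch of the OAuth 1.0 HMAC-SHA1 signing step. The tool URL, keys and user values are hypothetical; lti_message_type, lti_version, resource_link_id and user_id are standard Basic LTI launch parameters.

```python
import base64, hashlib, hmac, time, uuid
from urllib.parse import quote

def pct(s):
    # RFC 3986 percent-encoding, as OAuth 1.0 requires
    return quote(str(s), safe="-._~")

def sign_launch(url, params, consumer_key, consumer_secret):
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_timestamp": str(int(time.time())),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: method, URL, and sorted encoded parameters
    pairs = sorted((pct(k), pct(v)) for k, v in all_params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join(["POST", pct(url), pct(param_str)])
    key = pct(consumer_secret) + "&"   # no token secret in a basic launch
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params   # POST these form fields to the tool's launch URL

launch = sign_launch(
    "https://tool.example.edu/launch",                 # hypothetical tool
    {"lti_message_type": "basic-lti-launch-request",
     "lti_version": "LTI-1p0",
     "resource_link_id": "unit-3-quiz",
     "user_id": "opaque-student-42"},                  # persistent, not PII
    "school-consumer-key", "shared-secret")
```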

LearnSprout and Clever are two companies supporting data integration with SISs. This allows builders of Learning Systems to write to one API (either LearnSprout's or Clever's) and gain integration into a number of prominent SISs. However, they are limited to the data types supported by the SIS.

The Shared Learning Collaborative (SLC) is building a web-scale common student data layer that can be used by the SIS, Portal, Analytics, Dashboard, Learning System and Activities. A rich set of data types is pre-supported and applications can store custom data for persistence and sharing. It also supplies a common student identity framework including authentication services. So, in the SLC instead of handing off student data between systems, they all rely on the same underlying service.
The SLC Approach to the New Model
The new model divides the functions once concentrated in the LMS. Today, custom systems integration must be done to achieve a seamless experience. But protocols and services are under development that should simplify that in the future.

17 November 2012

Video: Feedback Loops for More Effective and Personalized Learning

Last month I presented at the iNACOL VSS conference. I posted my slides and resources here.
I experimented with using a Bluetooth microphone and PowerPoint's recording feature to generate a voiceover video of the presentation.


I apologize in advance. The audio quality is fairly poor. It's especially bad at the beginning but improves later. I think I was near the range limit of the microphone. And PowerPoint's recording/video feature is still buggy. In a couple of slides, the sound drops out entirely.
Flaws aside, I'm pleased with how the subject came together. Quality feedback loops are a key component in personalized learning solutions. In researching this topic I found a lot of relevant research that can guide the development and selection of products.

06 November 2012

Election Technology Update

It's election day in the U.S. and most of us are fatigued by the campaigns and will be relieved to have them over. Barring an electoral college anomaly, there will be more voters pleased with the result than dismayed by it (it's a tautology).

I wrote about my misgivings with touchscreen direct-entry voting systems in 2009. Things haven't improved since then. The big risk is undetectable vote manipulation. Of course all voting systems, whether electronic or paper, are subject to some form of manipulation. The key is to set up protocols so that manipulations leave evidence. For example, paper balloting systems often count the number of ballots cast and compare that with the number of ballots counted. The number of ballots cast is transmitted to the counting location through a different means from the transmission of the ballots themselves.

In 2010 I wrote about King County, Washington's vote-by-mail system. In addition to mailing ballots, voters can deliver them to dropboxes conveniently located around the county. Dropboxes not only save postage; they appear to be a more secure delivery method because observers from both major parties watch the sealing and collection of the ballot boxes. Other observers watch the opening and counting processes.

As a paper and optical scan method, King County's is among the more secure – once the ballot is delivered to a dropbox. The glaring weakness is privacy. Vote-by-mail opens the door to voter coercion because there's no inspector and booth to ensure privacy when the vote is actually cast. It's entirely possible for others to pre-fill the ballot and simply ask the voter to sign – with intimidation if necessary.

Of course, manipulation of this sort doesn't scale well. Sure it can happen in pockets but widespread, coordinated vote manipulation would be hard to achieve as the more voters are intimidated, the greater the likelihood that someone complains. Therefore it's reasonable to assume that deliberate manipulation will be a small fraction of total votes cast.

This leads to an interesting conclusion: Though we aspire to make every vote count, there's some degree of error regardless of the way votes are cast and counted. Sometimes it's deliberate fraud, manipulation or intimidation. Sometimes it's poorly designed ballots, miscalibrated voting machines or natural disasters. There are two ways to deal with this. Our current system presumes that if the difference in votes is within the margin of error, democracy is preserved regardless of which candidate takes office. That presumption was tested in the 2000 U.S. election.

The alternative is to require another election if the vote is within the margin of error. That approach carries a tremendous cost in terms of time, money and extended uncertainty. Despite misgivings, I have to agree with those who wrote the Constitution. I may not like the outcome when the vote is close. I may even believe that the count is inaccurate. But I do believe that democracy is preserved.



22 October 2012

Learning Maps, Common IDs and the Common Core

Today we presented at the iNACOL VSS conference on "Learning Maps, Common IDs and the Common Core." Here are the primary resources associated with that presentation:
Update: 7:55pm: In addition to the above primary sources, I've written the following on the same subject:

Also I corrected the link to LearningRegistry.org.

Many thanks to the panelists: Maureen Wentworth, Michael Jay and Sharren Bates.

18 October 2012

Things Every Education Tech Entrepreneur Should Know

This weekend I'm volunteering as a coach for Startup Weekend Edu in Seattle. Preparing for this got me to thinking about things people building education technology should know. The following list isn't comprehensive but it's a good starting point. Follow the links to learn more about these topics.

Theories of Change
You need to have a good theory of how your technology will improve education. There's a lot of money to be made in record keeping and ERP-type applications. But the things that interest me, and I hope interest you, are those that directly improve student learning. And you need to be specific about the expected improvement. Do you want students to learn more in the same amount of time or take less time to learn a skill? Are you seeking better comprehension and retention? What about "deeper learning" – getting beyond recall to demonstrating the ability to apply concepts or solve problems?

Most ed tech theories of change start with Bloom's Two Sigma Problem. In a 1984 paper, Benjamin Bloom described how his students had achieved a two-standard-deviation improvement in student learning through a combination of Mastery Learning and one-on-one tutoring. Noting that 1:1 student-teacher ratios are impractical, Bloom's challenge is to find scalable ways to achieve the same results.

The following resources should stimulate your theoretical juices:
  • A 2011 metastudy by Kurt VanLehn gives a progress report on Intelligent Tutoring Systems and an update on progress toward Bloom's Two Sigma Problem. In particular, see page 210 (the 15th page of the paper) in which VanLehn explains that about half of Bloom's two sigma gains were due to changes in Mastery Learning parameters.
  • Personalized Learning is "instruction that is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary." This definition is from the National Education Technology Plan which is an excellent read so long as you skip the executive summary.
  • Cognitive scientists talk about the Zone of Proximal Development. Game designers talk about Gameplay Progression. They are similar concepts and they both involve motivation and increasing skill levels. In fact, the motivational reward from this form of gameplay is achievement of greater skill.
  • Feedback loops are an essential component of Personalized Learning. (From an earlier post in this blog.)
  • The Puzzle of Motivation: Dan Pink explains the growing science of motivation, without which even the best instruction may fail.
Building Blocks
A number of organizations including the federal government, technology standards groups, associations and foundations have assembled building blocks to support innovative education technology. Some of these can improve time-to-delivery, some help interoperability between applications and some ensure that your application is based on tested learning theories:
  • The Personalized Learning Model is a framework that some of us at the Gates Foundation have used to talk about how key components in a learning system work together. It's very similar to frameworks used by others in the community.
  • The Learning Resource Metadata Initiative is a metadata schema for identifying learning resources (text, video, virtual labs, assessments, etc.) and aligning them to education standards like the Common Core.
  • The Learning Registry is a system for sharing metadata about learning resources. It's synergistic with LRMI and other metadata formats.
  • MyData Button is a federal government initiative to allow students or their parents to download their student data so that it can be used by other systems.
  • The Postsecondary Electronic Standards Council (PESC) defines data models and protocols for exchanging data among postsecondary institutions. PESC standards cover admissions applications, test score reporting, student aid applications and reporting, digital transcripts and more.
  • IMS Global defines educational content standards (where SIF and PESC concentrate on student and institutional data). IMS standards like QTI and Common Cartridge define how to package assessment items and courseware for exchange between systems. My favorite IMS standard is Learning Tools Interoperability which is a protocol that allows rich, custom learning tools to be integrated into other learning environments.
  • Ed-Fi is a data model and set of tools to support teacher and student dashboards indicating student progress.
  • The Shared Learning Collaborative (SLC) "is an alliance of states, foundations, educators, content providers, developers and vendors who are passionate about using technology to improve education." It's an ambitious multistate project that leverages many of the technologies listed above into a coherent whole. Vendor outreach programs are at dev.slcedu.org.

Product and Service Categories
There are a handful of existing education technology product and service categories with new ones emerging. Here are key categories with some examples. Note that the examples I've listed just happen to be well-known systems. It's far from a comprehensive list and I don't necessarily endorse these products. In each category there are emerging products that may be more innovative than the ones I name.
  • Learning Management Systems (LMS) manage class interactions such as syllabus, assignments, learning materials, quizzes, forums, gradebook and so forth. While LMSs are capable of delivering a rich online learning experience, most deployments are supplementary to conventional classroom learning and only a fraction of their capabilities are used. Well-known examples include Blackboard, Desire2Learn, Moodle, eCollege, Sakai, BrainHoney and Canvas but there are numerous others.
  • Instructional Improvement Systems are an emerging concept. Like an LMS, an IIS manages student learning. However, an IIS uses accumulated student data as well as effectiveness data about learning resources to customize the learning experience to individual student needs. To support continuous improvement, the IIS should place equal emphasis on data collection and data use. Most action in the IIS space is being driven by state-level RFPs often with Race to the Top funding.
  • Public Education Datasets are available from the National Center for Education Statistics and other federal and state education agencies. The Digest of Education Statistics is a compilation of many government and privately-sourced datasets. Other public datasets include EdFacts and IPEDS. There are some interesting opportunities to consume existing public data and analyze it in new ways.
There's much more that could be added but I think I've reached the point of diminishing returns. Please use the comments to point at other important theories, building blocks or initiatives.

12 October 2012

Tips For Using the Common Core XML

The Common Core State Standards (CCSS) have been out for a couple of years now and adoption efforts are progressing. To facilitate use of the CCSS in learning applications and with metadata frameworks like LRMI, they have recently posted canonical identifiers and machine-readable XML for the standards. I wrote about that in a recent post.

Update: 1 September 2014
The CCSSO has updated the CoreStandards.org website and most of the links in this post no longer work. However, the Common Core XML files still exist and can be found with the developer information here. Sometime in the near future, I'll re-write this post to describe the new provisions that the CCSSO has made for the techie crowd.

Today I'm going into the nuts and bolts of how a developer can make use of the XML. There are some useful features that aren't obvious at first glance. For background, I recommend that you read the announcement memo that accompanied the release of the XML on the corestandards.org website.

Canonical Identifiers

The canonical identifiers for the common core state standards are available in .csv form here: http://corestandards.org/assets/E0607_ccss_identifiers.csv. The first column lists the URLs that were formerly on the Corestandards.org website. They are included to support conversions for legacy applications.

Notably, there is an exact 1:1 mapping between the three ID types and there are 1844 IDs in the table. So if two applications use different forms of IDs (e.g. one uses GUIDs and another uses URIs) the translation is a deterministic table lookup. A closer examination shows that there's also a simple algorithmic conversion between the "dot notation" identifiers and the corresponding URIs. I've written functions in C# to do the translation and posted them here. It should be easy to port them to Java or any other language.
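To illustrate, here's a Python version of that conversion. It covers only the two patterns worked through in the examples below (note that Math grade-domain tokens like HSA-SSE split on the hyphen while tokens like ELA-Literacy and 9-10 keep theirs); the C# functions linked above handle the general rules.

```python
import re

def dot_to_url(dot_id, base="http://corestandards.org"):
    """Convert a CCSS dot-notation ID to its canonical URL (sketch only)."""
    parts = dot_id.split(".")
    assert parts[0] == "CCSS"
    segments = []
    for i, token in enumerate(parts[1:], start=1):
        if i == 1:
            segments.append(token)              # framework keeps its hyphen
        elif parts[1] == "Math" and "-" in token:
            segments.extend(token.split("-"))   # HSA-SSE -> HSA/SSE
        else:
            m = re.fullmatch(r"(\d+)([a-z]+)", token)
            if m:
                segments.extend(m.groups())     # 1b -> 1/b, 3d -> 3/d
            else:
                segments.append(token)          # A, W, 9-10 stay intact
    return base + "/" + "/".join(segments)

print(dot_to_url("CCSS.Math.Content.HSA-SSE.A.1b"))
# http://corestandards.org/Math/Content/HSA/SSE/A/1/b
print(dot_to_url("CCSS.ELA-Literacy.W.9-10.3d"))
# http://corestandards.org/ELA-Literacy/W/9-10/3/d
```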

Hierarchy

The standards for Mathematics and ELA/Literacy follow different hierarchies that are suited to the way the standards are written and are intended to be used. The Dot Notation and URL forms of the identifiers can be parsed into the corresponding hierarchies as shown in the following examples.

Math Example

Dot Notation: CCSS.Math.Content.HSA-SSE.A.1b
URL: http://corestandards.org/Math/Content/HSA/SSE/A/1/b

Initiative:   CCSS (Common Core State Standards)
Framework:    Math
Set:          Content (options are 'Content' and 'Practice')
Grade:        HSA (High School Algebra)
Domain:       SSE (Seeing Structure in Expressions)
Cluster:      A
Standard:     1
Component:    b

You can reference the math standards at the Component, Standard and Cluster levels. Thus, the following are all valid CCSS URI Identifiers:
http://corestandards.org/Math/Content/HSA/SSE/A/1/b
http://corestandards.org/Math/Content/HSA/SSE/A/1
http://corestandards.org/Math/Content/HSA/SSE/A
If you add an ".xml" suffix to the URL then you get the computer-readable XML version of each:
The XML at the cluster and standard levels includes all child items. So the cluster XML includes all standards in that cluster, and the standard XML includes all components within that standard.

While the canonical IDs don't include Domain or Grade levels, you can retrieve all standards within a domain or grade by hacking the URL as follows:
http://corestandards.org/Math/Content/HSA/SSE.xml (domain)
http://corestandards.org/Math/Content/HSA.xml (grade)
And, you can retrieve all of the math standards in one XML document with this URL:
http://corestandards.org/Math.xml

ELA/Literacy Example

Dot Notation: CCSS.ELA-Literacy.W.9-10.3d
URL: http://www.corestandards.org/ELA-Literacy/W/9-10/3/d

Initiative:     CCSS (Common Core State Standards)
Framework:      ELA-Literacy
Set:            (optional, not used in this example)
Strand+Domain:  W (Writing)
Grade:          9-10
Standard:       3
Component:      d

You can reference the literacy standards at the Component, Standard and Grade levels. Thus, the following are all valid CCSS URI Identifiers:
http://www.corestandards.org/ELA-Literacy/W/9-10/3/d
http://www.corestandards.org/ELA-Literacy/W/9-10/3
http://www.corestandards.org/ELA-Literacy/W/9-10
If you add an ".xml" suffix to the URL then you get the computer-readable XML version of each:
And, you can retrieve all of the ELA/Literacy standards in one XML document with this URL:
http://www.corestandards.org/ELA-Literacy.xml

04 October 2012

CEDS and the Four-Layer Framework for Data Standards - Updated

About a year ago I posted a Four-Layer Framework for Data Standards. It was developed as Common Education Data Standards (CEDS) working groups were discussing the space in which CEDS operates and what makes its contribution unique. Today I'm updating the framework document – adding some clarity but mostly reconciling terminology with that used by CEDS.

In the June CEDS stakeholders' meeting the group emphasized that CEDS works strictly at layers 1 and 2 (Data Dictionary and Logical Data Model), leaving serialization and protocol to other standards organizations. This leads to a unique approach (at least within the education standards space) in which the focus is on alignment instead of compliance.

To support this strategy, CEDS has posted the Align and Connect tools. The Align tool allows State Education Agencies, software vendors and other organizations to post their data models and show how their elements align to CEDS. Organizations can choose to make their data models public, in which case Align can be used to report the degree of alignment between two data models. The new Connect tool addresses the sharing of metric definitions like graduation rate, student financial aid repayment or college-going rate. Metrics like these are not in the data model; they are derived from that data. And different organizations may combine the data in different ways. Connect supports the sharing and eventual standardization of these metric definitions.

Another question I've gotten is how the four-layer framework overlaps with the OSI 7-layer model. Layers 1-3 (Data Dictionary, Logical Data Model and Serialization) in the four layer model map to the Application layer (layer 7) which is at the top of the OSI model. All other layers in OSI are combined into the Protocol layer in the four-layer model.

The latest four-layer document is here. It's released into the public domain under a CC0 disclaimer.

29 September 2012

Schema.org, LRMI and the Learning Registry

Try this:


     1. Browse to google.com
     2. Search for "potato salad"
     3. Experiment with the recipe search tool that appears on the left.

Most people are aware of Google Shopping – Google set up a way for merchants to list the details of things they have for sale. More recently, Google published a way to mark up recipes so that the search engine can tell what the title is, what the ingredients are, which photo shows the dish and so forth. We call this "metadata," or data about the data.

Get ready for a lot more of this kind of thing. The developers of Bing, Google, Yahoo! and Yandex have cooperated on a common metadata vocabulary at Schema.org. It's a perfect example of coopetition – the search engines are cooperating on how metadata should be embedded in web pages. That way webmasters don't have to code four different kinds of metadata. Meanwhile, the search providers will compete on what they do with that metadata.

The results of this are already emerging: try searching for a movie title, for example, or for a type of restaurant. Any of the major search engines will give you a nicely structured result.

We wanted to do the same for learning resources – videos, exercises, simulations, learning games and so forth. Wouldn't it be great if a teacher or student could search for "fractions" and get a search tool that allows the results to be filtered by age, grade level, subject or learning objective? Conveniently, the Schema.org folks had expressed interest in new submissions, so long as they come from an industry consortium. So, a bunch of organizations got together and launched LRMI. You can learn much more about the co-sponsors, advisory groups and the specification itself on the LRMI website.


Schema.org embeds the metadata right in the webpage (using HTML microdata). That makes sense for search engines, but it means that only the publisher of the webpage can post metadata about it. Yes, there's such a thing as third-party microdata but the search engines don't pay attention to it. Plus there are other kinds of data that need to be shared between learning solutions. Conveniently, there's a complementary alternative.

The Learning Registry is a peer network of LR servers that exchange metadata much the way email providers exchange mail messages. In the LR architecture, metadata consists of assertions. Here are some example assertions rendered into plain English:

The first of these is one of the common-core standards. That data is now available in XML form on the web. The second of these is an LRMI statement. Of course, both are plain-English renditions of what could be machine-formatted.

LR assertions also include their provenance, that is the name of the organization making the assertion, when the assertion was made and a digital signature. This lets users of the LR have confidence in the origin of statements and filter for reliable sources.

You might have noticed that in describing the Learning Registry I used an LRMI example! That's because these are compatible technologies and the groups are coordinating with each other. This diagram helps show the relationship between the efforts:
You'll see that "Schema.org" appears twice in the diagram. That's because Schema.org defines both a vocabulary (a set of metadata tags) and a way of sharing that metadata. LRMI is an addition to the Schema.org vocabulary that enhances descriptions of educational content.
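Here's a small example of what that markup might look like embedded in a resource page. The resource itself is invented, but the properties (learningResourceType, typicalAgeRange, educationalAlignment and the AlignmentObject fields) come from the Schema.org/LRMI vocabulary.

```html
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="name">Adding Fractions with Unlike Denominators</h1>
  <meta itemprop="learningResourceType" content="exercise"/>
  <meta itemprop="typicalAgeRange" content="10-11"/>
  <div itemprop="educationalAlignment" itemscope
       itemtype="http://schema.org/AlignmentObject">
    <meta itemprop="alignmentType" content="teaches"/>
    <meta itemprop="educationalFramework" content="Common Core State Standards"/>
    <meta itemprop="targetName" content="CCSS.Math.Content.5.NF.A.1"/>
    <link itemprop="targetUrl" href="http://corestandards.org/Math/Content/5/NF/A/1"/>
  </div>
</div>
```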

Schema.org and the Learning Registry offer two complementary ways of distributing metadata. They can even be bridged – there are experimental web crawlers that will extract HTML microdata from a page and inject it as assertions into the Learning Registry.

I could write a whole lot more about both of these efforts but far better to link to existing resources:





29 August 2012

Common Identifiers for the Common Core

In order to do personalized learning at scale, with a mix of activities and assessments from a variety of sources, we need to agree upon a common set of learning objectives. The Gates Foundation and the Shared Learning Collaborative have endorsed the Common Core State Standards.

Side note: As we are in election season, there is a lot of rhetoric about national curriculum and federal mandates. The Common Core is only a set of commonly agreed-upon learning objectives. It's not a curriculum (national or otherwise) and it was developed through voluntary cooperation among states, with the federal government staying clear.

As we were developing the LRMI project, we anticipated the need to reference the Common Core as well as other learning objectives. Unfortunately, the Common Core didn't specify a standard set of references. At the time (approximately 12 months ago) there were at least five different and incompatible ways to reference the Common Core. So we turned to the coordinators, the National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO), and with the help of Student Achievement Partners they developed a consistent set of identifiers for the Common Core.

In the next few weeks, they will be updating the corestandards.org website so that the URL identifiers link directly to the specified standards. Also, the standards will be available in machine-readable XML format to facilitate a variety of learning applications.

It's a great step and will make a big difference. But in the process we identified another issue. Frequently a particular standard in the common core will include more than one learning objective. Here's an example:
CCSS.Math.Content.6.NS.B.3
Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation.
There are at least four learning objectives in this standard. The developers of assessments (such as PARCC and SBAC) have a problem with that: a typical assessment item will test only one of these skills. Without finer-grained identifiers, they can't show complete coverage of the standards. Similar problems exist for learning activities and student records.

A few weeks ago, the NGA, CCSSO, SBAC, PARCC and SETDA announced a collaborative project to address this issue. By the end of the year, they expect to publish an open set of learning objectives based on a fine-grained parsing of the common core. They will also define a standard data format for publishing standards like these. That will be based on the data models proposed for Common Education Data Standards 3.0.

Of course, this still isn't the whole picture. Not all states are adopting the Common Core. The Common Core only addresses Mathematics and English/Literacy; the 50 states still have standards for other subjects. And other countries have their own standard learning objectives.

There needs to be a way for developers of educational standards to publish and share those standards.

The Achievement Standards Network, operated by JES & Co., maintains an open database of all 50 states' existing standards, plus the Common Core, plus those published by the American Association for the Advancement of Science and many others. Under a new grant, they will be enhancing the database to accept the new standardized identifiers and they'll incorporate the learning objectives defined by the granularity project.

Concurrently, the Learning Registry is emerging as a distributed system for sharing Achievement Standards Data, Learning Objectives, Cross-References between standards and an index of learning activities that are aligned to standard objectives.

It may seem like chaos but this is more like a dance. And very shortly we should have a coherent foundation for developers of learning tools and instructional systems.

There are a lot of links up there. Here are repeat links to the three most important announcements:

23 August 2012

ACT Scores: Most HS Graduates Aren't Prepared for College

The ACT just released its annual report on "The Condition of College & Career Readiness." Curious to me is how different news outlets spin the results:

The scores are indeed flat and have been for five years. The AP achieves a positive spin by noting that the number of students taking the test has increased by 17% over those five years. If you assume that those who take the test are the top students, then the additional 17% represents the lowest performers on the exam and things have improved. However, the increase could also indicate that more students are taking both the ACT and SAT rather than selecting one or the other.



Of primary concern to us at the Gates Foundation is the low rate of college readiness. Of those who take the exam (a subset of all high school students) only 25% are prepared for college in all four subject areas (English, Reading, Mathematics and Science). The goals of our U.S. College Ready team are to elevate the standard of high school graduation to mean college ready and to increase the graduation rate beyond 80%. The ACT report reminds us just how far away we are from that goal.

Is that a worthy goal? In an earlier post I noted that society is turning to education as the solution to poverty. I offer two additional facts to support this argument:
(Edited 2012-08-23 to include the Forbes headline)

13 July 2012

Education by the Numbers

If a picture is worth 1,000 words, this post is equivalent to a 5,000 word essay:





Largest State Budget Shortfalls on Record
With the exception of the last image, I generated these from NCES statistics using Excel. I'm releasing them under a CC0 waiver. Use them at will.

The last image was generated by the Center on Budget and Policy Priorities and posted to Flickr. I'm linking to that copy. You can do the same.

02 July 2012

Learning - Everything Works, But How Well?

In a recent Freakonomics post, Roger Pielke Jr. writes about the perils of "False Positive Science." We constantly fight the fallacy of equating correlation with causation. But false positive science involves a more subtle error. In the search for statistically significant results, researchers often try many different analytical alternatives. Their papers rarely list all of the failed models; only the one that achieves statistical significance is reported. Joseph Simmons and colleagues write, "It is unacceptably easy to publish 'statistically significant' evidence consistent with any hypothesis." And this mistake is more difficult for the reader to detect than the correlation/causation fallacy.

Credit: Randall Munroe - xkcd.com
When it comes to research into educational achievement, another issue comes into play: since humans are natural learners, just about everything works. In his book, Visible Learning, John Hattie gives this a rigorous treatment. Over 15 years, Hattie and his staff studied over 800 meta-analyses representing hundreds of thousands of studies of what affects student learning. For every study, they converted the results into a common effect size scale.

Roughly speaking, the effect sizes used in Visible Learning express the improvement a student makes in a year, scaled to one standard deviation on a standardized test. By mapping all effects onto a common effect size scale you can compare the relative value of different techniques and theories.
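In conventional statistical terms, this is a standardized mean difference. Here's a minimal sketch of the usual pooled-standard-deviation form; Hattie's synthesis works from several variants of this, so treat it as the idea rather than his exact method.

```python
from statistics import mean, stdev

def effect_size(treatment, control):
    """Standardized mean difference (Cohen's d with a pooled SD)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Made-up test scores; Hattie treats d = 0.40 as the hinge point
# separating "interesting" influences from business as usual.
print(round(effect_size([78, 85, 91, 88, 74], [70, 77, 83, 80, 66]), 2))
```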

Among Hattie's observations is the following:
Almost everything works. Ninety percent of all effect sizes in education are positive. Of the ten percent that are negative, about half are "expected" (e.g., effects of disruptive students); thus about 95 percent of all things we do have a positive influence on achievement. When teachers claim that they are having a positive effect on achievement or when a policy improves achievement this is almost a trivial claim: virtually everything works. One only needs a pulse and we can improve achievement. (Hattie, Visible Learning, p. 15)
On Hattie's scale, a child simply living for a year with no schooling achieves an effect size of 0.15. "Maturation alone can account for much of the enhancement of learning." Being present in a classroom with a teacher results in effect sizes between 0.15 and 0.40. So, for an innovation to be interesting, it must result in an effect size substantially higher than 0.40. (Hattie, p. 16).

From the book, here are some selected influences with their rank and effect sizes.

Rank   Domain      Influence                                           Effect Size
1      Student     Self-report grades                                  1.44
2      Student     Piagetian programs                                  1.28
3      Teaching    Providing formative evaluation                      0.90
4      Teacher     Micro teaching                                      0.88
5      School      Acceleration                                        0.88
6      School      Classroom behavioral                                0.80
7      Teaching    Comprehensive interventions for learning disabled   0.77
8      Teacher     Teacher clarity                                     0.75
9      Teaching    Reciprocal teaching                                 0.74
10     Teaching    Feedback                                            0.73
11     Teacher     Teacher-student relationships                       0.72
22     Curricula   Phonics instruction                                 0.60
25     Teaching    Study skills                                        0.59
29     Teaching    Mastery learning                                    0.58
31     Home        Home environment                                    0.57
32     Home        Socioeconomic status                                0.57
42     School      Classroom management                                0.52
45     Home        Parental involvement                                0.51
51     Student     Motivation                                          0.48
56     Teacher     Quality of teaching                                 0.44
59     School      School size                                         0.43
62     Teaching    Matching style of learning                          0.41
81     Student     Drugs (e.g. for ADHD)                               0.33
91     School      Desegregation                                       0.28
92     School      Mainstreaming                                       0.28
100    Teaching    Individualized instruction                          0.23
106    School      Class size                                          0.21
107    School      Charter schools                                     0.20
129    Curricula   Whole language                                      0.06
133    School      Open vs. traditional                                0.01
134    School      Summer vacation                                    -0.09
135    Home        Welfare policies                                   -0.12
136    School      Retention                                          -0.16
137    Home        Television                                         -0.18
138    School      Mobility                                           -0.34

There's a ton of stuff to chew on here – far more than I can do justice to in a blog post. Hattie devotes between half a page and five pages to each of the 138 effects, and there is nuance that the numbers don't capture. I'll just make a few observations:
  • The top five influences all involve adapting the experience according to individual student needs.
  • Charter schools, something I favor, have an unimpressive effect size of 0.20. But charters were intended to enable experimentation, so we should expect them to average about the same as conventional public schools but with a much larger standard deviation. Recent studies seem to confirm that expectation. And studies are starting to identify what factors distinguish the high-performing charters from other schools.
  • Smaller schools help somewhat while the impact of smaller classes is minimal. That's probably because most small-class initiatives dilute their impact with a consequential reduction in teacher experience.
  • Feedback loops, among my favorite topics, appear at #10 with an effect size of 0.73.
  • Home and socioeconomic status have a huge impact. But other factors are bigger so it should be possible to overcome the achievement gap in the school.
  • Phonics Instruction has an effect size of 0.60 while Whole Language has one tenth that effect. There's much to be said for Whole Language and I tend to agree with its constructivist roots but not at the expense of phonics.
Of course, the observation that nearly everything works doesn't eliminate the other perils of false positive science and the correlation/causation fallacy. All three of these make it possible to latch on to one's favorite intervention while claiming to be evidence-driven. To defend against this, we must seek 2-5 times improvement in learning performance and replicable results. It also helps to be careful, honest and humble.

30 May 2012

Motivating Students - Opportunity Isn't Enough

In their book, Disrupting Class, Clayton Christensen and his co-authors identify four objectives that U.S. society has asked of public education. Each one is incremental; that is, each adds new expectations while retaining the previous objectives.
This latest goal is both transformative and controversial. There's little doubt that society is looking to our educational programs to relieve poverty. But many educators are skeptical about their ability or responsibility to address the poverty problem. Regardless, schools are no longer judged exclusively by their top achievers or even average scores. Frequently, the focus is on the bottom performers -- often at the expense of average or high-achieving students. 

The new goal is based on three important observations:
  • Educational attainment predicts financial prosperity.
  • Financial prosperity of parents predicts children's educational attainment.
  • The most important predictor of educational attainment is the educational attainment of the parents.
    (source here)
When a school fully embraces the "Eliminate Poverty" goal it must accept responsibility for student motivation. Previously schools were expected to offer opportunity. If a student didn't take advantage of that opportunity it was their problem -- or the family's. Now, schools must motivate students to achieve, not just give them the opportunity. The trouble is, schools don't know a lot about motivation.

Dan Pink has been studying what motivates us. There's a body of research into motivation that goes back to the 1960s and earlier, but organizations have been slow to apply this knowledge because it's counter-cultural. It turns out that carrot-and-stick motivators like financial incentives, fines and privileges are effective for repetitive, mechanical work. But when it comes to cognitively demanding tasks, bigger incentives actually impair performance.

The candle problem, studied by Sam Glucksberg, is an early study that showed this effect. More recently, the Federal Reserve Bank sponsored a study of incentives which concluded:

As long as the task involved only mechanical skill, bonuses worked as they would be expected: the higher the pay, the better the performance. But, once the task called for ‘even rudimentary cognitive skill,’ a larger reward led to poorer performance.
(Dan Pink, Drive, pg 60, quoting a study by Dan Ariely et al.)

For cognitively demanding tasks, Dan Pink has identified three motivators that are effective:
  • Autonomy
  • Mastery
  • Purpose
This has all kinds of implications. For example, in order to place greater emphasis on student achievement, states and districts are considering merit-based pay systems for teachers. But teaching and learning are cognitively demanding activities so merit pay is unlikely to be a functional motivator. Meanwhile teachers are complaining about reduced freedom in the classroom -- a loss of autonomy.

But, the subject of this post is student motivation. Since the advent of the Carnegie Unit, students have received academic credit for seat time. Keeping students in seats is a mechanical task so schools and vice principals are compensated according to attendance rates. We even have truancy laws that require that children spend a certain number of hours in school and punish them when they don't comply. As with teachers, we're using traditional incentives to motivate a mechanical task -- simply being present.

Of course, presence doesn't equate to learning. So how can we use autonomy, mastery and purpose to motivate student learning? There are numerous opportunities and great teachers naturally apply them. Here are three examples:

Autonomy: In studies of whether changing instruction to match learning styles helps students learn better, researchers found that simply offering students a choice of activities resulted in better performance. This makes intuitive sense: when students choose their own activities, they're invested in the outcome and should perform better regardless of whether the activity is a better match to their learning style.

Mastery: Khan Academy includes a learning map that graphically displays the topics that students have mastered and their progress toward achievement goals. It grants badges for particularly important achievements.

Purpose: How often do parents and teachers hear the question, "Why do I have to learn this?" Lack of purpose is a strong demotivator. Educurious encourages authentic learning by posing real problems to the students and connecting them with real-world experts.

We have more than 50 years of research indicating that motivation is much more complex than carrots and sticks. Yet we keep resorting to these familiar tools despite their ineffectiveness for cognitively demanding tasks. As schools take greater responsibility for achievement, not just opportunity, it will be important to apply the right motivators.