Of That

Brandt Redd on Education, Technology, Energy, and Trust

21 February 2013

Winds of Change: Higher Productivity in Higher Education

Note: This first appeared last week as a guest post on the Next Generation Learning Challenges Blog. I highly recommend both the blog and the NGLC website.

My first lecture hall experience was American Heritage at Brigham Young University. The course was required for all freshmen and more than 500 of us at a time attended two lectures a week. In a third “lab” period we met with a TA. The professor was charismatic and the instructional design team supplied him with carousels full of colorful slides. Still, a large fraction of the class was asleep at any given time.

Large lecture hall courses are one common method of increasing productivity in higher education. Another is weed-out courses – those designed to convince students that they should choose another, less expensive major. For me the weed-out subject was Discrete Structures. This Computer Science subject is rich with metaphors like trees, maps, chains and links. It can be taught through story, modeling, manipulatives and real-world application. But our version was deliberately dry with an emphasis on precise vocabulary and obscure notational forms. The pass rate hovered near 50% and hundreds of students were convinced that they weren't capable of understanding computer science.

Higher education in the United States is sandwiched between twin pressures: increasing societal needs and expectations on one side, and flat or declining funding on the other. To meet this challenge, institutions will have to dramatically increase productivity. But traditional productivity boosts like large lecture halls, weed-out courses or greater admissions selectivity won’t be enough this time around. What’s required is fundamental change to the way we support learning. We need a more personalized approach.

Societal Needs and Expectations

Employment projection is from the Bureau of Labor Statistics Job Outlook. Supply is based on National Center for Education Statistics data on annual Computer Science BS degrees awarded. Attrition is based on a 40-year career span.
While the U.S. unemployment rate hovers around 8%, there is a shortage of engineers and technicians. In 2012, the unemployment rate for software developers was only 2.8%. An Association for Computing Machinery study indicates that the United States will need more than 150,000 new computer scientists each year through 2020 yet our collective colleges and universities only produce 40,000 degree holders to fill those jobs. Healthcare workers are also in short supply. In 2012 the unemployment rate for physicians was 0.8%. For Physical Therapists it was 2.0% and for Registered Nurses, 2.6%.

At a recent Technology Alliance conference it was noted that colleges and universities in Washington State produce less than half as many engineers, technicians and software developers as the state’s employers consume. The rest have to be imported from other states or countries. A speaker from the University of Washington pointed out that they have increased introductory Computer Science enrollment from roughly 1200 to over 2000 per year. But the Microsoft representative responded that they have 3,600 engineering and computer science openings and they’re competing with Amazon, Boeing and many others to fill those spots.

Of 4.3 million freshmen who started college in 2004, only 2.2 million (or 51%) graduated within six years. This isn’t a perfectly accurate figure. Because of the way records are kept, it’s hard to count students who transfer and complete at a different institution. But inadequate record keeping is another symptom that institutions haven’t focused enough on ensuring their students are successful. Higher completion rates will save a lot of wasted student time.

As we move into the 21st century the fraction of unskilled jobs continues to diminish while those requiring advanced skills increase. It’s no longer appropriate to sort students by “aptitude.” We must give students the support and guidance they need to master advanced subjects.

The Funding Landscape

Education is the largest item in most state budgets. In California it accounts for between 52% and 55% of the state general fund. With the recession hitting state revenues and the expiration of stimulus supplements, state fiscal support for higher education dropped by 4.7% between 2011 and 2012 and remained flat in 2013. Overall, annual support has dropped by 10.8% since 2008. On a per-student basis, state and local financing dropped 24% in the 10 years preceding 2011.

At the same time, tuition is rising much faster than inflation. Tuition and fees at U.S. public universities rose 4.8% for the 2012 school year to an average of $8,655. At nonprofit private colleges tuition and fees rose 4.2% to $29,956. In addition to drops in public funding, the cost to provide education is increasing and the recession has diminished private endowments.

Total student debt in the U.S. now exceeds $1 trillion, making it higher than the nation’s credit card debt. Student loans aren't a big problem if they are correlated with significantly higher earning potential. But loan approval is not connected with choice of academic major or the graduation rate of the institution.

Personalized Learning

The demands on higher education are greater than ever. We need more graduates – especially in certain fields. We need better completion rates. We need to support students in tackling challenging subjects. Moreover, we have to do this with flat or declining budgets.

The Bill & Melinda Gates Foundation has assembled representatives from a dozen colleges and universities that are trying new approaches with promising results. The Personalized Learning Network, as it's called, includes innovators like Western Governors University and American Public University; pioneering programs at Arizona State University and UC Berkeley; and NGLC grantees like the Kentucky Community & Technical College System, Rio Salado College and Southern New Hampshire University.

Recently I had the privilege of meeting with this group. There’s a lot of variation in their personalized learning programs but they share these common features:

  • Mastery Learning and Independent Pacing: Students have to master the current topic before moving to the next step. Self-pacing grants this freedom and ensures that there aren't gaps in understanding due to bad days or illness. And students don’t waste time on topics that they already understand.
  • High Expectations: The institutions make a commitment to support all students sufficiently so that they can master the material.
  • Feedback: Students and instructors are constantly informed about conceptual understanding and progress through the material.
  • Adaptive Learning: The learning system adapts according to individual student actions and performance.
  • Individual Attention: The programs facilitate abundant 1:1 time between students and faculty.
  • Motivation: Systems and attitudes that foster student motivation include interesting activities, student autonomy, recognition of good performance and avoidance of frustration due to either anxiety or boredom.

All of this is enabled through strategic use of technology. Most use some form of blended online and in-person learning. The key point is not to simply add technology but to apply technology in the service of personalized learning.

Personalized learning programs should be able to address higher education pressures for better success and completion rates. But can they also help educate more students at lower cost? I believe so. Technology can automate many tasks that cost a lot of educator time. Video lectures are a personalization technology because they allow students to view on demand and replay as needed. Not only do they save time in class, they also save the instructor time preparing the lecture. Objective assignments can be graded automatically with feedback given instantly to the student. Feedback to instructors can help them optimize their interactions with students. Subjective grading, while still consuming human time, can also be made more efficient. All of these factors help institutions increase capacity and reduce per-student costs.

Equally important are the savings offered to students. Immediate feedback helps students learn concepts more efficiently and avoids time wasted on misconceptions. Students can advance immediately upon understanding a concept and get credit for things they learned previously. And authentic learning activities support a better and more complete understanding of each topic. In one study by Carnegie Mellon’s Open Learning Initiative, students learned the same material in half the time with better retention.

Changing higher education is like turning a glacier. Features like accreditation, tenure, financial aid, credit transfer, and faculty autonomy interlock to form a seemingly insurmountable barrier protecting the status quo. But the twin pressures of increased expectations and diminishing funding result in an unprecedented incentive for change. Like the Maginot Line, traditional barriers won’t be overcome but simply bypassed.

13 February 2013

The Common Core State Standards for Literacy are Two Dimensional

The ideal school librarian would know every student in the school – what their interests are, what their current reading level is and what their teachers will be teaching next. With that information, she would draw on her comprehensive knowledge of the school's book collection to suggest books or activities that would be both enjoyable and appropriately challenging. That is, books that are in the student's Zone of Proximal Development.

It's not really possible for a librarian to have such a comprehensive view of both students and the book collection. But under Race to the Top grants, several states are developing Instructional Improvement Systems that, among other things, will support recommendations like these. Such systems operate at the intersection of student data and content data. And to support them, inBloom (formerly the Shared Learning Collaborative) is deploying student and content data services.

The Common Core State Standards (CCSS) and the Learning Resource Metadata Initiative (LRMI) work together to support the content data side when teaching reading and writing. The CCSS for ELA-Literacy have two dimensions to their basic structure. The grid below shows one way to view the Common Core Standards for Reading. Making up the horizontal dimension are Anchor Standards 1-9. These describe specific skills that the student should be able to apply when reading. The vertical dimension is Anchor Standard 10, the requirement that the other nine anchor skills should be demonstrated against texts of increasing difficulty as the student advances from Kindergarten to 12th grade. Notably, grades 9 and 10 share a level as do grades 11 and 12.

Common Core State Standards for Reading Literature
Here's an example of how this works. Anchor Standard for Reading number 6 states:
  • CCSS.ELA-Literacy.CCRA.R.6 Assess how point of view or purpose shapes the content and style of a text.
On the diagram this is marked with a vertical gridline. One of the horizontal gridlines is Reading Literature Grade 4 Standard 10. Its statement is:
  • CCSS.ELA-Literacy.RL.4.10 By the end of the year, read and comprehend literature, including stories, dramas, and poetry, in the grades 4–5 text complexity band proficiently, with scaffolding as needed at the high end of the range.
I've marked the intersection of these two with the identifier, "RL.4.6". The statement for Reading Literature Grade 4 Standard 6 is:
  • CCSS.ELA-Literacy.RL.4.6 Compare and contrast the point of view from which different stories are narrated, including the difference between first- and third-person narrations.
Notice how this last statement is a refinement of anchor standard 6 targeted at a Grade 4 skill level. So, a source text or learning activity that satisfies RL.4.6 would have a text complexity level in the grade 4-5 text complexity band and it would at least use a first-person or third-person narration. Ideally the activity would include both narration forms and give the student a chance to contrast the two.
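To make the two-dimensional structure concrete, here is a minimal sketch in Python. The identifier format (strand, grade, standard number) comes from the standards themselves; the function and field names are my own illustration, not part of CCSS.

```python
# Illustrative only: decompose a short-form CCSS reading identifier such as
# "RL.4.6" into its two dimensions. The identifier format comes from the
# standards; the function and field names are my own.

def parse_reading_standard(identifier: str) -> dict:
    """Split an identifier like 'RL.4.6' or 'RL.9-10.6' into its parts."""
    strand, grade_band, standard = identifier.split(".")
    return {
        "strand": strand,                  # e.g., RL = Reading Literature
        "grade_band": grade_band,          # vertical dimension (Anchor Standard 10)
        "anchor_standard": int(standard),  # horizontal dimension (Anchor Standards 1-9)
    }

print(parse_reading_standard("RL.4.6"))
# {'strand': 'RL', 'grade_band': '4', 'anchor_standard': 6}
print(parse_reading_standard("RL.9-10.6"))
# {'strand': 'RL', 'grade_band': '9-10', 'anchor_standard': 6}
```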

So, what are these text complexity bands and how do we tell whether a text is within a particular band? In other words, how do we place a text or learning activity on the vertical dimension?

Appendix A of the Common Core State Standards for English Language Arts describes a three-factor model for measuring text complexity. The qualitative factor refers to levels of meaning, structure and demands for prior knowledge on the part of the reader. "Reader and Task" considerations involve matching texts to the reader's needs or interests and the learning tasks that will be associated with the text. The quantitative factor is a numerical measure that is calculated (usually by computer) from word length and frequency, sentence length, vocabulary and text cohesion. A supplement to Appendix A lists six approved scales for indicating quantitative text complexity for the Common Core. The table below indicates which levels are appropriate for certain grade ranges.

Common Core Band | ATOS | Degrees of Reading Power® | Flesch-Kincaid | The Lexile Framework® | Reading Maturity | SourceRater
2nd-3rd | 2.75-5.14 | 42-54 | 1.98-5.34 | 420-820 | 3.53-6.13 | 0.05-2.48
4th-5th | 4.97-7.03 | 52-60 | 4.51-7.73 | 740-1010 | 5.42-7.92 | 0.84-5.75
6th-8th | 7.00-9.98 | 57-67 | 6.51-10.34 | 925-1185 | 7.04-9.57 | 4.11-10.66
9th-10th | 9.67-12.01 | 62-72 | 8.32-12.12 | 1050-1335 | 8.41-10.81 | 9.02-13.93
11th-CCR | 11.20-14.10 | 67-74 | 10.34-14.20 | 1185-1385 | 9.57-12.00 | 12.30-14.50
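As a minimal sketch of how the vertical placement could be automated, the snippet below takes the Lexile column from the table above and returns the band(s) a given score falls into. The band boundaries are copied from the table; the function and its names are my own illustration.

```python
# Minimal sketch: look up which Common Core grade band(s) a quantitative text
# complexity score falls into, using the Lexile column from the table above.
# Note that adjacent bands deliberately overlap.

LEXILE_BANDS = {
    "2nd-3rd":  (420, 820),
    "4th-5th":  (740, 1010),
    "6th-8th":  (925, 1185),
    "9th-10th": (1050, 1335),
    "11th-CCR": (1185, 1385),
}

def bands_for_lexile(score: int):
    """Return every band whose range contains the score."""
    return [band for band, (low, high) in LEXILE_BANDS.items()
            if low <= score <= high]

print(bands_for_lexile(870))   # ['4th-5th']  -- e.g., To Kill a Mockingbird
print(bands_for_lexile(1000))  # ['4th-5th', '6th-8th']  -- bands overlap
```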

The grid diagram also includes an example of how a source text might be fully aligned to the Common Core literacy standards. In this case, To Kill a Mockingbird is shown as an appropriate text for teaching standards 1-7 at grades 9 or 10. So, the LRMI metadata for To Kill a Mockingbird would include alignment to standards RL.9-10.1, RL.9-10.2, RL.9-10.3, RL.9-10.4, RL.9-10.5, RL.9-10.6, and RL.9-10.7.

On the vertical dimension, To Kill a Mockingbird is positioned toward the middle of the grades 9-10 range. So, it would be considered moderately advanced for grade 9 and moderately easy for grade 10. To Kill a Mockingbird is rated an 870 on the Lexile scale. A quick glance at the table shows that 870 is in the 4th-5th grade range. The book is positioned higher on the grid than the raw Lexile number would indicate due to qualitative factors such as the complex moral dilemmas posed by the text.

The LRMI metadata schema is designed to be flexible enough to represent all of these dimensions. The AlignmentObject type represents the relationship between a text or learning activity and a node in a framework or taxonomy. The most obvious and common way this is used is with an alignmentType of "teaches" or "assesses" and the target node being a statement in the Common Core State Standards. In the To Kill a Mockingbird example, the "teaches" alignmentType would be used with targets of the seven standards (RL.9-10.1 to RL.9-10.7). Any one of these seven standards also implicitly brackets the vertical, text complexity dimension. In order to position a resource more finely, LRMI also defines a "textComplexity" alignmentType. Publishers of at least two of the quantitative frameworks listed above are in the process of writing guidelines for their use with LRMI. It's also possible to use LRMI to indicate non-quantitative factors. To do so, we would need to define taxonomies for qualitative and "reader and task" factors with appropriate identifiers.
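For illustration, here is roughly what such metadata might look like, written as a Python dictionary that mirrors the JSON-LD form used with schema.org. The property names (educationalAlignment, alignmentType, educationalFramework, targetName) come from schema.org/LRMI; the specific values, the "870L" text complexity target, and the omission of target URLs are my own simplifications rather than an authoritative serialization.

```python
# Rough sketch (not authoritative): LRMI alignment metadata for a text,
# expressed as a Python dict mirroring schema.org's JSON-LD form.

mockingbird_metadata = {
    "@context": "http://schema.org/",
    "@type": "Book",
    "name": "To Kill a Mockingbird",
    "educationalAlignment": [
        # "teaches" alignments to the seven grade 9-10 reading standards
        *[{
            "@type": "AlignmentObject",
            "alignmentType": "teaches",
            "educationalFramework": "Common Core State Standards",
            "targetName": f"CCSS.ELA-Literacy.RL.9-10.{n}",
        } for n in range(1, 8)],
        # finer positioning on the vertical (text complexity) dimension
        {
            "@type": "AlignmentObject",
            "alignmentType": "textComplexity",
            "educationalFramework": "Lexile",
            "targetName": "870L",
        },
    ],
}
```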


In these examples I've used Common Core Standards for Reading but the writing standards have a similar two-dimensional structure. Overall, it's a rich framework with great promise for improving student literacy.

We have achievement standards (CCSS) and data standards (LRMI). There are emerging services like inBloom that build on these standards. I expect that very soon a combination of CCSS, LRMI, open content libraries and custom recommendation engines will offer students reading lists and writing activities tailored to their individual learning needs.
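As a toy sketch of what such a recommendation engine might do, the snippet below filters a resource library by an upcoming standard and by proximity to a student's measured reading level. The data model, field names and the +/-100 Lexile window are entirely my own assumptions, not part of CCSS, LRMI or inBloom.

```python
# Toy sketch: recommend resources that teach an upcoming standard and sit
# near the student's reading level (a rough Zone of Proximal Development
# filter). All names and thresholds here are hypothetical.

def recommend(resources, upcoming_standard, student_lexile, window=100):
    """Return resources aligned to the standard and within the Lexile window."""
    return [r for r in resources
            if upcoming_standard in r["teaches"]
            and abs(r["lexile"] - student_lexile) <= window]

library = [
    {"name": "To Kill a Mockingbird", "teaches": {"RL.9-10.6"}, "lexile": 870},
    {"name": "A denser novel",        "teaches": {"RL.9-10.6"}, "lexile": 1250},
]

print(recommend(library, "RL.9-10.6", student_lexile=900))
# [{'name': 'To Kill a Mockingbird', 'teaches': {'RL.9-10.6'}, 'lexile': 870}]
```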

05 February 2013

Personal Rapid Transit and Driverless Cars

An ULTra PRT vehicle on a test track. (Wikimedia Commons)
As a teenager in the 1970s I remember reading about Personal Rapid Transit in a number of places including this Popular Science article. Unlike conventional transit like light rail or bus systems, a PRT system uses small, individually switched cars on a specially designed guideway. Upon entering a station, you select your destination on a console. Within a few seconds a 3-6 passenger car arrives and whisks you directly to your destination. At least that was the dream.

Of the dozens of proposals and prototypes, the Morgantown PRT that links WVU campuses is the only one of that era ever to be deployed at scale. The rest were either cancelled entirely or were defeatured into automated people mover systems like you find at many airports.

In 2011 two new systems opened: ULTra PRT at London's Heathrow Airport and the 2getthere system in Masdar City, UAE. Both are relatively small, each with fewer than five passenger stations and fewer than 25 vehicles. But they also represent an important departure from previous PRT designs. Both use battery-powered vehicles with autonomous control: they run on rubber tires and steer themselves, so there's no switching gear on the guideway, and their batteries recharge automatically while the cars wait at stations. This contrasts with earlier PRT designs that used powered guiderails and a central control and switching system.

The primary barrier to PRT systems has been the cost of the tracks or "guideways." It's estimated that it would cost between $30 million and $40 million a mile to expand the Morgantown system. That's because the guideway has to incorporate precision guide curbs, power transmission, track switches and even a heating system to melt snow and ice to keep it safe in bad weather.

In contrast, the ULTra guideway is estimated to cost between $7 million and $15 million per mile. That's because it's a simple concrete pathway with no active systems.

Which brings me to Driverless Cars.

In essence, the ULTra and 2getthere systems are self-driving electric cars for which the environment has been constrained enough to simplify the self-guidance problem. High curbs make it easier for the cars to center themselves in lanes, dedicated roadways minimize the need for pedestrian and obstacle avoidance, and strategically placed charging stations let the cars run on batteries of modest capacity.

Meanwhile, Google's self-driving cars have driven more than 300,000 miles accident-free, on conventional roads, without special infrastructure. Like many, I've wondered why Google is building such cars; they're in the information business, not transportation. A talk by Big Data guru Ed Lazowska clued me in. Before the Google people let a car drive a route by itself, they first have a human drive the car over the same route. During that trip, the car's sensors scan the environment, picking out landmarks and obstacles, measuring road conditions and fine-tuning a GPS map of the roadway. Google is interested in supplying the data needed to enable driverless cars, and they're doing research to determine what data that is.

A recent Freakonomics post on the subject suggested that driverless cars will arrive incrementally starting with the already common cruise control, adding adaptive cruise control, collision avoidance and self-parking before fully driverless operation arrives.

But I'm afraid that calling these "driverless cars" is the 21st Century equivalent of calling automobiles "horseless carriages". In each case the focus is on what's missing (the driver or the horse) instead of what new capacity has been introduced. "Horseless carriage" doesn't exactly describe a vehicle capable of sustaining 65 miles per hour with a range of over 300 miles. Nor does it conjure images of the megacities it enables or the endless parking lots it requires.

Consider this possibility: driverless technology enables the PRT dream on existing infrastructure. Instead of dedicated guideways costing tens to hundreds of millions, a PRT system built on driverless technology would rely on GPS and 3G data networks, both of which are already in place. Initial deployments can be restricted to certain neighborhoods that meet high standards of traffic signals, lane markings and crosswalk protection. Even a system restricted to certain lanes and certain streets would offer PRT of greater scale and capacity than anything yet deployed. Yet the investment to get started is regulatory permission, a few vehicles and some signage.

Fancy stations aren't required – only some curb space. Cars would be summoned using smartphones. And it wouldn't just be a people mover. Cargo, too, could be sent unattended. Grocery stores could use the same infrastructure for home (or corner) delivery. In the long run, even mail delivery and garbage collection could be automated.

There are some fundamental principles at work here that can be applied to other large-scale problems:
  • Infrastructure is usually the most expensive component. Whenever possible, use infrastructure that's already in place and share infrastructure with other projects.
  • Push control (or decision making) as close as possible to the application or beneficiary.
  • Inform the distributed control with global data.
  • Build systems that can be scaled incrementally; where adding capacity is a matter of buying more of the same rather than periodic large investments to get to the next capacity threshold.
Consider the above principles applied to education. (You know I can't resist.) Existing infrastructure includes the internet, inexpensive computers and tablets, content development tools, video standards and so forth. Personalized learning relies on giving more control to the student and teacher to adapt learning to individual needs while being informed by common standards. And web-scale technologies are required if systems are to grow to support millions of students.

The "Wouldn't it be Cool" Department

Walt Disney World has the most heavily used monorail system in the world. They also have a well-maintained private road system connecting their resorts and theme parks. Wouldn't it be cool if Disney deployed a PRT system (based on driverless car technology) to connect their resorts and parks together? Such an attraction would enhance the Disney experience while proving the viability of the concept to the world.