31 March 2017

Cut Taxes or Increase Spending - Is the Debate Obsolete?

Photo of the U.S. Capitol with full moon overhead.

As the Trump administration turns its attention to a tax reform plan, debate swirls about the best way to stimulate the economy. Traditionally, Democrats have advocated increased government spending while Republicans have fought for reduced taxes. Both methods succeed in stimulating the economy, and both have their roots in the theories of prominent economists. But it may be that both strategies, and the theories that support them, are obsolete in a day when production is many times basic needs.

Government spending advocates cite the work of John Maynard Keynes. Prior to Keynes, neoclassical economists theorized that free markets should naturally balance the economy toward full employment. Keynes observed that economies tended to swing between boom and bust cycles and advocated government intervention through fiscal policy (government taxing and spending) and monetary policy (central bank regulation of the money supply) to moderate the swings. Keynesian theory was influential in addressing the Great Depression and remained dominant following World War II into the 1970s.

Among the expectations of early Keynesian economics was that high inflation and high unemployment should not co-exist. Economist Milton Friedman challenged that notion and was proven right when "stagflation" emerged in the 1970s. Friedman theorized that stagflation and related poor economic conditions result from excessive or misinformed government intervention. The solution, he said, was to free the market through reduced regulation and lower taxes. This school of thought is generally known as "supply-side" or "monetarist". President Reagan successfully employed that approach early in his presidency, launching a sustained period of economic growth that continued through the Bush and Clinton administrations.

Today, Keynesian economics is associated with greater regulation, increased government spending, and with an overall trust in government interventions. Meanwhile, monetarist economics is associated with free markets, reduced taxes, and with an overall trust in the market's ability to self-balance. In fact, both schools of thought are much more nuanced than these broad strokes. On the Keynesian side, it matters a lot where the government spends its money. On the monetarist side, it matters a great deal which taxes are reduced and how regulations are tuned. Earnest theorists on both sides have a healthy respect for the other theory.

But the nuance is quickly lost in the morass of political debate. Indeed, I fear that most political Keynesians choose that theory because it justifies their existing desire to increase government spending. And most monetarists choose supply-side theory for its justification of reduced taxes and regulation. In each case I think they first choose their preferred intervention and then select a theory to justify it.

Through the latter half of the 20th century, U.S. government economic focus was pretty much what Keynes described - moderating the boom and bust cycle toward more stable continuous growth. During slow cycles this meant adding economic stimulus through increased spending and reduced taxes. When inflation started to get out of hand, government would slow things by increasing taxes, reducing spending, and raising interest rates. Reagan met the stagflation challenge (high inflation and high unemployment) with an unusual combination of reduced taxes (to stimulate hiring) and increased interest rates (to slow inflation). Nevertheless, Reaganomics still used the same tools, just in different ways.

Our contemporary challenge is a new one. Since roughly 2001 the economy has required continuous stimulation to maintain growth. Radical new stimuli such as Quantitative Easing and zero interest rates have been used. Experts previously avoided those stimuli because of their potential to provoke high inflation, yet inflation remains at historically low levels, and it seems that, without continuous stimulation, the economy will slow to a crawl.

Production compared to Basic Needs

Chart: Output per hour of all persons, 1947 to 2010.

The new economic challenge is due principally to the rapid increase in workforce productivity. According to the U.S. Bureau of Labor Statistics individual worker productivity has more than quadrupled since World War II. Overall productivity per person in 2012 was 412% that of 1947.

Productivity growth becomes even more striking when compared with basic needs. In 2014, U.S. per capita GDP was $54,539. Basic needs per capita that same year were approximately $13,908. So per-capita production exceeds basic needs by nearly four times. And while the basic needs side of this equation includes the whole population, the productivity side only accounts for those employed; it doesn't include unemployed workers or people choosing not to seek paid employment. So productive capacity compared with basic needs would be even higher.
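For readers who want to verify the multiplier, here's the arithmetic as a short script. The figures are the 2014 values cited above; the "nearly four times" claim comes straight out of the division:

```python
# 2014 figures as cited in the text
gdp_per_capita = 54539          # U.S. per capita GDP, dollars
basic_needs_per_capita = 13908  # approximate basic needs per capita, dollars

# How many times over does per-capita production cover basic needs?
ratio = gdp_per_capita / basic_needs_per_capita
print(f"Per-capita production is {ratio:.1f}x basic needs")  # about 3.9x
```

Remember that the true multiplier of productive capacity over basic needs is higher still, since the denominator covers the whole population while the numerator reflects only those actually working.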

If it weren't for problems of distribution this would be a great thing! For the first time in history, society has sufficient capacity to provide comfortable housing, plenty of food, health care, entertainment, and leisure time for all. The challenge is that, in a market economy, productivity increases disproportionately benefit those who are already at the higher end of the wage scale.

Disproportionate Benefits

Creative destruction is the term economists use to describe the transformation of an industry by innovation. It is usually associated with the elimination of jobs due to new technology, but any innovation that increases individual productivity qualifies. Some examples: The backhoe replaces the jobs of several ditch diggers with that of a more-skilled heavy equipment operator. Computerized catalogs reduce the demand on librarians. Industrial robots replace factory workers. The common feature of such innovations is that they substantially increase the productivity of individual workers. Frequently, these innovations also result in jobs moving upscale — requiring more skill or training and with correspondingly higher pay.

Creatively destructive innovations have led to enormous productivity increases in recent decades thereby reducing the demand for labor. As with any market, when supply increases or demand declines the value also declines. In this case the value of routine jobs has declined dramatically. Here's how economist Dr. David Autor describes it.

And so the things that are most susceptible to computerization or to automation with computers are things where we have explicit procedures for accomplishing them. Right? They’re what my colleagues and I often call “routine tasks.” I don't mean routine in the sense of mundane. I mean routine in the sense of being codifiable. And so the things that were first automated with computers were military applications like encryption. And then banking and census-taking and insurance, and then things like word processing and office clerical operations. But what you didn’t see computers doing a lot of — and still don't, in fact — are tasks that demand flexibility and don't follow well-understood procedures. I don’t know how to tell someone how do you write a persuasive essay, or come up with a great new hypothesis, or develop an exciting product that no one has seen before. ... What we’ve been very good at doing with computers is substituting them for routine, codifiable tasks. The tasks done by workers on production lines, the tasks done by clerical workers, the tasks done by librarians, the tasks done by kind of para-professionals, like legal assistants who go into the stacks for you. And so we see a big decline in clerical workers. We see a decline in production workers. We see a decline even in lower-level management positions because they’re all kind of information processing tasks that have been codified.

Recent creative destruction has predominantly affected lower-middle-class jobs and manufacturing jobs. While increased productivity has made our nation more wealthy as a whole, large sectors of the labor force have been left behind. This may be the biggest factor behind the slow recovery from the 2008 recession. Automation substituted for jobs that were eliminated during the recession; those jobs are not coming back.

The decline in U.S. manufacturing employment has been balanced, in part, by growth in the service sector. This makes sense; growth in productivity has resulted in greater overall wealth. On average, people in the U.S. have more money to spend on eating out, recreation, vacations, and health care. But again, the benefits are not evenly distributed. As workers displaced from manufacturing have moved into the service sector, wages in that area have stagnated.

Disproportionate Impact of Globalization

Economists have consistently advocated free trade. The math is incontrovertible: when regions or countries with different costs of production trade goods and services, all communities benefit, as each is able to specialize and all share in the overall productivity increase.

Only recently have economists begun to study how free trade impacts sectors of the economy rather than the economy as a whole. Unsurprisingly, the impact in the U.S. has disproportionately affected manufacturing and routine labor. Here's another quote from Dr. Autor:

When we import those labor-intensive goods, we’re going to reduce demand for blue-collar workers, who are not doing skill-intensive production.  Now we benefit because we get lower prices on the goods we consume and we sell the things that we're good at making at a higher price to the world. So that raises GDP but simultaneously it tends to make high-skilled and highly educated labor better off, raise their wages, and it tends to make low-skilled manually intensive laborers worse off because there is less demand for their services — so there's going to be fewer of them employed or they're going to be employed at lower wages. So the net effect you can show analytically is going to be positive. But the redistributional consequences are, many of us would view that as adverse because we would rather redistribute from rich to poor than poor to rich. And trade is kind of working in the redistributing from poor to rich direction in the United States. The scale of benefits and harms are rather incommensurate. ...

We would conservatively estimate that more than a million manufacturing jobs in the U.S. were directly eliminated between 2000 and 2007 as a result of China's accelerating trade penetration in the United States. Now that doesn't mean a million jobs total. Maybe some of those workers moved into other sectors. But we've looked at that and as best we can find in that period, you do not see that kind of reallocation. So we estimate that as much as 40 percent of the drop in U.S. manufacturing between 2000 and 2007 is attributable to the trade shock that occurred in that period, which is really following China's ascension to the WTO in 2001.

Manufacturing Output Versus Employment

During the campaign, Donald Trump and Bernie Sanders both advocated rethinking free trade. Perhaps we can use tariffs or government incentives to return manufacturing to the U.S. As it turns out, that's already happening even without incentives. As labor costs increase in Asia, the offshoring advantage is diluted. Many manufacturers are, indeed, opening new U.S. plants. The trouble is that returning manufacturing doesn't result in substantial job or wage growth. These are highly automated plants, employing a fraction of the workers whose jobs were eliminated when manufacturing went overseas. For example, Standard Textile just opened a new plant in Union, SC to make towels for Marriott International. Due to automation, the plant only created 150 new jobs. A generation ago the same plant would have employed more than 1,000 people. And many of the new jobs are more highly skilled — designing, operating, and maintaining automated machinery.

Creative destruction and globalization are working together here. Both increase overall GDP, both increase individual worker productivity, both increase total wealth, and both disproportionately benefit skilled upper-middle-class workers over blue collar and middle-management workers. Any benefit from manufacturing returning to the U.S. will be blunted by the increase in automation reducing labor needs and shifting what remains to more skilled jobs.

Demand-Side Economics

So far, we have looked at the supply side of labor. The massive increase in productivity over the last six decades has been driven by innovative technology, with global trade as an accelerant. As noted before, when the supply of labor exceeds demand, its value decreases. When supply exceeds demand across the economy as a whole, you get a recession.

From the end of World War II through the rest of the 20th century we succeeded in driving demand to keep up with supply. Advertising grew tremendously as an important demand driver. Television programs established new norms: two cars per family, a large home in the suburbs, annual luxury vacations, and designer clothing labels, to name a few. Home appliances like air conditioners and dishwashers went from luxury to necessity.

Government has participated in driving demand. Housing programs made home ownership much more accessible - so much so that they contributed to the 2007 real estate bubble. Likewise, the Federal Reserve has kept interest rates down, ensuring that consumer credit remains accessible and people can buy ahead of income.

In the 21st Century we seem to have reached the limits of demand stimuli to compensate for ever increasing productivity. Smaller cars like the Mini Cooper or Fiat 500 have become stylish. Even the wealthy are choosing to reduce consumption — buying smaller homes or moving into the city. The result is that it takes increasingly strong stimuli to keep the economy moving. For the recession of 2008 the government spent unprecedented amounts of money borrowing directly from the Federal Reserve to do so. Despite this pressure, interest and inflation rates remain at historically low values.

Increase Spending or Cut Taxes?

And so we return to the contemporary debate: Should government increase spending or cut taxes to stimulate the economy? When government cuts taxes, individuals and companies have more disposable income. Presumably they will spend some of that income and save part. When government increases spending, it chooses directly where that money will be spent. Both theories depend on "trickle-down" effects even though that has traditionally been associated with tax cuts. In each case, the direct beneficiary of government policy employs more people and purchases more goods and services; those employees and suppliers also do more business and the impact "trickles" through the economy. The primary differentiator is whether you have greater trust in government (increase spending) or the market (cut taxes) to determine who is at the top of the trickle-down pyramid.

The question is really obsolete. Regardless of which stimulus you choose, demand stimuli are increasingly unable to keep up with increased productive capacity. As a country, we already produce nearly four times basic needs and the multiplier will continue to grow. Meanwhile, the twin pressures of Creative Destruction and Globalization will continue to drive the greater benefit of demand stimulus to those who already earn higher wages. Under either strategy, wage disparity will continue to worsen despite attempts by policymakers to direct tax breaks or government spending toward lower income households.

It seems that we will need a greater economic innovation than either of these 20th century solutions. In my next blog post I will write about some promising ideas. More effective education for all students is, of course, an essential component but insufficient by itself.

Estimating Basic Needs Per Capita: The Self-Sufficiency Standard is a measure of the income necessary to meet basic needs without assistance. Values are expressed per household. National averages aren't published, so we make an approximation starting with samples from two cities. The cost of living index for Milwaukee, WI is 101.9% of the national average; Rochester, NY is exactly 100.0%. The average U.S. household size in 2014 was 2.54; we round up to 3 - two adults and one child. For Milwaukee, the 2016 Self-Sufficiency Standard for that household is $43,112 annually. For Rochester, the 2010 Self-Sufficiency Standard for the same family is $40,334. Per-capita values are $14,371 and $13,445 respectively. Averaging the two comes to $13,908 as the approximate U.S. basic needs per capita in 2014. To be sure, there's a lot of variability across region, household size, medical needs, and so forth, and I mixed figures from across 2010-2016. Nevertheless, this is a good enough working figure for comparison with per capita production in the same timeframe.
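The estimate above is simple division and averaging, so it can be reproduced in a few lines. The figures are the ones quoted in the note; the household size of 3 is the rounded value used there:

```python
# Reproduce the basic-needs-per-capita estimate described above.
# Self-Sufficiency Standard figures for a three-person household
# (two adults, one child), as quoted in the text.
household_size = 3
milwaukee_annual = 43112  # 2016 Self-Sufficiency Standard, Milwaukee, WI
rochester_annual = 40334  # 2010 Self-Sufficiency Standard, Rochester, NY

milwaukee_per_capita = round(milwaukee_annual / household_size)  # 14371
rochester_per_capita = round(rochester_annual / household_size)  # 13445

# Average the two city samples to get a rough national figure
estimate = (milwaukee_per_capita + rochester_per_capita) / 2     # 13908.0
print(f"Approximate U.S. basic needs per capita: ${estimate:,.0f}")
```

A weighted average using each city's cost-of-living index would be slightly more careful, but since both cities sit within two percent of the national average, the simple mean is close enough for a working figure.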

30 December 2016

The Challenge of Information Democracy

In the 1990s I was a co-founder at Folio Corporation, an electronic publishing software company. As the internet grew, we produced tools that let average individuals publish their content and search vast pools for the information they needed. Such tools are common today, but they were cutting edge at the time.

"Information Democracy" was the term we used to describe the concept. In previous generations, a select few were able to publish their words to a sizable audience. Likewise, only business leaders and rulers of countries could afford the research staff necessary to stay well-informed. We produced a video featuring James Earl Jones and held conferences anticipating how greater access to media would spread liberty, increase productivity, and support a more moral society.

We weren't alone in our optimism. In Life After Television George Gilder wrote, "Television is not vulgar because people are vulgar; it is vulgar because people are similar in their prurient interests and sharply differentiated in their civilized concerns." Ever the optimist, Gilder anticipated that greater diversity of media channels would result in a gradual elevation of quality and subject matter.

We have, indeed, achieved a world where any organization can publish to the whole world, where individual citizens can create TV channels on YouTube, and where average researchers have better resources than national leaders had a generation ago. Unfortunately, unfettered access to media hasn't resulted in the utopia many of us expected. Today's challenge is distinguishing reliable information from deliberate deceit and the whole spectrum between.

Unreliable Information

The recent presidential election brought the issue of fake news to the media's attention. It will probably take years to sort out its origin and impact. One source seems to be entrepreneurial Macedonian teenagers making money with fake news sites and Google AdSense. The Washington Post claims it was a coordinated Russian effort to destabilize American democracy.

The Pizzagate episode offers a warning sign of how fake information can provoke a violent response. In terms of death toll, Andrew Wakefield's fraudulent MMR vaccine paper was worse. Despite millions of dollars invested in follow-on studies and publicity campaigns, the anti-vaccine movement has contributed to thousands of illnesses and numerous childhood deaths.

In recent decades, most newspapers and magazines have reduced or eliminated their fact-checking departments. Fact-checking of this sort is a cost center, and with revenues declining due to internet media, publishers have sought to reduce costs. The decline in ante hoc (before publication) fact checking has been matched by a growth in post hoc efforts like FactCheck and PolitiFact, as well as fact-checking pages at major news publications. Post hoc fact checking builds a revenue center out of the effort by sensationalizing politicians' and other publications' mistakes. The unfortunate result is that post hoc fact checking is selective, biased, and missed or ignored by those who prefer to believe an inaccurate story.


The Oxford English Dictionary (OED) named "Post-Truth" as its word of the year for 2016. Their definition is "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief." The OED editors acknowledge that "post-truth" conditions aren't new; only that use of the term increased rapidly in 2016 in the context of the U.S. Presidential Election and the U.K. Brexit vote.

I prefer the term "Confirmation Bias", defined as "the tendency to interpret new evidence as confirmation of one's existing beliefs or theories." In extreme cases, conspiracy theorists tend to reject any evidence contrary to their opinion as part of the conspiracy while considering all evidence in support of their opinion as true and factual.

Information Democracy

2 by 2 matrix. Horizontal dimension is Access to Publish Media with left being Exclusive and right being Open. Vertical dimension is Trust in Media with top being High and bottom being Low. Upper-left quadrant labeled Information Hegemony. Upper-right, Information Democracy. Lower-Left, Propaganda. Lower-Right, Information Anarchy.

It's time to resort to that old standby - the 2x2 matrix. In the latter half of the 20th century, prior to the advent of the internet and world-wide web, U.S. society was in a state of high media trust but all publishing flowed through a relatively small set of media outlets. Opinion polls identified Walter Cronkite as "the most trusted man in America." This state of high trust and exclusive access is the upper-left quadrant, "Information Hegemony."

We optimists of the 1990s expected free access to media to provoke a shift to Information Democracy. Likewise, we anticipated that totalitarian states would be forced from state-controlled media to an information democracy model.

We didn't understand the importance of trust systems in that shift. As access was generalized, and economic forces pushed media toward a more sensationalist orientation, trust declined and we have ended up with Information Anarchy. It's hard to say whether this is superior to the more trustworthy but also restricted hegemony that preceded our day. But this state isn't unprecedented. In the 19th and early 20th centuries, there was a greater variety of newspapers and magazines, each with strong biases and little distinction between fiction and fact. Journalistic objectivity as a value didn't become prominent until the mid-20th century.

Tools for Discerning Truth

At present, the main tool most citizens use to judge media is whether a story matches their existing world view and opinions. Confirmation bias turns out to be a pretty good tool so long as one's world view is somewhat close to truth. And, of course, everyone thinks that their biases are the "true" ones. The problem is that those who rely exclusively on confirmation bias have no tool for correcting their biases - for getting closer to what's really true.

Trust is our second-best tool for judging content. It's also an excellent tool for correcting one's own biases. That's why the decline of trust, and the concurrent decline of trustworthiness, is such a problem today. The sensationalism of most media outlets concentrates on confirmation bias as a way to gain audience. Careful readers have to seek out trustworthy journalists rather than organizations - at least until the trend turns.

Critical thinking isn't so much a tool as a discipline. It's something that our schools can and should teach and it's incorporated into good quality language curricula. As students are taught critical thinking they are taught to recognize and use good-quality arguments, to measure the credibility of facts based on origins and citations, and to compare and contrast writings from multiple authors.

Taking Personal Responsibility

I'm afraid that profit motive will prevent the media industry from solving this problem for us. Rather, we need to take individual responsibility for recognizing and tuning our own biases. We must bring the language of critical thinking into our vocabulary; asking about sources, seeking contrasting points of view, looking for supporting evidence, checking the logic of arguments, and discounting emotional appeals.

Clay Johnson wrote in The Information Diet, "The pattern here is simple: seek to get information directly from the sources, and when the information requires you to act, interact directly with those sources. An over-reliance on third party sources for information and action reduces your ability to know the truth about what's happening, and dilutes your ability to cause change." (Page 140)

The Information Age has given us unprecedented access to the original sources. We can take advantage of that. Institutions will follow the people, not the other way around.

10 November 2016

What I Would Tell Donald Trump about Education

I never thought Donald Trump would survive the first primary much less gain the nomination. By the time we reached the general election I gave up making predictions because, where Trump is concerned, I was always wrong. I don't expect this post to ever make it to the Trump transition team. But I could be wrong about that as well. Regardless, I hope it will help some of you in the community.

The Trump Policy Page on Education is pretty spare. During the campaigns, Trump spoke very little about education policy. In the primaries he made a few anti-Common Core remarks that seemed requisite of all Republican candidates. But those quotes date back to February. Mike Pence has been a strong advocate for school choice and that's reflected in the policy page. Their goal is to "provide school choice to every one of the 11 million school aged children living in poverty."

On the prospect that Trump's education strategy is still nascent, here's what I would tell him if I were asked:

Leave Standards to the States

The No Child Left Behind Act required states to set educational achievement standards and measure the degree to which students meet those standards. Its successor, the Every Student Succeeds Act (ESSA), was passed in December 2015 with broad bipartisan support. ESSA maintains the emphasis on standards and accountability while returning responsibility to states to decide how to address underperforming schools.

Contrary to popular belief, the Common Core State Standards (CCSS) are not a federal mandate. They were created in a state-led cooperative effort with support from private foundations. The Obama Administration's Race to the Top grants encouraged adoption of common standards among states without specifying any particular set. Those grants have mostly expired and there is no continuing federal support for the CCSS.

So, for Trump to eliminate the Common Core or to substitute other standards in their place would constitute more federal meddling in education, not less. Leave the development of standards to the states. Some will choose to collaborate on the CCSS, others will go their own way. We're in the third year of Common Core deployment. Within one or two more years we'll know whether it's been effective.

Ensure Title I Funds Really Benefit Economically Disadvantaged Students

This is a gnarly problem loaded with unintended consequences. Title I of ESSA (which is the latest reauthorization of the Elementary and Secondary Education Act) provides extra funding to schools and districts with a high proportion of children from low-income families. The goal is to close the achievement gap by offering more resources to schools that serve children with greater needs.

Unfortunately, as Marguerite Roza observed in Educational Economics, the greater the distance between funding decisions and the students, the less effective those decisions are at achieving the intended result. All too often, Title I funds are balanced by other funds being directed toward more mainstream schools, and the most challenged schools are left with the fewest resources.

The Trump Campaign's proposal is to have specific money allocated to each economically disadvantaged child and for that money to move with the child to whatever school they choose. It's a promising strategy because it ties the funding decisions directly to the child but the concept won't work if there aren't good quality schools available for parents and their children to choose from.

Base Strategic Initiatives on Reliable Evidence

The theory behind the No Child Left Behind Act was to measure success and incentivize improvement. It's an approach that has worked in other domains, but education has proven to be more challenging. That's because we still don't have a good model for effectively educating all students at scale while preserving initiative, creativity, the arts, and joy.

We're making progress. And there's a growing body of evidence supporting some key strategies. They include:

Choose a Secretary of Education Who Understands the Landscape

Education doesn't need another shakeup right now. There are a lot of experiments underway that will yield great insights into what works. Some of these are at statewide scale like the competency-based New Hampshire High School Transformation or the Rhode Island Education Action Plan. Others are at district or school scale. We are rapidly learning what works and US Ed can shine a light on successful programs.

The Secretary of Education should have an optimistic outlook for US education. They should have spoken at iNACOL, Educause, and SXSWEdu. They should know the education leaders at the Gates, Hewlett, and Dell foundations. Most of all, they should have a humble attitude about the challenges ahead and the limited but important role of the federal government in US education.

26 February 2016

"Growth Mindset" is the Buzzword of 2016 - and That's a Good Thing

I first encountered the Growth Mindset nearly ten years ago in a New York Magazine article titled "How Not to Talk to Your Kids". The central point of the article was that when a child succeeds at a task, it makes a big difference whether you praise them for their effort or praise them for their talent or ability. Praising a child for their effort is associated with a growth mindset. It fosters children's belief that they can overcome obstacles and increase their mental capacity.

The article I read was based on the research of Dr. Carol Dweck. There is a large and growing body of evidence showing that students with a growth mindset achieve more and overcome challenges more consistently. It's also supported by contemporary research in psychology and neurology. "The brain is like a muscle," goes a common metaphor; "giving it a harder workout makes you smarter." Indeed, continuing research shows that IQ is malleable and can be increased.

In recent years, both anecdotal and rigorous evidence for Growth Mindset has accumulated through books, school programs, and parental training programs. Mindset Works is an advocacy organization dedicated to the concept. The result is an explosion of Growth Mindset interest in late 2015 and 2016.

And here are some recent examples:

Risk of a Buzzword

Growth Mindset is based on solid evidence and sound psychology. But as the buzzword starts trending, we risk failure and discrediting of the idea due to enthusiastic but misguided efforts. A colleague recently worried that growth mindset might go the way of the Self-Esteem fad of the 1990s. To be sure, the right kind of praise is connected with growth mindset. But equally important are fostering the determination to overcome obstacles and the safety to fail.

Some years ago I had the privilege of being a chaperone when my children's school competed in the Utah Shakespearean Festival. It was a small school and the drama team was composed of the majority of the high school - grades 9 through 12. I watched in amazement as these average kids rehearsed dramatic scenes, choreographed their own dance pieces, and performed a breathtakingly creative ensemble scene from Much Ado About Nothing. In the sweepstakes, they took second place against much larger and better-equipped schools. I chatted with teachers and other parents about what qualities enabled our school to perform so well without cherry-picking the best drama students for the team. We decided that an important factor was the emotional safety students had at the school. The cultural climate enabled students to take risks and regularly fail with minimal fear of ridicule. The courage to step out and take risks is especially important in the performing arts. Years later I found corroborating evidence in Brené Brown's research on vulnerability.

Growth Mindset has as much or more to do with proper response to failure as it has to do with proper praise for success. Like a scientist performing experiments, students should be encouraged to treat failures as opportunities to learn and gain insight. Indeed, study of a failure can yield new understanding whereas success simply confirms existing knowledge.

Learning Mindsets

The Raikes Foundation considers a broader concept of "Learning Mindsets". This includes growth mindset and adds other skills that help students "actively participate, work through problems, think critically, and approach learning with energy and enthusiasm." Andy Calkins calls this "Agency." Of these skills (which include grit, determination, self-advocacy, and confidence), growth mindset seems to be getting the most attention in 2016. If people study the concept and implement it well, that will be a good thing!

30 December 2015

Personalized Learning - More Evidence, More Progress

I've written a lot about Personalized Learning on this blog. The theory has a lot going for it: it's intuitive, it's the principle behind the most effective known learning practices, and supporting evidence continues to accumulate.

When introducing personalized learning, it's useful to contrast it with factory-model education. Under a factory model, students with wide variation in personality, interests, skills, and talents are exposed to a consistent educational experience. Unsurprisingly, there is wide variation in the results, because the consistent learning activities resonate better with some students than with others. So we grade the students, with some portion of the grade attributable to student effort and the rest attributable to evidence of subject mastery. When students with inconsistent backgrounds participate in consistent learning activities, it's no surprise that the results are also inconsistent.

Personalized education applies in two ways. For fundamental subjects like Reading, Writing, and Mathematics, the learning experience should be personalized to meet the diverse needs of individual students. Customizing the experience to each student's individual needs can result in consistent achievement in a diverse population.

With a foundation of core skills in place, the second form of personalization is supporting students as they pursue diverse interests - science, music, art, history, sports, and so forth. The most successful students have always personalized their education. The innovation is for institutions to deliberately participate in the personalization effort.

Accumulating Evidence

Earlier this year, the Bill & Melinda Gates Foundation commissioned a RAND Corporation study of 62 public charter and district schools pursuing a variety of personalized learning practices. The results are promising. Average performance of students in the study schools was below the national average at the beginning of the two-year study period and above the national average at its conclusion. Growth rates continued to increase, with effect sizes exceeding 0.4 by the third year.
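For readers unfamiliar with the measure, the study's headline numbers are standardized effect sizes. Here is a minimal sketch of Cohen's d, one common way such effect sizes are computed. The score samples are made up for illustration, and the RAND study's exact methodology may differ.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference: (m1 - m2) / pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical test-score samples, purely for illustration.
study = [78, 85, 82, 90, 88, 84]
comparison = [75, 80, 77, 82, 79, 78]
print(round(cohens_d(study, comparison), 2))
```

An effect size of 0.4 roughly means the average student in the study schools outscored about two thirds of comparable students.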

Five specific personalization strategies identified and studied are:
  • Increased one-on-one time between student and instructor.
  • Personalized learning paths with students able to choose from a variety of instructional formats.
  • Competency-based learning models that enable individual-pacing with supports tailored to each student's learning level.
  • Flexible learning environments that can be adapted to student needs, particularly when they have conflicting demands on their time.
  • College and career readiness programs.
The authors observe that, "While the concept of personalized learning has been around for some time, advances in technology and digital content have placed personalized learning within reach for an increasing number of schools."

Progress and Public Support

The most significant policy event this year was the reauthorization of the Elementary and Secondary Education Act (ESEA). The previous iteration was known as "No Child Left Behind"; this version is titled the "Every Student Succeeds Act". About the new law, iNACOL wrote, "Through ESEA reauthorization, Congress [supports] the shift to new, personalized learning models by redesigning assessments, rethinking accountability, and supporting the modernization of educator and leadership development."

Another important event this year was Education Reimagined. The Convergence Center for Policy Resolution brought together leaders from across the political and educational spectrum to describe a new vision for education. As they describe it, "We were not your typical group -- no two in agreement about how to fix the current system. What we did share, however, was a fundamental commitment for all children to love learning and thrive regardless of their circumstances. We knew it was time to stop debating how to fix the system and start imagining a new system." I had the privilege of hearing Becky Pringle, vice president of the National Education Association, and Gisèle Huff, director of the libertarian Jacquelin Hume Foundation, describe their shared vision of student-centered education. It's compelling that when you get all of the parties to converge on a shared educational vision, it focuses on personalization - on meeting the specific needs of each student.

As we head into the new year, I'm optimistic. At this moment, we have progress, evidence, and policy coherently driving toward a better education for all of our students.

15 December 2015

Back to Blogging, Smarter Balanced, and the Importance of Evidence

When I started this blog I set a few guidelines for myself. One was that I wouldn't blog about blogging. I'm violating that rule today, mostly because it's been nearly 11 months since my last post and I want to record my commitment to resume posting here.

Smarter Balanced

The main reason I haven't been writing is lack of time. The graphic shows my email traffic over roughly the last 18 months. There's a jump around October of 2014. That's when Smarter Balanced converted to its sustainable form as a unit in the Graduate School of Education at UCLA. A bigger jump occurred in early 2015 as we entered our first operational summative testing season. My workload is slowly improving as I've staffed up the Smarter Balanced technology team with a talented set of individuals.

Here are a few of the things we've accomplished at Smarter Balanced since my last post:
  • Released open source for the test delivery system, digital library, and reporting system and proven out the open source solutions in full-scale deployments.
  • Grown the subscriber base of the Smarter Balanced Digital Library to more than 600,000 educators.
  • Administered tens of millions of interim assessments. (Since interim test results remain with states and districts we only have a rough estimate of the number.)
  • Administered summative tests in English Language Arts and Mathematics to more than 6.5 million students.
  • Gained Iowa and the Bureau of Indian Education as members (while, unfortunately, losing Iowa and Maine).
Of course, this hasn't been without challenges. Addressing challenges accounts for most of the growth in my email traffic.

The Importance of Evidence

Finally, as I return to blogging I want to re-assert the importance of evidence. Too many decisions are made based on preconceived notions, confirmation bias, and a charismatic messenger. Recent research into research (meta-research?) indicates that even rigorous, peer-reviewed research findings are subject to confirmation bias.

Conveniently for my own opinions, the evidence in favor of personalized learning continues to grow. I have a lot more to write about this in the coming months.

26 January 2015

K-12 Education Funding... and the Strings Attached

In the 2013-2014 fiscal year, California spent $70 billion on K-12 education. To put that in perspective, Bill Gates' net worth is $80.4 billion. So, in a single year, California spends nearly all of Bill Gates' wealth on teaching children. This is a good thing, of course, but it's also an impressive number.

Nationwide, the country spent $632 billion on public elementary and secondary schools in the 2010-2011 school year (the latest year for which I could find data). That's nearly 4% of the U.S. GDP and 10% of total U.S. government spending (including federal, state, and local).

Here's where the 2013-2014 California money came from, in billions of dollars. Other states have similar proportions between federal and state/local funds:

Local Funds: $21.780 billion (31%)
State Funds: $40.864 billion (58%)
Federal Funds: $7.382 billion (11%)
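As a quick sanity check, the shares can be recomputed from the dollar amounts. A small Python sketch, using the figures quoted in this post:

```python
# California K-12 funding, 2013-14, in billions of dollars (from this post).
funds = {"Local": 21.780, "State": 40.864, "Federal": 7.382}

total = sum(funds.values())  # ~70 billion, matching the figure quoted earlier
for source, amount in funds.items():
    print(f"{source}: ${amount}B ({amount / total:.0%})")
```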

For this post I'm going to concentrate on the strings attached to the Federal funds.

The Elementary and Secondary Education Act (ESEA 1965)

Federal funding of education, at least at contemporary rates, centers on the Elementary and Secondary Education Act (ESEA). Passed in 1965 as part of Lyndon Johnson's "War on Poverty," the ESEA was intended to address inequities in education. It had long been observed that students from lower-income, urban schools have significantly lower educational achievement than their middle-income, suburban contemporaries. ESEA provided supplementary funding to the lowest-achieving schools, with provisions intended to ensure that existing funding is preserved rather than replaced.

The ESEA was set up to require periodic reauthorization by Congress – typically every five years. However, due to congressional gridlock over education policy, the reauthorizations have often been single-year continuing resolutions that extend funding for another year without changing the provisions of the law. Major updates occurred in 1981 under the Reagan administration and in 1994 under the Clinton administration. But the biggest update was No Child Left Behind, proposed in 2001 and signed by President Bush in January of 2002.

No Child Left Behind (NCLB 2002)

The No Child Left Behind Act (NCLB) is the name given to the 2001/2002 reauthorization of ESEA. It establishes the accountability and reform framework in which state education systems presently operate. In theory, states have the ability to opt out at the expense of federal funding. In practice, no state is willing to give up approximately 11% of their educational budget.

The principal focus of NCLB is the Standards and Accountability theory of education reform. Here are the main requirements:
  • States must establish state standards (sometimes known as core standards) for achievement in English Language Arts (ELA), Mathematics, and Science. Most states also include standards for Social Studies and other subjects.
  • States must test all students in grades 3 through 8 and again in either grade 11 or 12 to measure progress in ELA and Math. 
  • At a minimum, states must test students in science three times. Once in grades 3-5, once in grades 6-9, and once in grades 10-12.
  • The testing results for each school should show Adequate Yearly Progress (AYP) toward having all students meeting or exceeding state standards by the 2013-2014 school year.

Adequate Yearly Progress (AYP)

Among the most challenging parts of NCLB has been the Adequate Yearly Progress requirement for schools. Schools receiving Title I assistance (those with a large number of low-income students) receive increasingly stringent interventions for each consecutive year they fail to achieve AYP:
  • Year 1: No intervention.
  • Year 2: Develop an improvement plan, provide students the option to transfer to other schools (including paying for the transportation to get there), and follow prescribed uses of Title I funds.
  • Year 3: Continue year 2 interventions and also provide tutoring and/or after-school programs from a state-appointed provider.
  • Year 4: Continue year 2 and 3 interventions plus one or more of the following: replace responsible staff; implement a new curriculum; decrease the school's management authority; appoint an external expert to advise the school; or restructure the internal organization of the school.
  • Year 5: Shut down or completely restructure the school.
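The escalating remedies can be expressed as a simple lookup keyed by the number of consecutive years a school misses AYP. This is a sketch with paraphrased labels, not the statutory text:

```python
# Paraphrased summary of NCLB's escalating interventions for Title I schools.
AYP_INTERVENTIONS = {
    1: ["no intervention"],
    2: ["improvement plan", "school choice with transportation", "prescribed Title I spending"],
    3: ["tutoring or after-school programs from a state-appointed provider"],
    4: ["staff replacement, new curriculum, external expert, or internal restructuring"],
    5: ["close or completely restructure the school"],
}

def interventions(consecutive_failing_years):
    """Interventions are cumulative: year N includes all remedies from years 2..N."""
    years = min(consecutive_failing_years, 5)
    required = []
    for year in range(2, years + 1):
        required.extend(AYP_INTERVENTIONS[year])
    return required or AYP_INTERVENTIONS[1]

print(interventions(3))
```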
When NCLB was passed, there was an optimistic outlook. Within 12 years, nearly all schools would be meeting state standards for performance, with a small number of underperforming schools receiving intervention. It turns out that, as a country, we haven't worked out a formula for consistent school improvement. If the process for meeting AYP standards were well known, the goals might have been met.

One concern has been that certain states set unreasonably low standards. Prior to adopting the Common Core State Standards, Tennessee had the lowest standards for reading while Massachusetts had the highest.

Despite low and inconsistent standards, so many schools are failing to meet AYP goals that there aren't enough resources to deliver the prescribed remedies. In 2011, 48% of public schools failed to meet AYP goals. In 21 states, more than half of schools didn't meet AYP goals and in 41 states and Washington D.C. more than one fourth of schools didn't make AYP. There aren't enough tutoring organizations, replacement staff, or trained principals to supply the year 4 and 5 remedies for this many schools, not to mention sufficient funds to pay for these interventions.


With so many schools failing to meet AYP goals and the remedies being impractical to implement, Congress is long overdue for an ESEA reauthorization that adapts to current circumstances. Unfortunately, no proposed update has made significant progress. Congress has left us with continuing resolutions that preserve the law as it stands.

To relieve pressure, the Department of Education, under Secretary Arne Duncan, has begun granting NCLB waivers to states that produce an acceptable alternative plan. Not surprisingly, the granting of waivers is controversial. The authority of the executive branch to waive requirements like these seems to have legal precedent. However, it's not clear that alternative requirements can be imposed without congressional action.

Nevertheless, every state except Nebraska has applied for a waiver, many have been granted, and even Nebraska has announced plans to apply for a waiver in 2015.

The Way Forward

There's growing hope that Congress may finally address ESEA reauthorization in 2015. There are even hints that the reauthorization may include support for competency education. Many organizations, from civil rights groups to advocates of federalist solutions, are offering wish lists for reauthorization. As in the past, divisions on education don't follow traditional political lines.

Here is my personal wish list for an ESEA reauthorization:
  • Preserve and strengthen state standards, encourage but don't require alignment of standards between states.
  • Preserve regular assessment of student achievement with an increasing emphasis on Depth of Knowledge.
  • Accelerate the shift from seat-time measures to direct measures of competency for the granting of secondary school credit.
  • Encourage the transition from periodic testing events to continuous assessment of student skills (curriculum-embedded assessment) with frequent and rapid feedback to students, teachers and parents.
  • Clarify the difference between standards and curriculum and establish a framework for public review of both standards and curriculum. Require schools to report the origin of curricular materials on public websites and on every worksheet or assignment.
  • Sustain the concept of interventions for schools not achieving AYP goals while shifting to more practical and supportive remedies than those in NCLB.

20 November 2014

Education Data Standards Update

Over the last couple of years, some colleagues and I have developed several models that are useful for understanding education data standards - where they apply and how they fit together. Many thanks go to the host of collaborators who have reviewed and helped with these models.

The first is the Four-Layer Framework for Data Standards. This framework has helped guide decisions about the Common Education Data Standards – what should be the scope and how CEDS should relate to other standards in the space. However, the framework is not limited to education standards. Any organization that's developing specifications for the exchange of data should think of these four layers and try to describe each part semi-independently.

Last year I developed A Taxonomy of Education Standards. This framework categorizes standards according to their purpose or the domain in which they are applied.

These education standards are not exclusively data standards. Academic Standards, which include Achievement Standards and Competency Standards, describe skills that students should be able to demonstrate as they achieve certain levels of education. Nevertheless, there are data standards for describing Academic Standards and for aligning content to those standards.

In May of 2013 my friends at SETDA published Transforming Data to Information In Service of Learning. This is an enormously valuable survey of existing data standards with guidance on how organizations can apply them to improve learning and support interoperability of their learning technologies. In doing so, they used both the four-layer model and the taxonomy.

Shortly thereafter, I combined the models into a two-dimensional matrix with the four layers on the horizontal axis and the taxonomy on the vertical axis. This allows us to plot existing and proposed standards against the two dimensions to see how they fit together.

At the iNACOL symposium two weeks ago Liz Glowa, Jim Goodell and I presented a workshop on "Competency Education Informed by Data". For that workshop I updated the matrix to reflect changes in the standards landscape over the last year. Here's the updated version:

For that same workshop, Jim Goodell developed a matrix plotting the layers on the vertical axis and the progression from Pre-K to primary, secondary, higher education, and workforce data on the horizontal.

And to tie these all together, here's a translation of the acronyms into the standards with links to their corresponding websites.

AIF: Assessment Interoperability Framework
CCSS: Common Core State Standards (Blog Post)
CEDS: Common Education Data Standards
Ed-Fi: Ed-Fi Alliance
EDI: Electronic Data Interchange
ESB: Enterprise Service Bus
IMS CC: IMS Common Cartridge
IMS LTI: IMS Learning Tools Interoperability
IMS QTI: IMS Question and Test Interoperability
LR: Learning Registry (Blog Post)
LRMI: Learning Resource Metadata Initiative (Blog Post)
NGSS: Next Generation Science Standards
OAI-PMH: Open Archives Initiative - Protocol for Metadata Harvesting
OBI: Open Badge Infrastructure
PESC: P20W Educational Standards Council
REST: Representational State Transfer
SEED: State Exchange of Education Data
SIF: SIF Association
xAPI: Experience API (AKA Tin-Can API)

Updated: 25 Nov 2014 to add the OAI-PMH protocol.

30 July 2014

Bitcoin - What Makes a Currency?

Today I'm diverging from the education theme to write about cryptocurrency. I am provoked, in part, by this quote from Alan Greenspan:

“It [Bitcoin] has to have intrinsic value. You have to really stretch your imagination to infer what the intrinsic value of Bitcoin is. I haven’t been able to do it. Maybe somebody else can.”

Now, Greenspan should know better than to say something like that. As a fiat currency, the dollar doesn't have any more intrinsic value than Bitcoin. And that's why I decided to write about this. Most of the supposed "Bitcoin Primers" out there are more confusing than helpful. They don't explain how money works or how cryptocurrencies like Bitcoin satisfy the requirements to become a currency.

What makes a Currency?

Currency is a form of money accepted by a group of people to exchange value. A functional currency must have three important characteristics:
  • Scarcity - If there is too much of the currency, its value will plummet toward zero. So there must be a limited supply.
  • Verifiability - You must be able to verify that a unit or token of the currency is valid and not a forgery or imitation.
  • Availability - Despite scarcity, there still must be a stable supply of the currency to match growth in the corresponding economy.
Precious metals like gold and silver were the first common currencies. They meet all of the foregoing criteria. Gold is scarce; there's a limited amount of it available thereby endowing a small amount of gold with considerable value. It's verifiable; gold has certain characteristics, such as density, malleability and color, that make it easy to distinguish from other materials. And gold is available; while it is not common, gold mines still offer a consistent supply of the material.

One of the difficulties with early uses of gold currency was the complexity of exchange. Merchants had to use a balance or scale to determine how much gold was being offered. To facilitate easier exchange, governments, banks, and other trusted organizations would mint coins of consistent size and weight. This would allow someone to verify the value of a coin without resorting to a balance.

Fiat Currency

"Fiat" means, roughly, "because I said so." Fiat currency has value simply because some trusted entity says it does. It need not have any intrinsic value.

The first fiat money was the banknote. When making a large payment it could be inconvenient or dangerous to move large quantities of coins or bullion. Banks solved this problem for their customers by issuing banknotes. A banknote is a paper that a bank or other entity promises to exchange for a certain amount of coin, gold, or other currency. The bank could keep the corresponding gold locked away in a vault and people could carry more convenient paper certificates.

Beginning in 1863, the United States began issuing gold certificates as a form of paper money or banknote. Certificates like these were backed by stockpiles of gold held in places like Fort Knox. European countries did similar things. With the stresses of late 19th century wars and World War I that followed, countries discovered that they could issue more banknotes than their corresponding stockpiles. This led to a lot of instability until countries figured out how to regulate their currencies. But, by the end of the Great Depression, pretty much every economically developed country had fiat currencies controlled by a central bank. While backed by gold or other reserves, the value of these currencies is not directly tied to the value of gold.

Here's how the U.S. Federal Reserve system works: the Federal Reserve Bank creates the money. Money is issued as currency (the familiar U.S. coins and bills) but also simply as bank balances. Indeed, far more money exists as bank records than as physical currency. Originally this was done through careful bookkeeping in bank ledgers; now it's all done on computers. The money is issued in the form of low-interest loans, primarily to banks, which then lend the money to their customers and to other, smaller banks. Other central banking systems, like the European Central Bank, work in a similar way.
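The lend-and-redeposit cycle described above is what textbooks call the money multiplier: with a reserve ratio r, an initial issue of central-bank money can support roughly 1/r in total deposits. The post doesn't cover this mechanism explicitly, so treat the following as a hypothetical illustration with made-up numbers:

```python
def money_created(initial_loan, reserve_ratio, rounds=1000):
    """Simulate repeated deposit-and-relend cycles.

    Each bank keeps `reserve_ratio` of a deposit in reserve and lends the
    rest, which is deposited at the next bank, and so on.
    """
    total_deposits = 0.0
    deposit = initial_loan
    for _ in range(rounds):
        total_deposits += deposit
        deposit *= (1 - reserve_ratio)
    return total_deposits

# With a 10% reserve requirement, $100 of new central-bank money
# supports roughly $1000 of total deposits (the 1/r multiplier).
print(round(money_created(100, 0.10)))
```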

So, how does fiat money meet our requirements for currency?

Scarcity: Only one entity, the central bank, has the authority to create and issue the currency. The central bank limits the issue of money in order to preserve its value.

Verifiability: Coins and paper money are printed or minted using materials and techniques that are difficult for average people to reproduce but fairly easy to verify. Money in the form of bank balances is verifiable because each bank or credit union has accounts with higher-level banks, ultimately reaching the Federal Reserve. So, when I write a check from my bank to yours, our two banks contact each other and transfer the value, sending records up the banking chain until they reach a common parent bank, which may be the Fed. Each bank in the chain verifies that the appropriate balances are in place before allowing the transaction to proceed.
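The walk up the banking chain to a common parent is essentially a lowest-common-ancestor search in a tree. A sketch with a toy hierarchy (all bank names here are hypothetical):

```python
# Toy banking hierarchy: each bank points to its parent; the Fed is the root.
PARENT = {
    "My Credit Union": "Regional Bank A",
    "Regional Bank A": "Fed",
    "Your Bank": "Fed",
    "Fed": None,
}

def chain_to_root(bank):
    """The banks a record passes through on its way up, starting at `bank`."""
    chain = [bank]
    while PARENT[bank] is not None:
        bank = PARENT[bank]
        chain.append(bank)
    return chain

def common_parent(bank_a, bank_b):
    """The first bank that appears in both chains: where the transfer settles."""
    ancestors = set(chain_to_root(bank_a))
    for bank in chain_to_root(bank_b):
        if bank in ancestors:
            return bank
    return None

print(common_parent("My Credit Union", "Your Bank"))
```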

Availability: Central banks can create as much money as they think the economy needs. The primary challenge for central banks is to manage the money supply - ensuring both scarcity and availability.


Bitcoin is the first, but by no means the only, cryptocurrency. The challenge that the pseudonymous creators of Bitcoin tackled was to achieve the three features of currency - scarcity, verifiability, and availability - in the digital realm. They magnified the challenge by prohibiting a central authority like a government or a central bank. Trust, in the case of Bitcoin, is in the system, not in any particular institution.

Scarcity: The "coin" part of most cryptocurrency names is somewhat misleading. Bitcoin doesn't consist of a bunch of digital tokens that are exchanged; if that were the case, it would be hard to prevent double-spending of the same token. Instead, cryptocurrencies work more like bank account balances. Bitcoin has one big, public ledger that is duplicated thousands of times. All transactions in the ledger must balance - for one account to receive value, another account must be reduced by the same amount. This ledger is called the block chain, and it contains a record of every transaction since the creation of the currency.
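The balancing rule - a transfer moves value between accounts but never creates it - is easy to illustrate with a toy ledger. This is a sketch of the invariant, not of Bitcoin's actual data structures:

```python
# A minimal public-ledger sketch: transfers only move value between accounts.
ledger = {"alice": 50, "bob": 10}

def transfer(ledger, sender, receiver, amount):
    """Debit the sender and credit the receiver; reject overdrafts."""
    if ledger.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0) + amount

total_before = sum(ledger.values())
transfer(ledger, "alice", "bob", 30)
assert sum(ledger.values()) == total_before  # transfers never create value
print(ledger)
```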

Verifiability: Cryptocurrencies rely on public-key cryptography to ensure that only the owner of a currency balance can initiate its transfer. The bitcoin owner uses their private key to sign the transfer record and then posts it to the network of block chain replicas. Any entity in the network can use that owner's public key to verify that the transaction is valid and that ownership has been transferred.

Availability: Those who host a copy of the block chain have to perform the cryptographic calculations necessary to verify transaction validity and prevent fraud. Those who do this fastest are periodically rewarded through the creation of new Bitcoin balances. Because of the reward, maintaining the block chain is known as "mining" and a small industry of Bitcoin mining software and devices has developed. All users of cryptocurrency benefit from this because the more miners exist, the more secure the currency becomes due to the duplication of records and validation.
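The "cryptographic calculations" that mining rewards can be illustrated with a much-simplified proof-of-work: search for a nonce whose hash has a required number of leading zeros. Real Bitcoin mining uses double SHA-256 over block headers and a dynamic difficulty target; this sketch only conveys the idea.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that sha256(block_data + nonce) starts with
    `difficulty` hex zeros -- a simplified stand-in for Bitcoin's
    proof of work. Higher difficulty means exponentially more tries."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 30", difficulty=4)
print(nonce, digest[:12])
```

Anyone can cheaply verify the winning nonce with a single hash, which is what makes the reward scheme auditable by the whole network.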

This is a tremendously clever scheme because it simultaneously ensures a consistent supply of currency, decentralizes operation, and secures the network against manipulation by creating thousands of replicas of the block chain.

Potential Impact

The true value of any currency is the willingness of a community of people to use it for daily transactions. The three requirements, Scarcity, Verifiability, and Availability combine to cause people to trust a particular currency. When that trust is lost you can get bank runs, hyperinflation, or simple destruction of wealth. Meanwhile, the community rushes to find a new currency.

The advent of the internet with myriad handheld devices capable of initiating transactions makes it possible for multiple currencies to coexist. For the first time in history, people may have a choice among currencies to use in daily transactions. Central bankers, and the sovereign countries that endow them with their power, are appropriately worried. An industry that has historically been immune to competition no longer has that protection.

I think this is a good thing. Just like any other competitive market, competition should incentivize good behavior both from established central banks and from upstart cryptocurrencies.

23 May 2014

Illusions of Success when Inputs are Confused with Outputs

Prosperity has been defined as "the state of flourishing, thriving, good fortune and/or successful social status." In the United States we tend to measure prosperity in terms of wealth, or the lack thereof. Indeed, the U.S. government defines poverty (the lack of prosperity) as having an income below $15,730 for a household of two. The trouble is that this confuses the output (or outcome) of prosperity with one of its inputs: income (or wealth). And while the two values often correlate, they can be quite different.

In the early 1800's, Georgia gave away millions of acres of land through a series of land lotteries. Nearly everyone who was eligible entered the lottery because an individual had a roughly 1 in 5 chance of winning and a typical parcel was worth about the median net worth of a Georgia resident. A penniless person who entered the lottery had a one in five chance of suddenly becoming wealthier than half of the residents of the state.

When Hoyt Bleakley, of the University of Chicago, and Joseph Ferrie, of Northwestern University, learned of this event, they found it to be a convenient natural experiment. Does handing out wealth to random individuals elevate their prosperity, and does that prosperity carry over to future generations? The answer, at least in this particular case, seems to be "no." Even though wealth and prosperity are correlated, increasing wealth didn't increase the prosperity of the children. As Bleakley said on a Freakonomics podcast, "Maybe the resources have to come from outside the household, be it say a good public school. Maybe the resources have to come from the parents, but the parents don’t know how to provide it in terms of nurturing, in terms of reading and communicating ideas to their children, etc." In other words, wealth is only one of the contributors to prosperity, and it may be among the least important.

Optimizing the Wrong Thing

When two features, like wealth and prosperity, are correlated, and one is easier to measure or influence than the other, a common mistake is to focus on the more convenient factor. The result is a host of unintended consequences.

This is a case where feedback loops offer insight:
A feedback loop with a short-circuit bypassing the system (or student).
In a proper feedback loop, we measure the output, compare it with the reference, and use it to choose the proper input. But when inputs are confused with outputs, the feedback loop is short-circuited – as with the red line in the above diagram. The evidence of this is when we get all kinds of reports showing how good the inputs are. Meanwhile, the real goal suffers.

A pedagogical feedback loop measures student outcomes (in the form of competencies or skills), compares them with standards of what students should know, and uses the result to choose appropriate learning activities. But when inputs are confused with outputs, we get reports of good student attendance, appropriate construction of curriculum, the prescribed amount of seat time, properly trained and certified teachers, high-quality facilities, and all kinds of other reports about the inputs. Meanwhile, the output, in terms of student skills, remains unimproved.
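That loop - assess the outcome, compare it with the standard, choose the next learning activity - can be sketched in a few lines. The thresholds and activity names below are hypothetical, purely to make the control structure concrete:

```python
# A sketch of a pedagogical feedback loop: assess, compare with the
# standard, and pick the next learning activity based on the gap.
STANDARD = 0.8  # hypothetical target mastery level for a skill

def next_activity(assessed_mastery):
    gap = STANDARD - assessed_mastery
    if gap <= 0:
        return "advance to the next skill"
    if gap < 0.2:
        return "targeted practice on weak sub-skills"
    return "reteach the concept with a different approach"

for score in (0.9, 0.7, 0.4):
    print(score, "->", next_activity(score))
```

The short-circuit described in this post amounts to deleting the comparison step and reporting the inputs (attendance, seat time) as if they were the measured output.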

Here are a few other inputs and outputs to consider:
To be sure, there's correlation in every one of these cases. But, just as with the Georgia Land Lottery, manipulating the input frequently diminishes the correlation and results in a less-than desired outcome. Focusing on, and reporting about the inputs can give the illusion of success. Focusing on the outcome helps identify other factors that contribute to the desired result.

Furthermore, excess focus on inputs results in missed opportunities. As Michael Horn and Katherine Mackey wrote, "Focusing on inputs has the effect of locking a system into a set way of doing things and inhibiting innovation; focusing on outcomes, on the other hand, encourages continuous improvement against a set of overall goals and can unlock a path toward the creation of a student-centric education system."

Incentives are Inputs

Just as mistaking outputs for inputs causes trouble, the reverse is also true. A 2011 study by the Hamilton Project compared incentives tied to inputs with incentives tied to outputs. Groups of students were offered financial incentives tied to input activities such as number of books read, time spent reading, or number of math objectives completed. Other groups were offered incentives tied to outcomes such as high test scores or class grades. The study found that input incentives were much more effective than output incentives. Among their recommendations are:
  • "Provide incentives for inputs, not outputs, especially for younger children."
  • "Think carefully about what to incentivize."
  • "Don't believe that all education incentives destroy intrinsic motivation."
This shouldn't be surprising. Incentives, at least when given to the student, are inputs. Incentivizing outcomes is a different kind of short-circuit in the feedback loop.
Figure: Feedback loop with a short-circuit bypassing instructional influence.
In a pedagogical feedback loop, the instructional system interprets the results of assessment before passing them on to the student. When we incentivize the outcomes (or the assessment thereof), we bypass the capacity of the education system to interpret student needs and prescribe the right learning activities.
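To make the contrast concrete, here is a hypothetical sketch of the short-circuited loop, in which the raw assessment result feeds straight back to the student as a reward, with no instructional interpretation in between (all names are illustrative, not from any real system):

```python
# Hypothetical short-circuited loop: the assessed outcome is converted
# directly into a reward for the student. Nothing interprets the result
# or prescribes a learning activity.

STANDARD = 0.8  # illustrative target mastery for a single skill

def short_circuited_loop(student):
    score = student["mastery"].get("fractions", 0.0)
    # Reward goes straight back to the student; the instructional system
    # never sees the result or chooses an activity in response.
    return 10 if score >= STANDARD else 0

print(short_circuited_loop({"mastery": {"fractions": 0.5}}))  # → 0
```

A struggling student in this loop receives nothing but a zero reward; compare the full loop above, where the same measurement would have produced a targeted practice activity.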

It's notable that the Hamilton Project study found that incentivizing outcomes was especially ineffective for younger students. Among the goals of any educational system should be to develop students into independent learners. A mature, independent learner has taken on pedagogical skill and responsibility. For independent learners, incentivizing outcomes should be more effective.

Nevertheless, the Hamilton Project study didn't neglect outputs. In every experiment, the effect of the incentives was evaluated according to student outcomes. Only the point of intervention was changed.

Effective Measurement and Improvement

In 2005, New Hampshire abolished the Carnegie Unit – a measure of seat time by which most U.S. schools quantify educational credit. "In its place, the state mandated that all high schools measure credit according to students’ mastery of material, rather than time spent in class." Thus, New Hampshire shifted its fundamental measure of student achievement from an input to an output. Early results of that change are promising.

To be sure, optimizing certain inputs still has a positive impact. Otherwise schools would have completely failed since the institution of the Carnegie Unit in 1905. But shifting the focus from inputs to the outputs we wish to optimize will open the door to greater innovations and more rapid improvements in student achievement.

17 March 2014

Lecture Experiment at Summit Public Schools

A couple of weeks ago I attended the LearnLaunch conference in Boston. In one of the sessions, Diego Arambula from Summit Public Schools told a great story:

In one of their blended learning classes the students were taught by a team of teachers and given flexibility to choose the activities they felt would best help them learn the subject. One of the activities the teachers introduced was optional lectures. Strategically scheduled shortly before tests, the lectures gave students a chance to review material and solidify understanding.

At first, the lectures were quite popular – probably due to their proximity to tests. However, they found that the scores of those students who attended the lectures were not significantly different from those who chose not to do so. The students must have sensed the lack of impact because attendance at the lectures dwindled.

When lecture attendance fell to 3-5 students, the scores of those who attended suddenly shot up. Arambula asked the teachers what was happening. The teachers said that with so few students attending, they didn't really deliver a lecture. Rather, they asked the students what areas they were struggling with and concentrated the time on those particular issues. In other words, the lectures turned into teacher-led study groups or small-group tutoring sessions.

Eventually the teachers abandoned the lecture format and opened a "help bar" at the back of the classroom. Staffed by at least one of the teachers, students could go to the bar just about any time for one-on-one or small group assistance.

There are a bunch of things to learn from this vignette. Here are a few:
  • Summit was prepared to measure the effectiveness of the optional lectures (and presumably any other learning option they offer).
  • The teachers and staff are as much in a learning mode as the students. They discover what works and adjust in those directions.
  • Tutoring and small-group instruction are tremendously effective even when they account for a small part of the student's learning experience.
  • Finally, Summit established an environment where innovation like this is natural and encouraged.

27 January 2014

Personalization Relies on Standardization - A Medical Metaphor

In my last post, I wrote about Yong Zhao's observation that the U.S. leads the world in cultivating 21st century skills like Confidence, Risk-Taking, Creativity and Entrepreneurship. Zhao is concerned that the current U.S. "obsession" with standards and assessment will result in reduced appreciation of creative endeavor. Indeed, Zhao's concerns are confirmed by contemporary de-emphasis of arts and humanities education in U.S. public schools.

I share Zhao's concern that today's schools suffer from an excess focus on achievement as measured by test scores. I also agree with him that some of this is encouraged by federal programs like No Child Left Behind. Where I part ways with Zhao is that I believe achievement standards and testing aren't the cause of the problem. Indeed, they're a critical part of the solution.

To explain this apparent contradiction, I’ll borrow a metaphor from Sir Ken Robinson. When I go to my physician, I expect a personalized, custom experience. I expect him to diagnose, treat and prescribe according to my personal needs. In order to do this, however, the doctor will use standard tests. He'll do a standardized exam and ask me standard questions. For example, he’ll measure my temperature in degrees and compare it against 98.6 Fahrenheit. He’ll measure my blood pressure in millimeters of mercury and compare that against standards established by the American Medical Association. Based on those results he may follow-up with custom questions or tests chosen according to my individual needs. But even those follow-on tests will be compared against standards. Finally, he'll prescribe a course of treatment that's customized to my individual needs.

Admittedly, not all doctors handle standards the same way. For example, when my cholesterol tested high, one doctor called in a prescription for statin drugs without consulting me. This bothered me, as I wanted to discuss how serious the problem was and consider alternatives like diet and exercise before simply taking a drug. Indeed, another doctor recommended a coronary calcium scan before going on statins. The test came back clean, and I'm putting additional effort into my exercise.

That’s what standardized testing, properly done, is all about. This school year, the Smarter Balanced Assessment Consortium will test more than three million students in grades 3 to 11. The results from this first year will be used to calibrate the tests and find reasonable benchmarks for student achievement in English and Mathematics. In future years, students’ test results will be used by teachers, students and parents to customize learning activities to the needs of every child.

This isn't a complete solution. We need to actively fight the tendency to teach only what’s going to be tested. Not only is that bad for the child; strangely enough, “teaching to the test” doesn't improve scores as much as a well-rounded education does. We also need to resist efforts to standardize curriculum and teaching. Standards belong to the measurement of educational results, not to the inputs.

Doctors can only directly measure a few vital signs and compare them to standards. For more detail they perform or prescribe more extensive tests. Some of these are screenings like the cholesterol test I had with my annual physical. Others are specific to certain problems like the CT scan I had after breaking some ribs. But even the full battery of tests available to a physician can't discover all issues. For the rest, a physician has to rely on interviews, experience, consultation with other doctors and sometimes trial-and-error.

The same is true for education. We can only measure a few of the factors that go into a well-rounded education. The Common Core State Standards only apply to fundamental skills in reading and mathematics, a small fraction of all that we hope children will learn. But that doesn't mean we should throw out the standards. Literacy and numeracy are fundamental skills that are prerequisite to every other academic skill we desire students to develop. The mistake is to assume that because these are the skills being measured, they are the only ones that count.

Standards and testing are useful tools – but only when they serve the greater goal of developing confident, creative adults who are capable of a lifetime of self-directed learning.