Of That

Brandt Redd on Education, Technology, Energy, and Trust

05 December 2011

Quote: Peter Sandman on Climate Change Outrage

If you’re talking to a room full of people who hate the idea of the set of remedies you have proposed for climate change and instead of trying to reduce their outrage about the remedies, you’re busy trying to increase their outrage about climate change, you’re fighting the wrong fight.
(Peter Sandman, Quoted on Freakonomics)

27 November 2011

Video Presentation - Changing the Rules to the Game of School

My presentation at the iNACOL VSS Conference two weeks ago was very well-received. The video version is below; it's just voice over slides, but it flows pretty well.

10 November 2011

VSS Presentation: Changing the Rules to the Game of School

Today I'm speaking at the iNACOL VSS conference. The following outline of my presentation is primarily a resource for attendees but others may find it valuable as well.

A New Perspective on Technology:

Changing the Rules to the Game of School

  • Thesis
    • The Game of School was designed around scarce resources but new technology offers abundance where scarcity once ruled.
    • Digital Abundance
      • Abundant Content
      • Learning Maps
      • Abundant Assessment
  • Games
  • Game of School
    • Goal (from Disrupting Class)
      • 1840: Preserve the Democracy
      • 1890: Prepare Everyone for Vocations
      • 1980: Keep America Competitive
      • 2000: Eliminate Poverty
    • Rules
      • Failure Consequences
      • Proxies for Achievement
      • Unbundling
    • Feedback
      • Formative Assessment
      • Control Theory
      • Pedagogical Theory
    • Voluntary Participation
      • Duncker’s Candle Problem
      • Motivation
        • Autonomy
        • Mastery
        • Purpose
  • Design of the Game
  • Resources & References

03 November 2011

The Learning Resource Metadata Initiative

Try this: Browse to Google's Homepage and search for a recipe. Given the season, try "Pumpkin Pie." On the left the Recipe Search Bar automatically appears because Google sensed that a lot of the matches were recipes. Now you can narrow the list by selecting those that do have maple syrup but don't include amaretto since it's missing from your spice cabinet. There are also options to select for cooking time and calories.

Now, suppose teachers and students had the same kind of facilitation for their searches. This week I was experimenting with AdWords and discovered that 246,000 people searched for "right triangle" (many of them probably teachers) and 60,500 people searched for "triangle calculator" (most likely students). Wouldn't it be cool if such searches resulted in a Learning Search Bar that let you choose between videos, activities and lesson plans; or that let you target a particular age group or find resources for students with specific disabilities? Indeed, there were 880 searches for "math for the blind."

That's the idea behind the Learning Resource Metadata Initiative. In June, Google, Yahoo and Microsoft jointly announced Schema.org. This is a common metadata vocabulary for describing things like blog posts, audio recordings, organizations, places, news articles and things for sale. Then they encouraged industry-specific consortia to submit extensions to the vocabulary. So, we formed LRMI to represent the education industry.

It's an amazing group. Co-funded by the Hewlett and Gates foundations and co-sponsored by Creative Commons and the Association of Educational Publishers, the team has representatives from major educational publishers as well as OER repositories. The technical working group involves a cross section of educators and metadata experts. We're making excellent progress and are on schedule to solicit public comments on a draft specification starting in December.
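
To make the kind of markup concrete, here's a minimal sketch in Python of the sort of metadata a learning resource might carry, in the spirit of the Schema.org vocabulary. The property names below are illustrative placeholders of my own (aside from Schema.org's CreativeWork type), not the LRMI draft, which is still being written.

```python
# A minimal sketch (not the LRMI draft) of the kind of metadata that could power
# a "Learning Search Bar." Property names here are hypothetical placeholders.
lesson_metadata = {
    "@type": "CreativeWork",            # Schema.org's base type for content
    "name": "Right Triangles and the Pythagorean Theorem",
    "about": "right triangle",
    "educationalUse": "lesson plan",    # vs. "video" or "activity"
    "typicalAgeRange": "13-15",         # lets a teacher target a grade band
    "accessibilityFeature": "audioDescription",  # e.g., math for blind students
}

# A search engine that indexed properties like these could offer facets the
# same way Google's recipe search offers cooking time and calories.
for key, value in lesson_metadata.items():
    print(f"{key}: {value}")
```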

Before long, the search for quality learning materials on the web will become much easier.

31 October 2011

Quote of the Day: Bill James on Trust

"We have a better society when we can trust one another. And wherever and whenever there’s an evaporation of systems based on trust I think there’s a loss to society. I also think that one evaporation of trust in society tends to feed another, and that we would have a better society if we could, rather than promoting fear and working to reduce the places where terrible things happen, if we could promote trust and work on building societies in which people are more trustworthy. I think we’re all better off in a million different ways if and when we can do that."
- Bill James, being interviewed by Freakonomics (quote appears at the very end of the podcast. Transcript here)

27 October 2011

The Personalized Learning Model

The first two parts of this series discussed the Tyranny of the Bell Curve and a strategy for Tackling Bloom's Two Sigma Problem. In this third and last part I describe the Personalized Learning Model that many of us are using to guide investments in education technology.

The diagram to the right is similar to those used by other education technology organizations so it's not unique to the Gates Foundation. The key components in most any Adaptive Learning System or Instructional Improvement System are Student Data, Educational Content and Assessments. We use precise definitions of these:

  • Learning Objectives are specific competencies to be learned in a particular subject domain. Most courses, both online and legacy media, start with a set of learning objectives. However, if data, content and assessment systems are to interoperate, a common set of objectives must be shared among them.
  • Student Data is a collection of evidence of what competencies or skills a student has achieved. On a scale of weak to strong evidence, it includes presence information (the student attended a class), activity information (the student viewed a particular video or performed a lab) and assessment results.
  • Learning Content includes reading materials, textbooks, interactive activities, lesson plans, exercises and any other content that's intended to teach about a subject.
  • Assessments are student activities that are instrumented in such a way that we can measure competence in knowledge or skills. You can think of multiple choice and true/false as activities that are deliberately simplified to make them easier to instrument. However, assessment technology is advancing in ways that make it possible to instrument more realistic activities.

The Feedback Loop describes the process of learning, from determining what a student doesn't know, to teaching the subject, to assessing competency. For the feedback loop to work effectively, it must cycle frequently, supplying rich and accurate feedback to students and educators.
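
To show how these definitions fit together, here's a minimal sketch in Python. The class names mirror the definitions above, but the specific fields, evidence categories and mastery threshold are my own illustrative assumptions, not a Foundation specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: the field names, evidence categories and the 0.8
# mastery threshold are assumptions, not part of any formal specification.

@dataclass
class LearningObjective:
    objective_id: str   # shared identifier so data, content and assessment systems interoperate
    description: str

@dataclass
class EvidenceRecord:
    objective_id: str
    kind: str                      # "presence", "activity" or "assessment" (weak to strong)
    score: Optional[float] = None  # assessment results carry a score

@dataclass
class StudentData:
    student_id: str
    evidence: List[EvidenceRecord] = field(default_factory=list)

    def has_mastered(self, objective_id: str, threshold: float = 0.8) -> bool:
        """Treat assessment evidence as the strongest signal of competency."""
        scores = [e.score for e in self.evidence
                  if e.objective_id == objective_id
                  and e.kind == "assessment" and e.score is not None]
        return bool(scores) and max(scores) >= threshold

# One pass around the feedback loop: find what the student hasn't yet mastered
# so content and assessment can be targeted there.
def next_objectives(student: StudentData,
                    objectives: List[LearningObjective]) -> List[LearningObjective]:
    return [o for o in objectives if not student.has_mastered(o.objective_id)]
```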

Most of our education technology investments involve some combination of improving the state of practice in these areas and improving interoperability among systems. Future posts in this blog will profile some of the most important initiatives we and others are working on.


Posts in this series:
Breaking the Tyranny of the Bell Curve
Tackling Bloom's Two Sigma Problem
The Personalized Learning Model

14 October 2011

On Track for 50% of High School Courses Online by 2019

In the 2008 book, Disrupting Class, Clayton Christensen applied his theories of disruptive innovation to education. By that time, disruptive innovation had been studied well enough that Christensen and his colleagues could predict the adoption curve of such an innovation. It's an impressive feat -- telling us how soon something new is going to impact our lives.

They predicted that by 2014, 25% of high school courses would be taken online and that by 2019 fully half of them would be taught that way. When Christensen and his colleagues talk about online education, they include blended or hybrid formats in that bucket. This is important because the evidence shows that it's a blend of online materials and personal attention that results in superior learning outcomes.

In a recent Washington Post column, Christensen and co-author Michael Horn offer an update citing emerging examples like Khan Academy in Los Altos and Rocketship Education. "In the year 2000, roughly 45,000 K-12 students took an online course. In 2010, roughly 4 million did." Then they reassert their prediction of 50% of high school courses online by 2019.

Three years into the prediction, we seem to be on track.

06 October 2011

Steve Jobs: How to Live Before You Die

As a teenager I learned to program on an Apple ][. First BASIC, then Pascal and assembly language. I played computer games, hacked them and then wrote my own. I have fond memories of those times. But none of that, nor the careers that followed for me and countless others, would have happened without Steve Jobs. There's hardly a person in the world whose life hasn't been impacted in some way by his vision and drive to see it realized.

I join many others in recommending the following speech he made at the 2005 Stanford University Commencement. Fittingly titled, "How to Live Before You Die":

May he rest in peace.

26 September 2011

We Need an Energy Breakthrough

I haven't yet read The Quest by Daniel Yergin -- only this review. But it's nice to know that someone who has spent a career studying energy issues agrees with my conclusions. We need a breakthrough in energy technology. The environmental burden caused by fossil fuels is too great for us to rely on that source as we try to elevate the standard of living for the world's populations.

16 September 2011

Tackling Bloom's 2 Sigma Problem

Recently I wrote about the tyranny of the bell curve. Benjamin Bloom was working on this problem back in the 1980s. As an experiment, he and some of his grad students combined Mastery Learning with 1:1 tutoring. They discovered that average students in the program performed two standard deviations (two sigmas) better than their peers receiving conventional instruction. Using John Hattie's scales from Visible Learning, I equate that to more than four times the rate of learning.

In a seminal paper on the subject, Bloom wrote that 1:1 tutoring is "too costly for most societies to bear on a large scale" and reported on their efforts to find more scalable solutions. This has become known as Bloom's 2 Sigma Problem.

Like many others working on education technology, I believe that Bloom's 2 Sigma results can be achieved and even surpassed by appropriate use of computer technology. From a number of initiatives, we're getting results that confirm this belief. While approaches vary, they have common elements:

Mastery Learning: That's what Bloom called it. Other terms are Competency Based Pathways and Proficiency Based Learning. There are nuanced differences but the basic premise is that students don't advance until they have demonstrated competency in the current topic.

Asynchronous Learning: Students advance from topic to topic independently. To do mastery learning properly, this is a requirement. However, it doesn't mean that there aren't sync points. For example, OLI Courses support students spending variable amounts of time (according to their skills and background) learning the basic material. This way they arrive in class equally prepared for the live debates that are so critical to teaching certain subjects. Some classes resync every Friday, with those students who are ahead assisting those who are taking more time. Results from the Khan Academy and School of One are showing us that individual students aren't consistently fast or slow. The slow and fast students trade places from day to day or week to week and overall variability tends to balance out.

Emphasis on Principles more than Facts: A student who has command of the underlying principles of a subject can often derive the facts. And in today's world, memorizing facts is of diminishing importance. It's too easy to look them up.

Strategic Intervention: The teacher is more important than ever. After all, learning is fundamentally a human-to-human process. Deploying online curricula in a way that supports independent work frees teachers to spend more time one-on-one with students. They are enabled to focus on things only teachers can do: diagnosing misunderstanding, demonstrating the value of the subject, motivating and rewarding achievement and developing a personal relationship with each student. Paradoxically, technology has the potential to humanize the classroom. In a very important TED talk, Salman Khan says that we should move from measuring the student to teacher ratio to measuring the "student to valuable human time with the teacher ratio." (Quote is at 14:30 but watch the whole thing.) Teacher Dashboards are an important mechanism for informing teachers about where they need to apply their skills.

Posts in this series:
Breaking the Tyranny of the Bell Curve
Tackling Bloom's Two Sigma Problem
The Personalized Learning Model

14 September 2011

A Four Layer Framework for Data Standards

Recently I've been getting involved in a number of education data efforts. It's an alphabet soup of standards and specifications including CEDS, LRMI, SIF, PESC, Ed-Fi and more. As we've discussed these specs and how they relate to each other, we developed a four-layer framework for how different data standards fit together. Our one-page outline of the framework has been used in ways we didn't foresee. I recently updated it with feedback from the CEDS team. Click here for the latest version. It's released under a CC0 license, which pretty much means do what you want with it but don't blame me if something goes wrong. And see below for a graphic version.


12 September 2011

Breaking the Tyranny of the Bell Curve


If you take a random set of students, teach them all the same way and then give them all the same standardized assessment, the results will follow a normal distribution or "bell curve" with a few excelling, the majority performing near average and a few failing. This is the tyranny of the bell curve.

There are all kinds of problems with this: Standardized tests result in normal distributions of scores because they are designed to do so, not necessarily because human ability really follows a normal distribution. Indeed, human intelligence is malleable.

But let's set that aside for a moment and just go crazy theoretical. Suppose you had a large population of identical students. Then you put them in classrooms where instruction was delivered in identical ways. Then you gave them an identical assessment. The results would approximate a normal (or bell) curve. Why? Because a normal curve is what results when you average out a bunch of random errors. Instruction is naturally error prone. Students don't always pay attention. Even when they do, they don't always understand. Teachers make mistakes. People get sick or have bad days.
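
A quick simulation makes the point. The sketch below (my own illustration, with arbitrary numbers) gives every "identical" student the same true ability, adds many small random instructional errors, and the scores pile up into a bell shape.

```python
import random

random.seed(42)

TRUE_ABILITY = 100     # every "identical" student starts from the same place
N_STUDENTS = 10_000
N_ERROR_SOURCES = 50   # missed explanations, sick days, teacher slips, ...

scores = []
for _ in range(N_STUDENTS):
    # Each small random error nudges the final score up or down a little.
    noise = sum(random.uniform(-1, 1) for _ in range(N_ERROR_SOURCES))
    scores.append(TRUE_ABILITY + noise)

# A crude text histogram: the bucket counts pile up into a bell shape.
for low in range(85, 115, 5):
    count = sum(low <= s < low + 5 for s in scores)
    print(f"{low:3d}-{low + 4:3d}: {'#' * (count // 100)}")
```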

My colleague, Josh Jarrett, is fond of saying that high school graduates' knowledge is kind of like Swiss cheese with random holes in their understanding.

When looking at children, my natural inclination is to celebrate their differences. When they are dressed the same, in sports uniforms for example, I gravitate to the differences: the color of their hair and eyes, how they smile, who they cluster around, what grabs their interest.

Despite this diversity, our society needs all children to reach a certain standard of competency in core subjects of literacy and mathematics. Likewise, they need to have a basic understanding of the social and civic institutions and norms that are essential to a prosperous society.

So, the challenge is achieving consistent results (academic achievement) while prizing the inconsistency of the inputs (our children). The obvious answer is that we adapt the education to the needs of each student. As a friend put it, "Every student should have an IEP."

But an IEP for every student, or Personalized Learning as we prefer to call it, is prohibitively expensive, right? I believe that the principles of mass customization so successfully applied in other industries can also be applied to education. I'll be writing more on this in coming weeks.


Posts in this series:
Breaking the Tyranny of the Bell Curve
Tackling Bloom's Two Sigma Problem
The Personalized Learning Model

01 September 2011

Windows in Time

Last January we had to buy a new car for my wife. About five years ago I installed a Bluetooth handsfree phone box in her previous car. We liked it so well that now we have them in all of our cars. Yes, I know that even handsfree phone conversations still distract drivers. But it still helps.

So, I had to decide what to put in the new car. These days car stereos often include phone capabilities so I thought that maybe we would upgrade the stereo too. And, wouldn’t it be nice if the stereo could play MP3s. Maybe GPS/nav capabilities would be good. One thing led to another and the unit we chose supports acronym city: MP3, WMA, CD, DVD, SD, USB, GPS, HD Radio. We’re in geek heaven. (Not a paid endorsement.)

Being new to Seattle, we’ve found the navigation feature to be really valuable. And, in case you’re wondering, it’s much more convenient to have it built in than to stick a portable unit to the windshield. So, in the last six months I’ve done a lot of driving in which I followed the instructions of a computer voice.

It’s really a strange window in time. The car is smart enough to tell me where to turn, but not smart enough to make the turn itself. In his book, Evil Plans, Hugh MacLeod suggests that Television occupies another window in time, “a historical accident of the old factory-worker age meeting the modern mass-media age.” That people would willingly spend so much time with “passive, non-interactive media” is a temporary artifact.

What other "time windows" might we be in?

16 May 2011

How to Identify a Secure Payment System

Recently my debit card number was stolen. Three unauthorized charges totaling more than $500 were made in quick succession. Luckily I caught them almost immediately and contacted my bank which "launched an investigation" and credited the money back.

I presume nearly everyone with a card has had a similar experience. The credit card system is so abysmally insecure that there's no way it would get approved if introduced today. There are dozens of ways my card number could have been stolen. A waitress might have copied it down while away from the table at the register. An insider at a payment processing company could have taken it. I could have been part of one of the recent online retailer hacks. I don't think I was the victim of a card skimmer or a fake ATM because I'm pretty careful about such things. But it's still possible.

The "Chip and Pin" systems used in the UK are better. They are based on smart card technology which has an embedded processor chip on the card. To pay for something you insert a card into the payment device, enter your PIN number and approve the amount of the transaction. It's nearly the same as using an ATM card in the US except that you insert the card so that the chip can be accessed instead of swiping the magnetic strip. However, there's a big difference in how the transaction is handled. When you swipe a card, it simply reads the card number from the magnetic strip. There are even devices that can clone the magnetic strip. A smart card, on the other hand, uses a secret encryption key to digitally sign the transaction. The payment device never has the actual key so once the card is removed, no additional transaction can be made.

While better, Chip and PIN still has a fundamental weakness: You have to trust the payment device. A fraudulent device might ask you to authorize a charge of $25 but actually submit a charge of $250. Or, you might authorize one charge but while the card is still in the device it might process a dozen more.

A mostly secure system would have to have a display and keypad on the card itself. Or you might use a cell phone for payment as they do in Japan since the phone already has a keypad and display. Then the worry is that your smartphone might get a virus that steals all of your money.

I predict that before too long you will have some universal access device that unlocks your house, enables your car and manages secure payments both for online shopping and in person. But if that device is also your smart phone, they'll have to install some kind of hardware security to protect the security system from malware.

15 April 2011

Update: The Cost of Solar Energy

Nearly a year ago I wrote a three-part series on energy. At the time, I calculated a cost of $83.33 per gigajoule for solar power. That compares to $1.42 per gigajoule from nuclear power.

Google is investing $168 million in the Ivanpah Solar Farm in the California Desert. As far as I can tell, the total investment will be approximately $2.068 billion. It will be capable of generating 392 gross megawatts of electricity and should last at least 25 years.

In order to convert these numbers to a cost per gigajoule, we have to make some assumptions. I'll use some very generous ones. The solar array cannot generate energy at night and will only generate peak output for part of the day. Not surprisingly, the California desert location chosen for the Ivanpah project happens to be the most favorable in the entire United States. The approach used with photovoltaics is to multiply peak output by 6 hours per day in that region; lower numbers are used in other regions. I'll assume that the Ivanpah project is engineered to collect excess solar energy compared to its peak output and so I'm using an 8-hour multiplier instead of 6. Since a net megawatt figure isn't offered, I'll assume 100% delivery efficiency and use the gross figure. These, of course, are unrealistically favorable assumptions.

It works out to 392 megawatts * 8 hours/day * 365 days/year * 3,600 megajoules/megawatt-hour = 4,120,704,000 megajoules per year, or 4,120,704 gigajoules per year.

Assuming a lifetime of 25 years, a construction cost of $2.068 billion and no maintenance costs, we get $2.068 billion / (25 years * 4,120,704 gigajoules/year) = $20.07 per gigajoule. That's an improvement of four times over my previous calculation for solar power. It starts to approach the $13.89 per gigajoule cost of wind power.
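
For anyone who wants to check the arithmetic, here is the same calculation as a few lines of Python, under the same generous assumptions.

```python
# Reproduces the back-of-the-envelope calculation above.
peak_mw = 392                 # gross peak output, megawatts
hours_per_day = 8             # generous equivalent-peak-hours assumption
mj_per_mwh = 3_600            # 1 megawatt-hour = 3,600 megajoules
lifetime_years = 25
construction_cost = 2.068e9   # dollars, no maintenance assumed

gj_per_year = peak_mw * hours_per_day * 365 * mj_per_mwh / 1_000  # gigajoules
cost_per_gj = construction_cost / (lifetime_years * gj_per_year)

print(f"{gj_per_year:,.0f} GJ/year")   # 4,120,704 GJ/year
print(f"${cost_per_gj:.2f} per GJ")    # about $20.07 per GJ
```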

It's a huge improvement over solar panels, but this still remains the most expensive way in the world to generate electricity. It's an order of magnitude more expensive than conventional energy sources, which have the added advantage of delivering power 24 hours a day regardless of the weather.

I'm glad to see this happening but it won't spark a revolution in energy production.

12 April 2011

Do I Trust the New Airport Scanners? No.

I recently decided that I will refuse to step into the new backscatter and millimeter wave scanners that the TSA has deployed at US airports. So far, this hasn't cost me much. Despite flying six times in the last two weeks, I haven't yet provoked the infamous pat-down. So far, I've been able to survey the scene and pick the line that uses the old-school metal detector. That won't work forever; they still pick people randomly from the alternative lines and send them through the megadetector. But it should work for a while because the new scanners are too slow to handle full passenger volume.

According to the TSA, one scan by a backscatter x-ray machine exposes an individual to a radiation dose of 0.005 millirem, which is equivalent to 0.05 microsievert (µSv). Meanwhile, this extremely helpful chart indicates that the dose is equivalent to sleeping one night next to someone, is 1/20 the dose of eating a banana and is 1/800 the cosmic ray dose of a cross-country airline flight. Millimeter wave scanners, which are also being deployed, use non-ionizing radiation and should pose even less of a threat.
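
The unit conversion is easy to check. The snippet below converts the TSA's published figure and works out the flight dose implied by the chart's 1/800 ratio; everything here is derived from the numbers quoted above.

```python
# Convert the TSA's published dose and see what the chart's ratio implies.
# 1 rem = 0.01 sievert, so 1 millirem = 10 microsieverts.
scan_mrem = 0.005
scan_usv = scan_mrem * 10    # 0.05 microsievert per scan

print(f"One backscatter scan: {scan_usv} µSv")
print(f"Implied cross-country flight dose: {scan_usv * 800:.0f} µSv (800x a scan)")
```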

I'm a fairly scientifically-minded individual. So, why am I taking this seemingly unscientific position? The main answer is because I don't trust the information we've been given. I even have some indicators for this lack of trust. For example, in this blog post, the TSA states, "Backscatter X-ray technology uses X-rays that penetrate clothing, but not skin, to create an image." This is language they've used in other places and it's technically true but it's also misleading. The X-rays that make the image penetrate clothing and bounce off the skin and other materials to reach the detector. But the rest of the X-rays, those that didn't make the image, are absorbed by the body. So, my spontaneous lack of trust is reinforced by the TSA's use of misleading language.

I'm not alone in distrusting government assurances. A recent survey conducted by Xavier University indicates that 78% of Americans have less trust in government than they had 10 years ago. A CNN poll shows that only one in four Americans trusts government to do the right thing most of the time.

But for me to distrust the TSA's explanations, I have to distrust either their intentions or their judgement. The fact is, I distrust both. It turns out that the benefits of the scanners weren't sufficiently convincing until the manufacturers spent millions of dollars lobbying Congress and federal agencies for their adoption. And security expert Bruce Schneier says it's all just security theater with no real benefit.

So, I guess my opt-out represents a concern that the scientific tests are incomplete, combined with a relatively inexpensive form of civil disobedience. But my real hope is that someday government officials will quit trying to convince us they're right and start earning back our trust.

Added 2011-04-13:
My son just sent me a link to a letter written by concerned UCSF scientists. After a little more research I found this response from the FDA. Both parties agree that the absorption by the skin of low-intensity X-rays results in a disproportionately high dose compared to medical X-ray systems. In fact, the FDA estimates the effective dose to be 0.56 µSv, which is more than 10 times the number reported by the TSA that I used above. That's still a small dose. Where they disagree is on whether sufficient research has been done to establish the safety of these scanners. So, it remains an issue of trust, and with all of the misinformation in the TSA statements, they just aren't behaving in a trustworthy way.

29 March 2011

Bidirectional Links: They're Here!

I attended the third annual ACM Hypertext Conference held in Pittsburgh in 1989. Three years prior I had co-founded Folio Corporation, a developer of electronic publishing software. I wouldn't earn my BS for another six months.

A fundamental question in the early days of hypertext was whether links should be one-way or bidirectional. Theorists were adamant that links should work both ways. They claimed that it's equally relevant to learn what refers to an item as to know what it refers to. Of course, that's hard to accomplish because an author may not have the permissions necessary to install a matching link. For example, if I link to a story on an arbitrary site, I probably don't have permission to install a back-link on that site.

A survey of some of the abstracts from the '89 conference reminds me of the many proposals on how to make bidirectional links work. Some used a sort of cooperative exchange protocol. Other approaches centered on a third-party link registry. Besides being unwieldy, these methods have other problems. If bidirectional links require cooperation, I might deny you the privilege of linking to my content. I might even report that my page had been deleted just to clear your link.

Tim Berners-Lee (who didn't present at Hypertext '89) launched the HTML/HTTP combo we know as the World-Wide Web one year after that conference. His aspirations were for a global, open web and so he took the practical approach of unidirectional links. His decision was strongly criticized by visionaries like Ted Nelson but today you're reading this on the web while Xanadu remains a dream.

Still, wouldn't it be nice to have back-links even if only occasionally?

Some blogging systems (not including Blogger) have a "trackback" system in which blogs notify each other when someone from one blog links to a post in another.

Better yet, we do have backward links! I don't think even Berners-Lee expected a world-wide index with the capacity of Google. And one of the things it indexes is links. Google has a special syntax for it. If you search for "link:freakonomics.com" you'll find all of the websites that link to Freakonomics. A clever browser add-in (or built-in feature) would be a button that performs that query automatically for the page you're on. Maybe I'll write that someday.
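
As a sketch of how small such a button could be, here's a hypothetical Python version that builds the query for a given page and opens the results in the default browser. The script and its names are my own invention; only the link: syntax comes from the description above.

```python
import webbrowser
from urllib.parse import urlencode, urlparse

def find_backlinks(page_url: str) -> None:
    """Open a Google 'link:' search for the given page in the default browser."""
    parsed = urlparse(page_url)
    # Drop the scheme so the query takes the form used above: link:freakonomics.com
    target = parsed.netloc + parsed.path.rstrip("/")
    webbrowser.open("https://www.google.com/search?" + urlencode({"q": "link:" + target}))

# Example: who links to Freakonomics?
find_backlinks("http://freakonomics.com/")
```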

This could be really useful. For example, the Common Core State Standards for education are organized on the website so that there's a unique URL for each of the standards. Here's an example related to the Pythagorean theorem: http://corestandards.org/Math/Content/8/G/B/6.

So, suppose Salman Khan (of Khan Academy) posts a video about the Pythagorean theorem. In that page he could include a link back to the corresponding standard. Then, I could post the following query to Google: link:corestandards.org/Math/Content/8/G

If this practice became common, the result would be content from all over the web that teaches the Pythagorean theorem. As of this writing, it only returns some cross references from within the standards themselves.

Of course, I couldn't resist the vanity search. A query for "link:ofthat.com" results in... one link from an old site of my own. Maybe someday.

15 March 2011

The Next Personal Computing Wave


We are partway into the next wave in personal computing. Google's Chrome OS is an excellent example but others abound.

The PC/Laptop/Tablet/Phone/PDA of the near future will work like this:

  • All local storage will simply be a cache of permanent storage in the cloud. Therefore, if a device is lost/stolen/destroyed/crashed there is little or no data loss. The individual simply picks up a new device, enters their credentials and all information gets re-cached from the cloud.
  • Applications will be cached right along with personal data. The record of your purchase (or adoption of free apps) is kept in the cloud so a new device automatically loads your apps along with your data.
  • Applications will be hosted in a runtime sandbox. Binary compatibility with the CPU or operating system will not be required.
  • Connectivity will be near-universal but not complete. Therefore applications will be designed to be “occasionally connected.” Existing examples are email and podcast readers that download information when there's connectivity but let you manipulate messages while disconnected (see the sketch after this list).
  • Peripheral devices such as printers, scanners, TV tuners, heart-rate monitors, etc. will connect directly to the network, not to your individual PC.
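
Below is a toy sketch of that "occasionally connected" pattern: local storage acts purely as a cache, writes queue up while offline, and everything reconciles with the cloud when connectivity returns. The class and method names are invented for this illustration, not any vendor's API.

```python
from typing import Callable, Dict, List, Tuple

# Toy illustration of an "occasionally connected" client. Names are invented
# for this sketch; no particular vendor API is implied.
class OccasionallyConnectedStore:
    def __init__(self, push_to_cloud: Callable[[str, str], None]):
        self.cache: Dict[str, str] = {}           # local storage = cache only
        self.pending: List[Tuple[str, str]] = []  # writes made while offline
        self.push_to_cloud = push_to_cloud

    def write(self, key: str, value: str, online: bool) -> None:
        self.cache[key] = value                   # always usable locally
        if online:
            self.push_to_cloud(key, value)
        else:
            self.pending.append((key, value))     # queue for later sync

    def reconnect(self) -> None:
        """When connectivity returns, replay queued writes to the cloud."""
        while self.pending:
            key, value = self.pending.pop(0)
            self.push_to_cloud(key, value)

# Usage: a lost device loses nothing permanent, because the cloud copy is
# authoritative and the local cache can always be rebuilt from it.
cloud: Dict[str, str] = {}
store = OccasionallyConnectedStore(lambda key, value: cloud.update({key: value}))
store.write("draft-email", "Hello!", online=False)
store.reconnect()
print(cloud)   # {'draft-email': 'Hello!'}
```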

While elements of this framework appear in the iPhone/iPad, in the Android OS and in Windows Phone 7, Google's Chrome OS is a better example. In true disruptive innovation fashion, Google is starting with a device with lower cost, lower complexity and lower capability but superiority in one area: management and ease of use. This is accomplished by reducing the OS to just the services necessary to run the web browser. In one particular respect, Chrome-based devices are superior to all others: they are almost entirely immune to data loss from a lost device, damage, hardware failure and so forth, and the dramatically simplified OS is easy to understand and use.

And even though the OS is little more than a browser, you can still load up applications. The application runtime is simply ECMAScript (JavaScript) and the web environment. Notably, Google Gears and the recently released HTML5 features (both of which are in Chrome) allow browser-based applications to cache local data and continue to operate when disconnected. Google even has a special compiler that will compile Java applications into ECMAScript instead of JVM code so that they run in a browser context. And Google has also addressed the mobile device printing problem.

Of course, Chrome isn't the only example of this wave. Much of it started with netbooks. With the Eee PC, Asus pioneered the idea that a simple device that doesn't run a mainstream OS can be easier to understand and adopt.

For all of its pioneering work, Apple hasn't fully adopted the new paradigm. And the anchor they are still dragging was introduced with the Palm Pilot, which debuted 10 years before the iPhone. Palm's innovation was to create a small, handheld device that was an extension of your home computer. You "cradled" your Palm once a day to charge it and synchronize your calendar, contacts and so forth. Apple still maintains this framework. You can't get full utility from your iPhone or iPad without having a PC back at home. The rather horrible iTunes app (maybe it's better on a Mac) is required to back up your phone, to manage your music library, to subscribe to podcasts, to upgrade the OS and for a host of other reasons. There's no justification for this. The iPhone/iPad is a networked device and all of these services would be better in the cloud.

Microsoft has done a perfect job of imitation with its Windows Phone 7/Zune Desktop pairing. The imitation is so perfect that in both cases you can't even give a name to your device without first connecting it to your PC/Mac.

Some will point out that you don't really have to tether your iPhone to subscribe to podcasts. There's an app for that. But my point is that Apple should have done that. In fact, opportunities abound to build apps to untether these devices. Just consider the reasons for connecting to a computer and find an alternative.
  • Cloud backup. Think "Carbonite for mobiles." (Yes, Carbonite has an iPhone app but it's for getting to your PC backup using your iPhone. It should be for backing up your iPhone.)
  • Music store management (organization, tagging, purchase, backup, etc.)
  • Device management (name, iTunes account, memory management etc.)
  • OS Upgrade (in conjunction with backup).
In true disruptive innovation fashion, the first devices of this wave are specialized and have limitations but they will continue to improve until there's no reason left to keep a desktop or laptop.

07 February 2011

Quote: Education Reform Fault Line - Conor Williams

"Here's the basic fault line dividing the education reform trenches: One side believes that the best way to improve the education system is to focus on improving instruction. The other believes that the best way to improve the education system is to focus on addressing the ways that poverty affects schools with high percentages of low-income students."
   -- Conor Williams - (source)

21 January 2011

Balancing the Budget

With record-level deficits, balancing the federal budget is once again being debated in Washington. There seems to be a consensus that balancing the budget would be a good thing but how to go about it is such a contentious issue that I have little hope of progress this year.

The seeming consensus on this issue is curious to me. After all, John Maynard Keynes advocated deficit spending, especially in recession times, and Keynesian economics seems to be the philosophy of the day. But I'll save the reasons for balancing the budget for another post. Today, I'm writing about some of the unexpected side effects of balancing the budget.

Last April I wrote about a lecture by my former Business Finance professor where he explained some of the unprecedented features of the current recession. Among other things, he pointed out the unusual nature of our trade deficit with China. Normally, when a large trade deficit occurs, the currency of the importer nation (the US in this case) weakens relative to the currency of the exporter nation. That's because the exporter nation has an excess of the other nation's currency. That weakening of the currency causes imported goods to increase in price until domestic manufacturing becomes competitive or exports balance out the imports.

However, much of the fuel in China's current economic growth comes from exports and the Chinese government wants to keep feeding that fire. Therefore, the Chinese government buys dollars from exporters in exchange for Yuan. But to balance the trade deficit, they have to get those dollars back into the US. They do so by buying US Treasuries. In other words, we export debt to balance our importing of goods.

So, what would happen if, by some miracle, we balanced the federal budget in 2011? Chinese institutions wouldn't have a place to put their dollars; the trade deficit would weaken the dollar relative to the Yuan; imports would become more expensive to us just as our exports became less expensive to Chinese consumers. Domestic manufacturing would increase and unemployment would decrease.

Of course, domestic economic stability would occur at the expense of reduced growth in the Chinese economy. Whether they would accept that without taking some action, we may never know.