18 November 2010
06 November 2010
Tonight most of us in North America get to sleep an extra hour as we go off Daylight Saving Time. Before 2007 the shift would already have happened. The US Energy Policy Act of 2005 added four weeks to Daylight Saving Time: the fall transition was delayed by one week (possibly to keep it lighter for trick-or-treaters) and the spring transition now comes three weeks earlier. These changes took effect in 2007.
I've always been skeptical of the value of Daylight Saving Time. Though I'm not great at it, I still subscribe to Benjamin Franklin's admonition, "Early to bed and early to rise makes a man healthy, wealthy and wise." Having the sun rise and set later reduces the incentive to rise and retire early. Ironically, Franklin is widely credited with inventing daylight saving time. In fact, his writings on the subject were satirical.
Three years ago, when the new daylight rules took effect, I decided to find out how much daylight is really saved. The theory of Daylight Saving is that any daylight before you rise from bed is daylight "lost." Therefore, the amount of daylight you "save" will depend on three things: your latitude, your longitude and the time you wake up in the morning. Here's why:
- Latitude: The higher your latitude -- the further you are from the equator -- the more dramatically the length of the day changes between winter and summer. Above the Arctic Circle, the sun stays up for days or weeks in the summer and sets for the same amount of time in midwinter.
- Longitude: If you are at the Eastern edge of your timezone the sun will rise approximately an hour earlier than on the Western edge.
- Rise Time: If you rise before the sun and retire after it sets then there is no daylight to be saved. Any daylight to be saved is between sunrise and "you rise."
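The three factors above can be combined into a rough sketch of the calculation. This is my own simplification -- not the code behind the tool linked below -- using a simple cosine model of solar declination and the standard sunrise hour-angle formula, and ignoring refraction and the equation of time:

```python
import math

def solar_declination(day_of_year):
    # Approximate solar declination in degrees (simple cosine model).
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def sunrise_solar_time(latitude_deg, day_of_year):
    # Local *solar* time of sunrise in hours. Accurate to within
    # roughly 15 minutes -- good enough for this illustration.
    decl = math.radians(solar_declination(day_of_year))
    lat = math.radians(latitude_deg)
    cos_h = -math.tan(lat) * math.tan(decl)
    if cos_h >= 1.0:
        return None   # polar night: the sun never rises
    if cos_h <= -1.0:
        return 0.0    # midnight sun: the sun never sets
    hour_angle = math.degrees(math.acos(cos_h))
    return 12.0 - hour_angle / 15.0

def daylight_saved(latitude_deg, longitude_deg, tz_meridian_deg,
                   rise_time, day_of_year):
    # Daylight "lost" before rising: hours between clock sunrise
    # and the time you get out of bed.
    sunrise = sunrise_solar_time(latitude_deg, day_of_year)
    if sunrise is None:
        return 0.0
    # Clock sunrise shifts by 4 minutes (1/15 hour) per degree of
    # longitude away from the timezone's central meridian.
    clock_sunrise = sunrise + (tz_meridian_deg - longitude_deg) / 15.0
    return max(0.0, rise_time - clock_sunrise)
```

For example, someone in Seattle (latitude 47.6, longitude -122.3, Pacific meridian -120) who rises at 6:30am near the summer solstice "wastes" over two hours of daylight; someone who rises at 4:00am wastes none.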
Click here to try it out!
02 November 2010
Last year I wrote about the insecurity of DRE voting machines. Those systems have not improved in the last year though there is growing awareness of their vulnerabilities.
This year we are voting for the first time in the State of Washington. In this state 38 of 39 counties use a vote by mail system. Eighteen days before the election, the county auditor mails ballots to all registered voters. Voters mark their ballots in the privacy of their homes. Ballots go into an inner "security" envelope and an outer mailing envelope. They sign an oath on the outer envelope and mail it in via US Mail or drop it in a specially marked, secure drop box on or before election day.
Vote by mail has some useful advantages:
- Voters can take their time marking their ballot -- researching the candidates and issues that they might not have been aware of before looking at the ballot.
- The cost of running an election is substantially reduced.
- People who will be out of town on election day can cast their ballots early without figuring out special early voting provisions.
- Paper ballots offer a physical record of voter intention that can be manually counted to verify that electronic scanners are working properly. Of course, manual counts aren't necessarily accurate either.
- The voter's name and voter ID number appear on the outer envelope ensuring that no voter votes more than once.
- The ballot is inserted into an inner security envelope thereby allowing the ballot and the voter's name to be separated before the vote is visible.
- The public is invited to observe the vote counting process to make sure that counting is done properly and confidentiality is maintained. Since counting is concentrated at a relatively few places, fewer observers are required to cover all locations.
But vote by mail also has vulnerabilities:
- Since ballots are marked away from the polling place, it is possible for someone to coerce another's vote and verify that the ballot is cast according to their mandate. Votes can literally be bought.
- Individuals can steal ballots from mailboxes, mark them, forge the signature and send them in.
- Unscrupulous mail workers could intercept and destroy ballots before they reach the counting place. Since the voter's name appears on the outer envelope, it is easy to do so selectively.
- Sophisticated criminals could intercept the mail -- steam open the envelope and substitute a different ballot, or simply replace it with a forged outer envelope.
- A bright-light scanner with digital image processing could penetrate the envelope and paper to detect how a particular ballot has been cast.
Several measures could mitigate these risks:
- Move the voter ID number and the signature to the inside of a cleverly-designed envelope. This would keep casual manipulators from knowing which ballots to intercept.
- Encourage voters to use drop boxes instead of mail as much as possible. Invite public observation of the drop box collection process.
- Ensure voters know that coercion is a crime that should be reported.
- Notify voters of when to expect their ballots in the mail and encourage reports of missing ballots.
The following was added on 4 November 2010 at 9:15am:
Since writing the original post I've learned two things. First, King County offers a website where I can track the processing of my ballot (you can too if you know my birthdate). As of this writing, they've received the ballot but it's awaiting verification of my signature before they process the vote. So, despite submitting my vote a week ago, it has yet to be counted. That's not too strange as 30% of the statewide vote has yet to be counted and a large fraction of it is in King County.
Second, this article from the freakonomics blog says that vote by mail actually reduces voter turnout -- at least in Switzerland.
19 October 2010
Welfare reform delayed things a bit but we're still approaching the point at which mandatory spending will be unsustainable. In fiscal 2009, $2.1 trillion, or 61% of the federal budget, was mandatory spending including Social Security, Medicare, Medicaid and interest on the national debt.
This isn't really a surprise. A recent USA Today/Gallup poll indicates that three out of four Americans "predict that the costs of entitlement programs will create major economic problems." At first, this seems hopeful. With a majority of people concerned, perhaps there's the political will necessary to make reforms. However, only 44% are in favor of raising taxes and only 34% are in favor of cutting benefits. A mere 12% say both remedies are required. That means that for any proposed solution, a majority of Americans are against it.
How confused we are!
04 October 2010
I just want to highlight and juxtapose two facts from the movie:
First: In the next 20 years our education system will not produce enough college graduates to fill US needs. By 2018 the shortfall will be 3 million and it will grow from there. These vacancies need to be filled either by increasing the performance of our education system or by immigration.
Second: While most of our schools (public, private and charter) preserve the "achievement gap" between lower and middle-class students, a growing set of innovative schools including KIPP charter schools and the Harlem Children's Zone have not only closed the gap but have elevated children of poverty-stricken areas to perform better than their middle-class competitors. Their models have been followed by more than 100 schools with repeatable results.
These two facts combined mean that it's within our power to fill the need for a well-educated workforce from our most poverty-stricken neighborhoods.
Updated 26 Oct 2010: Added correct employment shortfall number with link.
01 October 2010
Only last fall we celebrated the school's 20th anniversary with a Gala celebration and an expectation of many more years to come. Unfortunately, Meridian is another victim of the recession.
Many Meridian students have transferred to charter schools, others to the regular public schools. Most are happy and I expect that all will do well.
One year I served as a chaperone when Meridian competed in the Utah Shakespearean Festival. Unlike larger schools that selected their best drama students, Meridian closed classes and took nearly the entire upper and middle schools. As I watched the kids perform monologues, dialogues, a dance number and two ensembles I was struck by their confidence on the stage and how much they were enjoying themselves.
Another parent and I pondered what might be the source of such stage confidence. We agreed that it was the safety these students felt among their peers. At an age when most kids are exposed to ridicule and bullying and struggle to find a place to belong, Meridian students welcomed new friends, encouraged each other to try new things and celebrated a variety of backgrounds, religions, languages and races. This welcome culture permeated the student body, faculty and staff.
Much of my work in education technology has been finding ways to customize the learning experience to meet individual needs. Meridian managed to do this with small classes and teachers who cared enough to adapt classes and offer individual assistance. Students would immerse themselves in subjects with language plays, historical banquets, period dance and many other fantastic activities.
Diverse Cultural Experiences
Meridian reached well beyond Utah Valley to expose students to other cultures. Full-time international students came from Korea, Japan, Europe and South America. Sister schools were chosen in Germany and Japan with biannual exchange trips in both directions. Students and faculty demonstrated understanding and respect for different religions, political beliefs and national backgrounds. International week and the language fair furthered this respect and students grew up knowing that their best friends could have very different beliefs from their own.
Breadth of Experience
At Meridian you didn't have to choose between sports, drama, music and language. Everyone did it all. The sports teams were open to anyone committed to making practice. The spring musical had to be scheduled around the basketball schedule because most of the cast was also on the team. When the seniors on the 2010 basketball team were honored it was disclosed that all were also taking AP Calculus! At Meridian we believed that high school was too soon to specialize.
Other things I'll miss:
- Writing Rally
- Broadway Rocks
- Fear Factor
- Extreme Theatre
- Language Skits
- Christmas Vespers
- Medieval Banquet
- German Exchange
- Kindergarten Buddies
- Mongoose Mornings (with Minton)
- Quantum Leap and the Black Hole Cafe
- Random students running up to me and saying, "I won the game!"
- At the game: "Presenting... the Meridian Mongoo.. Mongeese... Mongooses... whatevertheyare!"
07 September 2010
12 August 2010
07 July 2010
My second post on energy detailed the cost of energy from existing sources and the prospects of using each to meet the energy needs of the developing world. Notably, wind and solar are by far the most expensive sources of energy and their environmental impact isn't as neutral as it has been portrayed.
The least expensive source of energy is nuclear — beating even hydroelectric power. But we need some changes to the nuclear economy based on technology improvements. We can't continue using the predominant form of nuclear fission without a long-term waste storage plan and safer reactor designs.
I'm following four innovative approaches to nuclear energy. Any one of these, if proven viable, promises to offer abundant, cheap and clean energy that can be sustained for millennia.
Fast-Neutron Nuclear Fission
Nearly all nuclear reactors presently used for energy production are thermal reactors, which use slow-moving or "thermal" neutrons. The advantages of these reactors are that they can use low-grade nuclear fuel (moderately enriched uranium), they can use water as a coolant and it is difficult to misuse them to create nuclear weapons. Many design variations exist, from those of questionable safety like Chernobyl to reliable designs that have operated for many decades. The trouble with thermal reactors is that they require enrichment of uranium ore and they produce nuclear waste that remains dangerously radioactive for thousands of years.
In contrast, fast-neutron reactors require more highly-enriched fuel (increasing the risk of weaponization) and more exotic coolants like liquid sodium. However, they have three big advantages. First, the waste from a fast-neutron reactor has a much shorter half-life and requires storage for only a few hundred years. Second, the fast neutrons can be used to breed new fuel from uranium-238, producing more fuel than the reactor consumes. Third, the fast neutrons can be used to reprocess the nasty waste from thermal reactors, resulting in a mix of new nuclear fuel and short half-life waste.
Many research groups are pursuing variations on the fast-neutron design that capitalize on these advantages while managing the problems of weaponization and exotic coolants. One approach is to have most reactors of the thermal design while a few fast-neutron reactors reprocess and produce fuel for the rest. However, such a nuclear economy requires a lot of transportation and processing of radioactive materials.
A traveling wave reactor is a variation on the fast-neutron design that is pre-loaded with a small amount of enriched fuel to get it going and filled the rest of the way with unenriched feedstock. The reaction starts in the enriched section, with the fast neutrons breeding fissile material in the neighboring fuel. The "wave" of the reaction moves from the pre-enriched section through the newly-bred area until all fuel has been consumed.
TerraPower is working on a traveling wave design that could be pre-loaded with enough fuel to last 60 to 100 years. A small amount of enriched fuel (the dangerous stuff) would be loaded with a large quantity of depleted uranium (plentiful and safe to transport) and the whole system buried. When the reactor eventually "burns out" the short half-life waste might be left buried in place while a new reactor takes over.
Credit: EMC2 Fusion
Nuclear Fusion has long been the holy grail of energy production. It's the primary reaction fueling our sun and the stars. For fusion you take two hydrogen atoms and fuse them using high temperature and pressure to create helium and a lot of energy. The advantages of fusion over fission are that the fuel is plentiful -- hydrogen extracted from seawater being one option -- and the waste is inert helium. It should be noted, however, that most fusion reactions release radiation so the reactor itself must still be shielded.
The trouble is that maintaining a controlled reaction has proven to be very difficult. IEC Fusion is one of several approaches that is gaining attention over the more conventional and extremely expensive tokamak.
Inertial Electrostatic Confinement Fusion was invented by Philo T. Farnsworth who also invented television. Farnsworth's idea was to place a grid in a vacuum chamber with a strong negative charge. When hydrogen ions are released into the chamber they are accelerated toward the grid and some percentage of them collide in the center with sufficient energy to fuse into helium.
The Fusor, as Farnsworth called his device, has been proven to generate fusion. Building one is relatively simple and inexpensive. Many hobbyists have built their own. However, current designs consume considerably more energy than they produce. The main energy leak is that many of the ions collide with the grid itself consuming some of the charge and contaminating the plasma with the products of the (non-nuclear) grid collision.
For a little more than a decade, Dr. Robert Bussard quietly researched ways to overcome problems with the fusor. His device, called the Polywell, makes the grid out of coils. An electrical current in the coils creates a magnetic field that guides the ions around it and prevents collisions. He and his team made several important breakthroughs shortly before their U.S. Navy funding ran out. At that point he gave a famous talk at Google in which he detailed the progress they had made and sought funding to continue the research. Unfortunately, Dr. Bussard died of natural causes before funding was renewed. Thankfully, Dr. Richard Nebel has obtained funding and continued the work. So far the results are promising and he expects to have proven whether the concept is viable within two years.
Credit: Lawrenceville Plasma Physics
The Dense Plasma Focus device creates a toroidal plasma by discharging a high voltage arc in a near-vacuum. The electrical and magnetic fields in the plasma torus cause it to collapse into a very dense, hot formation called a plasmoid. Under the right conditions, the plasmoid is dense and hot enough to create nuclear fusion. The fusion reaction releases heat, x-rays and high-velocity ions. The trick is to capture all three of these products in such a way as to generate electricity.
Lawrenceville Plasma Physics claims that they have a reactor design that will effectively capture sufficient energy to be a viable clean source of nuclear energy.
Most fusion research focuses on the Deuterium-Tritium reaction (Deuterium and Tritium are both isotopes of Hydrogen) because it's the easiest fusion reaction to achieve, requiring the least energy. However, both IEC Fusion and Focus Fusion have the potential to work with other reactions because increasing the heat is mostly a matter of raising the voltage. This raises the possibility of using a Hydrogen-Boron reaction. The advantage is that when Hydrogen and Boron fuse they release three Helium atoms and a bunch of energy but no neutron radiation. Thus, a hydrogen-boron reactor wouldn't require heavy shielding.
Credit: General Fusion
General Fusion proposes to inject a small amount of deuterium-tritium mixture at the center of a sphere of liquid metal. The outside of the sphere is simultaneously struck by hundreds of rams which create a spherical shock wave. When the wave reaches the center it compresses and heats the D-T mix sufficiently to generate fusion.
When I first read about the idea, the researchers proposed to use mercury for the liquid metal and steam to drive the pistons. Hence the moniker, "Steam Fusion." The current General Fusion design uses pneumatic pistons and a hot lead-lithium mix for the metal. They call it MTF Fusion but I think Steam Fusion is more catchy.
Other posts in this series:
Scotty, We Need More Power!
Increasing Energy Production
03 June 2010
14 May 2010
This number is probably low because it assumes a 50% reduction in energy consumption in the U.S. and Canada and it is based on today's population. The U.S. may not be able to achieve such efficiencies and worldwide population is certainly going to increase. For the sake of the following calculations I chose a target of producing an additional 350 exajoules.
The U.S. Energy Information Administration offers the following breakdown of worldwide energy production in 2006 (the latest year for which they've published data).
So, here are the prospects for generating an additional 350 exajoules from various sources:
Petroleum and Natural Gas
One gigajoule from oil costs $13.56.
One gigajoule from gas costs $4.74.
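Each per-gigajoule figure in this post is just a spot price divided by the energy content of the fuel's trading unit. A quick sketch with illustrative 2010-era spot prices (my assumed inputs, back-solved to match the figures above -- not necessarily the exact sources linked at the end of the post):

```python
def cost_per_gigajoule(spot_price_usd, gj_per_unit):
    # Dollars per gigajoule = price per trading unit / energy per unit.
    return spot_price_usd / gj_per_unit

# Assumed, illustrative inputs:
oil = cost_per_gigajoule(82.70, 6.1)    # $/barrel; ~6.1 GJ in a barrel of crude
gas = cost_per_gigajoule(5.00, 1.055)   # $/MMBtu; 1 MMBtu is ~1.055 GJ

print(round(oil, 2))   # -> 13.56
print(round(gas, 2))   # -> 4.74
```

The same arithmetic applies to the coal, hydro, wind, solar and nuclear figures below, each with its own trading unit and energy content.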
The Peak Oil theory dates back to 1956. It suggests that there will come a day when the remaining oil reserves are too expensive to extract and worldwide petroleum production will be forced to decline. Current projections are that peak oil will be reached on or before 2020. But the peak oil year has been pushed back several times and there is good evidence that it's still a long way off.
Regardless of whether oil and gas reserves are nearing exhaustion, there are other problems with petroleum. Foremost is pollution. I'll defer debate about carbon dioxide as a pollutant to other authors. There still remain other pollutants including sulfur oxides, nitrogen oxides, carbon monoxide and so forth. Natural gas burns more cleanly than crude oil products but it still generates pollutants. New automotive technology has reduced oil emissions to a fraction of their former levels. But these gains have been achieved in industrialized countries where regulations have encouraged such developments. In the developing world, emissions are much worse though the extent isn't accurately measured.
The other problem with petroleum and natural gas is opportunity cost. Presently we have no good alternative energy source for transportation. If we consume petroleum to generate electricity and heat, the cost of transportation will be driven up.
Due to the transportation link, petroleum use will be around for a long time. But, it's hard to consider massive increases in oil and gas consumption as a sustainable solution for meeting poverty's energy needs.
One gigajoule from coal costs $3.24.
Among fossil fuels, coal is the low-price leader. For this reason, coal supplies 49% of electricity in the United States, 69% in China and 40% worldwide. In the United States it is estimated that enough coal is recoverable to last 146 years at current growth rates.
So, there is enough coal to last for quite a while and it is inexpensive. But as with oil and gas, pollution is a problem. For those concerned about carbon dioxide emissions, coal releases about 35% more CO2 than gas or oil for the same amount of energy. More concerning to me are emissions of soot and sulfur dioxide. In the United States, scrubbers are used to keep emissions relatively clean. However, most Chinese plants lack such scrubbers and China burns more coal than the United States, the European Union and Japan combined.
One gigajoule from hydropower costs $2.36.
Hydroelectric power is a nearly perfect solution. It's renewable, non-polluting, extremely efficient and it can be stored (in the form of reservoirs) until needed. However, though they don't pollute, dams and reservoirs have considerable environmental impact. In the United States, we've already harnessed just about all the hydropower available. More might be available in the developing world but not enough to deliver the needed 350 exajoules.
One gigajoule from a wind farm costs $13.89.
With new technology, the cost of wind power has dropped by more than 80% in the last two decades. Despite that improvement, it's the second most expensive source of energy on this list. The amount of wind energy that can be generated per acre varies tremendously with the amount and consistency of wind in that area. A representative example is the mega windfarm proposed by T. Boone Pickens which would produce 4,000 megawatts from 200,000 acres. That works out to approximately 630 gigajoules per acre per year (assuming that the 4,000 megawatts is average production). Unfortunately I suspect that 4,000 megawatts is peak production during ideal wind conditions. Giving the benefit of the doubt and assuming 4,000 megawatts is average, this kind of wind energy density can supply the total energy needs of six people per acre.
To be clear, I'm using the total energy need per person, not just electricity. It includes lighting, heat, cooling, transportation and the energy required to manufacture and produce all goods used by that individual.
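The arithmetic behind those figures can be checked directly, using the roughly 108 gigajoules of total energy per person per year derived in the earlier post in this series:

```python
# Pickens windfarm figures from the text: 4,000 MW, assumed (generously)
# to be average rather than peak output, spread over 200,000 acres.
kw_per_acre = 4000 * 1000 / 200000             # 20 kW per acre
kwh_per_acre_year = kw_per_acre * 8760         # kWh per acre per year
gj_per_acre_year = kwh_per_acre_year * 0.0036  # 1 kWh = 3.6 MJ = 0.0036 GJ

# Total energy need (not just electricity) is ~108 GJ per person per year.
people_per_acre = gj_per_acre_year / 108

print(round(gj_per_acre_year))  # -> 631
print(round(people_per_acre))   # -> 6
```

The 631 GJ figure matches the "approximately 630 gigajoules per acre per year" above, and dividing by the per-person need gives the six-people-per-acre result.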
While wind will make important contributions to overall energy production, the cost of production and low energy density per acre prevent it from being more than a small contributor to the overall solution.
One gigajoule from solar-voltaic panels costs $83.33.
Direct Insolation is the amount of solar energy delivered per square meter per day. In my city of Provo, Utah it averages 4.64 kWh/m^2*day. That works out to 6.1 gigajoules per square meter per year. The best solar cells ever tested achieve 41.6% efficiency in the laboratory. However, using solar cells of practical cost without tracking or concentration systems, the best efficiency to be expected is about 5%. Assuming these parameters, it would take 354 square meters of solar panels to supply the energy needs of one person. The roof of a typical suburban home is about 150 square meters.
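Working those numbers through, again against the ~108 GJ per person per year total-energy target from the earlier post in this series:

```python
insolation = 4.64                          # kWh per m^2 per day (Provo average)
kwh_per_m2_year = insolation * 365
gj_per_m2_year = kwh_per_m2_year * 0.0036  # ~6.1 GJ/m^2/year reaching the panel
usable_gj = gj_per_m2_year * 0.05          # at ~5% practical cell efficiency

# Panel area needed to supply one person's *total* energy use (~108 GJ/year):
m2_per_person = 108 / usable_gj

print(round(m2_per_person))  # -> 354
```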
This shows that the energy density of solar power starts to approach practicality. Presently, the big barrier is the cost of manufacture. At today's prices, a 354 square meter solar array would cost approximately $1.4 million. This explains why solar power is far and away the most expensive source.
Solar-voltaic technology is advancing rapidly. The cost of manufacture is dropping and the efficiency is climbing. There are other solar technologies such as passive solar heating, solar water heating and solar concentrators which may cost less than solar-voltaic systems. There are also problems. Solar power is not consistent which means energy storage or alternative sources are needed for night and cloudy days.
Solar power--particularly solar-voltaic panels--is ideally suited to urban rooftops. Not only do panels deliver peak power during peak electrical demand (for air conditioning) but by converting light into electricity they reduce heat uptake on the roof thereby reducing the air conditioning load in the summer. However, for this to be practical, cost of manufacture would have to be reduced to about 5% of current costs. That's a tall order.
One gigajoule from nuclear power costs $1.42.
Nuclear power is the least expensive energy source available. The amount of energy that can be produced is tremendous, there is enough fuel to last millennia and the environmental impact is the least of all the energy sources cited.
The problem with nuclear power is that while the actual environmental impact is very small, the perceived environmental impact is large and, in an accident like Chernobyl, the potential impact is tremendous. Clearly the perception of environmental impact is due to the potential for disaster. This has resulted in a political atmosphere that has severely limited the construction of new power plants since the Three Mile Island incident.
Another problem with nuclear power is the prospect of repurposing a power plant or its waste products into nuclear weapons.
New technologies for nuclear power are emerging that address the potential for disaster as well as limiting the prospects for repurposing. Those will be the subject of my next post on energy.
Energy Content of Fuels
Spot Price of Oil
Spot Price of Natural Gas
Spot Price of Coal
Hydroelectric Energy Cost
Wind Energy Cost
Solar Energy Cost
Nuclear Energy Cost (In Europe)
Other posts in this series:
Scotty, We Need More Power!
Energy: The Future is Nuclear
12 May 2010
"Improvisation, if you don't have something worthwhile to say, is just hot air."
(Anna Stone, English Teacher Extraordinaire speaking on the importance of knowing the subject and the historic context before relying on oratory skills.)
30 April 2010
If you believe the rankings, "good for business and careers" and "fun" seem to be contrary pressures. For example, New York City ranks #1 on the Portfolio "fun" list and #99 on the Forbes "business" list. Nearly an exact reversal of Provo's ranking (#2 for business and #100 for fun). However, I think that the Portfolio ranking is badly flawed. Their categories for ranking are Shopping, Gambling, Popular Entertainment, Culture, Food and Drink, Low-Impact Sports and High-Impact Sports.
There's little question that Provo's not a very good gambling destination (#92 on their list) and I also have a hard time disputing New York's #1 rank for shopping and culture. But how do they get away with ranking Provo as #98 for high-impact sports (represented by an icon of a skier) vs. New York's #2 ranking? I can be on the slopes at Sundance 20 minutes from leaving my front door. And best-in-the-world resorts like Alta, Snowbird, Brighton, Solitude, Deer Valley and Park City are all within an hour's drive. Where do New Yorkers go to ski, much less hike, mountain bike, camp, drive off-road, fish and so forth?
Oh well, I like New York too. And despite its #99 business ranking I think a brokerage would be better off locating in New York than in Provo. There's a lot more subjective influence than these rankings would suggest.
26 April 2010
Following the lecture I asked him about my theory that the markets could self-correct the problem of toxic assets (outlined in my previous blog post on this subject). It turns out that he has been serving as an expert witness in several lawsuits related to the meltdown and has direct experience in this area. He assured me that, indeed, the derivatives market has mostly shut down and that the remaining derivative instruments are treated as the risky instruments they really are.
According to Dr. Heaton, one lingering problem is that the Community Reinvestment Act that I talked about in my history of the banking crisis remains in place along with enhancements that were passed in 1999 and 2005. Presently the provisions aren't being enforced but if they are, banks will be required to continue to issue the kind of high-risk loans that helped create this problem in the first place.
From his primary lecture I learned that Dr. Heaton views the sub-prime lending and the associated financial derivatives as only two components in a six-part "perfect storm." Here's the full list.
- High-risk mortgages spurred on by the Community Reinvestment Act and its more recent kickers. (The requirements remain in place though they aren't currently being enforced.)
- Enormous increase in the money supply with interest rates reduced to nearly zero. (Rates remain near zero.)
- Hybrid mortgages that had a two-year low introductory rate. Homeowners expected to be able to refinance after two years because "home prices always go up," as they had done almost continuously for the 40 years preceding 2008. (Many of these have already been foreclosed upon but there remain several waves of ARMs yet to create problems.)
- Asset Securitization -- the financial derivatives used to finance high-risk mortgages and an enormous variety of other investments. (Mostly out of favor.)
- The transfer of manufacturing to China and other emerging markets. This results in an enormous trade deficit. Under normal circumstances, such a deficit would strengthen the yuan and weaken the dollar thereby bringing things into balance. But the Chinese government, not wanting to slow the growth, purchases dollars from manufacturers in exchange for Yuan and then invests those dollars in US Treasuries. (The recession has reduced the trade deficit by half but it remains tremendously high by historic standards.)
- The complicity of Moody's and Standard and Poor's in giving excessively high ratings to mortgage-derived securities based on the incorrect assumption that housing prices would not decline. (This has been corrected.)
21 April 2010
20 April 2010
Most of the blame for the financial crisis has been leveled at investment banks and other institutions that hid risky investments behind complicated financial instruments. However, Credit Rating Agencies like Moody's and Standard and Poor's were complicit in creating the problem because, like the banks, they ignored the possibility of market-wide problems.
So, just as the cooperation of Credit Rating Agencies helped create the problem, CRAs could likewise drive much of the reform. Unfortunately, there continue to be allegations of inflated ratings and the agencies have avoided liability for past mistakes. Despite this I have hope that reform can come from this sector without legislative pressure.
The term "Toxic Asset" was invented in 2008 to describe financial derivatives for which a value cannot be determined with confidence. The presence of large quantities of toxic assets on corporate and bank balance sheets froze the financial markets. There are markets for both high- and low-risk securities; it's when risk or value cannot be determined with confidence that markets freeze up. "Toxic Asset" is a very descriptive term for such things.
What I would like to see is an agency that would report on the portion of a security -- stock, bond, or derivative -- that is composed of questionable derivatives. To do so with accuracy would require cascading fractions through the network of ownership. For example, if 15% of a bank's balance sheet is composed of toxic assets and 20% of a mutual fund is invested in that bank then the mutual fund would be rated 3% toxic (15% * 20% = 3%). Of course, some portion of other stocks in the mutual fund might also be considered toxic so the total toxicity of the mutual fund might be higher.
Creating a database that tracks the network of ownership would be complicated but not impossible. The information required is all in the public record. There would have to be an objective way of determining whether a fundamental asset is toxic. However, once the system is in place, it could also be used to rate cascading ownership in many other types of assets. Fractional ownership in business sectors such as manufacturing, education or hospitality could be measured through the cascading layers. Involvement in totalitarian regimes, conflict assets or vice businesses could also be tracked.
I suppose this is another of my Business Concepts. It would take a considerable up-front investment and a continuing investment to maintain the database but the ability to analyze cascading ownership would be a potent investment tool.
16 April 2010
The graph above, extracted from this excellent Department of Energy study, shows the correlation between the Human Development Index (a measure of standard of living) and per-capita energy use. It's arguable that the energy consumption of U.S. citizens could be reduced while still maintaining a comfortable lifestyle. Nevertheless, per-capita energy production of developing countries would have to be increased to somewhere around U.K. levels if poverty and disease are to be reduced to the levels seen in industrialized countries.
The DOE study indicates the threshold is about 4,000 kWh per person, which is somewhere between Spain and South Korea on the graph. Of course, electricity is only a fraction of total energy use. The same DOE study indicates a ratio of total energy to electricity use of 7.5 should be used for standard-of-living calculations. Therefore we need approximately 30,000 kWh, or 108 gigajoules, per person.
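The per-person arithmetic works out like this (using the standard conversion of 1 kWh = 3.6 megajoules):

```python
# Per-person energy target implied by the DOE figures cited above.
electricity_kwh = 4_000          # electricity threshold per person (kWh/yr)
total_to_electric_ratio = 7.5    # DOE ratio of total energy to electricity
kwh_to_joules = 3.6e6            # 1 kWh = 3.6 MJ

total_kwh = electricity_kwh * total_to_electric_ratio
total_gj = total_kwh * kwh_to_joules / 1e9   # joules -> gigajoules

print(total_kwh, total_gj)  # 30000.0 108.0
```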
There is a lot to be gained through improving energy efficiency. Insulation, hybrid cars, smaller vehicles, public transit, high-density housing and so forth are all important pieces of the solution. However, the initial figures I have used here are less than half of U.S. energy consumption. So, efficiency gains are more likely to rein in high-consumption populations like the U.S., Canada and Japan than they are to reduce the needs of developing countries. Besides, it costs energy to manufacture all of these efficiency improvements.
The current world population is estimated at 6.8 billion. So, in order to eliminate poverty, increase freedom and improve the human condition we need approximately 734 exajoules (734 * 10^18 J) of net energy production per year, in addition to massive improvements in energy efficiency. In 2008, worldwide net energy production was 474 exajoules. Given that population continues to grow, we should be seeking to more than double worldwide energy production while still seeking to improve energy efficiency.
If energy production is increased at the cost of environmental damage, we'll miss the goal. Clean air, clear water and wild spaces are just as important to quality of life as good health, nutritious food and a comfortable home. In future posts I'll look into where that energy might come from.
Other posts in this series:
Increasing Energy Production
Energy: The Future is Nuclear
14 April 2010
After being down for about 45 days (and a posting hiatus before then) "Of That" is back -- this time hosted by Google's Blogger.
What Happened to Azure?
I originally started hosting on Microsoft's Azure. During the technology preview stage, hosting was free. I was also evaluating it as a platform for future products and, based on the early information I could find, it appeared that the cost to host on Azure following its full release would be modest. However, once the product was released I found that the cost was going to be prohibitive.
I'll write more about Azure in a future post. For now, it's sufficient to say that it should grow to be a good platform for enterprise apps and possibly hosted services but it's not appropriate for small-scale things like my blog. There are things they could do to fix that weakness but I don't know if it's a priority for Microsoft.
What About BlogEngine?
I chose BlogEngine because it's a solid solution written in C# on ASP.Net -- a platform I'm familiar with. I wanted the ability to customize more than just the appearance of the blog and I have plans to launch active widgets as tools and experiments. However, it took me hours of programming to adapt BlogEngine to Azure and there was a lot more that I wanted to do. This all took away from any time spent on the widgets and experiments themselves. My new strategy is to let Google/Blogger worry about the blogging side. I'm confident that their available customizations and APIs will be sufficient to let me integrate my stuff.
Why did I choose Blogger instead of WordPress or TypePad or <insert your favorite here>? First, because hosting is free even when using a custom domain name. Second, because Google allows monetization by placing ads on my site if I ever choose to do so (not yet). Third, because it's simple and straightforward. Sure, it doesn't have all of the features that some other platforms carry, but it has all of the features that I need. By avoiding unnecessary bells and whistles I gain ease of use.
Shortly I'll write about my experience in transferring my existing posts from BlogEngine to here.