Should we blame the pilot for the Buffalo crash, or bureaucrats and trial lawyers?

The latest statement from the NTSB on the Buffalo crash suggests the pilot simply let the airplane get too slow. He then reacted badly and stalled the airplane. His reaction was seemingly inexplicable for a professional pilot, as one of the first things you develop during primary training is the muscle memory to break a stall with forward stick. I don't buy the notion proposed by some that the pilot mistakenly thought he was dealing with an icing-induced tailplane stall. Even though it's true that pulling back is appropriate for a tailplane stall (as opposed to the typical main wing stall), absolutely nothing else the pilot did was appropriate for a tail stall. Everything in the NTSB statement suggests the cockpit simply devolved into complete chaos. While they mercifully keep this out of the public eye, the NTSB investigators have suffered through listening to the cockpit voice recorder, and reading between the lines of the NTSB statement I think it's fairly clear the pilot simply panicked.

However, I’m not passing judgment. I’m sure I’d do no better, even with all the training in the world. My reason for writing this is to say something about the system, not the pilot. When I was doing my instrument training, my biggest takeaway from the whole thing was that it was utter bullshit for people’s lives to depend on a pilot doing it correctly when the chips are down. It’s normally easy to fly on instruments, but it gets surprisingly difficult quickly when things go wrong, especially with the disorientation that occurs at night. I really think too much is expected of pilots flying hard IFR in any airplane that’s not fully automated. Yes, it’s manageable, but not with the kind of margin you’d like to see when lives are at stake, and especially with relatively inexperienced pilots. One’s ability to react correctly is shockingly bad when terrified, and I’m guessing the slow speed situation caught the pilots off guard and scared the shit out of them.

However, don’t we live in a world where automation and control technology has advanced to the point where the space shuttle can deorbit and land itself? Where affordable cars have anti-skid brake systems whose computational power rivals a jet fighter’s? Why, then, are we paying to fly in airplanes without something as relatively simple as autothrottles? The big jets have them, but they are not economical to have on most commuter aircraft. Why is this considered remotely acceptable? Why, in this day and age, is “drop out of the sky and kill us all” included in the set of possible control inputs on any commercial aircraft on which our loved ones are flying?

With the exception of the Hudson ditching, every major aviation accident that’s happened in the past ten years in the US could have been avoided with relatively straightforward control software, including 9/11. (Terrorism aside, why the heck should the airplane’s software allow the pilot to fly into something?) If you don’t believe me, cite one and I’ll explain how simple software would’ve avoided it.

Given that this is all utterly doable, why isn’t it done? The answer is that because of regulations and liability, you can’t add a bloody toaster to a commercial aircraft for less than $100,000 per plane. The idea of regulation and liability is that it should keep us safe, but at this point it is doing the exact opposite. The Dash 8 that killed those people would’ve had autothrottles installed had they not been so expensive, and that expense is caused largely by the regulators who make certification so onerous, and the trial lawyers who make liability insurance prohibitively expensive. The Dash 8 is exactly the kind of airplane that needs them, but it won’t get them until we find a way to regulate aviation without completely stifling technological advancement at the same time.

Because of this unintended stifling effect, we could get rid of the FAA and safety would probably increase. I know trusting souls out there gasp at this idea, but the FAA is essentially an extension of the airline industry, anyway, so what we have now is basically self-regulation with all the advantages of government efficiency. They don’t have the guts to do anything drastic, and almost never have the brains to not do harm. They’ll focus on the small stuff (remember the Boeing 727 wiring inspections a while back?) and completely drop the ball on the big problems. When the FAA found out that the Boeing 737 had a major issue whereby the rudder would hard-over on its own, a problem that occurred over 100 times and caused two major fatal crashes, did they ground the fleet? Nope. That would’ve been too financially disruptive. They simply told the airlines to fly the planes a bit faster so that pilots could have a better chance at recovery when it happened.

Sometimes the only difference between anarchy and government regulation is paperwork. Recognizing this is helpful. Once you lose the blind faith, you realize that your safety is in your hands. You can make decisions to limit your risk. One good one is to avoid commuter flights in bad weather.

E*TRADE to liquidate all proprietary mutual funds this week to raise capital

E*TRADE just sent a letter out to all mutual fund holders to the effect that they will be liquidating their entire family of index mutual funds this week. All funds will be cashed out by Friday:

After long and serious consideration, E*TRADE Securities has made the decision to discontinue our family of proprietary index mutual funds.

As a result, the E*TRADE S&P 500 (ETSPX), Russell 2000 (ETRUX), Technology (ETTIX), and International (ETINX) Index Funds will be liquidated on a date no later than March 27, 2009 (the “Liquidation Date”).

Of course, even though we are discontinuing these funds, as an E*TRADE customer, you have access to over 7,000 funds to help you find the right alternative.

Here are a few important points to keep in mind:

Effective as of the close of business on February 23, 2009, no purchases of the funds may be made and any applicable redemption fees or account fees charged by the funds will be waived.

If you do not redeem your shares yourself, your shares will be automatically converted to cash equal to their net asset value on the Liquidation Date. You will receive proceeds equal to the net asset value of the shares you held on the Liquidation Date after provision for all charges, expenses, and liabilities of the fund.

The redemption is treated as a taxable transaction, and you will have to pay taxes on the proceeds of the liquidation, even if your shares are automatically redeemed on the Liquidation Date.

Please be assured that this decision has nothing at all to do with the financial health of E*TRADE FINANCIAL, which has been, and continues to be, very well capitalized by every applicable regulatory standard.

I especially like the last sentence. Only a financial industry CEO could lie so effortlessly. If they are so well capitalized, why are they applying for TARP funds? Why are they liquidating their mutual funds out from under their customers, instead of just selling the business? I suspect they need the cash to cover withdrawals. There may be a run on E*TRADE going on.

Fortunately, E*TRADE's funds don't have a lot of money under management. Only about half a billion dollars' worth of stocks will be unloaded on the market this week, by my quick estimate. However, this might be a harbinger of ill things to come, if other financial institutions start to see liquidating their proprietary mutual funds as a way to raise capital.

On the bright side, at least nobody will be forced to take a short term capital gain…

Accelerating code using GCC’s prefetch extension

I recently started playing with GCC's prefetch builtin, which allows the programmer to explicitly tell the processor to load given memory locations into cache. You can optionally inform the compiler of the locality of the data (i.e. how much priority the CPU should give to keeping that piece of data around for later use) as well as whether or not the memory location will be written to. Remarkably, the extension is very straightforward to use (if not to use correctly) and simply requires calling the __builtin_prefetch function with a pointer to the memory location to be loaded.
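For reference, the builtin takes up to two optional arguments after the address: a read/write flag and a locality hint from 0 to 3. The calls look something like this (addr is just a stand-in for whatever pointer you're about to touch):

__builtin_prefetch(addr);        /* read, default locality of 3: keep it in all cache levels */
__builtin_prefetch(addr, 1);     /* the location will be written to */
__builtin_prefetch(addr, 0, 0);  /* read once, no temporal locality; no need to keep it cached afterward */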

It turns out that in certain situations, tremendous speed-ups of several factors can be obtained with this facility. In fact, I'm amazed that I haven't read more about this. In particular, when memory is being loaded "out of sequence" in a memory bandwidth-constrained loop, you can often benefit a great deal from explicit prefetch instructions. For example, I am currently working on a program which has two inner loops in sequence. First, an array is traversed one way, and then it is traversed in reverse. The details of why this is done aren't important (it's an optical transfer matrix computation, if you're interested) but the salient aspect of the code is that the computation at each iteration is not that great, and so memory bandwidth is the main issue. Here is the relevant section of code where the arrays are accessed in reverse:

/*
 * Step backward through structure, calculating reverse matrices.
 */
for (dx = n-1; dx > 0; dx--)
{
    Trev1[dx]  = Trev1[dx+1]*Tlay1[dx] + Trev2[dx+1]*conj(Tlay2[dx]);
    Trev2[dx]  = Trev1[dx+1]*Tlay2[dx] + Trev2[dx+1]*conj(Tlay1[dx]);
    dTrev1[dx] = dTrev1[dx+1]*Tlay1[dx] + dTrev2[dx+1]*conj(Tlay2[dx]) +
                 Trev1[dx+1]*dTlay1[dx] + Trev2[dx+1]*conj(dTlay2[dx]);
    dTrev2[dx] = dTrev1[dx+1]*Tlay2[dx] + dTrev2[dx+1]*conj(Tlay1[dx]) +
                 Trev1[dx+1]*dTlay2[dx] + Trev2[dx+1]*conj(dTlay1[dx]);
}

Despite having exactly the same number of operations in the forward and reverse loops, it turns out that the vast majority of the time was being spent in this second (reverse) loop!

Why? Well, I can't be entirely certain, but I assume that when memory is accessed, the chip loads not just the single floating point double being requested, but the entire cache line containing that address. Thus, the data for the next couple of iterations is already loaded into L1 cache ahead of time when you're iterating forward in address space. However, in the reverse loop, the chip isn't smart enough to notice that I'm going backwards (nor should it be), and so it has to wait for the data to come from either L2 or main memory every single iteration. By adding a few simple prefetch statements to the second loop, however, the time spent in this section of code went way down. Here is the new code for the second loop:

/*
 * Step backward through structure, calculating reverse matrices.
 */
for (dx = n-1; dx > 0; dx--)
{
    Trev1[dx]  = Trev1[dx+1]*Tlay1[dx] + Trev2[dx+1]*conj(Tlay2[dx]);
    Trev2[dx]  = Trev1[dx+1]*Tlay2[dx] + Trev2[dx+1]*conj(Tlay1[dx]);

    __builtin_prefetch(Trev1+dx-1, 1);
    __builtin_prefetch(Trev2+dx-1, 1);
    __builtin_prefetch(Tlay1+dx-1);
    __builtin_prefetch(Tlay2+dx-1);

    dTrev1[dx] = dTrev1[dx+1]*Tlay1[dx] + dTrev2[dx+1]*conj(Tlay2[dx]) +
                 Trev1[dx+1]*dTlay1[dx] + Trev2[dx+1]*conj(dTlay2[dx]);
    dTrev2[dx] = dTrev1[dx+1]*Tlay2[dx] + dTrev2[dx+1]*conj(Tlay1[dx]) +
                 Trev1[dx+1]*dTlay2[dx] + Trev2[dx+1]*conj(dTlay1[dx]);
}

The prefetch instructions tell the processor to request the next iteration's data, so that the data is making its way through the memory bus while the current computation is being done in parallel. In this case, this section of code ran over three times as fast with the prefetch instructions! About the easiest optimization you'll ever make. (The second argument given to the prefetch instruction indicates that the memory in question will be written to.)

When playing around with prefetch, you just have to experiment with how much to fetch and how far in advance you need to issue the fetch. Too far in advance and you increase overhead and run the risk of having the data drop out of cache before you need it (L1 cache is very small). Too late and the data won’t have arrived on the bus by the time you need it.
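One thing that makes the experimentation easier is turning the fetch distance into a named constant, so trying a new value is a one-line change. Here's a rough sketch of what I mean (PF_DIST is just a placeholder name, and the arithmetic elided below is the same as in the loop above):

#define PF_DIST 2   /* iterations ahead to fetch; try 1, 2, 4, ... and time each */

for (dx = n-1; dx > 0; dx--)
{
    if (dx >= PF_DIST)
    {
        __builtin_prefetch(Trev1+dx-PF_DIST, 1);
        __builtin_prefetch(Trev2+dx-PF_DIST, 1);
        __builtin_prefetch(Tlay1+dx-PF_DIST);
        __builtin_prefetch(Tlay2+dx-PF_DIST);
    }
    /* ... same matrix computations as above ... */
}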

Why did I not prefetch the dTrev1 and dTrev2 memory locations? Well, I tried and it didn't help. I really have no idea why. Maybe I exceeded the memory bandwidth, and so there was no point in loading it in. I then tried loading it in even earlier (two iterations ahead) and that didn't help. Perhaps in that case the cache got overloaded. Who knows? Cache optimization is a black art. But when it works, the payoff can be significant. It's a technique that is worth exploring whenever you are accessing memory in a loop, especially out of order.

The other other shoe to drop: big government in-the-loop?

This depression is historic in many ways, but one way that doesn’t get talked about a lot is that it’s the first time America has had a credit-based downturn with a government that is a major existing factor in the economy. Before the Great Depression, in the 1920s, government spending at all levels was around 15% of personal income. Today, it’s close to 50%. Sure, government spending shot up under FDR, but the point is it shot up from virtually nothing. What happens when you hit a depression when government spending is already over one third of GDP before the depression?

Tax receipts projections.

So, we have this situation where there is a huge economic entity that is about to see its income drop precipitously, as tax revenues fall off a cliff. Whether by spending cuts or inflationary printing, future real government contributions to the economy are going to have to decline. It's easy to forget, but the government doesn't actually make anything. Whatever money it spends is taken from the present, borrowed from the future, or conjured up by inflating the currency, and either way it is a hole that has to be filled sometime. Because of the long delay between a downturn and its effect on government spending, this will take a while to play out, and it's a huge overhanging issue that has yet to completely hit the proverbial fan.

Here’s the thing that scares me about all of this: one of the tenets of control theory is that having a strong feedback loop with a large delay in any system is a good recipe for instability, and yet that’s exactly what we have when the government becomes such a large factor in the economy. Of course, an economy is not a model system, and it could never experience runaway instability like a linear circuit could. But if there are forces pushing it towards instability, I’d argue the result may not be exponentially growing oscillations, but it won’t be good. Humans don’t like unstable systems, especially when their money is involved, and rather than suffer oscillations, I suspect the economy would just fall and stay down.
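To make the control-theory point concrete, here is a toy simulation I threw together (it has nothing to do with any actual economic model; the gain and delay values are arbitrary and chosen purely for illustration). The same feedback gain that damps a disturbance smoothly when the correction is applied immediately produces growing oscillations once the correction acts on stale information:

#include <stdio.h>

#define STEPS 40

static void simulate(double gain, int delay)
{
    double x[STEPS];
    int t;

    for (t = 0; t <= delay; t++)
        x[t] = 1.0;                              /* initial disturbance */
    for (t = delay; t < STEPS - 1; t++)
        x[t + 1] = x[t] - gain * x[t - delay];   /* delayed corrective feedback */

    printf("gain=%.1f delay=%d:", gain, delay);
    for (t = 0; t < STEPS; t += 4)
        printf(" % .2f", x[t]);
    printf("\n");
}

int main(void)
{
    simulate(0.6, 0);   /* correction applied immediately: the disturbance dies away */
    simulate(0.6, 4);   /* same gain acting on delayed data: the oscillations grow   */
    return 0;
}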

Why hasn’t this happened before? Well, one likely possibility is that I’m completely out of my mind to be applying principles of control theory to the national economy. But another is that we’ve simply never tied this big of a feedback loop around our economy before. The last time we had a credit dislocation, perhaps government was small enough that these oscillations were damped. But the gain is way up, now, and I’m a little nervous to see what happens now that the system has been given a big kick.

Classic Atlantic article on the diamond scam

One of the more useful things to be aware of as an American is the surprising ruthlessness of Madison Avenue’s manipulation. Nowhere is that more evident than in a classic Atlantic story from 1982 exposing how the public was fooled into thinking diamond rings are an integral part of marriage custom. I’d read it a while back, and had forgotten how good a read it is. The most surprising detail is that the “custom” of giving a woman a diamond engagement ring was completely contrived shortly after WWII by a Manhattan advertising agency.

The agency had organized, in 1946, a weekly service called “Hollywood Personalities,” which provided 125 leading newspapers with descriptions of the diamonds worn by movie stars. And it continued its efforts to encourage news coverage of celebrities displaying diamond rings as symbols of romantic involvement. In 1947, the agency commissioned a series of portraits of “engaged socialites.” The idea was to create prestigious “role models” for the poorer middle-class wage-earners. The advertising agency explained, in its 1948 strategy paper, “We spread the word of diamonds worn by stars of screen and stage, by wives and daughters of political leaders, by any woman who can make the grocer’s wife and the mechanic’s sweetheart say ‘I wish I had what she has.'”

The piece also explains the great lengths to which De Beers went to ensure that diamonds, actually a relatively common rock, are kept in artificially short supply to create the illusion of rarity. Furthermore, they control the entire supply chain, keeping wholesale prices much lower than retail (the markup on diamonds is ridiculous, at least 100%) so that it’s impossible for the public to unload their diamonds on the market.

The most interesting part of this piece is the notion that people in 1946 were capable of this kind of cynical manipulation, because it removes one of the most bitter aspects of our current moral degeneracy: the idea that we've somehow fallen from a great height. It's always a relief to find out the fall wasn't that far.

Investment gains may become harder to find, long or short.

I had an interesting discussion with some folks last night. The question was whether it is possible for all investments to go down in the short term if things get bad enough. One conclusion was that it’s a harder question to answer than you might think. Do you consider perceived value, or just market price? If you buy a farm, and the market collapses for real estate, that farm might still be the best thing you’ve ever bought (high value), even if the price plummets. So, while the general question is interesting philosophically, it quickly unravels into a debate on definitions, so I’ll just limit the discussion to what people normally think of as investments: things you can hold in a brokerage account.

My theory is that it is certainly possible for every conceivable investment to lose value, such that you lose money no matter whether you're long or short. Just consider the absurd case where everybody becomes a clinically depressed agoraphobic, sitting at home wasting away. Clearly, financial markets will freeze, and you'll find out that your assumptions on value were predicated on Wall Street showing up to work, on the computers which record your trades running, and on people holding out enough hope in the future to bother trading anything but cigarettes. Every asset, no matter what, has some finite counterparty risk. You may be right about everything, but the universe doesn't owe you a bid. There probably weren't a lot of good places to put your retirement funds during the declining Roman Empire, for example.

Granted, total collapse of financial market functioning is a rather extreme, and seemingly academic, case to consider. But as I thought about it a bit more after the discussion, I realized that this isn't academic at all. Between fully liquid bull markets, where everybody makes money (on paper), and the macabre situation I posited above lies a continuum of completely plausible scenarios where it gets harder and harder to make money in any asset. In fact, this is already happening right now.

Bid-ask spreads on options have been widening in the past few months, which makes it harder to hedge in either direction as liquidity dries up. While derivative markets are zero sum if you average out to expiration, in the short term both parties can show losses on paper due to wide spreads (the long marks to the bid while the short marks to the ask, so a wide enough spread puts both sides underwater at once), and if you can't close your short option position, you are forced to tie up cash as collateral, which could cause you to lose money. Thus, in a way, both parties to an option contract can lose out if liquidity dries up.

Certain stocks are becoming impossible to short (nobody is willing to loan out any more shares). Others (such as Sears) are starting to require short holders to pay interest. It’s quite likely for somebody to go long Sears, somebody else to go short at the same time, and for both people to lose money.

Is this discussion of any practical value? I think so: if the market continues to deteriorate, even those that correctly predict it will have trouble making money from it. For example, it will become increasingly difficult to make money in inverse ETFs, no matter how brilliantly you predict the underlying stock market trends. The counterparties to the derivatives owned by the ETF will become so averse to risk that they will insist upon prices which are less and less favorable for the ETF. This will manifest as extreme slippage in the ETF relative to the index it inversely tracks. Again, this is already happening. Consider the following plot of SKF versus the Dow Financials Index, which SKF is supposed to track inversely at twice the index's daily move:

SKF (green) versus Dow Financials (black): How to lose money both long and short.

The underlying index went down about 25%, but so did the ETF (there were no distributions from SKF in this time frame). Everybody lost money, long or short! Some slippage is inevitable as a "cost" of leverage and shorting; with made-up numbers, if the index drops 10% one day and recovers 11% the next, it ends up roughly flat, while a daily-rebalanced 2x inverse fund gains 20% and then gives back 22%, ending down about 6% even though the index went nowhere. But my point is that the slippage is getting worse than that baseline. Six months ago SKF was tracking much closer to its target. It might be useful to consider an index of inverse ETF slippage as an indicator of the health of the financial markets, or at the very least an index of how crazy you'd have to be to remain in the market. So, the ETF slippage, the option spreads, the tight short supply: they might all be subtle hints from the market that the market is no place to be right now, long or short.