Wednesday, December 28, 2011

I live for drugs... it's great

This fall semester, I was fortunate enough to take a Law and Econ class from the puissant Alex Tabarrok. Instead of writing a term paper on the partial effects of slinging empty oil cans at the heads of drag racers, I elected to investigate some of the myths of... wait for it... the DRUGS!

If you're of a certain age, a supercilious sneer ought to set up base camp at the corners of your lips, preparing a push to the summit once the weather clears a little bit. Drugs are bad, right? Drugs ruin lives, turn users into abusers into total trainwrecks, right? This is your celebrity; this is your celebrity on meth... not even once. DARE to keep your Gummi Bears off dat juice.

Or whatever.

Point is, there's a lot of propaganda surrounding drug abuse, and the (occasionally explicit, often implicit) claim that drugs lead to a life in tatters, implying causality, is sort of at odds with what I would consider common sense. Indeed, it seems just as reasonable to me that someone who is having a rough go of it would turn to self-medication as that drugs veer an otherwise decent life off-track. Ordinary statistical treatments have a tough time getting at actual causality. Ordinary Least Squares regressions can show whether or not two factors move together, but they are mostly silent when it comes to saying what causes what. That's what theory is for.

Or a clever two-stage regression.

What now? What's all this? Well, look, if we can make the claim that a proxy for individual drug use predicts some sort of individual adversity and we can't plausibly make the claim that the adversity can affect the proxy, then we've got a good case for actual causality.

That's sort of confusing, so let me be a little more specific. I've got some data on positive urinalysis results. I've also got crime data from the FBI and some community drug use stats from the CDC. Basically, what I've done is use the crime data of the individual's hometown (specifically, the hometown of record at the time of enlistment) to predict the probability of drug use by the individual. Nice. So, if the property crime rate of Hooksett, New Hampshire is 7.4 per 100,000, then we can say that people (Soldiers) from this charming burg have, oh, a 0.43% chance of pissing hot for weed (numbers simulated, natch). Take this number and slap it into an equation that tries to predict some adverse outcome, like divorce or AWOL or non-availability or whatever, and hey-presto, we should be able to show actual causality, right?
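
For the curious, the mechanics look something like the sketch below: simulated data, made-up variable names, and statsmodels' plain OLS standing in for a proper IV estimator (a real 2SLS routine would also correct the second-stage standard errors). This is an illustration of the two-stage idea, not the actual term-paper code.

```python
# Toy sketch of the two-stage setup described above. Simulated data and
# invented variable names; not the real analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Instrument: property crime rate of the Soldier's hometown of record.
crime_rate = rng.gamma(shape=2.0, scale=3.0, size=n)

# Unobserved "rough life" factor that drives both drug use and adversity --
# the confounder that makes plain OLS suspect.
rough_life = rng.normal(size=n)

# Endogenous regressor: positive urinalysis, more likely with both.
p_use = 1 / (1 + np.exp(-(-3.0 + 0.15 * crime_rate + 0.8 * rough_life)))
drug_use = rng.binomial(1, p_use)

# Outcome: some adverse event (divorce, AWOL, non-availability, whatever).
adversity = 0.5 * drug_use + 0.7 * rough_life + rng.normal(size=n)

# First stage: predict drug use from the hometown crime rate.
X1 = sm.add_constant(crime_rate)
first = sm.OLS(drug_use, X1).fit()
predicted_use = first.predict(X1)

# Second stage: regress adversity on the *predicted* drug use.
X2 = sm.add_constant(predicted_use)
second = sm.OLS(adversity, X2).fit()

print(first.summary())   # instrument relevance (first-stage fit)
print(second.summary())  # the coefficient the paper was after
```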

Well, maybe. Of course, the 2-stage regression didn't work. I got the first stage numbers to fit pretty well. Indeed, I think I can comfortably claim that growing up in a high-crime community probably predisposes folks to use drugs and to a rather strong degree. However, while I was able to show that drugs and adversity correlate, I was unable to show the two-stage relationship.

Now, this is consistent with my claims, but I've still got a bunch of nagging problems. Yeah, my instrument passed the first condition (first-stage correlation) with flying colors, but it clearly fails on the second condition. All this tells us is that it's a bad instrument. To put it delicately, BFD. I was looking for a bad instrument, and by golly, I found it, but (and this is crucial) that does not imply that it's a bad instrument for the reasons I claim. At best, it's rather weak tea. In short, I found no evidence to support the claim that drugs unidirectionally lead to individual adversity. Not exactly an overwhelming claim.

Still, better than nothing, I reckon. Maybe I can buff this turd out and try to get it published. At best, it can maybe knock a little wind out of the abolitionist argument. Fat chance on that though.

Sunday, December 18, 2011

Capital as Language

These will just be some random musings.

Capital is, of course, anything that is a means to satisfying some ultimate goal.

I have been reading a book on linguistics, because I am becoming increasingly curious about the potential relationship between the study of language and the study of the economy. Language is, of course, unpredictable in its future course, just as new capital combinations are by their essence unpredictable (because to envision a future, profitable combination of capital is to bring it about; that doesn't mean that one cannot imagine new innovations or, for that matter, the direction of language).

I'm also becoming convinced that the term 'labor' is vacuous. To work on something is necessarily to apply higher-order goods to solving an economic problem. Labor without any previous knowledge or experience in the world would be useless - everything that is in labor is the accumulation of a lifetime of experience, and is a specific type of capital. The only question that hangs out there regards time, but that is not something specific to 'labor.'

To start, here is my reasoning: the use of tools is universal among humans, just as language use is. It also has no meaning beyond that subjectively given to it. The capital structure, just like language, is built up from ever more complex combinations of simple factors, which can then be combined with other simple and complex entities to form even higher-level entities. The complexity only arises when the nature of what must be communicated (produced) calls for it - complex, large combinations of words (capital) don't necessarily crowd out simpler ones, but the large ones depend on the small ones... The problem with much development policy seems to be putting the cart before the horse, introducing complex physical elements into a populace whose experience does not match...

Edward Sapir, whose "Language: An Introduction to the Study of Speech" I'm reading, breaks down a 'word' thus: in 'unknowingly', the root word is "know"; un, ing, and ly are adjuncts, thus (b) + A + (c) + (d), where the parentheses mark an element of a word that has no fundamental meaning on its own. (Of course, fundamental meaning is determined only in context; in the case of a discussion of language, all of the adjunct elements of speech become independent elements on their own.) One could do a break-down and map the elements and relations in a short story, but that would be a lot of work.
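
Just to make that notation concrete to myself, here's a toy way of writing it down (my own illustration, not Sapir's; the names are invented):

```python
# Toy representation of Sapir's breakdown of "unknowingly": one free radical
# element plus bound adjuncts that mean nothing on their own. The structure
# and names here are my own illustration, not Sapir's notation.
from dataclasses import dataclass

@dataclass
class Element:
    form: str
    independent: bool  # True for the radical ("A"), False for "(b)", "(c)", "(d)"

unknowingly = [
    Element("un", False),    # (b)
    Element("know", True),   # A -- the radical element
    Element("ing", False),   # (c)
    Element("ly", False),    # (d)
]

word = "".join(e.form for e in unknowingly)          # "unknowingly"
radical = next(e for e in unknowingly if e.independent)
print(word, "| radical:", radical.form)
```

Each bound adjunct is a bit like a complementary capital good: it produces nothing on its own, but combined with the radical in the right order it yields the whole word.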

The 'radical' (or fundamental) element is the simplest symbol that corresponds to anything we would recognize, yet on its own it cannot convey a thought (outside of a context suggesting another element). For example, if I were to come up to you and just say, "know," you would have no clue what I mean. Any attempt to communicate a thought requires three things: two fundamental elements and a specific relation between them. Thus, the farmer kills the duckling.

The same seems to be true for any capital good as well - a single capital good has no value, until it is combined (in some specific way) with a second piece of capital, usually a skilled person or people. It is the combination and the relation that is important: each on its own does not make a production process.

Different words mean different things at different times, which is true for capital as well. Words are also never single-specific: if you can't remember a word, you can usually come up with some close substitute, or approximate the word using other words. The same is true for capital goods: a hammer might be the most useful tool in the situation you're in, but a rock will substitute for whatever you need to hit. Words, like capital goods, carry different meanings as time advances, some becoming obsolete and some being invented, and others still being plucked from obscurity into a surprising role.

The structure of a language (all of the elements and their relations) is also analogous to the structure of production. Simple language elements, like words, must be understood in order to construct a sentence, a more complex thing. Whether an epic poem, a short story, a novel, or an academic paper, the more and more complex elements that make up a language rest on the strength of the simpler objects. If the simple objects are ambiguous or ineffective at drawing ideas out of the reader, the whole novel will surely be worse. Business organizations involve as many elements and relations as a novel. Picturing an organization by its net worth captures the whole organization about as well as a page count captures the contents of a novel (organizational economists, of course, know this well, as do management/economic sociology scholars)...

The main divergence between capital and language, I think, is scarcity, which necessarily binds capital but which, as far as I can tell, has no meaningful counterpart in language.

My primary interest in this is wondering whether the methods of linguistics can be applied to the study of the economy. The more belief-ideologically based our conception of the economy is, the more relevant my work is, so I am, of course, simply pursuing my self-interest in this case...

Monday, August 1, 2011

The economics of the Fallout universe


I'm a bit of a latecomer to the Fallout franchise. Perhaps a lingering streak of Luddism (or student-enforced poverty) kept me from opening a Steam account, or perhaps I'm just slow on the uptake. Either way, once I discovered these games on the urgings of friends, I was immediately hooked. I've found that the standard game reviews usually do the design team justice: the games (and I refer primarily to Fallout 3 and Fallout: New Vegas) are large, immersive, and well-balanced, suitable for novice as well as experienced players, with elements that appeal to the classic traditions in FPS and RPG gaming. Despite a host of often annoying glitches, gameplay is flexible enough to accommodate a fairly wide range of styles, and the faction system employed in New Vegas ensures that repeated playthroughs need never be mere rehashes of previously beaten games.

But like in any sprawling fictitious universe, game designers are obliged to mimic a functioning economy. The extent to which this effort proves convincing showcases a strange tension between what is plausible and what is, for lack of a better term, fun. Writers must optimize along a much different objective function than actual merchants (or central bankers, or currency arbitrageurs, or what-have-you), and it shows. Similarly, players' derived utility function has wide margins along the killing-the-naughty-faction frontier, but not much along the trade-with-isolated-population-centers region. Let me explain by way of describing typical mid-game play.

The setting of the games is a retro, 50s-style post-nuclear wasteland. Living as I do in Northern Virginia, Fallout 3 appeals to my sense of familiarity, set as it is in the Washington D.C. area. Many of the place names are ones I recognize (or are close derivatives thereof), and the Metro stations are lovingly rendered to more or less resemble the real thing. Having worked for a season in Death Valley, I find the stylized Southwest of New Vegas more or less congruent with my memories of the place. Consistent with the finer traditions of post-apocalyptic literature and film, roving bands of raiders patrol the wasteland, indulging in cannibalism, slavery, and mayhem. In addition to these Mad Max-esque raider enemies, there are two flavors of modified humans: the ghouls, humans transformed by extended exposure to radiation; and Super Mutants, humans exposed to a virus that has an effect not especially dissimilar to that of gamma radiation on mild-mannered physicists bearing the Banner surname. Various mutated flora and fauna abound, some tame, some hostile. The wasteland itself is rendered to appear as bleak and hopeless as one might hope: the color palette is dark and muted, nothing shines (and when the few noteworthy exceptions are found, the effect is startling), and vegetation is generally sparse. Indeed, the player finds, immediately following the brief opening sequences, that the wasteland is unforgivably harsh. Oddly, however, this seems to pose no significant problem to the main character.

Curious indeed that the player's character can find copious food, drink, ammunition, piles upon piles of working pre-war technology, weapons, armor, and components from which to fashion makeshift gear that rivals the best technology available before the bombs fell. How is this possible? Now, before I continue, let me briefly explain an economic theory often (mis?)characterized as "there is no such thing as a 20 dollar bill lying in the middle of the street." This points to a key tenet of microeconomic analysis: the no-arbitrage condition. Basically, if there's a way to make a free buck, someone's already done it. Prices reflect true scarcity, and where else would you find scarcity if not in a post-nuclear wasteland, especially 50 years or more on? There shouldn't be boxes of InstaMash, Sugar Bombs, Cram, and Salisbury Steak lying around, nor should there be stores of abandoned medication, surgical equipment, or arms and armor, not with hostile creatures roaming about. It doesn't take much imagination to conclude that some enterprising individual would have organized well-armed scouting parties to scour the wasteland for all these unguarded (or lightly guarded) troves and warehouse them somewhere secure. The only things one might reasonably expect to find in a well-populated wasteland like the one portrayed in the game are trash (spent needles, junk metal, scrap electronics, broken wood) and naturally grown comestibles: apples, wild barley, and the like. Some hardscrabble farming might be possible, but only on hard-to-reach escarpments or other highly defensible positions. The lavish abundance of cheaply obtained resources present in the games beggars belief.
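
To put the no-arbitrage idea in toy form (the scenario and every number here are invented, obviously): if buying cheap in one settlement and selling dear in another beats the cost of the haul, somebody makes the trip, and keeps making it until the gap closes.

```python
# No-arbitrage in toy form: if hauling Cram from where it's cheap to where
# it's dear beats the cost of the trip, the trade gets made until prices
# converge. All numbers invented; the settlement name is just for flavor.

def arbitrage_profit(buy_price, sell_price, haul_cost, qty):
    """Profit from buying qty units at buy_price, hauling them, and selling."""
    return qty * (sell_price - buy_price) - haul_cost

buy_price = 2    # caps per box of Cram at a lightly guarded cache near Springvale
sell_price = 9   # caps per box at a hungry settlement
haul_cost = 150  # caps-equivalent of guards, ammo, and rad meds for the trip
qty = 50

profit = arbitrage_profit(buy_price, sell_price, haul_cost, qty)
print(f"profit: {profit} caps")  # 200 caps of free money

# The no-arbitrage condition says this can't persist: either the cache gets
# stripped bare, the destination price falls as supply arrives, or haul costs
# (raiders, Deathclaws) rise to eat the margin.
```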

On the simple evidence that the player's character (a naive teenaged vault dweller in FO3, a feckless courier in FO:NV) is able to amass an enviable fortune while traveling alone (optional companions notwithstanding), it is clear that everyone else living in the environs is criminally lazy, bafflingly stupid, or inhumanly generous. The larger factions (mixing game factions here, the Brotherhood of Steel, the Enclave, the NCR, Caesar's Legion, the Great Khans, et al) would have pilfered every single tidbit of useful stuff within their grasp and stuffed it into some abandoned vault or safehouse or somewhere and posted well-armed guards at the entrance. So supplied, the faction in question would then face the ages-old economic question: is this stuff for sale or not?

This fundamental question isn't as facile as it seems. After all, each of us asks this question routinely, day after day. Take milk--milk is just some stuff you squeeze out of a cow. Yet some folks have less milk than they want, so they'll give up consumption of other stuff to get some milk. Other folks have a lot more milk than they want, and they'll give it up in order to consume stuff that isn't milk. That's the line that separates dairy farmers from everyone else. A more convincing wasteland economy would have fewer goods strewn about and more rough-and-ready market commerce, so long as prices accurately reflect relative scarcity and the currency is stable. Fallout treats these conditions only slightly better than most games of its ilk.

Prices. This is usually the big cock-up in games. Almost all games prevent direct player arbitrage, keeping players from buying low and selling high (there are occasional exceptions, and a few games feature exploitable production opportunities that can result in functionally infinite cash; these features are probably a result of the difficulty of anticipating player cleverness), though the real flaw in most games is a fixed-price scheme. Fallout allows players to purchase costly skills to narrow the bid-offer spread, but this is still somewhat suspect in a world where the price of ammunition is the same regardless of whether or not there is a pending war between heavily armed and armored antagonists. I can forgive a game for failing to program agricultural futures contracts, but to ignore price "gouging" seems an odd omission. Well, at least the writers are sensible enough to have decent monetary policy.

Currency. Ah, the joys of free banking. The currency of choice in the Fallout games is the humble bottlecap. Popped fresh from the top of a delicious Nuka-Cola or a refreshing Sunset Sarsaparilla, caps are the legal tender of choice in the wasteland. I suppose we're meant to take for granted that there isn't much in the way of seigniorage with respect to the gutted factories that produced the beverages of choice, and that counterfeiting is sufficiently difficult (indeed, New Vegas boasts a side quest that allows the player to shut down a counterfeiting operation) to prevent unwanted price inflation. The game, intentionally or otherwise, neatly sidesteps Gresham's Law by effectively declaring a competitive, non-specie fiat currency (for more on free banking, see The Theory of Monetary Institutions by Lawrence White). You can't clip or shave caps, so that's nice. Using caps ensures stable nominal prices, but the games fall flat when the situation demands that real prices should shift.

The world of Fallout appears to be saying something like: "here you are in a world of nuclear devastation, please enjoy the endless bounty that half a century of banditry was unable to capture. While you're here, please take advantage of a degree of price stability that a century of central banking was unable to approach. Mind the radiation." So sure, the relative scarcity is ludicrous, and the indifference principle is busted (this suggests that in this simplified world, the player should be roughly indifferent between purchasing goods from a vendor or scrounging from the wilds), but for all that, nothing in the trade system makes the game any less fun to play... even for an economist (this is, of course, making the rather wild claim that economists are capable of "having fun").

I do wonder though, how the enjoyment of the game would change if the economics were modeled better. Players would be obliged to become merchants or farmers (or tradesmen of some other stripe) as well as warriors, and would be required to defend hoarded booty against thieves. Prices would fluctuate, asset bubbles or speculative frenzies might erupt, and bond vigilantes would be nearly as dangerous as armed mercenaries. I don't imagine this sort of gameplay to be palatable for everyone, but if Obsidian makes an MMO version, I can imagine allowing players to pursue some sort of merchant or financier option.

So, why won't games more closely model a real economy? The chief reason is probably that it needlessly complicates gameplay. If all they did was to include fluctuations in the real price of ammunition or guns, players might resent the additional constraint or just feel cheated. More complicated financial innovations would probably detract from what is basically a souped-up shoot-em-up game. Moreover, even though modeling real economic shocks in a video game might help gamers get a better grasp on real-world economics, including appropriate expository dialog would be tricky for writers, complicated by the fact that younger players usually just flick through dialog blindly anyway. Still, I think there might be a good way to pass along good economic intuition inside a fun video game. Imagine including elements of the Russ Roberts/John Papola Keynes vs. Hayek videos in a setting where you have the Bloody Mess perk. I'd have to laugh my ass off if I saw such a thing.

Tuesday, April 26, 2011

The Alchian-Allen controversy

Down in the Bahamas, a discussion got started in the hotel room with Jesse and Kyle regarding an example used to illustrate the Alchian-Allen theorem in Walter Williams’s class. The Alchian-Allen theorem states that if the same flat cost is added to two goods, one low-quality and one high-quality, people will choose more of the high-quality good relative to the low-quality one. The flat cost added to each good lowers the number of low-quality goods that one must give up to get the high-quality good, lowering the relative cost of the high-quality good.
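
A quick back-of-the-envelope with made-up prices shows the mechanism:

```python
# Alchian-Allen with invented numbers: a flat add-on compresses the relative
# price of the nicer option.
movie = 30     # cheap date: tickets and popcorn
theater = 120  # nice date
sitter = 60    # flat cost, paid either way

print(theater / movie)                        # 4.0 -- a play costs four movies
print((theater + sitter) / (movie + sitter))  # 2.0 -- with a sitter, only two
```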

The example that Walter Williams gave in class was that one would expect couples with children to be more likely to go on a nice date to the theater than a cheap one to the movies, because the couples with children must pay a flat fee to a babysitter regardless of their choice of date.

The objection was raised in class (by Sam, I believe) that the babysitter should not enter into the decision-making calculus because, at the point where the couple makes the decision of which date to go on, the babysitter is a sunk cost. The couple with children, therefore, would be just as likely to choose the movie over the theater as the couple without children.

Walter Williams, then, was wrong, if the problem is framed as it is above. Williams framed it in such a way that he would be right – that there is a sorting effect due to the flat cost, which makes the couple who wants to go to the theater more likely to pay for a babysitter, with the end result being that you see more couples with children at the theater than at the movies. There’s also the possibility, which both Prof. Boettke and Charity-Joy suggested, that couples choose between a high-quality and a low-quality date before hiring the babysitter, and don’t change their plan once they make the choice – which saves Prof. Williams’s example as well.

That misses the main point of controversy though, which is this: if someone makes a plan to do one thing over an alternative because of a flat cost involved, do they change their plans if they pay the flat fee before making the choice between alternatives?

Option 1: Once the babysitter cost is paid, the couple with children will choose the same way they would have if they were childless. The A-A effect does not apply, except potentially in the selection effect.

Option 2: (my argument) At the point in time after the babysitter is paid but before the couple actually goes to either the theater or the movie, the couple still takes the flat cost into account, because enjoying either the movie or the theater on any future occasion requires that the flat cost be paid again. So the cost, after the babysitter is paid, of going to the theater is the price of the theater ticket plus the foregone movie, and the cost of going to the movie is the price of the tickets plus the foregone theater trip. The value of both the foregone theater trip and the foregone movie includes the price of the babysitter, because either choice in the future necessitates the hiring of a babysitter. Thus, even once the babysitter is paid this time, the movie and the theater still stand in the same relative prices as they did before the sitter was hired.
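
In the same made-up numbers as the back-of-the-envelope above, my argument amounts to this:

```python
# Option 2 illustrated with the same invented numbers: tonight's sitter is
# sunk, but the foregone alternative is an evening out that would have to be
# bought again later, sitter and all. So the relevant ratio is still the
# sitter-inclusive one, not the bare ticket ratio.
movie, theater, sitter = 30, 120, 60

future_movie = movie + sitter      # 90: what a movie night costs on any future date
future_theater = theater + sitter  # 180: ditto for the theater

print(future_theater / future_movie)  # 2.0 -- same compressed ratio as before the sitter was paid
```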

My intuition comes from the claim that on a trip to Maine, tourists are more likely to choose expensive lobster over cheap lobster, because in order to get expensive Maine lobster ever again one has to take another trip to Maine; so even though the current trip to Maine is a sunk cost, the A-A effect still applies.

I’m interested in getting this resolved so that I can stop thinking about it.

Thursday, January 13, 2011

The High Cost of Cheap Communication

15 years ago, I don’t think I would have imagined it possible to look someone in the eye as I chatted with them from 5,000 miles away. Not, that is, unless I paid substantially for the privilege. Today, a webcam-and-microphone combination is cheaper than dinner for two at Applebee’s and Skype software costs nothing but the time to install it. E-mail is ubiquitous, texting is close to free, Facebook is available on cheap telephones, and Twitter can seemingly show up on anything that runs on batteries. Rapid communication is shockingly inexpensive, with the predictable result that a whole hell of a lot more messages make it past the quality filter. The benefits of the communication revolution are readily apparent to anyone who has had to coordinate anything on the hoof (imagine how different the procedure for making lunch arrangements with friends is today compared with a decade and a half ago).

For Soldiers in the field, this means that instead of waiting weeks for APO deliveries (I’m not fond of the “snail mail” moniker), messages that take seconds to compose whisk their winsome way around the world in mere milliseconds, fresh as a daisy. This, combined with the negligible out-of-pocket expense, means that messages that were once far too trifling to send come pouring in at a rate of knots. The Soldier of yore might have expected to hear about Uncle Frank’s surgery or the fire out by the ol’ barn, but by no means did he help junior with his homework. Today’s combat-deployed Soldier has one boot in theater and, if not all of the other, then at least some of the laces still at home, to a degree that was unimaginable to the average infantryman of previous wars.

Under normal circumstances, most of us handle split attention fairly deftly. We can track careers, family, football scores, pop culture minutiae, fashion, art, or any of the other tiny stars of interest in our personal galaxies with relative ease. Under normal circumstances, we are not under enemy fire. Think of two competing goods: mission-relevant information and all other information. Like other resources, attention is finite, and plain ol’ microeconomics shows that if we make irrelevant information cheaper, people will consume more of it. Logorrhea from home is a tax on Soldiers’ attention and may contribute to a decline in readiness.
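
A toy version of the argument (numbers invented, obviously): attention is the fixed budget, each message from home eats a slice of it, and whatever is left over goes to the mission.

```python
# Toy attention budget: a fixed daily stock of attention, a fixed slice
# consumed per message from home, and the remainder left for the mission.
# All numbers invented.
attention = 100.0        # units of attention per day
attention_per_msg = 2.0  # units each message from home consumes

def mission_attention(msgs_from_home):
    return attention - attention_per_msg * msgs_from_home

print(mission_attention(2))   # APO-era trickle:   96.0 units left for the mission
print(mission_attention(25))  # texting-era flood: 50.0 units left
```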

Without taking a closer look, it’s impossible to say exactly where the margins are, so the details end up in the good ol’ “it’s an empirical question” pile, so beloved by classroom economists, but that’s fine. There’s also a big steaming policy question I wish I could just ignore. Part of the equilibrium solution is that we’d have to compensate Soldiers for giving up the luxury of staying in contact with family back home (assuming we don’t have Jonesian 0-MP Soldiers), which may or may not be substantial and could hurt morale. At any rate, I’d like to run those crazy regressions. I imagine the data is out there.

H/T Dave Gauntlett