Wednesday, June 4, 2008

AEA 2009

Just learned that my proposed session for the 2009 AEA/ASSA meetings wasn't accepted. Bummer. The article said only 15% of all proposed sessions get accepted, though. As an old friend used to say, "the unconditional probability of getting anything published is zero" - or in this case, 15%. I thought we had a better-than-normal chance, though, because the topic was timely given all the news coverage it has gotten in the last six months, and I filled the chair with a famous economist who was perfect for the session. But in the end, I wonder if it hurt us that the papers were all written by assistant professors. I don't know if that actually matters, but I wonder. Or maybe the paper abstracts just weren't doing it for the reviewers. I'll try again another time, but as I'm already moving beyond this research topic myself, I don't know if I really have it in me to keep it going. I also wonder if it's totally stupid to be "moving beyond this research topic" when I haven't even published the papers yet. So much of this academic business is alien to me.

In all seriousness, I could spend my life studying this very issue if the data problems were resolved. I often feel that with applied microeconomic work, unless you can get your hands on very special datasets, you just can't make any real headway. I recently purchased an almost $20,000 dataset, but even with it, I worry whether I'll be able to match it up with something else that will be meaningful or valuable. The problem in economics - and here I'm just complaining - is that our data are all "observational." That is, we don't run trials or experiments - most of the time, we take the real world as given and then improvise various empirical strategies to test various theories. Honestly, I'm still at a place where even being able to say I've shown A causes B is important to me.
From the outside, to the noneconomist, it must look incredibly irrelevant to spend so much time and energy being conclusive about causality, but among my peers, that's the necessary condition for all good work. Of course, it's just a necessary condition - not a sufficient one. As a friend and I were saying the other day, there's also "generalizability" and "the value of the question." To have "generalizability" (meaning: can I extrapolate these results from the sample and apply them to other phenomena, or are they so narrow as to be really non-useful?), "an interesting question," and a valid empirical strategy showing "causality"? Well, that's the trifecta, and you'd be surprised how hard it is to get all three in the same paper. Except for one paper I currently have, all my work has had one or two of those qualities, but rarely all three. And even with the new stuff I'm working on, I sense how vulnerable I am on one or two of these dimensions.

This is what makes empirical work really challenging. Honestly, empirical work is, to me, the more important work. Theorists would no doubt disagree, and I'm not saying theory isn't important at all. But in the end, a theory that cannot be tested, or that will never be tested, is simply a hypothesis in my mind. Maybe I've swung so far that I'm a "logical positivist," but I suppose because I work at the intersection of culture, social behavior, and economics, I am more squeamish, because the field outside of economics is simply saturated with "theories" (if you can even call them that) that people treat as truth even though they've never really tried to test them empirically. That is especially the case in Christian circles, and in my denominational circles, where the last 50 years have brought greater interaction with culture, but through a kind of "narrative-exegetical" approach to studying society rather than a social-scientific one.

Anyway, that's neither here nor there. Summer's long and I've got to get work started. No more blog posts today.
