Wednesday, July 11, 2012
ELS Nuts and Bolts
In addition to finally joining the blogging bandwagon, I am also joining the empirical legal scholarship bandwagon. I am working on my first empirical piece, a survey about public attitudes regarding certain alternatives to prison. I attended Martin and Epstein's Workshop on Conducting Empirical Legal Scholarship at USC in May, and after learning a lot of useful things, and after determining that the question I am interested in exploring has not already been covered and cannot be answered by mining existing data, I have utterly disregarded their advice not to do my own survey. (My lead RA is fantastic, with a background in survey design and statistics based on her experience in market research and work with non-profits prior to law school.)
Much has been said in these pages and elsewhere about the wisdom--or lack thereof--of more people, especially empirical novices, joining the ELS fray. I am interested in insights from those who have been through the process about things to watch out for. I am talking not just about the relatively obvious advice I've already violated (don't do it; team up with an expert; outsource the survey, etc.), but tips or warnings about practical things one wouldn't know to look out for without having been through the process. It might be about training or monitoring RAs or volunteers, time-saving tricks for coding, storing, transferring, or presenting data (I'll be using Stata), or some other similar topic I don't even know enough about to know to list.
Hopefully, sharing tips like this could help improve the work of--or at least make day-to-day life a little easier for--not only myself, but also other novices or even more seasoned ELSers. Thanks in advance.
Posted by Martin Pritikin on July 11, 2012 at 03:36 PM | Permalink
Martin, welcome to the fold of dabblers in ELS. I have done one large empirical project with Daniel Chen on whether subsidizing reproductive technology through insurance reduces the rate of child adoption, and have just finished (and sent out) another with Travis Coan on whether you can "buy" sperm donor identification by offering donors additional compensation. Both were co-authored with methodological experts, and I have two pieces of advice:
First, juniors or others who face time constraints should be wary of banking on empirical projects for tenure. The time needed to get them up and running, the quality of what you get, its significance and effect size, etc., are just too hard to predict. My advice would be to treat these as "side projects," unless ELS is your thing (like my colleague Daniel Chen), in which case you have no other choice.
Second, consider presenting your research design and materials to those whose scholarship assumes the opposite of the claim you are hoping to make, BEFORE you do any of the heavy lifting. You want to try to design a study that will convince this interlocutor, not one that they will find easy to dismiss on ecological validity or similar grounds. Obviously you will not be able to overcome all of their critiques or concerns, but it is much, much more helpful to hear what they are likely to say at the planning stage than to invest all that time and only hear it on the back end. In some instances, the fact that you won't be able to overcome some of these critiques may dissuade you from undertaking the research project to begin with, which, while sad, may be the better outcome in terms of how you spend your energy.
Posted by: I. Glenn Cohen | Jul 11, 2012 3:57:37 PM
A couple thoughts — first, be sure to get approval from your University’s IRB. Not doing so could cause headaches down the road. It’s often the case that ELS research can be given expedited review — of a very different sort than law journals’! — and is often even exempt. But even when that’s the case, the IRB still has to categorize it as exempt.
Second, when you code the data, be very detailed. Use explicit and understandable variable names and label everything; basically, make sure that if someone else picks up your dataset a year from now, they could understand how you coded and analyzed it. More important, that also ensures that YOU will remember how you did things — a couple of times I've gone back to old datasets and struggled to recall what I meant by variables "Code1," "Code2," and "Code3." Along those lines, just as you might put dates on article drafts, do the same for the data and output files (and save output files with clear names — I use SPSS, which gives defaults such as "Output1" or "Output2"; much better to say "Correlations Output," "Multiple Regression for Variable1," etc.).
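The naming-and-dating habit above is easy to build into a small helper. Here is a minimal sketch in Python (the advice concerns SPSS/Stata workflows, so this is a translation, not anyone's actual code; the variable names and labels are invented for illustration):

```python
from datetime import date

# Hypothetical codebook: every variable gets a descriptive name and a
# human-readable label, so the dataset explains itself a year later
# (instead of opaque names like "Code1", "Code2", "Code3").
CODEBOOK = {
    "support_restitution": "Agreement (1-7) that restitution should replace prison",
    "age_bracket": "Respondent age bracket (1=18-29, 2=30-44, 3=45-64, 4=65+)",
}

def dated_filename(stem: str, ext: str = "csv") -> str:
    """Build an output file name that carries a clear description
    and today's date, e.g. 'correlations_output_2012-07-11.csv'."""
    return f"{stem}_{date.today().isoformat()}.{ext}"
```

The point is not the particular tool but the discipline: the label lives next to the variable, and every saved file says what it is and when it was made.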
I don't mean to state the obvious, but I can't overemphasize being very detailed in coding and labeling. Again, it cuts down on future headaches.
Finally (for now), I agree about showing your survey to others ahead of time; also consider collecting and analyzing some pilot data. That helps you identify patterns that you might not have expected, and allows you to tweak the instrument by adding or deleting questions, changing unclear wording, etc.
Best of luck with it and enjoy!
Posted by: Jeremy Blumenthal | Jul 11, 2012 9:24:59 PM
I've shepherded many people through your situation. The little tips are too numerous to enumerate (even in logarithmic form), but here are three I think will be helpful.
After you write your survey questions, pretend you've finished data collection and plug in numbers (your priors are fine). Then write paragraphs from the "completed" survey using declarative sentences. You'll find out pretty quickly if your questions are awkward or imprecise, or if you want to infer something from a question that you didn't actually ask. Then pilot test it.
Document the heck out of your Stata do files. Insert comments everywhere justifying your recodes, scales, transformations, analyses, etc. If you can't justify it in complete sentences while you're doing it, writing it up later will be torture.
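The commenting habit described above looks roughly like this. The example is in Python rather than a Stata do file, and the scale and cutoffs are invented for illustration; the point is that the justification is written down at the moment of the recode:

```python
def recode_support(raw: int) -> str:
    """Collapse a 1-7 agreement scale into three categories.

    Justification (written while recoding, not reconstructed later):
    the tails of the 7-point scale were too sparse for subgroup
    comparisons, so 1-3 = oppose, 4 = neutral, 5-7 = support.
    """
    if raw in (1, 2, 3):
        return "oppose"
    if raw == 4:
        return "neutral"
    return "support"
```

In a Stata do file the same comment would sit above the `recode` command; either way, the complete-sentence justification travels with the transformation.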
Have strict data entry rules and keep tight control over the process. It will save time in the long run. Make sure that every cell conforms to the entry type for its column, and that the RAs are consistent. If one RA is entering proportions (0.70) and another percentages (70), you will have to spend time figuring out which is correct and fixing the others. Or you'll have an RA annotating a field that is supposed to be numeric ("10 to 20"), which will also require cleaning time; you can't compute a mean from "10 to 20". Or an RA who puts multiple entries in a cell meant for one piece of information (State of death: "Idaho/Utah"). Pro tip: include a cell in each record for the RA's initials so you'll know whom to ask if something arises.
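The rules above can also be enforced mechanically with a small check pass over the entered cells. A minimal sketch in Python, assuming a column that should hold a single proportion between 0 and 1 (the function name and rules are hypothetical, chosen to mirror the pitfalls just described):

```python
def check_cell(value: str) -> list:
    """Return a list of problems with one cell entered as text.

    Rules for this hypothetical column: the cell must parse as a
    single number (so "10 to 20" and "Idaho/Utah" fail), and it must
    be a proportion in [0, 1] (so a percentage like 70 is flagged).
    """
    problems = []
    try:
        x = float(value)
    except ValueError:
        problems.append(f"not a single number: {value!r}")
        return problems
    if not 0.0 <= x <= 1.0:
        problems.append(f"expected a proportion in [0, 1], got {x}")
    return problems
```

Running a pass like this after each data-entry session catches inconsistencies while the RA still remembers the record, instead of during cleaning months later.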
Most of all, enjoy yourself. When you get the survey back you will be the only person in the world who knows what you know.
Posted by: Joe Doherty | Jul 11, 2012 10:37:40 PM
This is not a tip about process but about the final product: I have noticed that Westlaw frequently omits images, including simple data tables, so I often need to track down empirical papers elsewhere. Make sure your final article, once published, goes up on SSRN or elsewhere with the correct journal pagination so others can cite it!
Posted by: junior mint | Jul 12, 2012 10:41:48 AM
Thanks to all for the fantastic and helpful comments. Also, FYI to Prof. Cohen, I do have tenure now, which is why I felt comfortable trying this piece, although there are admittedly plenty of other time constraints making it a challenge!
Posted by: Martin Pritikin | Jul 17, 2012 9:29:42 AM