Some recent papers in formal theory develop a new link between theory and empirics. These works seek to use theory for more than prediction (comparative statics) and, in doing so, diverge from the traditional Empirical Implications of Theoretical Models (EITM) approach. They attempt to make sense of (especially causal) empirical estimates: starting from models of real-world interactions, they use formal reasoning to rethink counterfactuals. This recently developed link between formal theory and empirics is best referred to as Theoretical Implications of Empirical Models, or TIEM. The TIEM approach can take many forms; here, I briefly describe three of them, with no claim to exhaustiveness.
One possible way to think about theoretical implications of empirical models, and probably the closest to EITM, is parsing out mechanisms. The empirical literature establishes a link between variable A and variable B, and scholars then provide some rationale for said relationship. A theoretical model can serve to properly evaluate the proposed mechanism or to offer a competing rationale. An example of this particular TIEM approach is Ashworth, Bueno de Mesquita, and Friedenberg (2018) (henceforth ABF). A long tradition in political science and economics documents that politicians tend to be punished for events outside their control (e.g., droughts or economic shocks). This link between random events (variable A) and vote share (variable B) is then taken as proof that voters are irrational (the mechanism). ABF show that the link between shocks and vote shares can be fully explained by an alternative mechanism that does not assume away voters' rationality. Random events change the informativeness of an incumbent's performance and thus, everything else equal (fixing factors controlled by the office-holder), may affect their electoral performance. A striking finding in ABF's paper is that random events leave the informativeness of office-holders' performance unchanged only in knife-edge cases.
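The logic can be made concrete with a small simulation. The sketch below is not ABF's model; it is a minimal normal-learning illustration under assumptions of my own (a zero-mean normal prior on ability, a reelection cutoff `c`, and two hypothetical noise levels standing in for calm and shock periods). Fully rational voters update on performance, yet the reelection rate moves with the shock because the shock changes how informative performance is:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Incumbent ability: prior N(0, 1). Voters reelect when the posterior
# mean of ability beats a challenger of known value c (illustrative).
sigma_a, c = 1.0, 0.5
ability = rng.normal(0.0, sigma_a, n)

def reelection_rate(noise_sd):
    """Voters observe performance = ability + noise and update rationally."""
    perf = ability + rng.normal(0.0, noise_sd, n)
    weight = sigma_a**2 / (sigma_a**2 + noise_sd**2)  # normal-normal updating
    posterior_mean = weight * perf
    return (posterior_mean > c).mean()

p_calm = reelection_rate(noise_sd=0.5)   # no shock: performance is informative
p_shock = reelection_rate(noise_sd=2.0)  # shock: performance is noisy

print(f"reelection rate, no shock: {p_calm:.3f}")
print(f"reelection rate, shock:    {p_shock:.3f}")
```

The shock is entirely outside the incumbent's control, and voters here are fully Bayesian; the gap between the two reelection rates nonetheless mimics the "punishment for random events" pattern documented in the empirical literature.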
Another connection between models and empirics belonging to the TIEM approach corresponds to the evaluation of research designs. Using research design D, scholars claim that they control for confounders, thus isolating the causal effect of variable A on variable B. A theoretical model can then be used as a representation of research design D to verify whether confounders are indeed kept constant. A powerful illustration of this form of TIEM is Eggers (2016) (see also Fowler, 2018). A large empirical literature uses regression discontinuity designs (RDDs) based on close elections to measure the incumbency effect (or incumbency status advantage), the electoral bonus simply due to holding office. In this design D, scholars claim that candidates' underlying ability (or quality) is perfectly controlled for, so that the only cause of variation in vote shares (variable B) is the incumbency status advantage (variable A). Eggers, in turn, shows that RDDs do not keep incumbents' quality constant when losers and winners do not rerun in similar proportions. Eggers even establishes an impossibility result: if RDDs work for open-seat races (the close election features no incumbent), they cannot work for close races where an incumbent is running, and vice versa. Eggers' model is helpful for evaluating any research design claiming to measure an incumbency status advantage or disadvantage (e.g., a useful exercise is to use Eggers' approach to evaluate the research design in Klašnja and Titiunik, 2017).
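A stylized simulation conveys the selection problem. This is not Eggers' model; every functional form and parameter below (quality loading `k`, the closeness window, the rerunning rule) is an illustrative assumption of mine. It contrasts two worlds: one where every barely-loser reruns, and one where only high-quality losers rerun while the rest are replaced by fresh draws from the candidate pool:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# Period-1 race: two candidates with latent qualities; A's vote share
# loads on the quality gap (coefficient k) plus electoral noise.
k = 0.1
qa, qb = rng.normal(0, 1, n), rng.normal(0, 1, n)
v1 = 0.5 + k * (qa - qb) + rng.normal(0, 0.05, n)

close = np.abs(v1 - 0.5) < 0.01           # RDD window: near-tied races
a_wins = v1 > 0.5
q_win = np.where(a_wins, qa, qb)[close]   # barely-winners (incumbents)
q_lose = np.where(a_wins, qb, qa)[close]  # barely-losers

# Scenario 1: every barely-loser reruns -> period-2 opponent is the loser.
gap_full = (q_win - q_lose).mean()

# Scenario 2: only high-quality losers rerun; the rest are replaced by a
# fresh draw from the candidate pool (quality ~ N(0, 1)).
fresh = rng.normal(0, 1, close.sum())
q_opp = np.where(q_lose > 0, q_lose, fresh)
gap_selective = (q_win - q_opp).mean()

print(f"incumbent-opponent quality gap, full rerunning:      {gap_full:+.3f}")
print(f"incumbent-opponent quality gap, selective rerunning: {gap_selective:+.3f}")
```

With full rerunning the period-2 quality gap at the threshold is close to zero, as the design assumes; with selective rerunning, incumbents systematically face opponents of different quality, so any vote-share difference at the discontinuity mixes the incumbency status advantage with a quality gap.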
A third form of TIEM reasoning seeks to evaluate what research designs actually measure. Consider a research design D which convincingly controls for confounders. Variable A then causally affects variable B, but what is the meaning of this relationship? The research design yields a causal estimate, but a causal estimate of what? The main question is whether the well-identified causal relationship between A and B corresponds to a well-identified quantity of theoretical interest. A recent paper by Bueno de Mesquita and Tyson (2019) tackles this question when variables A and B are both observed behaviors of strategic actors. As an example (used in Bueno de Mesquita and Tyson), take counterinsurgency repression by a government (variable A) and rebels' reaction (variable B). The effect of counterinsurgency can be direct (raising the cost of rebel activities) or indirect/informational (providing information about the government's strength). Ideally, a researcher would like to measure the direct effect, the informational effect, and the total effect (the combination of direct and informational effects). In actual research designs (i.e., research designs available to scholars who cannot randomly vary all factors), is a researcher capable of measuring any of these effects? Bueno de Mesquita and Tyson uncover conditions under which actual research designs measure the total effect or the direct effect of variable A. That is, they establish conditions such that empirical research designs measure a meaningful theoretical quantity. This conclusion is not just critical for interpreting empirical findings. As Federica Izzo, Torun Dewan, and I show in another setting, when empirical estimates do not correspond to a theoretical quantity of interest, these estimates are not comparable across studies, making the accumulation of knowledge difficult, if not outright impossible.
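The distinction between direct and total effects can be sketched numerically. The toy model below is my own illustration, not Bueno de Mesquita and Tyson's: rebels have a linear best reply with a direct deterrence term (rate `c`, assumed) and an informational term (rate `d`, assumed), and they update on repression via normal learning. The same regression recovers different theoretical quantities depending on whether repression is informative about strength:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Government strength (unobserved by rebels) and effect sizes: repression
# deters directly at rate c and signals strength, which deters at rate d.
theta = rng.normal(0, 1, n)
c, d = 1.0, 1.0

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)

# Observational world: repression tracks strength, so rebels update on it.
r_obs = theta + rng.normal(0, 1, n)
belief_obs = r_obs / 2                      # E[theta | r] under normal updating
m_obs = -c * r_obs - d * belief_obs + rng.normal(0, 0.1, n)

# Experimental world: repression is randomized and known to be random,
# so observing it carries no information about strength.
r_exp = rng.normal(0, 1, n)
belief_exp = 0.0                            # beliefs stay at the prior mean
m_exp = -c * r_exp - d * belief_exp + rng.normal(0, 0.1, n)

print(f"estimated effect, observational: {slope(r_obs, m_obs):+.3f}")  # total
print(f"estimated effect, experimental:  {slope(r_exp, m_exp):+.3f}")  # direct
```

Both regressions are internally valid, yet they estimate different objects (here, roughly −1.5 versus −1.0 given the assumed parameters): one bundles the direct and informational channels, the other isolates the direct channel. Knowing which theoretical quantity a design recovers is precisely the TIEM question.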
Several of my own papers use the TIEM approach:
- Cumulative Knowledge in the Social Sciences: The Case of Improving Voters' Information (with Federica Izzo and Torun Dewan), in which we discuss the difficulty of accumulating knowledge in the social sciences. We establish that two studies with the same research design and observations drawn from similar settings yield comparable estimates only if these estimates correspond to well-identified theoretical quantities;
- Electoral Imbalances and Their Consequences (with Carlo Prato), in which we highlight the difficulty of recovering unbiased effects of the incumbency status advantage or of the electoral benefit of campaign spending by pointing to an important confounding factor, voters' attention to politics;
- Are Biased Media Bad for Democracy?, in which I explain that empirical studies cannot measure the effect of media bias per se, only the impact of a right-wing or left-wing bias relative to more balanced coverage;
- Lobbying: Inside and Out. How Special Interest Groups Influence Policy Choices, in which I detail how regressions using contributions or informational lobbying as a proxy are likely to yield downwardly biased estimates of the extent (when) and the strength (by how much) of special interest group (SIG) influence.