The Challenges of Individual Heterogeneity: Modeling Public Opinion on Military Spending

What explains someone’s opinion on the state of military spending?  Like many social science questions, this can be particularly challenging to model, since it is highly subject to variation across individuals.  For example, the “profile” of someone who supports increased military spending could be someone who is highly patriotic, a veteran or someone with a close friend or family member who is a veteran, or someone with hawkish foreign policy preferences.  Conversely, someone who supports decreased military spending could be a pacifist, an isolationist, or simply pro-smaller government across the board.  Most survey data that I know of do not go into such depth on personal histories, relationships, and political views, so modeling these questions often requires working with incomplete data and may yield somewhat unsatisfactory results.

The data, variables, and descriptive statistics

For this example, I used panel data from the General Social Survey (GSS), spanning 2006 to 2010 with three waves: 2006, 2008, and 2010.  Pooling respondents across the panel, there were about 1,500 observations, with 626 respondents in the first wave, 489 in the second, and 391 in the third.  Note that I elected to keep the panel unbalanced to preserve data, even though I have no way to verify that attrition was random.  Remember that this period included the controversial surge in Iraq (2007), Barack Obama winning his first presidential election on a platform opposing the wars in the Middle East (2008), and the subsequent backlash against President Obama’s foreign policy (the beginning of which would have started around 2010), so support for military spending understandably changed over time.
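Below is a minimal sketch of the panel setup in R, assuming the three waves have already been merged into a long-format file with one row per respondent-wave; the file name and the idnum/panelwave column names are placeholders, not the GSS’s actual distribution format:

```r
library(plm)

# Hypothetical merged panel file: one row per respondent-wave.
gss <- read.csv("gss_panel_2006_2010.csv")

# Declare the panel structure: idnum indexes respondents,
# panelwave indexes the wave (1 = 2006, 2 = 2008, 3 = 2010).
gss <- pdata.frame(gss, index = c("idnum", "panelwave"))

# How many waves each respondent completed (the panel is unbalanced).
table(table(index(gss)$idnum))
```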

I used the variable natarms, reverse coded to rnatarms so that a higher value corresponded to wanting to spend more, as my dependent variable.  It is a three-level factor coding responses to the question “are we spending too much, too little, or about the right amount on the military, armaments, and defense?”  The respondents could answer “too much,” “about right,” or “too little.”  Pooling the responses across the panel, 585 respondents said we are spending too much, 489 said we are spending about right, and 391 said we are spending too little. This equates to rnatarms having a mean of 1.88 and a median of 2 (corresponding to the “about right” category), with a standard deviation of 0.8 across the panel. Between 2006 and 2010, 101 people moved from wanting to spend more to wanting to spend less, 100 people moved from wanting to spend less to wanting to spend more, and 754 people did not change their opinion. Figure 1 shows the mean value of rnatarms over the course of the panel, with a basic regression line of rnatarms on panelwave in red.

Figure 1: Mean value of rnatarms over the panel
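The recode itself is a one-liner; a sketch, assuming natarms carries the usual GSS coding (1 = too little, 2 = about right, 3 = too much), which should be checked against the codebook:

```r
# Reverse-code so that higher values mean favoring more spending.
gss$rnatarms <- 4 - as.numeric(gss$natarms)

# Pooled descriptives reported above.
table(gss$rnatarms)
c(mean   = mean(gss$rnatarms, na.rm = TRUE),
  median = median(gss$rnatarms, na.rm = TRUE),
  sd     = sd(gss$rnatarms, na.rm = TRUE))
```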

For my primary independent variable, I used polviews, since I thought that respondents’ political views would be highly correlated with some of the sources of individual heterogeneity mentioned earlier.  In the GSS, polviews is a seven-level factor, where 1 is extremely liberal, 4 is moderate, and 7 is extremely conservative. Again pooling responses across waves, 58 respondents said they were extremely liberal and 42 said they were extremely conservative. “Moderate” was the most common category, with 602 responses. The mean was 4.02 and the median was 4 (corresponding to the moderate category), with a standard deviation of 1.41. Between 2006 and 2010, 594 people did not change their category, and 436 people moved up or down only one category. One person moved from extremely conservative to extremely liberal, and no one moved from extremely liberal to extremely conservative.
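One way to produce these transition counts is to reshape to wide format and cross-tabulate the first and last waves; a sketch, assuming panelwave is coded 1 through 3:

```r
# Cross-tabulate 2006 responses (rows) against 2010 responses (columns).
d <- as.data.frame(gss)[, c("idnum", "panelwave", "polviews")]
wide <- reshape(d, idvar = "idnum", timevar = "panelwave",
                direction = "wide")
with(wide, table(polviews.1, polviews.3))
```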

For control variables, I included sex, race, education, confidence in the army, and panel wave. I recoded sex into a new binary variable female, where 1 indicated female and 0 indicated male. After pooling the waves, there were 611 women and 895 men in the data. It is notable that while 1,182 respondents did not change their gender over the panel, 161 people moved from female to male and 162 people moved from male to female. This is a bit suspicious, since gender is generally constant, and it might indicate a miscode or some other mistake (such as a husband answering the survey in 2006 but his wife answering in 2008).

I also recoded race into a new binary variable white, where 1 indicated the respondent was white and 0 indicated the respondent was black or other. In the pooled data, there were 328 non-whites and 1,178 whites. Over the course of the panel, there were again some suspicious changes for what is generally a constant characteristic: while 1,303 respondents did not change their race, 101 people became non-white and 101 people became white.

Education is on a scale from 0 to 20, which roughly corresponds to the respondent’s number of years of education. The mean is 13.83 and the median is 14, roughly equivalent to two years of undergraduate education. Over the course of the panel, one would reasonably expect someone to gain at most four or five years of education (depending on the time of year of the initial and final surveys). However, 498 respondents reported fewer years of education over the course of the panel, and 58 respondents reported 6-18 additional years of education; 1,047 people did not change their education or gained 5 or fewer years.

Finally, I recoded the variable representing the respondent’s confidence in the army to rconarmy so that a higher value indicated higher confidence. It has three levels: “a great deal” of confidence, “only some” confidence, and “hardly any” confidence. Pooling across waves, 700 respondents had a great deal of confidence, 634 had only some, and 172 had hardly any. This corresponds to a mean of 2.35 and a median of 2 (corresponding to “only some” confidence), with a standard deviation of 0.68. Over the course of the panel, 781 respondents did not change their opinion, while 55 moved from “a great deal” to “hardly any” and 44 moved from “hardly any” to “a great deal.”
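The control recodes follow the same pattern; a sketch, again assuming the standard GSS codings noted in the comments:

```r
# Assumed GSS codings: sex 1 = male, 2 = female;
# race 1 = white, 2 = black, 3 = other;
# conarmy 1 = a great deal, 2 = only some, 3 = hardly any.
gss$female   <- ifelse(gss$sex == 2, 1, 0)
gss$white    <- ifelse(gss$race == 1, 1, 0)
gss$rconarmy <- 4 - as.numeric(gss$conarmy)  # higher = more confidence
```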

While I considered dropping variables that had suspicious changes over the course of the data collection, the loss of data was quite significant, so I decided to keep all observations; all results should therefore be interpreted with great caution.  I had initially included age and age-squared in my analysis to account for respondents who lived through the Vietnam War, since that might influence one’s opinions on military spending.  Respondents should have aged at most four or five years between 2006 and 2010 (depending on when the initial and final surveys were administered), yet only 32 respondents aged an appropriate amount, so I ultimately dropped age from the model.  I think this should have a limited impact on my primary variables of interest: people are prone to misreport their age, whereas opinions on military spending and political views should not carry such associated behaviors. Still, it is something to keep in mind for the analysis (sigh, social science data!).

Hypotheses

I expected that people who are white and male, who are stereotypically more conservative, might favor more military spending, whereas someone who was more educated, a characteristic stereotypically associated with being more liberal, would favor less military spending. I also expected that someone with more confidence in the army would be more likely to favor more military spending. Based on the baseline plot in Figure 1, I expected the 2008 dummy to be negative relative to 2006 and the 2010 dummy to be positive relative to 2006.

Baseline OLS model

For the baseline model, I ran a naïve OLS regression of rnatarms on polviews with the control variables, clustering the standard errors to account for repeated observations of the same respondents across waves. The R output is shown in Table 1. It indicates that, on average, every additional category on the polviews scale (i.e., more conservative) is associated with a 0.112-point increase on the rnatarms scale (i.e., toward more funding), net of gender, race, education level, confidence in the army, and panel wave. The coefficient is highly statistically significant (p-value < 0.0001).
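A sketch of the baseline specification, assuming the recoded variables above and clustering on the respondent identifier idnum via the sandwich and lmtest packages:

```r
library(lmtest)
library(sandwich)

# Pooled OLS of rnatarms on polviews plus controls and wave dummies.
ols <- lm(rnatarms ~ polviews + female + white + educ + rconarmy +
            factor(panelwave), data = gss)

# Respondent-clustered standard errors.
coeftest(ols, vcov = vcovCL(ols, cluster = ~ idnum))
```

An equivalent route is plm’s pooling model combined with its group-clustered vcovHC; the sandwich approach shown here just keeps the baseline as a plain lm fit.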

Regarding my control variables, being white and being better educated both aligned with my initial hypotheses (as evidenced by the coefficients’ positive and negative signs, respectively) and were highly statistically significant (p-value < 0.0001) and moderately significant (p-value < 0.001), respectively. However, the coefficient on female was positive, the opposite of the sign I expected, and was not significant. Confidence in the army is positive and highly statistically significant as well, indicating that, on average, every additional category of confidence in the army is associated with a 0.31-point increase on the rnatarms scale, all else constant. The dummy variable for 2008 is indeed negative, indicating that, on average, respondents in 2008 were 0.062 points less favorable toward military spending than in 2006, net of other factors, but it is not statistically significant; the dummy variable for 2010 is surprisingly also negative, indicating that, on average, respondents in 2010 were 0.014 points less favorable toward military spending than in 2006, net of other factors. Neither of the panelwave dummies is significant. The model only explains about 16.6% of the variation in rnatarms, but it generally seems to confirm my initial hypotheses.

Table 1: OLS Model (R Output)


Fixed effects and individual heterogeneity

As noted previously, modeling opinions on military spending can be tricky because of the degree of individual heterogeneity, i.e., that someone who favors greater military spending is fundamentally different from someone who does not. For example, perhaps someone’s family member is serving or served in the military, which influences their opinion on military spending. Since I do not have this information, I have potentially introduced omitted variable bias into my model. Fixed effects would address this problem by giving each person their own intercept, which essentially controls for any factors not in the model that affect that person’s opinion by allowing the individual to serve as their own control.
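A minimal sketch of the estimator using plm’s “within” model (variable names follow the recodes above):

```r
# Fixed effects ("within") estimator: each respondent is demeaned
# around their own average, so identification comes only from
# within-person change across waves.
fe <- plm(rnatarms ~ polviews + female + white + educ + rconarmy +
            factor(panelwave),
          data = gss, model = "within")
summary(fe)
```

Note that plm drops any regressor with no within-person variation; sex and race survive here only because of the suspicious changes documented earlier.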

The fixed effects results from the R output are shown in Table 2. For polviews, they indicate that, on average, a one-category change in polviews is associated with a 0.011-point positive change on the rnatarms scale, net of person-specific effects and the other covariates, across the three waves of the panel. The coefficient is no longer statistically significant.

For my control variables, the sign on female is now negative, which better aligns with my initial hypothesis, but the sign on white is now negative as well. The signs on educ and rconarmy are unchanged. In addition, white and educ are no longer statistically significant, and rconarmy has dropped in significance level (p-value < 0.001). In this new model, 2008 is still negative and is now slightly statistically significant (p-value < 0.01), and 2010 is now positive, in line with Figure 1, but is still not statistically significant. This new model explains only 1.4% of the variation in rnatarms, which is low but not wholly unexpected, since most respondents, appropriately, show no change in sex, race, or education. Furthermore, since fixed effects models only examine change, the remaining error in the model is more random and unstructured, so explaining this variation is more difficult.

Table 2: Fixed Effects Model (R Output)

Perhaps I am overestimating the impact of individual heterogeneity (though I don’t think so, given the low adjusted R-squared values).  A random effects model might be a better choice. In contrast to the fixed effects model, the random effects model only partially demeans the data and makes assumptions much more similar to OLS, so the results should theoretically fall between the fixed effects and naïve OLS models. If lambda, the fraction of each individual’s mean that is subtracted in the quasi-demeaning, is closer to one, then the random effects results will be closer to the fixed effects model; if it is closer to zero, they will be closer to the OLS model.
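For a balanced panel, this quasi-demeaning fraction is lambda = 1 - sqrt(sigma_e^2 / (sigma_e^2 + T * sigma_a^2)), where sigma_a^2 is the between-person variance, sigma_e^2 is the idiosyncratic variance, and T is the number of waves. A sketch of the estimation, reusing the variables from the fixed effects sketch (plm reports the estimated lambda as “theta”):

```r
# Random effects estimator: quasi-demeans each variable by the
# estimated fraction lambda instead of fully demeaning it.
re <- plm(rnatarms ~ polviews + female + white + educ + rconarmy +
            factor(panelwave),
          data = gss, model = "random")
summary(re)  # the printed "theta" is the lambda discussed above

# Variance components and theta directly.
ercomp(re)
```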

The random effects results from the R output are shown in Table 3. For polviews, they indicate that, on average, every additional category of polviews is associated with a 0.087-point increase on the rnatarms scale, net of other factors including time trends. As expected, this coefficient is lower than in the OLS model but higher than in the fixed effects model. Like the OLS coefficient, it is highly statistically significant (p-value < 0.0001).

Regarding my control variables, the coefficient on female is again positive, the coefficient on white is again positive, and the signs on educ and rconarmy remain the same. The significance of the coefficients is more similar to that of the OLS model: white and rconarmy are highly statistically significant (p-value < 0.0001, same as the OLS model), and educ is slightly statistically significant (p-value < 0.01, lower than the OLS model). The coefficients for 2008 and 2010 are both negative again, and 2008 is very slightly statistically significant (p-value < 0.1). The adjusted R-squared for this new model is still low, indicating that only about 13.5% of the variation in rnatarms is explained by the model.

Table 3: Random Effects Model (R Output)


For the moment of truth on the impact of individual heterogeneity, I ran a Hausman test to decide whether random effects or fixed effects is the better model.  Because random effects is more efficient (it does not estimate a separate intercept for each respondent, and it uses variation in levels) and allows for estimation of time-invariant parameters that are of interest (such as sex or race), it is preferable to use random effects; but if there is sufficient heterogeneity left uncontrolled in the model, then fixed effects is the better choice to avoid omitted variable bias. The Hausman test examines this question by testing the null hypothesis that the two sets of coefficient estimates are statistically the same, which favors random effects. If, however, the test rejects the null hypothesis, then fixed effects is probably the better model.
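A sketch of the test in R, comparing the two fits from above via plm’s phtest():

```r
# Hausman test: the null is that the RE and FE coefficient vectors do
# not differ systematically, in which case RE is consistent and, being
# more efficient, preferred.
phtest(fe, re)
```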

The Hausman test R output is shown in Table 4. The p-value for the chi-squared statistic is very small, rejecting the null hypothesis, so fixed effects is the better model. This is not very surprising, especially given the low adjusted R-squared values: the models have indicated that many factors contribute to one’s opinions on military spending, which most likely includes individual heterogeneity. This could also explain why the signs of the 2008 and 2010 dummy variables in the fixed effects model more closely follow the plot in Figure 1; the 2010 dummy was potentially confounded in the OLS and random effects models, causing its sign to switch.

Table 4: Hausman Test Output

To conclude, the fixed effects model is the most appropriate model given my variables and question. It indicates that more conservative respondents were more likely to favor increased military spending, in line with my original hypothesis, although the coefficient is not statistically significant in the fixed effects model.
