Luz H. Ospina, PhD

Research Scientist

The Authoritarian Personality: Statistical Tests


October 21, 2022

Now that we've prepared our dataset, we can start to run some basic statistical tests to better understand how our variables relate to the right-wing authoritarian (RWA) personality.
We can begin by importing our dataset and some Python libraries of interest:
#import libraries
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns

#import dataset
data = pd.read_csv('dataclean.csv')

#describe our data
data.describe()
The libraries imported here are some of the most popular data-wrangling, statistical-analysis, and data-visualization libraries used in data science. New additions are SciPy, used for statistical analysis, and Seaborn, used for data visualization and plotting. A nice feature of this ecosystem is that you can often run the same analysis with different libraries; there is more than one way to conduct the same test. Some libraries report more information than others, so run your analyses a few ways and see which library you prefer. I end by describing our dataset to make sure we're working with the latest, cleaned version (95 variables and 9,661 observations).
As always, check your numerical variables to determine their distribution, specifically whether they are skewed or kurtotic. For example, let's check our variable of interest (i.e., our dependent variable), the RWA total score:
#Check distribution of our dependent variable

stats.describe(data['rwasTot'])
We see that the RWA total score has a mean of 102.27 and a sizable variance (122.34), and is roughly normally distributed (skewness = 0.60, kurtosis = 2.56).
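The numbers are a good start, but it is also worth eyeballing the distribution directly. A minimal histogram sketch (the bin count is an arbitrary choice):
#quick visual check of the dependent variable's distribution
plt.hist(data['rwasTot'], bins=30)
plt.xlabel('RWAS Total')
plt.ylabel('Frequency')
plt.show()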
Independent-Samples t Test
Let's start by visualizing some variables. What are the RWA total scores for individuals who voted in a previous national election compared to those who did not?
#Let's plot some variables and run some common tests
#voted graph

data.groupby('voted')['rwasTot'].mean()
data.groupby('voted')['rwasTot'].std()
objects = ['Yes', 'No']
votemean = [101.58, 102.79]
std = [10.67, 11.31]
y_pos = [i for i, _ in enumerate(objects)]

plt.bar(y_pos, votemean, yerr=std)
plt.xlabel("Vote Status")
plt.ylabel('RWAS Total')
plt.title('Voted in Last Election')
plt.xticks(y_pos, objects)
plt.savefig('ML2_1.png', dpi=300)
plt.show()
As you can see, we first grouped our data by voting status to compute the respective means and standard deviations. We then created objects holding these means and standard deviations and passed them to plt.bar, which uses them as the bar heights and error bars.
Matplotlib lets us polish the graph further by labeling the axes, ticks, and title (using xlabel, ylabel, xticks, and title, respectively). We can also export the graph at higher resolution, perfect for journal articles and posters. Overall, we get a nice (but rather unimpressive) bar graph.
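As an aside, rather than hard-coding the means and standard deviations, you can pull them straight from the groupby results. A small sketch (it assumes voted is coded 1 = Yes and 2 = No, which the t-test below also relies on):
#compute bar heights and error bars directly, instead of typing them in
grouped = data.groupby('voted')['rwasTot']
votemean = grouped.mean().values  #ordered by group code: 1 = Yes, 2 = No
std = grouped.std().values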
Now let's conduct an independent-samples t-test to determine whether these groups are statistically different from each other. Given how similar these bars look, we might expect no difference.
#t-test for 'voted' 

yesvote = data[data['voted'] == 1]['rwasTot']   #1 = voted
novote = data[data['voted'] == 2]['rwasTot']    #2 = did not vote
stats.ttest_ind(yesvote, novote)

from statistics import mean, stdev
from math import sqrt

#Effect size (Cohen's d) for voted
#note: this pooled SD assumes roughly equal group sizes
cohens_d = (mean(yesvote) - mean(novote)) / (sqrt((stdev(yesvote) ** 2 + stdev(novote) ** 2) / 2))
print(cohens_d)
To conduct this test, you split your data by the independent variable (with the groups coded appropriately) and then call the independent t-test from the SciPy library. Our results yield t(9659) = -5.36, p < 0.001, suggesting that the RWA total score differs between individuals who did and did not vote in the previous national election. Interestingly, those who did not vote show higher right-wing authoritarianism scores. This could reflect a real difference, or it could be a trivially small effect that reaches significance only because of our large sample size. Computing Cohen's d gives an effect size of -0.11, which is indeed considered small.
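One aside: SciPy's ttest_ind assumes equal group variances by default. If you suspect the variances differ (a check we return to with Levene's test below), you can pass equal_var=False to run Welch's t-test instead. A sketch:
#Welch's t-test: drops the equal-variances assumption
stats.ttest_ind(yesvote, novote, equal_var=False)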
ANOVA
OK, let's create a visual representation of our RWA total score for each of our education groups. Instead of a bar graph (which presents minimal information), we'll generate a boxplot using the Seaborn library:
#Let's create a boxplot of 'Education' using Seaborn

objects = ('< High School', 'High School',
           'University', 'Graduate')
y_pos = [i for i, _ in enumerate(objects)]
boxpl = sns.boxplot(x="education", y="rwasTot", data=data)
fig = boxpl.get_figure()
plt.xlabel("Education")
plt.ylabel('RWAS Total')
plt.title('Respondent Level of Education') 
plt.xticks(y_pos, objects)
fig.tight_layout()
fig.savefig('ML2_2b.png', dpi=300)
#ANOVA: Education (using statsmodels gives you thorough information)

import statsmodels.api as sm
from statsmodels.formula.api import ols
results = ols('rwasTot ~ C(education)', data=data).fit()
results.summary()
aov_table = sm.stats.anova_lm(results, typ=2)
aov_table
Our results, [F(3, 9657) = 104.48, p < 0.001], indicate that RWA total scores significantly differ depending on educational status. But wait! There are some assumptions we should have checked before running the ANOVA. While we know that our dependent variable is roughly normally distributed, we should check for homogeneity of variances between our groups (which we should also have done before our independent-samples t-test). The most common check is Levene's test, which is easy to run:
#Levene's test for homogeneity of variances [education]

stats.levene(data['rwasTot'][data['education'] == 1],
             data['rwasTot'][data['education'] == 2],
             data['rwasTot'][data['education'] == 3],
             data['rwasTot'][data['education'] == 4])
Our Levene's test is significant: F(3, 9657) = 30.48, p < 0.001. Therefore, we cannot assume our group variances are equal. While there are a number of ways to proceed when Levene's test is significant, one conservative option is to compare our groups using the non-parametric version of the ANOVA, the Kruskal-Wallis test:
#Kruskal-Wallis Test for Education
stats.kruskal(data['rwasTot'][data['education'] == 1],
              data['rwasTot'][data['education'] == 2],
              data['rwasTot'][data['education'] == 3],
              data['rwasTot'][data['education'] == 4])

#ANOVA: Effect size (eta-squared)
esq_sm = aov_table['sum_sq'][0] / (aov_table['sum_sq'][0] + aov_table['sum_sq'][1])
esq_sm
Our Kruskal-Wallis test was also significant: H(3) = 315.54, p < 0.001. This suggests that our education groups still differ significantly on the RWA total score, even without assuming equal variances. (The eta-squared computed above expresses the ANOVA's effect size as the share of the total sum of squares attributable to education.)
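As an aside, another way to handle unequal variances (not used in this analysis) is Welch's ANOVA, which recent versions of statsmodels provide via anova_oneway. A sketch:
#Welch's ANOVA: one-way ANOVA without the equal-variances assumption
from statsmodels.stats.oneway import anova_oneway

groups = [data['rwasTot'][data['education'] == i] for i in range(1, 5)]
anova_oneway(groups, use_var='unequal')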
Because the ANOVA is an omnibus test, it tells us that the group means differ somewhere, but not between which pairs of groups. We therefore have to conduct follow-up post hoc tests to determine between which educational groups the differences lie.
#ANOVA for Education variable: Post Hoc
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.multicomp import MultiComparison

mc = MultiComparison(data['rwasTot'], data['education'])
mc_results = mc.tukeyhsd()
print(mc_results)
We called on the statsmodels library, which lets us run post hoc tests, specifically Tukey's HSD. Using a family-wise error rate of 0.05, which corrects for multiple comparisons, we can see that all of our pairwise comparisons are significant: all four groups statistically differ from each other. (Some additional coding can produce the exact significance levels rather than true/false rejections, if preferred; see the sketch below.) Overall, RWA appears to differ between education groups, and subsequent prediction models would benefit from including this variable.
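For instance, in recent statsmodels releases the Tukey result object exposes the adjusted p-values directly (older versions may lack this attribute):
#adjusted p-values for each pairwise comparison (statsmodels >= 0.10)
print(mc_results.pvalues)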
Pearson's Correlations
Let's now run some correlations! I am really curious how our five personality dimensions correlate with the RWA total score. I would expect some overlap among these dimensions, given that the right-wing authoritarian construct is considered a personality trait itself. We should start by checking the distributions of our personality dimensions to assess for normality:
#Check distribution of personality variables

stats.describe(data['extraversion'])
stats.describe(data['agreeableness'])
stats.describe(data['conscientiousness'])
stats.describe(data['emotionalstability'])
stats.describe(data['opennesstoexperience'])
Good, our personality dimensions are all normally distributed. Let's now create a correlation heat map; this yields a traditional correlation matrix and color-codes the strength of each correlation (with the default palette, the stronger the value, the lighter the color).
#Plot and correlate personality variables with RWAS
#correlation matrix heat map

datasel = data[['rwasTot', 'extraversion', 'agreeableness', 'conscientiousness',
                'emotionalstability', 'opennesstoexperience']]
corr = datasel.corr()
heat = sns.heatmap(corr, annot=True, fmt=".2f")  #avoid shadowing Python's built-in map()
fig3 = heat.get_figure()
plt.title('Correlations between Personality and RWAS')
fig3.tight_layout()
fig3.savefig('ML2_3.png', dpi=300)
Looking at this heat map, we can easily see that some of the five-factor personality dimensions correlate moderately with each other, but they correlate weakly with the RWA total score. Nevertheless, we should look at these correlations' p values to see whether any of these associations are statistically significant.
#pvalues for personality/RWAS correlations

stats.pearsonr(data['rwasTot'], data['extraversion'])
stats.pearsonr(data['rwasTot'], data['agreeableness'])
stats.pearsonr(data['rwasTot'], data['conscientiousness'])
stats.pearsonr(data['rwasTot'], data['emotionalstability'])
stats.pearsonr(data['rwasTot'], data['opennesstoexperience'])
Interestingly, all of our correlations were statistically significant with the exception of the association between emotional stability and the RWA total score (p = .06). Again, with a sample this large even tiny effects reach significance, so these associations should be read alongside their small effect sizes.
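Rather than calling pearsonr five separate times, a small loop can print each r and p value together; a convenience sketch:
#loop over the personality dimensions, printing r and p for each
for col in ['extraversion', 'agreeableness', 'conscientiousness',
            'emotionalstability', 'opennesstoexperience']:
    r, p = stats.pearsonr(data['rwasTot'], data[col])
    print(f"{col}: r = {r:.2f}, p = {p:.4f}")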
Remember that these correlation statistics assess linear relationships between our variables of interest. It is therefore a good idea to plot the associations, to check whether a straight line really is the best description or whether some other form (e.g., quadratic or otherwise curvilinear) fits better. Let's create a scatterplot of the relationship between conscientiousness and the RWA total score:
#Scatterplot (with regression line) between conscientiousness and RWAS total

plt.xlim(0,16)
plt.ylim(0,200)
reg = sns.regplot(x=data["conscientiousness"], y=data["rwasTot"])
fig4 = reg.get_figure()
plt.xlabel("Conscientiousness")
plt.ylabel('RWAS Total')
plt.title('Correlation between Conscientiousness and RWAS') 
fig4.tight_layout()
fig4.savefig('ML2_4.png', dpi=300)
Given that our scales use whole numbers, the plot shows distinct vertical bands of scores. However, we see no evidence of a curvilinear relationship, and we can also see why the correlation coefficient for this pair is quite low. Nevertheless, the relationship was significant, probably owing to the large number of respondents.
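Had we suspected a curvilinear relationship, Seaborn makes it easy to eyeball one: passing order=2 to regplot overlays a quadratic fit instead of a straight line (a visual check only, not a formal test). A sketch:
#overlay a quadratic fit to check visually for curvature
sns.regplot(x=data["conscientiousness"], y=data["rwasTot"], order=2,
            scatter_kws={'alpha': 0.1})
plt.show()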
Linear Regression
I'm really interested in whether any of the five-factor personality dimensions significantly predicts the RWA total score. We can run a multiple linear regression with all five personality dimensions as predictors and assess the significance of the overall model.
#Basic multiple linear regression

X = datasel.drop(columns=['rwasTot'])  #predictors only: drop the dependent variable
y = data['rwasTot']
X = sm.add_constant(X)
model = sm.OLS(y, X).fit()
predictions = model.predict(X)
model.summary()
As you can see, this library generates a detailed regression analysis. We created a data frame of our predictors (X), dropping rwasTot from datasel since that selection still contains our dependent variable, and referenced the RWA total score as y. We also asked the model to add a constant (i.e., a y-intercept). One cautionary note: if you forget to drop rwasTot and leave it in X, the model reports an R-squared of 1 (100% of the variance explained!) with essentially zero coefficients on everything else (e.g., -1.49e-15 for conscientiousness). That is not the five-factor dimensions capturing right-wing authoritarianism perfectly; it is the model predicting the dependent variable from itself. With the dependent variable excluded, the coefficient table shows which personality dimensions significantly predict the RWA total score, and in which direction.
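Because the five personality dimensions correlate with one another, it is also worth checking for multicollinearity among the predictors. Here is a minimal sketch using statsmodels' variance_inflation_factor (it assumes X is still the constant-augmented predictor frame from above):
#variance inflation factor per predictor; values above ~10 are usually a red flag
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    if name != 'const':
        print(name, variance_inflation_factor(X.values, i))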
All of our predictors so far have been continuous variables; what about including some categorical ones? We certainly can, but we need to make sure they are dummy coded before entering them into the model. In addition to our personality variables, let's add education, gender, religion, voting status, and marital status to our set of predictors. We will then dummy code these variables, creating additional variables coded '0' or '1' to indicate the absence ('0') or presence ('1') of a particular level of each variable.
#Create new predictor dataset with some categorical vars
X = data[['extraversion', 'agreeableness', 'conscientiousness', 'emotionalstability',
          'opennesstoexperience', 'education', 'gender', 'religion', 'voted', 'married']]

#Dummy code the categorical variables, dropping the first level of each
X = pd.get_dummies(X, columns=['education', 'gender', 'religion', 'voted', 'married'],
                   drop_first=True)
get_dummies has created dummy variables for each of our categorical variables. For example, dummies were created for every level of education: 1, 2, 3, and 4. If an individual scores '2' on education, a '1' appears under the 'education_2.0' dummy and zeroes under the other education dummies. Note that, as the code suggests, one dummy per variable is dropped; here it is the first one created (i.e., 'education_1.0'). This is necessary for the stability of the model, because keeping every dummy makes them perfectly collinear with the constant; it is also why you should create k-1 dummies for a variable with k levels (a variable with two levels needs only one dummy).
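To see exactly what get_dummies does, here is a toy illustration with made-up values (depending on your pandas version, the dummies may print as 0/1 or as True/False):
#toy example: one four-level variable, first dummy dropped
toy = pd.DataFrame({'education': [1, 2, 3, 4, 2]})
pd.get_dummies(toy, columns=['education'], drop_first=True)
#row 0 (education = 1) gets zeroes across education_2, education_3, and education_4
Now let's re-run our regression including our dummy-coded variables: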
#Basic multiple linear regression with categorical variables

y = data['rwasTot']
X = sm.add_constant(X)
model = sm.OLS(y, X).fit()
predictions = model.predict(X)
model.summary()
As expected, we have many more predictors thanks to the dummy coding. Our R-squared statistic shows that these predictors account for roughly 6% of the variance of the RWA total score. Some of the new variables do significantly predict our dependent variable, specifically the education groups, religion groups 3-10 and 12, and the married groups. For example, married respondents ('married_2.0') have significantly lower RWA total scores than unmarried respondents. These exercises show how sensitive linear regression models are to which predictors we include, and they highlight the importance of choosing a set of variables that yields the most predictive model while still generalizing to new data.
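As a closing aside on that last point (this goes beyond the original analysis), one simple way to gauge generalizability is to refit the model on part of the sample and see how well it predicts the rest. A sketch using scikit-learn's train_test_split; the 80/20 split and the random_state are arbitrary choices:
#hold out 20% of respondents and score the model on unseen data
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
holdout = sm.OLS(y_train, X_train).fit()
preds = holdout.predict(X_test)
print(np.corrcoef(preds, y_test)[0, 1] ** 2)  #squared correlation on the holdout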
Overall, this was a brief review of the many statistical analyses that Python libraries can easily run. This tutorial is by no means all-inclusive; it is meant to highlight some of the more common statistical analyses (and their associated code) that we psychologists use. Play around with the different Python libraries to see which you prefer for your analyses, and remember that you can easily search for code online; Stack Overflow is a great resource and discussion board if you have coding questions.
To view and/or download my Python Jupyter notebook, visit my Github page.
