Congratulations to Lorien and David for presenting our poster, “Personality change beyond the big five: Personality aspects, vocational interests and cognitive ability” at the fifteenth annual meeting of the Society for Personality and Social Psychology!

Poster is available upon request:

Elleman, L., Condon, D. M., French, J. A., & Revelle, W. (2014). Personality changes beyond the big five: Personality aspects, vocational interests and cognitive ability. Poster presented at the fifteenth annual meeting of the Society for Personality and Social Psychology, Austin, TX.

Item Response Theory (IRT) can be used to evaluate the effectiveness of exams given to students. One feature that distinguishes it from other approaches is that it does not assume every question is equally difficult (or that an item’s difficulty is whatever the researcher intended it to be). In this way, it is an empirical investigation into the effectiveness of a given exam, and it can help the researcher 1) eliminate bad or problematic items and 2) judge whether the test was too difficult or the students simply didn’t study.

In the following tutorial, we’ll use R (R Core Team, 2013) along with the psych package (Revelle, 2013) to look at a hypothetical exam.
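Since the exam here is hypothetical, the sketch below simulates one rather than using real data: the sample size, item names, and difficulty values are all invented, and a latent ability score generates the 0/1 responses.

```r
library(psych)

# Invented stand-in for the hypothetical exam: 200 students answering
# 5 items scored 1 (correct) or 0 (incorrect), from easy to hard
set.seed(1)
theta <- rnorm(200)                # latent ability
difficulty <- c(-2, -1, 0, 1, 2)   # easy items have negative difficulty
exam <- as.data.frame(sapply(difficulty,
                             function(d) rbinom(200, 1, plogis(theta - d))))
names(exam) <- paste0("q_", 1:5)

# Fit a one-factor IRT model via factor analysis of tetrachoric correlations
fit <- irt.fa(exam, plot = FALSE)
fit$irt$difficulty   # estimated item difficulties
```

With real exam data, you would skip the simulation and pass your own data frame of scored items to irt.fa().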

The ‘subset()’ function in base R is, in my experience, the cleanest method for trimming large data frames and matrices. While there are many options for accomplishing this chore, I find ‘subset’ to be the most readable and intuitive, qualities that are imperative for minimizing errors and for writing code that others can follow (especially when “others” may include those who are not useRs). If you’re not using it already, it’s worth making the switch (see ?subset).

That said, making it work with vectors is annoying! There are logical reasons for this of course, but… it just won’t do what you want if you use ‘subset()’ as you would with a data frame or matrix.

Here’s the example:

Use the ‘letters’ character vector

letters
# Get the second half
subset(letters, select = c(n:z))

Here’s what you get:

[1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q" "r" "s" "t" "u" "v" "w" "x" "y" "z"
Error in subset.default(letters, select = c(n:z)) :
  argument "subset" is missing, with no default

After some consternation, I stumbled across this gem submitted by Marc Schwartz.

Function to subset vectors in R

subset.vector <- function(x, select) {
  nl <- as.list(1L:length(x))
  names(nl) <- x
  vars <- eval(substitute(select), nl)
  x <- x[vars]
  print(x)
}

There are other options, but this conveniently uses the same approach as the base ‘subset()’.
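To see it in action (the function is repeated here so the snippet stands on its own):

```r
# Marc Schwartz's vector analogue of subset()
subset.vector <- function(x, select) {
  nl <- as.list(1L:length(x))   # positions 1..length(x)
  names(nl) <- x                # named by the vector's own values
  vars <- eval(substitute(select), nl)  # resolve n:z to positions 14:26
  x <- x[vars]
  print(x)
}

# The data-frame-style call now works on a plain vector:
half <- subset.vector(letters, select = c(n:z))  # "n" through "z"
```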

When you’re writing up reports using statistics from R, it can be tiresome
to constantly copy and paste results from the R Console. To get around this, many of us use Sweave, which allows us to embed R code in LaTeX files.
Sweave is an R function that executes the R code embedded in a LaTeX document and weaves the results into the typeset output. This enables accurate, shareable analyses as well as high-resolution graphs that are publication quality.

Needless to say, the marriage of statistics with documents makes writing up APA-style reports a bit easier, especially with Brian Beitzel’s amazing apa6 class for LaTeX.

By using LaTeX to author APA manuscripts, researchers can address many problems associated with formatting their results
into tables and figures. For example, ANOVA tables can be readily generated using the
xtable package in R, and graphs from
ggplot2 can be rendered within the manuscript using
Sweave (see Wikipedia). However, more complicated layouts can be difficult to
achieve.

In order to make test items or stimuli easier to understand, researchers occasionally organize examples in a table or
figure. Using the standard table environment in LaTeX, it’s possible to include figures in an individual table cell
without breaking the apa6 class.
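As a rough sketch of the idea (the filenames and cell contents below are placeholders, not from an actual manuscript), an \includegraphics call can sit inside an ordinary tabular cell:

```latex
% Figures inside table cells, using the standard table environment
% so that apa6 table numbering and captions still apply
\begin{table}[htbp]
  \caption{Example stimuli by condition (placeholder images).}
  \begin{tabular}{lcc}
    \hline
    Item & Stimulus & Correct response \\
    \hline
    1 & \includegraphics[width=2cm]{stimulus1} & A \\
    2 & \includegraphics[width=2cm]{stimulus2} & B \\
    \hline
  \end{tabular}
\end{table}
```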

So, you’ve figured out how to get your data in R and you want to see some basic descriptive statistics. No sweat.

Are you using the psych package? If not, download it and install it (as described in my post on importing data). Once that’s done, the describe function (in the psych package) should give you all you need, and possibly more:

Descriptive Statistics with the psych package in R

describe(mydata)

Obviously, you will substitute “mydata” in the command above for the name you have given your own data frame. If you’re unsure of what you called it, try typing ls() in R.

The output will look something like this:

Output from describe(mydata)

       var   n  mean    sd median trimmed  mad min max range  skew kurtosis   se
gender   1 200  1.62  0.49      2    1.66 0.00   1   2     1 -0.51    -1.75 0.03
age      2 200 25.07 10.38     20   23.29 4.45  14  59    45  1.37     1.00 0.73
height   3 200 66.99  4.21     67   66.96 4.45  57  77    20  0.10    -0.71 0.30
q_6      4  21  5.19  1.03      6    5.35 0.00   3   6     3 -0.88    -0.60 0.22
q_22     5  27  3.30  1.38      4    3.30 1.48   1   6     5 -0.01    -1.27 0.27

Sometimes, a task needs gettin’ done and I’d rather just do it than invest the time to figure out how to do it in R. This has been the case for me with the “vlookup” function for the last couple of years. Well, today is the day that we’re gonna transfer responsibility of this chore to R.

For those of you who don’t know, vlookup is one of the many powerful built-in functions in Excel. It allows one to search through a data structure for rows that match some specified characteristic. And, it is then possible to pull information (from, say, some other column(s) in that data structure) for the matching rows. I realize that doesn’t sound so magical, but if you’ve never used it before… trust me, it will change your life (if only a smidge).

Let’s just go straight to an example. Start by creating this data frame in R:

students <- data.frame(people = c("Lily", "Bo", "Jen", "Omar", "Sara", "Jack", "Ting"),
                       team   = c("Red", "Blue", "Green", "Red", "Blue", "Green", "Red"),
                       number = c(1, 2, 3, 5, 2, 7, 1))

If you call the “students” data frame, it will be a 7x3 object showing the numbers and teams for seven people. Now we have another data frame with scores for nine teams.
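The snippet that builds the scores table isn’t preserved in this copy; a plausible stand-in (the six extra team names and all score values are invented) looks like this:

```r
# Nine color-named teams, each with two scores (values invented)
scores <- data.frame(team   = c("Red", "Blue", "Green", "Yellow", "Purple",
                                "Orange", "Pink", "Black", "White"),
                     score1 = c(10, 8, 9, 4, 7, 6, 5, 3, 2),
                     score2 = c(6, 7, 3, 8, 9, 4, 1, 2, 5))
dim(scores)  # 9 3
```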

If you call “scores”, it’ll be a 9x3 object showing two scores for each team (named after a color). Rather predictably, we now want to get the scores for students in the first data frame. Start by making columns with missing values for inserting the scores.

Here’s where the work is done. Match the two data frames by their common values and declare which values you want to take out of the scores data frame to put in the students data frame. We do this for both scores below:
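The matching code itself didn’t survive in this copy; a self-contained sketch of the ‘match()’ approach described above (the scores values are invented) would be:

```r
students <- data.frame(people = c("Lily", "Bo", "Jen", "Omar", "Sara", "Jack", "Ting"),
                       team   = c("Red", "Blue", "Green", "Red", "Blue", "Green", "Red"),
                       number = c(1, 2, 3, 5, 2, 7, 1))
scores <- data.frame(team   = c("Red", "Blue", "Green", "Yellow", "Purple",
                                "Orange", "Pink", "Black", "White"),
                     score1 = c(10, 8, 9, 4, 7, 6, 5, 3, 2),
                     score2 = c(6, 7, 3, 8, 9, 4, 1, 2, 5))

# Empty columns to receive the looked-up values
students$score1 <- NA
students$score2 <- NA

# match() returns, for each student's team, the row where that team
# appears in 'scores'; indexing the score columns by that vector
# is the "vlookup"
students$score1 <- scores$score1[match(students$team, scores$team)]
students$score2 <- scores$score2[match(students$team, scores$team)]
students
```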

Call the students data frame to check that everything worked.

You do have to be careful about the ordering of variables in the ‘match()’ function. If you get it backwards, it’ll break because the vector of data to be inserted does not fit the dimensions of the target data frame.

I admit, this doesn’t cover all the bells and whistles of vlookup but it’s good enough for most uses. And you didn’t even have to open Excel.

In this tutorial, I’ll cover how to analyze repeated-measures designs using 1) multilevel modeling with the nlme package and 2) Wilcox’s Robust Statistics package (see Wilcox, 2012). In a repeated-measures design, each participant provides data at multiple time points. Because of this, the assumptions about model error differ for variance between subjects (i.e., SS_{B}) and variance within subjects (i.e., SS_{W}). After the between-subject variability is partialled out, we separately model the effect of the experiment (i.e., SS_{E}) and the error not accounted for by the experiment (i.e., SS_{R}).

When using this tutorial, there are a few things to keep in mind:

This is a draft. I’ll be updating this page with more graphs and explanations as time allows, informed by your feedback.

Multilevel models and robust ANOVAs are just two of the ways that repeated-measures designs can be analyzed. I’ll be presenting the multilevel approach using the nlme package because assumptions about sphericity are different and are less of a concern under this approach (see Field et al., 2012, p. 576).

One way repeated measures

The first dataset we’ll be using can be obtained from the Personality Project:

Reading the dataset

data <- read.table(file = "http://personality-project.org/r/datasets/R.appendix3.data", header = TRUE)

The main research question is: does the valence of a word affect the rate at which items are recalled? First, let’s take a look at descriptive statistics for the dataset. We can break them down by item valence using the describeBy() function in the psych package, which is available on CRAN.

Generating descriptive statistics by a group

# Load the psych package
library(psych)
# Describe the Recall variable by the Valence variable
describeBy(data$Recall, group = data$Valence)

Generating descriptive statistics by a group

## group: Neg
##   var n mean  sd median trimmed  mad min max range  skew kurtosis   se
## 1   1 5 27.8 3.9     29    27.8 4.45  22  32    10 -0.39    -1.72 1.74
## --------------------------------------------------------
## group: Neu
##   var n mean  sd median trimmed  mad min max range  skew kurtosis   se
## 1   1 5 11.6 2.7     12    11.6 2.97   8  15     7 -0.09    -1.83 1.21
## --------------------------------------------------------
## group: Pos
##   var n mean   sd median trimmed  mad min max range skew kurtosis  se
## 1   1 5   40 3.81     40      40 2.97  35  45    10    0    -1.78 1.7

Graphing the Effect

We can generate a quick boxplot to display the effect of Valence on Recall using the ggplot2 package from CRAN.
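A sketch of that plot follows; it uses a small invented stand-in for the dataset so the snippet runs on its own (with the real data, reuse the ‘data’ object read above instead):

```r
library(ggplot2)

# Invented stand-in for the recall data read earlier
data <- data.frame(Valence = rep(c("Neg", "Neu", "Pos"), each = 5),
                   Recall  = c(22, 26, 29, 30, 32,
                                8, 10, 12, 13, 15,
                               35, 38, 40, 42, 45))

# Boxplot of Recall by Valence condition
p <- ggplot(data, aes(x = Valence, y = Recall)) +
  geom_boxplot() +
  labs(x = "Item valence", y = "Words recalled")
p
```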

A multilevel model is simply a regression that allows the errors to be dependent on each other (as our conditions of Valence were repeated within each participant). To run this type of analysis, we’ll use the nlme package from CRAN, although I’ve also had good luck with the lme4 package if you like experimenting.

Installing nlme

install.packages('nlme')

Similar to any approach to model testing, we want to see if our predictive, augmented model is better than a simple, one-parameter mean model. Thus, we begin by specifying a baseline model in which the DV, Recall, is predicted by its overall mean. Second, we specify our model of interest, in which Recall is instead predicted by the item Valence, which was repeated within subjects.

Specifying our 2 Models

# Load the nlme package
library(nlme)
# Compact Model
baseline <- lme(Recall ~ 1, random = ~1 | Subject/Valence, data = data, method = "ML")
# Augmented Model
valenceModel <- lme(Recall ~ Valence, random = ~1 | Subject/Valence, data = data, method = "ML")

One way of assessing the significance of our model is by comparing it to the baseline model. By comparing the models, we ask whether Valence as a predictor is significantly better than the simple mean model (i.e., a better fit). We can do this with the anova() function.

Comparing the Models

anova(baseline, valenceModel)

Comparing the Models

##              Model df    AIC    BIC logLik   Test L.Ratio p-value
## baseline         1  4 125.24 128.07 -58.62
## valenceModel     2  6  84.36  88.61 -36.18 1 vs 2   44.87  <.0001

The output contains a few indicators of model fit. Generally with AIC (i.e., Akaike information criterion) and BIC (i.e., Bayesian information criterion), the lower the number the better the model, as it implies either a more parsimonious model, a better fit, or both. The likelihood ratio indicates that our valenceModel is a significantly better fit for the data than our baseline model (p < 0.0001). Therefore, the item Valence had a significant impact on the measured Recall of the participant, Χ^{2}(2) = 44.87, p < 0.0001.

We can obtain more specific details about the model using the summary() function, and pairwise Tukey contrasts between the Valence conditions using glht() from the multcomp package:

##
##   Simultaneous Tests for General Linear Hypotheses
##
## Multiple Comparisons of Means: Tukey Contrasts
##
##
## Fit: lme.formula(fixed = Recall ~ Valence, data = data, random = ~1 |
##     Subject/Valence, method = "ML")
##
## Linear Hypotheses:
##                Estimate Std. Error z value Pr(>|z|)
## Neu - Neg == 0   -16.20       1.31  -12.36   <2e-16 ***
## Pos - Neg == 0    12.20       1.31    9.31   <2e-16 ***
## Pos - Neu == 0    28.40       1.31   21.67   <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

Thus, our post hoc analysis shows that participants’ rate of recall was significantly better for positively valenced items (M = 40) than for neutral (M = 11.6, b = 28.40, p < .0001) and negatively valenced items (M = 27.8, b = 12.20, p < .0001). Similarly, negatively valenced items were recalled at a significantly higher rate than neutral items (b = -16.20, p < .0001).

Wilcox’s Robust ANOVA

As of 5/1/13, the WRS package must be compiled from source to be installed. You can obtain the source package from the R-Forge repo below:

Unlike using lme() to analyze the data as a multilevel model, rmanova() requires that the data are in wide format. To adjust our table, we’ll use the reshape2 package from CRAN and cast the data into a wide format.
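A sketch of that cast (again using a small invented stand-in for the long-format data, which has Subject, Valence, and Recall columns as read above):

```r
library(reshape2)

# Invented stand-in for the long-format recall data
data <- data.frame(Subject = rep(paste0("S", 1:5), times = 3),
                   Valence = rep(c("Neg", "Neu", "Pos"), each = 5),
                   Recall  = c(22, 26, 29, 30, 32,
                                8, 10, 12, 13, 15,
                               35, 38, 40, 42, 45))

# One row per Subject, one column per Valence condition
dataWide <- dcast(data, Subject ~ Valence, value.var = "Recall")
dataWide
```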

For some reason, the rmanova() function doesn’t like dealing with factor variables, so we’ll remove the Subject column. Finally, we’ll use rmanova(), which trims the data by 20% before estimating the effect.

Similar to our findings from above, Valence had a significant influence on the item recall rate of the participant, F(1.26, 2.52) = 154.66, p < .01. However, we still want to conduct post-hoc analysis on the 20% trimmed means, which we’ll do using the rmmcp() function.

Post-hoc analysis confirms that Negatively valenced items are significantly different from both Neutral (Ψ̂ = 16, p < .01) and Positive items (Ψ̂ = -13, p < .05). Additionally, Neutral items are significantly different from positive items (Ψ̂ = -28.33, p < .01).

When presenting data, confidence intervals and error bars let the audience know the amount of uncertainty in the data, and
see how much of the variance is explained by the reported effect of an experiment. While this is straightforward for
between-subject variables, it’s less clear for mixed- and repeated-measures designs.

Consider the following. When running an ANOVA, the test accounts for three sources of variance: 1) the fixed effect of the condition, 2) the
ability of the participants, and 3) the random error, as data = model + error. Plotting the repeated measures without taking these
different sources of variance into consideration would result in overlapping error bars that include between-subject variability, confusing the presentation’s audience.
While the ANOVA partials out the differences between the participants and allows you to assess the effect of the
repeated measure, a regular confidence interval computed from the raw standard error does not take this partialling into account.

Winston Chang has developed a set of R functions, based on Morey (2008) and Cousineau (2005), on his wiki that help deal with this problem: the sample variance is
computed for the normalized data (each subject’s scores centered on that subject’s mean, with the grand mean added back), and the variance in each condition is then multiplied by M/(M-1), where M is the
number of within-subject conditions.
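In base R, the normalization-plus-correction step looks roughly like this (toy invented data; see Chang’s wiki for the full, general-purpose functions):

```r
# Toy within-subject data: 5 subjects x 3 conditions (values invented)
d <- data.frame(subject   = rep(1:5, each = 3),
                condition = rep(c("A", "B", "C"), times = 5),
                y         = c(10, 12, 15, 11, 14, 16, 9, 11, 13,
                              12, 15, 18, 10, 13, 14))

# Normalize: remove each subject's own mean, add back the grand mean
grand <- mean(d$y)
d$y_norm <- d$y - ave(d$y, d$subject) + grand

# Variance of the normalized data in each condition, corrected by
# M/(M-1), where M is the number of within-subject conditions (Morey, 2008)
M <- length(unique(d$condition))
v_corrected <- tapply(d$y_norm, d$condition, var) * M / (M - 1)
se <- sqrt(v_corrected) / sqrt(length(unique(d$subject)))
se   # per-condition standard errors for the error bars
```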

Since I’m just getting started using a new blogging framework here at SAPA, I figured I’d tackle a topic that is relevant for new R users. So we’ll both be doing something very basic.

The topic is getting your data out of Excel and into R. It turns out that loading the data is one of the most frustrating experiences for new R users — incomprehensible error messages are not uncommon. This results in a rocky start to what might otherwise be a beautiful relationship (between you and R, that is).

So, let’s get past this basic roadblock. Seven simple steps.

Do you have R loaded on your machine? If yes, great – open it. If not, bummer. You’ve got to go off and get it. If you need help, try William Revelle’s R pages on the Personality-Project. You might also consider installing RStudio, which is an increasingly popular option (especially among Windows users). Personally, I’d recommend giving the basic “R Console” a try once or twice if you have a Mac, but plenty of RStudio devotees would disagree with me. Whatever decision you make, open it after downloading.

Do you have the “psych” package installed and loaded? This does not happen automatically when you download R. If you’re using RStudio, this is most easily done in the lower right window under the packages tab (click the “Install Packages” button and type “psych” in the search bar). If you’re using the R Console, use the dropdown menus: “Packages & Data ==> Package Installer”. Then type “psych” in the search box and hit “Get List”. Select the psych package and hit “Install Selected”. Then, in the main console window (the one with the “>” prompt and the blinking cursor), type:

library(psych)

Open the Excel file with your data.

Column headers. If you haven’t already, give your variables brief but meaningful names in the Excel file (by “variables”, I’m talking about the column names). Do not use numbers as the leading values/characters (e.g., “12blu”) for the column names as this will cause issues in R. In fact, it would be best if you avoided the use of any special characters as some of these will cause issues as well. Also, don’t use column names that spread over more than one row in Excel.

Select all of the data and the column headers in your Excel spreadsheet and “copy” it.