In my current project, I want to answer whether various cognition items (ratio-scaled, 30+ of them, possibly reduced later based on a separate factor analysis) predict moral outrage; in other words, do increases in item 1 (then item 2, item 3, etc.) significantly predict increases in outrage? Normally, this would be a simple regression. But then I complicated my design, and I'm having a hard time wrapping my head around my potential analyses and whether they will actually answer my stated question, or whether I'm over-thinking things.

Currently, I'm considering a set-up where participants see a random selection of 3 vignettes (out of 5 options) and answer the cognition items and the moral outrage measure for each. This complicates matters because 1) there is now a repeated-measures component that may (or may not?) need to be accounted for, and 2) I'm not sure how my analyses would work when the vignette selection is random (all vignettes will be shown the same number of times overall, but in different combinations to different people). I anticipate that the vignettes will not be equal in their level of the DV; that is on purpose, since I want to see whether these patterns are general and not just present at very high or very low levels of outrage.
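To make the structure concrete, here is a toy sketch of how I picture the data in long format, one row per subject-vignette pair. The column names and values (item1, item2, outrage) are made up and just stand in for the real measures:

```python
import pandas as pd

# Toy long-format layout: one row per subject-vignette pair, so a participant
# who saw 3 of the 5 vignettes contributes 3 rows. Column names and values are
# made up; the real data would have 30+ cognition items rather than two.
example = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2],
    "vignette": ["A", "C", "D", "B", "C", "E"],  # which vignette the row is about
    "item1":    [4, 2, 5, 3, 1, 4],              # cognition item ratings for that vignette
    "item2":    [3, 3, 4, 2, 2, 5],
    "outrage":  [5.0, 2.5, 6.0, 3.5, 1.5, 4.5],  # moral outrage rating for that vignette
})
print(example)
```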

When originally designing this, I had planned to average the 3 vignette scores together for each subject, treating them as single, averaged item values to use in a multiple regression. But I've been advised by a couple of people that this isn't an option, because the variance between the vignettes needs to be accounted for (the vignettes can't be shown to be equivalent, so they can't be collapsed in the analysis).
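For reference, this is what that averaging plan amounts to mechanically, reusing the toy `example` frame sketched above (too few rows there to actually fit anything, so the regression step is only indicated):

```python
# Collapse each participant's vignette rows to one row of means, then run one
# ordinary multiple regression on the averaged scores. This is the step that was
# objected to, since it throws away the between-vignette variance.
averaged = example.groupby("subject")[["item1", "item2", "outrage"]].mean().reset_index()
# With the real sample this would then be something like:
#   statsmodels.formula.api.ols("outrage ~ item1 + item2 + ...", data=averaged).fit()
```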

One potential analysis to address this is a nested, vignette-within-individual multilevel design, in which I would test whether the pattern linking the cognition items to outrage is consistent across vignettes (level 1) and across subjects (level 2), in order to account for and examine any vignette-by-cognition/MO interactions. This makes sense to me, as MLMs can be used to compare patterns rather than single scores.
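To show my current thinking concretely, here is a minimal multilevel sketch in Python/statsmodels with simulated data. Everything in it is an assumption for illustration only: the column names, the effect sizes, and the choice to treat vignette as a fixed effect (rather than a second random factor) are mine, not settled parts of the design:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the real data: 100 participants, each rating a random
# 3 of the 5 vignettes (structure only; the effect sizes are arbitrary).
rng = np.random.default_rng(0)
rows = []
for subj in range(100):
    subj_bias = rng.normal(scale=0.5)  # person-level tendency toward outrage
    for vig in rng.choice(list("ABCDE"), size=3, replace=False):
        item1, item2 = rng.normal(size=2)
        outrage = subj_bias + 1.0 * item1 + 0.3 * item2 + rng.normal()
        rows.append({"subject": subj, "vignette": vig,
                     "item1": item1, "item2": item2, "outrage": outrage})
df = pd.DataFrame(rows)

# Random intercepts for participants handle the repeated-measures part;
# C(vignette) absorbs baseline differences in outrage between scenarios,
# so the incomplete (random 3-of-5) assignment is not a problem for estimation.
model = smf.mixedlm("outrage ~ item1 + item2 + C(vignette)",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
# If I understand it right, the fixed-effect rows for item1, item2, ... in this
# summary are the "does this item predict outrage across vignettes" answer.
```

That is my reading, at least; whether vignette should instead be a second random factor, or whether the item slopes should be allowed to vary by vignette, is part of what I'm unsure about.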

But I can't wrap my head around which part of this set-up, or of the output, I would look at to actually answer my question: in general, which (if any) of these cognition items predict outrage, regardless of vignette or across many scenarios? And can this approach work when the vignette combinations differ between subjects?

Or is this the wrong analysis approach, and would another, simpler one be more fitting? For example, is the averaging approach workable in some other form? What if all vignettes were completed by all subjects (more arduous for participants, but possible if the partial design would compromise or overly complicate the analysis and results)?

Confirmation that my current analysis approach will indeed work, help with which part of the output would answer my actual RQ, or suggestions for an alternative approach would all be appreciated.
