First, from a data mining perspective, the French school of data analysis (analyse des données) has always regarded R analysis and Q analysis as methodologically equivalent because of their mathematical properties (see duality diagrams and the eigen-decomposition of Hermitian operators): the objectivity of R analysis is thereby linked with the subjectivity of Q analysis.
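To make that duality concrete, here is a minimal sketch in Python/NumPy (my own illustration, not part of the original discussion): for a centred data matrix X, the R-mode operator X'X (variables × variables) and the Q-mode operator XX' (subjects × subjects) share the same non-zero eigenvalues, so the two modes recover the same latent structure, only in different spaces.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))   # 6 subjects (rows) x 4 variables (columns)
X = X - X.mean(axis=0)            # centre the columns, as in PCA

# R-mode: eigen-decomposition of the variables x variables matrix X'X
evals_R = np.linalg.eigvalsh(X.T @ X)

# Q-mode: eigen-decomposition of the subjects x subjects matrix XX'
evals_Q = np.linalg.eigvalsh(X @ X.T)

# Duality: the non-zero eigenvalues coincide; only the eigenvectors
# live in different spaces (variable space vs subject space).
print(np.sort(evals_R)[::-1][:4])
print(np.sort(evals_Q)[::-1][:4])
```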

Nonetheless, the interpretation that comes from highlighting the double associations is qualitative (see also the biplot for a quantitative appreciation), and one may place the focus on one side or the other.

It is the ‘survey angle’ that may entail the Q or the R approach. Therefore, in the era of 4V data sources (Volume, Velocity, Variety, Veracity, i.e., Big Data), what does this availability change in terms of methods, or in the practice of these methods?

Keeping this within the context of PCA or MCA (which obviously narrows the range of Q methods), there are consequences of being able to increase the dimensions of the matrix analysed: more subjects, more variables, or both, as well as the possibility of including time and space.
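As one illustration of coping with a growing subject dimension in this PCA setting (my own sketch, using scikit-learn's IncrementalPCA, which is not something proposed above), new subjects can be absorbed in batches so that the R-mode decomposition is updated as the matrix grows:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)

# Hypothetical stream of survey batches: each batch brings new subjects
# (rows) while the variables (columns) stay fixed, i.e. Volume grows.
ipca = IncrementalPCA(n_components=3)
for _ in range(10):
    batch = rng.standard_normal((100, 8))  # 100 new subjects, 8 variables
    ipca.partial_fit(batch)               # update the decomposition

print(ipca.explained_variance_ratio_)
```

Note that the Q-mode side has no such easy escape: the subjects × subjects matrix itself grows with the Volume, which may be one reason the R approach feels more natural on big data.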

Do we tend to favour an R analysis when increasing the Volume? What kind of subjectivity analysis (Q analysis) can one control by increasing the Variety? Velocity and Veracity may simply be seen as adding complexity, but the question of whether a ‘snapshot analysis’ has persistence needs to be considered, just as changing opinions and veracity may be related. Are we faithful to ourselves when using social media to communicate, or do we use a ‘range’ of ourselves? Does this depend on space and time?

This can be extended to other methods in Q methodology, I suppose. Do these 4Vs reveal our habitus or, on the contrary, make it unreachable, noisy, blurring what could have been analysed within a controlled survey (i.e., when using big data as secondary data)? Is the solution to mine the survey (its structure) at the same time as mining the data (the answers to that survey)?
