Analytic strategies come before software tactics: that’s the Five-Level QDA approach. But there are times when software tactics can usefully inform analytic strategies. This leads to serendipitous exploration, and fits well with the emergent spirit of qualitative research.
When analytic strategies drive software tactics, the use of the software is meaningful – focused on the needs of the research rather than the capabilities of the program. But many CAQDAS advocates say that new software features (tactics) do offer new analytic possibilities (strategies), and so the relationship goes both ways. I agree.
While I usually discuss the downsides of software tactics driving analytic strategies, here is an example of tactics appropriately informing strategies, drawn from the workshop I led at the MAXQDA International Conference (MQIC) in Berlin earlier this year.
The situation: Using secondary (and therefore less familiar) data
The type and amount of data included in an analysis – and crucially the level of familiarity we have with it before the formal stages of analysis begin – are incredibly important. If you generate primary data yourself (whether via interviews, focus groups, observations, or whatever) you have a good level of familiarity from collecting it. If you then transcribe the data yourself, your familiarity increases further. But when working with secondary data (that which is ‘naturally occurring’, such as online content, or which was collected by others for different purposes) this familiarity is not so high.
The issue is compounded when working with large volumes of secondary qualitative data, because it may not be feasible to generate an overview of everything before formal analysis commences. It is in this sort of situation that serendipitous use of tactics can usefully inform strategies.
The example: serendipitous use of software features
Let’s imagine I wanted to explore political discourses in social media spaces. As an illustration in my workshop at the MQIC, I used MAXQDA’s Twitter import feature to download and import all tweets written in English between 25/01/2018 and 01/02/2018 that used the #trump hashtag (10,000 tweets, no retweets). As part of the import, all hashtags used in 20 or more tweets (n=99) and the 100 most prolific authors were auto-coded. The imported data and auto-coding are shown in the screenshot below.
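For readers who like to see the mechanics, here is a minimal Python sketch of the logic behind these two auto-coding rules. It is purely illustrative – MAXQDA performs the import and auto-coding through its interface, and the `tweets` list, its `author` and `text` fields, and the sample contents are all hypothetical stand-ins for the imported data.

```python
from collections import Counter
import re

# Hypothetical stand-in for the imported data: each tweet is a dict with
# the author's handle and the tweet text. (MAXQDA does the real import and
# auto-coding through its interface; this only illustrates the logic.)
tweets = [
    {"author": "@user_a", "text": "Watching the news tonight #Trump #MAGA"},
    {"author": "@user_b", "text": "Another day, another headline #trump"},
    # ... imagine ~10,000 tweets here
]

HASHTAG = re.compile(r"#\w+")

# Rule 1: auto-code every hashtag that appears in 20 or more tweets.
hashtag_counts = Counter()
for tweet in tweets:
    # Count each hashtag at most once per tweet, to mirror "used in N tweets".
    hashtag_counts.update({h.lower() for h in HASHTAG.findall(tweet["text"])})
frequent_hashtags = {tag for tag, n in hashtag_counts.items() if n >= 20}

# Rule 2: auto-code the 100 most prolific authors.
author_counts = Counter(tweet["author"] for tweet in tweets)
top_authors = [author for author, _ in author_counts.most_common(100)]

print(len(frequent_hashtags), "hashtags used in 20+ tweets")
print("Most prolific authors:", top_authors[:5])
```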
A software tactic (data exploration) informs analytic strategies
The purpose of this sort of analysis may be only loosely defined at the outset. For example, it may not be possible to develop coherent research questions while the content of the data is unknown – without an overview of the Twitter content I wouldn’t know whether those questions were answerable with the available data. I could state that I wanted to explore the nature of online interaction about Trump, but it would be difficult – and perhaps unproductive – to try to be more precise early on. I would need to become familiar with the content of the tweets in order to develop meaningful and answerable research questions.
Becoming familiar with this much data would be incredibly time-consuming without the help of software tools. But tools such as those available in MAXQDA (and several other CAQDAS packages too) greatly facilitate the process. For example, to initially explore the content of the tweets, we could:
use the Word Frequency feature to identify the most and least frequent words used across all the tweets
use the Word Combinations feature to identify common phrases used across all the tweets
These two features would quickly provide an overview of content based on the words and phrases used – both the most and least frequent terms. Having identified words and phrases that initially seem interesting, we could then use the Keyword-in-Context (KWIC) feature to visualize them within the context of whole tweets. See below for screenshots of the results produced by these features.
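MAXQDA provides all three of these explorations as built-in features, so no programming is required. Still, a minimal Python sketch may help make the underlying logic concrete – assuming, hypothetically, that the tweet texts are available as a simple list of strings called `texts`:

```python
from collections import Counter
import re

# Hypothetical corpus: the texts of the imported tweets as plain strings.
texts = [
    "The wall debate continues in Washington",
    "Shutdown talks stall again in Washington",
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

# Word Frequency: the most and least frequent words across all tweets.
word_counts = Counter(w for t in texts for w in tokenize(t))
print(word_counts.most_common(10))      # most frequent words
print(word_counts.most_common()[-10:])  # least frequent words

# Word Combinations: common two-word phrases (bigrams).
bigram_counts = Counter()
for t in texts:
    tokens = tokenize(t)
    bigram_counts.update(zip(tokens, tokens[1:]))
print(bigram_counts.most_common(10))

# Keyword-in-Context (KWIC): show each hit of a keyword inside its tweet.
def kwic(keyword, texts, window=30):
    for t in texts:
        for m in re.finditer(re.escape(keyword), t, re.IGNORECASE):
            left = t[max(0, m.start() - window):m.start()]
            right = t[m.end():m.end() + window]
            print(f"...{left}[{m.group(0)}]{right}...")

kwic("washington", texts)
```

In practice, word frequency tools also let you exclude stop words (very common function words like ‘the’ and ‘and’) so that they don’t dominate the results.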
Now that we’re familiar, we know some useful questions to ask
In this example we started with a tactic – using data exploration tools – without first having a specific analytic strategy to fulfill. The result was to inform the development of a strategy: coherent and answerable research questions. In other types of projects these data exploration tools may serve other purposes. For example, in a discourse analysis project they may be used to identify discursive features of communication or interaction – serendipitously building a corpus of such features before knowing which of them are relevant to the overall purposes of the study.
In each of these examples a new strategy has been identified, informed by the preliminary data exploration tactic. The Five-Level QDA approach can now be adopted to turn this strategy into one or more analytic tasks, which can then be translated into software tactics in the usual manner.
Resisting the urge to helter-skelter code
Returning to the Twitter example, it’s possible to code tweets containing seemingly interesting words and phrases directly from the result displays of the Word Frequency, Word Combinations, and KWIC features in MAXQDA. But during this early exploration phase that may not be appropriate, because the purpose of the exploration tactic was to become familiar with the content of the tweets – not as a strategy in itself, but only in order to inform the development of a strategy: in this case, new coherent and answerable research questions.
It’s important to resist the urge to helter-skelter code when there is no strategic purpose for doing so. Coding mechanically just leads to mechanical analysis. As a tactic, attaching codes helter-skelter to data is fast and easy-peasy. But at the strategies level, coding needs a lot of careful thought to ensure that the concepts the codes represent are actually helpful in relation to the purposes of the project, and contribute to answering the research questions.
Driving versus informing
The core principle of the Five-Level QDA method remains that analytic strategies should drive software tactics. But as we’ve seen, tactics can inform strategies in some situations. The difference between driving and informing is critical. Driving means to “push, propel, or press onward”. When analytic strategies are driving the use of software, the strategies are dominant – as we want them to be. Informing means to “impart information or make aware of something”. When software tactics inform strategies, they are not taking over the process but participating in a form of co-production. Once new analytic strategies have been identified, the usual process of strategies driving the harnessing of the software continues.
Things we can do now that we couldn’t before
Software developers continually add features that offer new analytic opportunities – there are many things we can now do that we couldn’t do before. This is a great thing, as long as the difference between driving and informing is kept in mind: we shouldn’t let these new possibilities drive what we decide to do. Software developers also need to be commercially successful, so they add features that the broadest possible range of qualitative and mixed-methods researchers may find useful. That means you are unlikely to need all of them for your current analysis project.
Just because something is possible doesn’t mean it’s appropriate or useful: don’t let the tactics drive the strategies.