The recently published editorial by Jürgen Rudolph and colleagues, Don’t believe the hype: AI myths and the need for a critical approach in higher education, covers several topics concerning the use of Generative-AI (GenAI) in Higher Education (HE) that I’ve been thinking about and working on in relation to Qualitative Data Analysis (QDA). I reflect on them in this post.
For more on my recent and forthcoming work in these areas, see the links to resources throughout this post and at the end.
Rudolph et al.’s article debunks eight AI myths (Fig. 1) in an attempt to encourage a much-needed critical approach to AI in higher education. Several resonate with, or directly relate to, critical concerns in the field of AI-assisted qualitative analysis. In this post I reflect on five of them:
Myth 2: AI is intelligent;
Myth 4: AI is objective and unbiased;
Myth 6: AI will not significantly affect the job market;
Myth 7: Generative AI revolutionises higher education; and
Myth 8: Higher education teachers can detect AI-generated content with or without AI.
Myths 1, 3 and 5 are also important, but are not discussed in this post.

How Myth #2: AI is intelligent relates to AI-Assisted QDA
Rudolph et al. (2025) quote Crawford (2021) and cite Broussard’s (2018) aptly titled book Artificial Unintelligence to emphasise that “what we call AI is a set of limited, error-prone algorithms lacking the depth and adaptability of true human intelligence” (p. 10). They also invoke Gardner’s theory of multiple intelligences, which I drew on in a blogpost to argue that what we need in the QDA space is “interpretive intelligence”.
They rightly remind us that “AI does not truly think” (p. 11), asserting that the simulation of human reasoning that GenAI produces “risks diminishing our moral capacities and critical capacities by seemingly relieving us of the cognitive labour required for genuine thought” (ibid.).
Whether our critical capacities are diminished when we use AI tools to assist QDA is the focus of my current work on, and reflections about, AI-Assisted QDA. Several studies have recently been published on the impact of AI on critical thinking and cognitive capabilities across a number of fields.
The crux of this debate for QDA is whether GenAI tools are used to replace or to assist: in fact, a long-standing discussion in the field concerning the use of digital tools.
Don’t forget interpretive intelligence in the era of artificial intelligence. Post on the QDAS blog, October 23rd, 2024
How Myth #4: AI is objective and unbiased relates to AI-Assisted QDA
We’ve all heard about the biases inherent in LLMs: based as they are on biased training data, they perpetuate societal biases. Rudolph et al. also quote the age-old adage “garbage in, garbage out”, something those of us who teach computer-assisted qualitative data analysis have said many times: the results you get from interrogating patterns and relationships amongst coded data in CAQDAS-packages are based on the categorisation of ideas (coding) and factual characteristics (variables) that researchers inputted previously.
The issue of model bias is clearly significant for qualitative analysts doing social research, because it relates to our values as researchers and as humans. Our use of LLMs should absolutely not go unquestioned in this regard. For some, these and other ethical issues concerning the development of LLMs are enough to reject any use of them: listen, for example, to the episode with Janet Salmons of my CAQDASchat with Christina podcast. Others remind us that we humans are ourselves biased, and that to pretend otherwise is naive and detrimental to the practice of QDA: see, for example, Susanne Friese’s blogpost on the topic.
The issue of bias in research is of course related to objectivity, and so our epistemologies come into play. Quoting Vallor (2024), Rudolph et al. remind us that whilst LLMs are designed to generate “fact-like language that sounds accurate”, what they generate can actually be a “veneer of objectivity [that] masks a propensity to generate misleading or outright false content”, mirroring the behaviour of a “human bullshitter” (p. 12).
In our use of GenAI for QDA, are we seeking facts and objectivity? This depends on whether we seek them in our research. But, just as with the question of the intelligence of LLMs, we need to understand how these tools work, what they’re designed to do, and what they can, in fact, do and what they can’t.
This is why the critical questions concern for what, when and how different GenAI tools are used within the QDA workflow, so that any use is appropriate to its context.
How Myth #6: AI will not significantly affect the job market relates to AI-Assisted QDA
The effect of GenAI on job markets is yet to be seen. In the QDA space the explosion of new Apps based entirely on the capabilities of LLMs and GenAI is striking. As is the use of general-purpose chatbots (ChatGPT and the like) for QDA, and the continued integration of GenAI capabilities into established CAQDAS-packages. These genres of GenAI assistance for QDA are changing not only the ways in which it happens, but also who is undertaking QDA and in what contexts.
What happens to the profession of qualitative research in the light of these developments? Some hail the rise of GenAI as having a democratisation effect. In the context of QDA this means the opening up of what has historically been the preserve of the professions, to anyone who can access general-purpose chatbots. Are professional qualitative researchers needed when anyone can ask a chatbot to come up with ‘themes’ and do a qualitative analysis?
At the moment the dominant discourse seems to be that professional qualitative researchers are not going to be out of a job, but that what we do, and how we do it, is what is changing. LinkedIn, for example, is awash with discussion on this topic.
But this raises the question of what we do with the time freed up when we delegate QDA tasks to GenAI, which relates to the next of Rudolph et al.’s debunked myths…
How Myth #7: Generative AI revolutionises higher education relates to AI-Assisted QDA
There is much hype about AI revolutionising almost every professional sector and academic discipline. Rudolph et al. discuss this in relation to HE in terms of assessment and the identification of cheating, predicting a “dystopian future” (p. 16).
We’re told, often by the developers of the tools, that handing over the boring, time-consuming tasks of QDA to AI frees us humans up to do more of the thinking that QDA requires. This is often framed in terms of discourses that place GenAI capabilities in contrast with the principles of QDA, what Trena Paulus and Vittorio Marone refer to as “discursive dilemmas”.
However, if we don’t transcribe, code, or summarise qualitative materials ourselves, we lose the accumulation of understanding that is the bedrock of interpretation. And thus, we circle back to the questions of intelligence and objectivity, and what is involved in the practice of QDA.
Whether AI is revolutionising the practice of QDA is debated: see, for example, the post by Nick Woolf asking whether GenAI can assist with interpretive QDA, and Susanne Friese’s comments on the post, for differing perspectives on the question. Also watch out for a forthcoming post by Susanne on the CAQDAS blog, titled Embracing the Paradigm Shift: Moving Beyond Coding in Qualitative Research, due to be published in early March 2025.
In addition, the panel discussion I chaired at the WCQR in February 2025 spoke to this question in several ways, and it was particularly interesting to hear Daniel Turner, the developer of Quirkos, share that a survey of his user-base revealed that only a tiny percentage were interested in GenAI tools for QDA.
In my opinion, the question of whether AI is revolutionising QDA is usefully addressed in relation to definitions of AI and of QDA practice. When doing so, it is clear that the answer is a resounding “it depends”.
When GenAI exploded onto the scene, I was perplexed to observe the instantaneous interest in these tools, given that the QDA field has, for decades, had numerous tools harnessing AI technologies that saw very little interest. It’s also interesting to observe that the term “AI” in the context of QDA tools now has a different meaning. When researchers refer to “AI” they now almost certainly mean the kind of Generative-AI that draws on LLMs, typically for summarisation, prompting, code suggestions, coding and theme generation. Tools drawing on traditional AI are not usually what’s being referred to anymore, despite the fact that these have many affordances, especially when it comes to large datasets (as Normand Péladeau described on the Qual-Software Jiscmail list on 21st February, and Daniel Turner responded to on 24th February).
This relates to a second terminological issue: “qualitative data analysis”, or QDA for short. What we seek to learn from datasets of different sizes is an important consideration when determining the appropriateness of different analytic tools, whether they lean on traditional-AI, generative-AI or neither. This also harks back to the question raised earlier of whether we want or expect the tools we use to assist in our analyses or to replace some aspect of our practice, which relates to our epistemologies and perspectives on objectivity.
See also
Developments in Qualitative AI and why it matters. Research Accelerator Conference presentation. 4th December 2023.
How Myth #8: Higher education teachers can detect AI-generated content with or without AI relates to AI-Assisted QDA
As mentioned in relation to Myth 7, cheating via the use of AI is a big concern for HE institutions, and detecting the use of GenAI also resonates in the field of qualitative research. Some uses may be appropriate; others are not. Dialogue about, and guidelines for, such uses are developing in a range of spaces, and we must continue to foster these discussions.
I’ve previously highlighted the significant issue of the “qualitative deepfake”: the reality that it is possible to conduct an entire qualitative research project, from start to finish, via AI, with minimal or no human input.
Can the reviewers and publishers of QDA outputs, and those who read and cite them, detect when AI has been used? Does it matter whether they can? We have yet to develop widely shared and adopted guidelines for when it’s appropriate to use different types of traditional- and generative-AI in the qualitative workflow, but there is a pressing need to do so.
If students are likely to hide their use of Generative-AI for fear of being caught and punished, then it is likely that researchers will too. What does transparency look like in this context? We need to discuss these issues more openly and recognise that transparency about the use of tools is more important than ever. This is not a new topic of discussion in the methodological community – we’ve been discussing transparency in digital tool use since the advent of CAQDAS-packages in the 1980s – but the stakes are higher now than ever before for our profession.
At the recent World Conference on Qualitative Research (WCQR) in Krakow, Poland, Amira Ehrlich, discussing engaging ChatGPT as a collaborator in autoethnographic research, shared their intention to co-author the resulting book with ChatGPT. The qualitative research community must consider what transparency about GenAI tools looks like, and issues relating to authorship are part of that. This speaks to the heightened importance, in the age of GenAI, of analytic strategies driving the use of digital tools (our tactics), something I’ve written about before and will be speaking about again soon.
See also
The risk of qualitative deepfake is a reality. Post on the QDAS blog, July 10th, 2024
Generative-AI in Qualitative Research: Step-Change, Abomination, or…? Cathy March Memorial Lecture, Co-Hosted by the Royal Statistical Society and the Social Research Association. 20th March 2025.
What does this all mean?
There is much false hype around AI, and also horror and resistance around the implications of its use for doing qualitative research. But we must also recognise that many qualitative researchers, just like many students, are already using it, and will continue to do so, regardless of the risks and consequences. We cannot hide from this fact.
As a scholar and teacher, I agree with Rudolph et al. that “critical AI literacy must be at the forefront of higher education curricula […] as this literacy is essential for navigating the ethical, epistemological, and practical challenges posed by AI” (2025, p. 18).
In the context of the practice of qualitative data analysis, this means that we must understand what LLMs are and how they work in order to assess whether and when their use may be appropriate – or not – within the qualitative workflow.
We need to critically reflect on the implications, and discuss them as a community of practice in transparent and respectful ways.
See the following awareness-raising activities I’m involved in to contribute to meeting this aim:
FREE webinar series via the CAQDAS Networking Project (about the use of digital tools for QDA generally, but recently with a focus on AI-Assisted tools).
CAQDAS-blog via the CAQDAS Networking Project (about the use of digital tools for QDA generally, but recently with a focus on AI-Assisted tools).
Training courses, webinars and other events in the appropriate uses of traditional- and generative-AI in the qualitative workflow, forthcoming sessions include:
March 3rd AI, Critical Thinking + Reflexivity in QDA: What’s the relationship? Webinar hosted by the CAQDAS Networking Project.
March 20th Generative-AI in Qualitative Research: Step-Change, Abomination, or…? Cathy March Memorial Lecture, Co-Hosted by the Royal Statistical Society and the Social Research Association.
March 25th Appropriate Uses of AI for Qualitative Analysis. IASSIST Workshop Series: Tools for Qualitative research
April 4th AI-Assisted Qualitative Analysis. Training hosted by the Social Research Association
April 7th Qualitative Analysis in the Age of AI: Enacting Methods Appropriately with MAXQDA’s AI-Assist. Lecture hosted by Verbi Software
April 15th & 16th AI-assisted Qualitative Analysis. Seminar hosted by Instats
April 24th How to get the best from analysis using AI tools. Hosted by The Association for Qualitative Research (AQR) as part of their AI Hackathon
References
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. The MIT Press.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Ehrlich, A. (2025). Exploring the Synergy Between Human and Artificial Intelligence in Qualitative Research: An Autoethnographic Dialogue. Paper presented at the World Conference on Qualitative Research (WCQR), Krakow, Poland, 6th February 2025.
Friese, S. (2024). Looking into the Mirror: Reflection on AI and Human Bias in Research. Post on the Qeludra blog, January 13th 2024.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. Basic Books.
Paulus, T. M., & Marone, V. (2024). “In Minutes Instead of Weeks”: Discursive Constructions of Generative AI and Qualitative Data Analysis. Qualitative Inquiry, 1(8).
Rudolph, J., Ismail, F., Tan, S., & Seah, P. (2025). Don’t believe the hype: AI myths and the need for a critical approach in higher education. Journal of Applied Learning and Teaching, 8(1).
Salmons, J. (2024). Christina chats with Janet Salmons. CAQDASchat with Christina podcast, Episode 10, 31st May 2024.
Silver, C. (2023). What’s a foot in the Qualitative AI space? Post on the QDAS blog, May 5th 2023.
Silver, C. (2024). Navigating the Intersection of Qualitative Analysis and Technology: Strategies Driving Tactics in the Age of AI. Post on the ATLAS.ti blog, February 7th 2024.
Silver, C. (2024). Don’t forget interpretive intelligence in the era of artificial intelligence. Post on the QDAS blog, October 23rd 2024.
Silver, C. (2024). The risk of qualitative deepfake is a reality. Post on the QDAS blog, July 10th 2024.
Silver, C. (2025). Generative-AI, Critical Thinking and Reflexivity in Interpretive Qualitative Analysis. Paper presented at the World Conference on Qualitative Research (WCQR), Krakow, Poland, 6th February 2025.
Silver, C., Paulus, T., Morgan, D., Turner, D., & Friese, S. (2025). The Process of QDA in the era of AI. Panel discussion at the World Conference on Qualitative Research (WCQR), recorded online 11th February 2025.
Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press.