Generative AI – (Not) a Tool for Qualitative Research? Rethinking the Role of AI in a Human-Centered Discipline
- Susanne Friese
- Jun 8
Introducing a New Series
The emergence of generative AI (genAI) has sent ripples through every corner of academic and applied research. But in qualitative research—long known for its reflexivity, interpretive depth, and human-centered ethos—the response has been particularly conflicted. Is this new technology an unwanted intruder, threatening the epistemological roots of our practice? Or is it a powerful assistant, ready to augment our thinking and analysis?
A common misconception about generative AI is that it seeks to replace the researcher. Given the current discourse around automation, efficiency, and productivity gains, it’s no surprise this perception exists. But this narrative often triggers resistance—and with good reason. Concerns around bias, hallucinations, or inconsistent outputs only reinforce skepticism and fuel hesitation.
But what if we changed the frame? What if generative AI isn't positioned as the analyst, but rather as an assistant—a tool to support, not supplant, the researcher? Like a research assistant, it doesn’t think, interpret, or decide for you. It offers suggestions, not conclusions. The goal isn’t automation; it’s augmentation.
This shift in perspective is crucial. It creates space for collaboration. Researchers maintain full epistemic authority while AI helps manage complexity, surface alternative interpretations, and expose blind spots. By viewing AI as a thinking partner—not a thinking machine—we can begin to work with it critically and creatively, rather than defensively.
It is in this spirit that this blog post launches a new series in which I unpack some of the most common concerns and misconceptions surrounding the use of large language models (LLMs) in qualitative research. Many of the objections I hear stem from limited AI literacy, reliance on outdated models, or—perhaps most often—not having spent enough time using these tools to truly understand what they can (and cannot) do.
As Ethan Mollick rightly puts it: “The only way to learn AI is to use AI.”
So, what can you expect in the upcoming posts? Together, we’ll explore key areas of concern—from bias and hallucination to data security and beyond. I’ll also dive into the specific statements and assumptions researchers have made about genAI and evaluate them in light of current technology, ethical considerations, and good research practice.
Let’s begin by tackling the three reservations that come up most often: bias, hallucinations, and data security. Each one raises legitimate questions. But as we’ll see, they can also be addressed—technically, procedurally, and methodologically.
Addressing Core Concerns – Bias, Hallucination, Security
1. Bias: Not New, Not Unique to AI
Bias is often the first critique hurled at AI. Yet as Denzin and Lincoln reminded us in 1994, long before AI came into the picture, there is no such thing as value-free science.
“The researcher as a bricoleur understands that research is an interactive process shaped by his or her personal history, biography, gender, social class, race, and ethnicity, and those of the people in the setting. The bricoleur knows that science is power, for all research findings have political implications. There is no value-free science” (Denzin & Lincoln, 1994, p. 3).

Researchers themselves bring bias, shaped by their history, identity, and context. The same applies to AI. Its outputs reflect the data it was trained on. Rather than aiming for some mythical bias-free tool, the task is to engage with AI transparently and reflexively. We must understand how both human and machine interpretations are shaped and take steps to mitigate harm through critical thinking and triangulation.
2. Hallucination: From Fiction to Fact-Based Dialogue
Generative models can produce “hallucinations”—confidently stated answers that aren’t actually grounded in the source material. This is a valid concern, especially in qualitative research, where misattributing or inventing participant quotes, themes, or connections can seriously compromise the integrity of an analysis.
However, hallucinations are not an unavoidable flaw of AI—they can be effectively mitigated with the right architecture. One powerful method is Retrieval-Augmented Generation (RAG). In RAG systems, the AI doesn’t rely solely on its general training knowledge to generate a response. Instead, it begins by retrieving relevant sections from your own uploaded documents—such as interview transcripts, focus group notes, or open-ended survey responses—and uses those excerpts as direct evidence to inform its output. This grounding mechanism dramatically reduces the risk of hallucinations and shifts the interaction from speculative output to fact-based dialogue.
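For readers who want to see what “grounding” means in practice, here is a minimal sketch of the retrieval-augmented pattern in Python. It is illustrative only, not the implementation used by QInsights or any other product: the `embed` placeholder, the toy transcript chunks, and the prompt wording are assumptions made purely for the example.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Illustrative only; not the implementation of any specific product.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a sentence-embedding model: returns a vector for `text`.
    In a real system this would call an actual embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank document chunks by cosine similarity to the question
    and return the most relevant ones."""
    q = embed(question)
    scored = []
    for chunk in chunks:
        c = embed(chunk)
        score = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c)))
        scored.append((score, chunk))
    return [chunk for _, chunk in sorted(scored, reverse=True)[:top_k]]

def build_grounded_prompt(question: str, excerpts: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    retrieved excerpts and to cite them, instead of its general training data."""
    evidence = "\n\n".join(f"[{i + 1}] {e}" for i, e in enumerate(excerpts))
    return (
        "Answer the question using ONLY the excerpts below. "
        "Cite the excerpt number for every claim. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"Excerpts:\n{evidence}\n\nQuestion: {question}"
    )

# Usage: transcripts are split into chunks, the best-matching chunks are
# retrieved, and the grounded prompt (not the bare question) goes to the LLM.
transcript_chunks = ["...interview excerpt 1...", "...interview excerpt 2..."]
question = "How do participants describe work-life balance?"
prompt = build_grounded_prompt(question, retrieve(question, transcript_chunks))
```

The key design point is the last step: because the model only ever sees excerpts pulled from your own material, every claim in its answer can be traced back to a numbered passage and checked against the source.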
This approach is central to how QInsights, for instance, operates. Every response is traceable back to your own data. When the AI provides a theme, insight, or summary, you can inspect the supporting references—right down to the sentence. This allows you to verify, question, or refine the output with full transparency. Rather than generating insights from thin air, QInsights helps you stay anchored in your material, making the AI a true assistant, not a fiction writer.

So when you're evaluating which AI tool to use, ask the right question: Where do the answers come from? Are they drawn from the model’s general training data, or grounded in your own documents via a RAG system? Asking it doesn’t just protect your research; it signals methodological awareness and technical literacy, and it marks you as someone who knows how to work critically and effectively with large language models.
3. Data Security: APIs, Encryption, and Procedural Safeguards
Uploading sensitive data to an AI platform can understandably feel risky—especially when it involves participant transcripts or confidential documents. But it's important to recognize that not all AI tools handle data in the same way. There’s a major difference between pasting sensitive text into a public chatbot like ChatGPT and using purpose-built research software, such as MAXQDA, NVivo, ATLAS.ti, or QInsights, which connect to large language models via secure API integrations.
When data is transmitted this way, it's typically protected using 256-bit encryption—one of the most secure encryption methods available today. And yet, many people drastically underestimate its strength. When I ask how long it would take to crack 256-bit encryption, I often hear answers ranging from “a few seconds” to “maybe 20 minutes.” Very few people realize the truth: it would take longer than the age of the universe—many times over. Here’s why:
| Key length | Number of possible combinations | Time to crack |
|---|---|---|
| 56-bit | 2⁵⁶ = 72,057,594,037,927,936 ≈ 7.2 × 10¹⁶ | 20 hours |
| 128-bit | 2¹²⁸ ≈ 3.4 × 10³⁸ | 1.08 × 10¹⁹ years |
| 192-bit | 2¹⁹² ≈ 6.28 × 10⁵⁷ | 1.99 × 10³⁸ years |
| 256-bit | 2²⁵⁶ ≈ 1.16 × 10⁷⁷ | 3.68 × 10⁵⁷ years |
To put this into perspective: the universe is about 13.8 billion years old (1.38 × 10¹⁰ years). Cracking 256-bit encryption by brute force would take roughly 10⁴⁷ times longer than the universe has existed. So, from a technical standpoint, your data is extraordinarily secure during transmission.
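If you want to check these figures yourself, the arithmetic is straightforward. The crack times in the table are consistent with an attacker who tests about 10¹² keys per second and has to search the entire keyspace; that rate is an assumption chosen for illustration, and even a million-fold faster attacker would barely dent the exponents.

```python
# Back-of-the-envelope check of the key-length table above.
# The assumed attacker speed of 10^12 keys per second is illustrative only.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25
KEYS_PER_SECOND = 1e12  # assumed attacker speed

for bits in (56, 128, 192, 256):
    combinations = 2 ** bits
    seconds = combinations / KEYS_PER_SECOND
    years = seconds / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit: {combinations:.2e} keys -> {years:.2e} years")

# Output (approximately; small differences from the table are rounding):
#  56-bit: 7.21e+16 keys -> 2.28e-03 years  (about 20 hours)
# 128-bit: 3.40e+38 keys -> 1.08e+19 years
# 192-bit: 6.28e+57 keys -> 1.99e+38 years
# 256-bit: 1.16e+77 keys -> 3.67e+57 years
```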
But of course, no system is 100% immune to risk. Mistakes happen. If someone were to gain access to a master key, data could theoretically be exposed. But the same is true for data stored locally: do you always lock your office door when you step out? Are you sure your laptop is secure from unauthorized access?
One research team I know took extreme precautions by storing their data on an external hard drive kept in a steel safe—only plugging it in while analyzing. Most researchers don’t go to such lengths. So, the real question is: How do you safeguard your local data now?
With cloud-based platforms like QInsights, data is encrypted not just in transit but also at rest—meaning it's protected even while idle on the server. QInsights doesn’t rely on makeshift infrastructure but uses Microsoft Azure, an enterprise-grade provider. Azure secures the environment with multi-layer encryption, giving QInsights projects an additional layer of protection. In many cases, this setup is far more secure than local storage.

But strong encryption alone isn’t enough. Procedural safeguards matter just as much:
- Tailored confidentiality agreements
- Defined access rights, storage practices, and anonymization protocols
- Transparent data processing policies
To support you in this area, I’ve created downloadable templates that you can adapt for your projects—ensuring your legal and ethical bases are covered, especially when working with AI.
- Template (German): Vertraulichkeitserklärung mit KI Nutzung (confidentiality agreement covering the use of AI)
- Template (English): Informed Consent when AI tools are used
- Information: What is GDPR-compliant research?
- Information: Overview of Security Risks and Measures
- Template: Safeguarding your GenAI Research Project
In the next blog post, we’ll take a closer look at a sentiment I’ve heard often: “I only trust an assistant I trained myself.” It’s a powerful statement—one that reflects a deep concern about transparency, control, and trust in AI systems. But is it realistic to expect full insight into how large language models are trained? And are there practical alternatives that still allow researchers to work responsibly and confidently with these tools? We’ll explore what’s behind this perception, why it resonates, and how to approach the question of trust from a more grounded and informed perspective. Stay tuned!
References
Denzin, N. K., & Lincoln, Y. S. (Eds.). (1994). Handbook of qualitative research. Thousand Oaks, CA: Sage.
Mollick, E. (2023). Co-Intelligence: Living and Working with AI. Little, Brown Spark.