By Kasia Kamińska and Kasia Zaniewska
We’d like to share how you can use note-taking tools and Miro AI to get insights from user interviews and futures exercises.
Often, teams leave a workshop or a user session feeling a buzz of ideas, but when they want to revisit the findings and make them actionable, they are faced with a wall of Post-it notes on a whiteboarding tool or hours of recordings.
What can make or break future time investment into workshopping, using canvases or focusing on user needs is how easily you can turn that raw material into insights you can act on.
So, how can you enable data-driven decisions and future-proof your skillset at the same time? Make AI work for you, not against you.
First and foremost, every AI tool needs input. This guide is based on our experiences working with teams to better understand their users, though you could also use it to find insights in desk research and futures work.
1. Data collection
We used note-taking tools to get the transcripts of our conversations. We’ve tested several of them, and Fireflies works well when it comes to English but cannot handle many other languages. Another tool we’ve used on several conversations is Sembly, which caters for more than 40 languages, including our mother tongue, Polish, which is not the easiest language in the world.
Usually, we work in English, but if you’re working in a different language, there are several ways to handle it:
Obviously, using Google Translate is always an option, but there is a limit of 3900 characters, which for a 45-minute interview means a lot of cutting.
Using DeepL to do the translation, at least until you run out of free uses. It is substantially better than Google Translate, as it seems to rely more on phrases and semantics than on single words.
What I normally use is ChatGPT (the free version). We also tested Miro AI and it did well, but it has a downside: it needs time to deal with longer texts. Tip: be careful to give it a very specific prompt, such as ‘translate from x to English the following text, keeping the formatting as is.’ You’ll probably need to cut the text into chunks, because it stops at some point if the input is too long. And watch out: sometimes it can simply stop translating and start developing the conversation instead (as if its task were to imagine how this conversation would end). If you hit these limits often, a small script can take care of the chunking, as in the sketch below.
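The following is only a minimal sketch of what such a chunked translation script could look like, assuming you have access to the OpenAI API and its official Python SDK; the model name, chunk size and prompt wording are illustrative choices, not part of our actual workflow.

```python
# Minimal sketch: chunked transcript translation via the OpenAI API.
# Assumptions (not from the original workflow): the `openai` Python SDK is
# installed, OPENAI_API_KEY is set, and "gpt-4o-mini" is available to you.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Translate the following interview transcript from Polish to English. "
    "Keep the formatting and speaker labels exactly as they are. "
    "Do not summarise, continue, or invent any part of the conversation."
)

def chunk_text(text: str, max_chars: int = 3000) -> list[str]:
    """Split the transcript on blank lines, keeping chunks roughly under max_chars."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def translate(transcript: str) -> str:
    """Translate each chunk separately and stitch the results back together."""
    translated = []
    for chunk in chunk_text(transcript):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; use whatever model you have access to
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": chunk},
            ],
        )
        translated.append(response.choices[0].message.content)
    return "\n\n".join(translated)
```

Keeping the instruction in a fixed system prompt also helps with the “it starts inventing the rest of the conversation” problem, since every chunk is sent with the same strict constraints.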
2. Data preparation
We used Miro to gather all the material and start the analysis. To make the most of Miro’s AI features, the best thing you can do is turn the text you have into sticky notes.
To achieve that, copy the text you have into an Excel sheet, then copy the cells and paste them into Miro. It will ask you how you want to paste them:
Choose ‘Paste as sticky notes’ and each cell will appear on the board as its own sticky note:
You can do the same with notes on signals: copy them into a sheet and then into Miro as Post-its.
I’ll tell you in a second why it might be an interesting idea to do that.
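If you are preparing a lot of transcripts, the spreadsheet step is easy to script. Below is a minimal sketch, assuming your transcript is a plain-text file with one speaker turn per line; the file names are made up for illustration.

```python
# Minimal sketch: turn a plain-text transcript into a one-column CSV,
# one speaker turn per row. Opened in Excel or Google Sheets, the column can
# be copied and pasted into Miro, which then offers "Paste as sticky notes".
# File names are illustrative only.
import csv

with open("interview_01_transcript.txt", encoding="utf-8") as f:
    turns = [line.strip() for line in f if line.strip()]

with open("interview_01_stickies.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for turn in turns:
        writer.writerow([turn])
```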
3. Analysis
And that’s where the fun begins.
You can pick an area (coming from one signal, one conversation or multiple) and choose the blue square on the right, which is the Miro AI Assistant.
This will start a chat (“Ask me Anything”) where you can ask for insights or use a predefined “summary” or “key action points” prompt.
Tip: Like any AI, this one likes to hallucinate: when it lacks answers in the content provided, it falls back on open sources and generic best practices related to the subject. That’s why it’s important to tell it to “take into account only the selected Post-it notes and ….”. This keeps your insights juicy and steers them away from high-level best practices and common knowledge.
I asked it: “Give me the most important insights from the selected posts”, and after listing them, it gave me the chance to either create a presentation or create more posts out of them.
This is where your research muscle comes in. If you started by formulating good questions or hypotheses, use them now to converse with the AI Assistant.
We found ourselves reusing similar questions, or sequences of them, while working on a given project. You can start with the most basic one and ask the AI to prepare a summary of your points or signals from the texts, then continue asking based on what it provided, e.g. ask for the points that come up most frequently or the ones that stand out. And then just dig deeper. If it starts to hallucinate, you can simply start over. In our case, the work was rooted in future scenarios, so we were aiming for research techniques focused on friction points, needs, gain points, etc. If that’s the case for you as well, start thinking of a well-crafted, lengthier prompt that also specifies how you want the output to be structured (one illustrative sketch follows below). The more you experiment with your prompts, the more adaptive you will be to future AI tooling.
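As an illustration only, here is what such a reusable prompt sequence could look like when written down for the team. It assumes a futures-oriented project focused on friction points, needs and gain points; the exact wording is ours to sketch the structure, not a tested recipe.

```python
# Illustrative only: a reusable sequence of prompts for an AI assistant,
# written down so the whole team asks the same questions in the same order.
PROMPT_SEQUENCE = [
    # 1. Constrain the scope first, so it doesn't fall back on generic best practices.
    "Take into account only the selected sticky notes. "
    "Summarise the main points and signals they contain.",
    # 2. Frequency and outliers.
    "From those points only, list the ones that come up most frequently, "
    "and separately the ones that stand out as unusual.",
    # 3. The lens used in futures-oriented research.
    "Group the points into friction points, needs and gain points. "
    "Structure the output as a bulleted list per group, "
    "with one short supporting quote for each bullet.",
]
```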
After you’ve played with several prompts and you are at least partially happy with what it’s providing, you can choose to make a presentation out of it, which would look something like this:
It's not great, but it's not terrible. I found it to be a great start for engaging the team to polish the presentation, add quotes from the interviews they had conducted, and make the wording more precise and on point. Since it has a polished feel, it helps collaborators not feel overwhelmed by the raw data and encourages them to jump in and layer on their own perspectives and learnings.
Repeat the process with subsets of the data, split by method (notes on signals vs transcripts) or by topic, and follow up in the same thread with prompts about similarities and differences.
Hack: If you go straight to the AI and ask it to prepare a presentation for you, it will claim that’s outside of its capabilities. But if you ask it to analyse, list, prepare bullet points or extract insights, it will then offer to create a presentation. Still, a very limited intelligence.
As you can see, there are a lot of opportunities here to cut corners and rely fully on automation. An AI Assistant will give you a short overview and key action points in a presentation, and you will be done with it in a matter of minutes. We trust you won’t do that, and won’t train your stakeholders to expect it. Value your skill of asking poignant, well-crafted questions, and harness AI to scale qualitative data beyond what everybody happened to remember. Otherwise, you and your team will end up with a synthesis “everyone can do”, and in time the team will lose the conviction that the design effort is fruitful and actually moves the needle.
What we’d like to stress is that it might seem that less accuracy and effort is needed from humans when working with AI tools, but remember: “Shit in, shit out”. You need to be very careful about what you feed the AI tools if you want interesting results from them.
Last but not least, while working with AI we saw similarities between doing a great interview, running an insight-gathering session and doing a good analysis of it with AI: all of them need really good questions, often rephrased, and all of them need your full attention. In all cases you also need to check whether the human, or the AI, is lying or hallucinating. A good skill to have in life, generally.
Let us know what you think about the process and the tools used, what you liked and what surprised you. Also, let’s chat if you test it, or if you’ve already played with AI to get insights; we are curious to learn about your journey.
Kasia Kamińska is an independent strategic designer working in the fields of tech, innovation, and social justice. She has been working in the tech and innovation space for the past nine years as a team lead, community builder, facilitator, consultant, and mentor. She is currently working with businesses and NGOs to understand their users, test out new offerings, and plan for various futures.
Kasia Zaniewska is a Lead UX Designer. She has an academic background in user research and service design and almost two decades of industry experience. She’s driven by implementing user-centered principles into product development and enjoys leading cross-functional teams towards future-proof, inclusive solutions. She is currently working at Klarna, managing B2B customer journeys.