The promise and perils of conversational agents for surfacing user intent
In our August 2025 Experience Advisory Panel workshop, we presented a very early-stage prototype to kickstart discussions about possible features of the LEND intervention. Participants were asked to use the prototype to 'bookmark' up to four narratives that they felt were valuable. They were also asked to try two different approaches to finding narratives:
i. structured browsing, which displayed thumbnails of videos alongside titles and summaries, with simple filters to refine the list of available videos; and
ii. a conversational interface, which provided clips from narratives in response to discussions with a chatbot.
Richness of requirements and 'co-identification'
In this small test, participants wanted their narratives to provide information, to offer advice and to present shared, culturally relevant experiences. 'Shared experience' can be either broad (gender, relationship roles, type and stage of illness) or quite specific (people similar to me, in similar situations and circumstances). We discussed the implications of getting it wrong: there is a risk of causing harm (frustration, distress, feelings of exclusion), even if a matched narrative meets most, but not all, of what a user is looking for.
From our (very early) observations, we can't assume users will come to an intervention with a clear understanding of the narratives that will help them the most. If this is confirmed, then perhaps an extensive set of search filters, or even rich search functionality, won't help much. We'll test this idea a little more to learn whether the search process itself could be a key part of helping users learn about what they need. Perhaps the 'active ingredient' is not only the consumption of a well-matched narrative, but what is learnt from discovering it.
Scaffolding the narrative lifecycle
With the conversational interface, we noticed that some participants wanted to discuss narratives that they'd just watched. Could this be significant? Here are a few hypotheses we'd like to explore further:
- Structured reflection may amplify benefits.
- Post-narrative discussion may yield richer, more nuanced feedback.
- Referencing previously viewed narratives may help users articulate complex or hard-to-express needs.
The ambiguous role of conversational agents
Conversational agents (AI chatbots) may offer a promising way to identify effective and appropriate narratives, but not without challenges. When participants used our simple prototype, there was some ambiguity about the chatbot's role. Its friendly but open-ended dialogue (i.e. answering user messages with follow-up questions) encouraged personal disclosures and expectations of emotional support, nudging it over the line from helpful assistant to therapist.
For now, we're cautiously interested and keen to dig a little deeper. Although conversational co-identification of narratives has potential, it must meet high standards of personalisation, cultural sensitivity and user safety. We also need full transparency about the AI's role and limitations.
We presented a poster summarising some of these early observations at the University of Nottingham's Dementia Showcase. You can find it here.