
The promise and perils of conversational agents for surfacing user intent

By Tom Lodge · 20 August 2025

In our most recent Lived Experience Advisory Panel workshop we presented a very early-stage prototype to kickstart discussions about possible features of the LEND intervention. Participants were asked to use the prototype to ‘bookmark’ up to four narratives that they felt were valuable. They were also asked to try two different approaches to finding narratives:

i. structured browsing, which displayed thumbnails of videos alongside titles and summaries, with simple filters to refine the list of available videos, and

ii. a conversational interface, which provided clips from narratives in response to discussions with a chatbot.

Richness of requirements and 'co-identification'

This small exercise confirmed something important: people want many different things from their narratives. In this small test, participants wanted their narratives to provide information, to offer advice, and to present shared, culturally-relevant experiences. 'Shared experience' can be either broad (gender, relationship roles, type and stage of illness) or quite specific (people similar to me, with similar situations and circumstances). The intervention must understand users' needs in order to match them correctly to narratives. We discussed the implications of getting it wrong; there is a risk of causing harm (frustration, distress, feelings of exclusion), even if a matched narrative meets most, but not all, of what a user is looking for.

From our (very early) observations, we can't assume users will come to an intervention with a clear understanding of the narratives that will help them the most. If this is true, then perhaps an extensive set of search filters or even rich search functionality won't help much. We'd like to test this idea a little more, to learn whether the search process could be a key part of helping users learn about what they need. Perhaps the 'active ingredient' is not only the consumption of a well-matched narrative, but what is learnt from discovering it.

Scaffolding the narrative lifecycle

We have tended to view a narrative intervention as a two-stage process: discovery and delivery (with perhaps basic feedback at the end). However, we also noticed, with the conversational interface, that some participants chose to discuss narratives that they'd already watched. Could this be significant? Here are a few hypotheses we'd like to explore further:

  • Structured reflection may amplify benefits.
  • Post-narrative discussion may yield richer, more nuanced feedback.
  • Referencing previously viewed narratives may help users articulate complex or hard-to-express needs.

The ambiguous role of conversational agents

Conversational agents (AI chatbots) may offer a promising way to identify effective and appropriate narratives, but not without challenges. When participants used our simple prototype, there was some ambiguity about the chatbot’s role. Its friendly but open-ended dialogue (i.e. answering user messages with follow-up questions) encouraged personal disclosures and expectations of emotional support. This nudges it over the line between helpful assistant and therapist. We must either make the scope of the chatbot's role clear, so users don't expect it to address unmet emotional needs, or the system must be capable of providing validated specialist support.

For now, we're cautiously interested and keen to dig a little deeper. Although conversational co-identification of narratives has exciting potential, it must meet high standards of personalisation, cultural sensitivity and user safety. We need full transparency on the AI’s role and limitations.

We presented a poster summarising some of these early observations at the University of Nottingham's Dementia Showcase. You can find it here.