AI-generated humans: when user research is not real user research
The most important person in any product development journey is the end user. Without someone who wants or needs to interact with your product or service, its creation is meaningless.
At Unboxed, we talk a lot about user research. User research can be conducted at various stages of the design and development process. This is often during a discovery phase - although, as we wrote recently, there are times when it is OK to skip discovery.
User research has a long history, from the time-and-motion studies carried out by Frank and Lillian Gilbreth in the early 1900s in a bid to improve workplace efficiency, to the sophisticated interviews and analysis we see today. There are many companies whose business is recruiting the very specific types of user needed for niche research, although ideally design and development teams will have built their own relationships with potential users and can find willing participants that way.
However your research subjects have been sourced, the research process is inevitably time-consuming. Scheduling a convenient time for meetings - especially if you are doing the research face-to-face - and making sure people feel comfortable can limit the number of sessions a small team on a tight budget can carry out. Choosing the right people to research is a science in itself: see the Government guidance on finding suitable participants.
These are some of the reasons why companies such as Synthetic Users have created services that provide entirely AI-generated users for researchers to question. Their aim is not only ease of use, with research subjects available at the click of a button at any time of day, but also diversity: a broader pool of subjects than many teams can recruit, addressing a common challenge in finding user research participants.
As AI makes inroads into seemingly every avenue of our lives, it is tempting to seize every opportunity to automate and streamline processes that are difficult and time-consuming.
But will it help us build better products?
Rather than focusing on the AI creations provided by Synthetic Users, let’s think about AI-generated personalities in a more general sense.
A thought-provoking blog post last month by the Nielsen Norman Group argues that we should exercise caution when asking AI “users” research questions: “Synthetic users cannot replace the depth and empathy gained from studying and speaking with real people. They often provide shallow or overly favorable feedback.”
This second point will resonate with anyone who has experience of prompting LLMs such as ChatGPT or Claude. The relentlessly upbeat tone of such chatbots reminds us that they have been trained to please and to be positive. When you are hoping that research will flag potential problems with your product, a participant inclined to confirm your assumptions is the last thing you need.
As the Nielsen post points out, personalities that have been created as composites of a number of real people tend to have their personality quirks flattened out to the point of blandness: “Real people care about some things more than others. Synthetic users seem to care about everything.”
This can cause issues for prioritisation of product features, as well as for that most basic requirement: to ensure that your product is in fact needed and has a market.
"The value of engaging real people in the process of creating tools to solve their own problems is fundamentally undermined if the care and attention of facilitating participation is junked in favour of easier and cheaper AI processes"
Ethical considerations are also a concern. Most LLMs have been built from knowledge that is available on the internet, without the consent of the people whose words and data have been used for training.
It can also mean that the pool of knowledge used to construct a synthetic research subject is relatively shallow and does not necessarily reflect reality.
Having said all this, there are ways in which synthetic users can be helpful in the research and design process. We create personas in order to visualise a typical user, and AI used in this way can help bring those personas to life. Instead of a persona that exists only in the team’s imagination, an AI-generated persona can chat about a product or service and how closely it matches their “expectations”.
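As a minimal sketch of what “bringing a persona to life” might look like in practice: a written persona can be rendered as a role-play instruction for whichever chat-capable LLM a team already uses. The persona fields, names, and helper function below are our own illustrative assumptions, not any particular vendor’s API.

```python
# Illustrative sketch: turn a written design persona into a system prompt
# that a chat-capable LLM could role-play. All names and fields here are
# hypothetical examples, not a real product's schema.

def persona_to_system_prompt(persona: dict) -> str:
    """Render a design persona as a role-play instruction for an LLM."""
    goals = "; ".join(persona["goals"])
    frustrations = "; ".join(persona["frustrations"])
    return (
        f"You are {persona['name']}, {persona['summary']}. "
        f"Your goals: {goals}. "
        f"Your frustrations: {frustrations}. "
        "Answer interview questions in the first person, staying in character. "
        "If a question falls outside your experience, say so rather than guessing."
    )

# An example persona a team might already maintain in its design docs
persona = {
    "name": "Sam",
    "summary": "a part-time carer who books respite care online",
    "goals": ["book cover quickly", "avoid re-entering details"],
    "frustrations": ["long forms", "jargon-heavy confirmation emails"],
}

prompt = persona_to_system_prompt(persona)
print(prompt)
```

The resulting string would be supplied as the system message of a chat session, letting the team “interview” the persona - useful for rehearsing questions or stress-testing a persona’s internal consistency, but, as argued below, not a substitute for speaking to real people.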
Tom Gayler, designer at Unboxed, says: “One of the skills of user research is engaging and understanding the unique differences of users. AI users are by definition created as an average of many, many inputs, but you have no way of being confident that anyone at all is actually exactly like them. Whilst AI removes some of the costs and logistical challenges of user research we have to seriously ask if it is only allowing us to perform user research instead of actually building empathy and understanding.”
“Whilst they might make financial cost savings in the short term, it is also important to consider the environmental impact of using AI throughout design processes. The ever-increasing energy and resource demands of computing mean we need to think about the value of what we are using machine learning and AI for.”
“Further, user research is only the initial level that we look to engage with our users. Activities such as co-design sessions that could take place with AI would be even more questionable. The value of engaging real people in the process of creating tools to solve their own problems is fundamentally undermined if the care and attention of facilitating participation is junked in favour of easier and cheaper AI processes.”
The pace of generative AI development is astonishingly fast, and presents us with great opportunities to make our jobs easier and automate away many tasks. However, engaging with other humans in this context should not be one of those tasks. Experimenting with the enhancement of personas is an area where synthetic humans can be of real help - but in our opinion, they should never be a drop-in replacement for real user research sessions with real humans.