
AI: the importance of being curious

Rhian | September 2024
BOPS sketching session 01

At Unboxed, we are proud of the work we do. We love taking on meaningful projects that improve people’s lives in some way - and a large part of that is understanding what our users need, through research sessions and user testing.

We build digital products and services, so it is important for us to be creative and innovative when it comes to technology. This doesn’t mean unthinkingly leaping on every shiny new thing, but instead being curious and adaptive, and weighing up how and where new technologies can add value and be genuinely helpful for the users of the services we build.

Like the majority of our peers, we are excited about the possibilities that new generative AI tools offer. Some of us have taken innovation days to work on new ideas, and most of us use AI to some degree in our private lives. We understand that it has the capacity to transform the way we interact with organisations and individuals in the future.

While Forbes ran an opinion piece last year on the importance of developing a company position on AI, we believe the reality is more nuanced. A company is not a person: it is made up of unique individuals, each with their own belief systems and interests. At Unboxed, we share certain values but we are not robots who follow the company line unquestioningly.

That can make it tricky when we think about questions like: “What is Unboxed’s position on AI?” It’s OK for everyone to have different opinions and challenge each other, because life should not be about living in an echo chamber.

Over the last two weeks, I’ve asked Unboxeders working across different disciplines for their take on how these technologies, especially generative AI, are likely to impact the work we do.

"A company is not a person: it is made up of unique individuals, each with their own belief systems and interests. At Unboxed, we share certain values but we are not robots who follow the company line unquestioningly"


Designer Laura Smith believes that AI can help create useful tools, but strikes a note of caution about the possible biases it may introduce.

“I believe that AI could make some things easier, for example, digesting a lot of information from existing research reports and policy papers. I know that people also use AI to help draft ideas for research questions based on the user group and topic area.

“However I would always use this with extreme caution - as a designer I'm already conscious of my own inherent bias and work hard to stop that from influencing how I work. I don't want to be influenced by the biases that come from an LLM that may be trained on information that has nothing to do with the people we're designing for.

“Humans are full of nuances and the reason we do user research is often to uncover 'unspoken' needs and desires - I don't believe that AI can capture this, though it could make it easier for us to collate and analyse what we've learnt.”

AI as a social inflection point

Unboxed director Matt Turrell, like Laura, balances optimism with caution. He sees us at a genuine inflection point in terms of societal change:

“I do think there are huge potential opportunities to improve productivity, and I think at the moment we are at one of those technology inflection points, just as the advent of social media proved to be. And as with social media, we can’t quite predict where it is going to take us. We don’t yet know what the full implications are, whether they are advantages or negatives, but we have to be flexible and willing to change our thinking as things evolve.

“Thinking that AI is a panacea for everything is obviously a trap. Some things it can do very well, but then there have also been examples of the sort of hallucinations you see coming out of LLMs, which can reduce trust in what the outputs are.”

A big concern for Matt is data privacy: “I also have concerns about data being grabbed for model training, especially in cases where you are providing a safe space for people to talk about how they feel about a particular service. You would not want user research data consumed in this way. Terms and conditions can change over time, as we have seen with ChatGPT and Grok.”

"We don’t yet know what the full implications are, whether they are advantages or negatives, but we have to be flexible and willing to change our thinking as things evolve"

What does AI mean for data and software?

Developer Ben Eskola points out that AI-based tools are not necessarily good at finding outliers, or at showing us that our initial assumptions may be wrong: “The kind of tasks LLMs can be good at are ones where you want the ‘normal’ result, something close to the most common answers to a given question and so on.

“So my concern would be that the norm is not always what you want: in user research, for example, you might already have a sense of what the norm is. And in that case, firstly I don’t think you could rely on an LLM to tell you that your initial assumption is wrong; and secondly, you might also be interested in the outliers, which an LLM is naturally not going to account for.”


Unboxed director Martyn Evans questions the race to productivity above all else: “The biggest problem is when people talk about AI increasing efficiency and productivity. I don’t know why we want to increase productivity when we could be talking about increasing quality.

“In a medical setting, if AI is going to help increase the quality of diagnosis, then great - but when we hear it can do the job of six people, then I think that’s not necessarily a good goal. Replacing people in a process because it will save money doesn’t seem like a very noble goal to me. So it’s a philosophical question for me, it’s not just about practicality. Obviously it would be a good thing to increase the quality of experience, but not at the expense of people.”

Martyn also questions whether we should be rushing to use imperfect new technologies when we don’t effectively use the ones we already have.

“As new tech emerges, there's always excitement and there’s always resistance,” he says. “But a lot of the problems we face on a day-to-day basis result from people not making good use of the tech we already have. So we know how to write software to help people do their jobs more efficiently - but we don’t, and people still use rubbish software, and software still gets in the way of what they want to do.

“Similarly, there is so much we can do with good, well-structured data, but we don’t. There's a lot of noise about AI which just feels like ‘Let’s whack another layer on top of our already imperfect technology’ - and this means the implications of doing a bad job with AI will be a multiplier for the negative impact of bad software and bad data.”

"There's a lot of noise about AI which just feels like ‘Let’s whack another layer on top of our already imperfect technology’ - and this means the implications of doing a bad job with AI will be a multiplier for the negative impact of bad software and bad data"

AI for delivery management and outreach

Similarly, delivery manager Shaneek Glispie is cautious about the risk of losing the human element in interactions: “I had a recent experience with a chatbot that had clearly been created using generative AI. I was trying to make a complaint and the chatbot couldn’t help me. I just ended up frustrated, and from an end user perspective, missing that human element in that situation was not great.”

Like many other Unboxeders, Shaneek has been keen to experiment with AI tools and find out where they can save her time and bring a fresh perspective.

“I help out at a charity and I needed to request feedback from people who had attended an event that I helped run. I used ChatGPT to suggest the questions I should ask, and I was surprised by how helpful it was,” she says.

“Not everything is that useful, though. I’ve experimented with an AI design tool which wasn’t user friendly at all, and which produced designs that didn’t fit my brief. At the moment, I’m not sure that it would help me with my delivery management tasks, or with complex research questions.

“I was thinking about how to facilitate a roadmap session and what I would need to ask an AI tool to generate a useful output, and I realised I wasn’t exactly sure what I would ask. In these circumstances, where I can’t think of what to ask, it could be a symptom of a problem that isn’t going to be easily solved, whether that is by AI or by me personally. In these situations, I tend to draw on my own experience as a delivery manager, rather than seeking external solutions.

“I’d imagine that it would be the same when using AI tools to aid with research questions: it would be fine for simple things, but when it comes to more complex issues, I’m not sure it would help.”

So far, we’ve spoken about how Unboxeders feel about using AI tools in a project context. But what are the possibilities for streamlining marketing and business development activities?

I asked Jo Oliveira, who is responsible for strategic partnerships and business development, how she felt about using generative AI tools: “In my limited but growing knowledge of what it can do, there’s potential for it to help,” she says. “I can’t envisage how it would work for user centred design because there is so much uncertainty about the results you get from a generative tool, and it would be risky relying on it for user research without knowing the biases that come from its training data.

“I do think it can be useful in a marketing context, but it can’t replace the human touch - and neither should we expect it to. We need to use our own authentic tone of voice and incorporate our own experiences. I can always tell the difference between outreach messages that have been generated by AI and those that have been written by a human because they never sound quite natural.

“I’ve dabbled a bit with using it for outreach. I found a tool that was useful to a certain degree, but I ended up rewriting so much of it that I dropped it in the end. I am excited to see where it goes in future as I think there are loads of different uses. But I don’t think it’s there yet.”

Fede showing a map on his screen

User experience in the age of AI

Unboxed head of design Liam Hinshelwood also sees great potential for the future. He sees both positives and negatives in user centred design practitioners increasingly relying on AI tools, and he thinks the effects will be profound - particularly for user experience, where we will see a shift towards intent-based outcomes.

“If you look at it from the user perspective and especially from a service design perspective, it radically decentralises access to services. Google has traditionally been your homepage, whereas now AI is the way you achieve a service outcome. Our services are going to be more atomised - microservices threaded together through AI experiences. AI is less about having your groceries delivered by Tesco, and more about having your shopping delivered through an aggregated service. You won’t be going to a specific weather provider: instead you will ask your AI to provide you with the best info available and how it’s going to affect your day, or how you are going to travel from point A to point B.”

He also strikes a cautionary note about the ethics that often get overlooked in the rush to adopt AI tools: “Part of me is worried about equalities that may be lost in the name of efficiency gains. Based on my own experience using AI tools, I recognise that they don’t always achieve the result I want. I worry they may make us lazy and reduce the quality of our output.

“Also, the massive energy usage of AI is a huge problem, and if we are not engaging with the consequences of data usage in AI and training LLMs, we are storing up huge problems for the future.”