Reckoning with generative AI’s uncanny valley

Mental models and antipatterns

Mental models are an important concept in UX and product design, but they need to be more readily embraced by the AI community. Mental models often go unnoticed precisely because they are routine: patterns of assumption we bring to an AI system without realizing it. This is something we discussed at length in the process of putting together the latest volume of the Thoughtworks Technology Radar, a biannual report based on our experiences working with clients all over the world.

For instance, we called out complacency with AI-generated code and replacing pair programming with generative AI as two practices we believe practitioners must avoid as the popularity of AI coding assistants continues to grow. Both emerge from poor mental models that fail to acknowledge how this technology actually works and where its limitations lie. The consequence is that the more convincing and “human” these tools become, the harder it is for us to keep sight of those workings and the limitations of the “solutions” they provide us.

Of course, for those deploying generative AI into the world, the risks are similar, perhaps even more pronounced. While the intent behind such tools is usually to create something convincing and usable, if they mislead, trick, or even merely unsettle users, their value evaporates. It’s no surprise that legislation such as the EU AI Act, which requires deepfake creators to label content as “AI generated,” is being passed to address these problems.

It’s worth pointing out that this isn’t just an issue for AI and robotics. Back in 2011, our colleague Martin Fowler wrote about how certain approaches to building cross-platform mobile applications can create an uncanny valley, “where things work mostly like… native controls but there are just enough tiny differences to throw users off.”

Specifically, Fowler wrote something we think is instructive: “different platforms have different ways they expect you to use them that alter the entire experience design.” The point here, applied to generative AI, is that different contexts and different use cases all come with different sets of assumptions and mental models that change the point at which users might drop into the uncanny valley. These subtle differences change one’s experience or perception of a large language model’s (LLM) output.

For example, for a drug researcher who wants vast amounts of synthetic data, accuracy at a micro level may be unimportant; for a lawyer trying to grasp legal documentation, accuracy matters a great deal. In fact, dropping into the uncanny valley might just be the signal to step back and reassess your expectations.

Shifting our perspective

The uncanny valley of generative AI might be troubling, even something we want to minimize, but it should also remind us of the technology’s limitations and encourage us to rethink our perspective.

There have been some interesting attempts to do that across the industry. One that stands out comes from Ethan Mollick, a professor at the University of Pennsylvania, who argues that AI shouldn’t be understood as good software but instead as “pretty good people.”
