Being human

With its ability to write text and generate images, artificial intelligence is making inroads into many areas of life. Perceived as threatening, enriching or just plain gimmicky, AI also raises a fundamental question: what is it that makes us human?

This image was created by the AI software Midjourney based on the prompt "Sculpture of a head computer chip on a marble base, hyperrealistic sculptures, pastel colors". (Photograph: Sir Mary / Midjourney)

“What holds the world together” is the title of this issue’s series of images, all of which were generated by artificial intelligence, or AI. Using the software tool Midjourney, we transformed text prompts into photorealistic images – with astonishing results. The sculptures depicted on these pages of Globe don’t exist in reality; the software simply selected an arrangement of pixels that would give the impression of a three-dimensional object. True to its name, generative AI has created something that would not exist without it.

Various software programs are now capable of generating text or holding conversations with people via chatbots. The best known of these is ChatGPT. Developed by the US company OpenAI, it communicates using natural-language text generated by AI – and its rise has been nothing short of meteoric. The app crossed the 100-million-user mark in under two months – a milestone that took Facebook more than four years to reach, and the analogue telephone 75 years. Google is now keen to follow suit with its recently released chatbot, Bard.

Criticism of ChatGPT wasn’t long in coming. Lawsuits have been filed alleging that the use of data to train the underlying large language models infringes copyright law. Critics also point out how difficult it is to verify ChatGPT’s output when no sources are cited and there is no way of tracing how the chatbot reached its conclusions.

A question of accountability

This is a topic close to the heart of ETH professor Gudela Grote. As a work psychologist, she is interested in when a new technology can be reliably integrated into a work process. “Quality assurance requires a technical system to be certified. But that’s just about impossible if you have no idea how it generated its results,” she says. When humans and machines work together, the question of accountability looms large. “Employees are obviously concerned about the precise extent of their responsibility,” she adds. That’s true not only for the emerging technology of generative AI but for any form of automation.

Computer science professor Thomas Hofmann is certainly impressed by how fast generative AI is developing – “although the added value from other forms of automation is probably even greater,” he says. Hofmann, whose area of research includes language models, agrees that reliability is a thorny issue. Ultimately, all text-based applications share a fundamental weakness: they are based on language models that were trained on a tremendous variety of texts – including fiction.

This makes sense for learning grammar and spelling, but made-up stories are far from ideal when it comes to factual accuracy. “Language models still can’t tell the difference between what’s true and what isn’t,” Hofmann cautions.

A question of choice

A key concern for work psychologist Grote is whether people are using a new technology voluntarily as private individuals or whether they are obliged to do so as employees. As customers, we can choose whether or not to purchase and use the technology giants’ products in our private lives, and companies respond to those choices by improving what they offer. “But as an employee, I’m trapped in a process I don’t fully control,” says Grote. “I’m faced with whatever technologies my employer has decided to use – and those decisions are rarely made in consultation with staff.”

Whether people can work successfully with these new technologies depends on a number of factors. “In my experience, the key is how competent and empowered someone feels,” says Grote. For example, those who are less well educated are more likely to worry that their job may be at risk. Equally important is how the company communicates its future technological path. “Employees need a clear idea of how they can adapt and how their employer is going to support them on that journey,” says Grote.

Ideally, this process would also address the question of which tasks we regard as fulfilling. Hofmann cites the example of language models that are optimised for programming: “A piece of code that might take me ten hours to write perfectly can be generated by the models in a fraction of a second,” he says. That frees up valuable time for other activities. “But if someone used to enjoy spending a full day programming, they’re not going to be very happy about that development,” he says.

From programming to ChatGPT, language models are making inroads into many areas of society. Grote argues that human language is something quite special. “Spoken language is what makes us unique,” she says. “Language is a creative process that expresses thoughts through words.” This is the human ability that language models are currently attempting to emulate.

It makes a big difference whether a text has been written by AI or by a human, according to Hofmann: “Whenever we use language, we’re also expressing our feelings and experiences. AI doesn’t have recourse to that experience, however well written its text may be.” During his studies, he also became interested in philosophy, and he now wonders whether intelligence and being must necessarily be tied to a biological substrate. Ultimately, he says, it’s a question of where we draw the line between artificial intelligence and being human.

About

Gudela Grote is Professor of Work and Organizational Psychology in the Department of Management, Technology and Economics at ETH Zurich.

Thomas Hofmann is Professor of Data Analytics in the Department of Computer Science at ETH Zurich.

This text appeared in the 23/03 issue of the ETH magazine Globe.
