Ideas Matter

AI and Creativity

Episode overview

What does AI mean for the future of human creativity? Janet Rafner of the University of Southern Denmark in Odense, an expert in hybrid intelligence and human–AI co-creativity, joins Sandro Galea to discuss AI’s role in the creative space.

Transcript

[Sandro Galea] Welcome to Ideas Matter, a podcast hosted by WashU. I’m Sandro Galea, vice provost of interdisciplinary initiatives and dean of the School of Public Health. AI is a tool, yet sometimes it feels more like a co-creator. We iterate with our chatbot of choice. We ask it to do something, then maybe ask it to do the same thing a bit differently, refine its responses, and soon the process starts to feel less like using a machine and more like bouncing ideas off a colleague. This can be both exciting and in truth a bit unsettling.

And it’s happening more and more as AI becomes increasingly ubiquitous in our lives and work. So what does AI mean for the future of human creativity? How can we work with this technology to generate ideas, art, and innovation? What are the potential pitfalls of using AI in the creative space? I am excited to explore these questions with today’s guest. Janet Rafner is an assistant professor at the Centre for Integrative Innovation Management in the Department of Business and Management, and also at the Danish Institute for Advanced Studies, at the University of Southern Denmark in Odense. She’s a leading expert in hybrid intelligence and human–AI co-creativity. I’m really delighted to be speaking with her today. Professor Rafner, welcome.

[Janet Rafner] Thank you very much. I’m really looking forward to having a chance to talk about these hot items and also something that I’m really passionate about.

[Sandro Galea] Well, let’s start with you. Let’s start by asking a bit about your background. So how did you come to be doing the work you’re doing? When did you first become interested in AI?

[Janet Rafner] You’re in for a bit of a long story now.

[Sandro Galea] Please go ahead. I like stories.

[Janet Rafner] So I have a very interdisciplinary background. I got my first two degrees from the University of Virginia, in physics and in studio art. I moved to Denmark on a Fulbright Fellowship to study how complex phenomena can be conveyed visually. Then I stayed and did a master’s, also in physics, but not plain old physics: I was doing computational fluid dynamics. And I became really interested in how humans solve problems and how algorithms solve problems, looking in this case at a science question: How do we put our algorithms to work making progress, and how do we as humans think about problem solving?

And from there, I continued that work in my PhD, which is in information communication technology, really looking at developing synergies between humans and algorithms. My work has spanned from the individual level, looking at interface design, algorithmic design, and individual human cognition, to dyads, so the differences between two humans working together versus a human and an algorithm, and then also organizational contexts: How do organizations adopt these different technologies, what are the barriers to adoption, and how can they be used in meaningful ways to bring human value?

[Sandro Galea] Well, this leads nicely then into a question about creativity. Let’s move to creativity. I want to understand what creativity is. How do you define creativity, and what does creativity mean in today’s world? To what extent is creativity measurable and scientific, and to what extent is it mysterious?

[Janet Rafner] Yes, I think that I typically use a very practical definition of creativity that is generally accepted, and that is producing something that is both novel and useful. So that’s the short answer: something that’s novel and useful. But the longer answer is really that it is complex, contextual, and nuanced.

Creativity also has to do with the culture that’s receiving it. You can make something that in a Western culture would be deemed creative, but in the global South, maybe it’s not. It also has to do with the fact that beauty is in the eye of the beholder: who it is that’s judging and evaluating the creativity, the novelty, the usefulness of the product. You can also have creative environments, and you can have something that you look at and evaluate as creative where maybe the process itself to get there wasn’t creative. There are many different ways of looking at it, and that’s also what makes it so difficult to measure and to be on the same page in many different contexts: there are so many different dimensions to creativity, and many different perspectives on how it should be interpreted.

[Sandro Galea] I like that definition very much, and this leads us nicely into, let’s bring AI into it. You’ve spoken about hybrid intelligence. So tell us a little bit: What is hybrid intelligence, and how does it help create opportunities for human–AI co-creativity?

[Janet Rafner] Yeah, I’ll try to give the short form and then go a little bit deeper on each of these. Hybrid intelligence focuses on using artificial intelligence to augment human value. It’s about optimal synergies between humans and algorithms: instead of using AI to simply automate or replace, it uses AI to augment the human. And that sounds very nice.

But without practical guidelines, one could argue that many things are, in that case, hybrid intelligence. So recently, myself and a consortium of about 50 researchers from around the world, and also practitioners, have been working on developing a manifesto for hybrid intelligence, where we identify four really crucial challenges. One of them is at the interface level: designing interactions that build rather than drain tacit human knowledge. So you can imagine: How is this making value for me as an individual?

One of the big issues is tacit knowledge. So how can we create that? Then there’s an organizational level: making this hybrid intelligence, this development of human and algorithmic synergies, measurable, part of a shared culture, and actually a competitive identity. We can all very well say, let’s do something that’s good for the employees, let’s do something that’s good for society. But if it doesn’t actually have a competitive advantage in our world, then it’ll be very hard to get broader buy-in for it. And that goes directly up to the macro, the societal level: we need markets and governance structures that help incentivize this hybrid intelligence, using artificial intelligence to support human value instead of automation, replacement, and the potential for skill loss. And then, cross-cutting through all of this, is an educational challenge. How do we transform the educational pipeline, design workflows, and institutionalize ways of educating the public about how to use artificial intelligence in a way that will support their human values?

[Sandro Galea] So let’s talk about digital creativity. So what does digital creativity look like in an era of hybrid intelligence?

[Janet Rafner] Yeah, so there’s digital creativity, there’s computational creativity, artificial creativity, lots of people throwing out different words. And you can think about it in three different categories. One is what you could call computational creativity, where you have, more or less, AI researchers developing algorithms; you press go, it creates something, and then the human is left to evaluate whether it’s creative or not. Then you have what, in the old days, you would call creativity support tools, which would be your Google Docs or your Photoshop or whatnot. Nowadays, those are being augmented with AI too, so it’s hard to draw a line. But then you have human–AI co-creation.

And that’s really where I’m most interested and where I focus my work: where a human is actively working with artificial intelligence, most of the time now a generative artificial intelligence, to co-create something together. And that can mean a lot of things. You could think about someone simply prompting a GPT to create something, but there are many, many different types of interaction dynamics: turn-taking, collaborating on a shared product. I mean, imagine co-writing with an agent versus chatting about a topic and then writing it yourself. So there are many different forms now of this co-creativity that depend on the task structure, the task decomposition. And ultimately, we get into a lot of questions about ownership, about control: Where does the autonomy lie? Who has agency in the process?

[Sandro Galea] I have so many more questions now, which is terrific. But let me ask this one. Everything you just said, explicitly and implicitly, suggests that AI is a complement to human creativity. Can AI be creative by itself? Are we going to see great novels, songs, images generated by AI without human prompting? Or is this something AI is already doing?

[Janet Rafner] Yeah, so there was actually a recent court case, I believe in the United States, about exactly this topic: trying to get IP protection for an artwork created more or less autonomously. And by that I mean an algorithm was created and then set loose on its own, and over time it decided to create this work. And we are seeing that.

But coming back to the definition of creativity, it is very much in the eye of the beholder. And there is a growing distaste for artworks, innovations, and creative products made by algorithms. You could ask a room full of students, or a room full of anyone, to raise their hand if they would like to go see an exhibit of AI-generated art. Most people actually would not say yes to that.

And so we see the development of what you could call a human premium: even when an algorithm can create something that arguably is a creative product, people are less interested in seeing it, because it doesn’t have that human aspect, the story of the artist, the story of the writer, something to really dig into, that human touch. So the short answer is yes, our algorithms can create a product that by all objective measures is as creative, if not more creative, than many human creations. But whether society accepts it, whether society wants it, whether society values it in the same way, that’s not clear.

[Sandro Galea] I’ve been thinking about a metaphor from consumer goods. A lot of consumer goods are made by machine, but you and I are likely willing to pay more if something has a label saying, you know, human-made. And of course, it doesn’t mean that human-made is necessarily better, but we value it more. Would that kind of analogy potentially apply as we move forward with AI?

[Janet Rafner] Absolutely. In fact, myself and some of my colleagues, we are working on a hybrid intelligence label.

[Sandro Galea] Right, that makes sense.

[Janet Rafner] And you can see that it’s inspired by sustainability efforts and also, for example, organic or fair trade labels that represent how a product was developed. I mean, you’re exactly right. People want to spend the extra money. They want to invest in products that they agree with and want to support the production of. And I think we’re going to increasingly see this with respect to AI-created or AI co-created products. That’s why I think the research on co-creation and co-creativity is so important: to define how we can use these tools in meaningful ways. I use generative AI every day. I use it in my research, I use it in my personal life. When I talk about having that human premium or that human touch and having it labeled, it’s not to say don’t use an algorithm or don’t use artificial intelligence, but to find ways of maintaining human control, human authorship, throughout the process, so that the product has been augmented by the capabilities of artificial intelligence but has been, through and through, decided on by human judgment.

[Sandro Galea] Let me ask this question: when AI and humans work together, who’s learning more, the person or the machine?

[Janet Rafner] I think that’s very contextual. It depends on how the human is interacting with the algorithm and on the technical setup of the algorithm. But I think the technical setup of the algorithm is less interesting at this point than the human interaction with it. You can get into the details of whether an algorithm is training online based on your interactions with it or whether it’s already been trained and then just put online. But if we focus on the human for a moment and consider what the human learns, it really has to do with how the interaction is scaffolded.

We are seeing an increasing literature base showing that humans often move towards a laziness trap, or even an over-reliance, a dependence, on artificial intelligence. We’re also seeing neuroimaging studies that show how the brain functions differently when the human is interacting with ChatGPT or other types of chatbots. And I think that when the default is for the human to outsource judgment, to outsource critical thinking, to the GPT or the AI that they’re using, that’s a really dangerous game. When that becomes the norm, when that becomes accepted, then we run into issues in education systems and in companies. But when a system or a workflow is developed in such a way that it has gates of human judgment and meta-reflection built in, to help the human think about their strategy formation, to help the human think about thinking with the tool, that’s when you get really meaningful learning in the interaction.

[Sandro Galea] Do you today have a favorite example of AI art or other forms of creativity that would not have existed without AI?

[Janet Rafner] Yes, there are a couple of things that I am really interested in. I’ve had the pleasure of going to a couple of concerts by artists who have trained their own algorithms on their own voices and have, for example, sung duets with themselves. I think that’s a really nice example of how to take a technology, personalize it, and create your piece of artwork, in this case audio, in a way that would not have been possible before. I also think there are some works, some artistic image production, where the artist is actually creating and ideating with the algorithm. So it’s not a static algorithm that they’re using to do text-to-image, but you could imagine it as painting with the algorithm, using multimodal experiences to reflect the individual artist’s contribution in ways that I have never seen before. Like the progression of photography, or the progression to many types of 3D digital artwork, you’re seeing things that we wouldn’t have been able to achieve without these technologies.

[Sandro Galea] Let me ask you the hardest question of this conversation. Ten years from now, what are we going to be talking about in this conversation?

[Janet Rafner] Yeah, well. The answer to that depends a lot on what we decide to do today. And that’s what you decide, what I decide, what our governments decide to do, because we’re at a critical point. And I don’t believe that it’s predetermined, no. I think it will come from research, from action, and also from regulation to decide whether we’re moving towards a path where we continue to move the human further and further out of the loop, towards what are often seen as quick efficiency gains, reducing the number of people working and outsourcing to the technology, versus developing cultures that focus on innovating with the technology. And I don’t know which direction we’ll take. I’m hopeful that it will be the direction where humans continue to innovate with technology. But that decision has to be made. It’s similar to what we see in many parts of industry: the policy, the companies, the education around the topic are going to shape where we are in ten years.

[Sandro Galea] What do you think are key ethical considerations to keep in mind when using AI in creative processes?

[Janet Rafner] Yeah, so a few stand out. I think accountability: Who’s responsible for the outputs, decisions, and downstream consequences? What we see now is mostly the original, first-order outputs. But as these technologies have only been commonplace for the past few years, we don’t really know what the downstream consequences are. So that’s accountability. Then I think bias: generative AI reproducing stereotypes and misleading content. I’ve mentioned this before, but de-skilling and dependency, I think that’s a big problem from an individual cognition side. Data and rights are another big issue.

Who owns the data that these models are being trained on, and then who owns the outputs? There are many issues around creative labor being exploited. And then lastly, I think a topic that is under-discussed is the climate and environmental impact of generative artificial intelligence. This is also an area of my research that I’m working on now, because I feel a moral imperative to know the broader effects of the technology. Using generative AI on any task is not free, and it’s not just the cost of subscribing to a platform or buying API keys: there is a real environmental impact coming from training these large models and also on the individual inference, or individual use, side. So those are some of the big things that we need to be keeping our eyes on in the future.

[Sandro Galea] I feel like you’re sort of on the fence about this one. Are you fundamentally wary or hopeful of the disruptions AI could cause?

[Janet Rafner] I have a two-year-old, and I don’t think I could live if I weren’t fundamentally hopeful, looking at my little daughter and thinking, what’s the world going to be like? So I am fundamentally hopeful.

It is scary what’s happening in a lot of ways, when you see tools, when you see artificial intelligence, being used in ways that are harmful: harmful to individual cognition, or harmful in the case of bias or inequity. But I think that if there wasn’t hope, I wouldn’t be continuing to do this research and continuing to try to better understand the ways in which algorithms can be used for human value creation. And that’s really what it comes down to. We humans are in the driver’s seat, and we’re making the decisions about what to value. As I said, even if the technology can do something, we’re the ones who decide: Do we want to use it? Do we want to buy that product? Do we want to go to the art exhibit? Do we want to listen to the song? I’m hopeful that humans will continue to value other humans.

[Sandro Galea] Well, that’s a terrific way to end. I’m Sandro Galea. I have been speaking with Professor Janet Rafner about AI and human creativity. Thank you, Janet, for this conversation.

[Janet Rafner] Thank you for today.

[Sandro Galea] And thank you to everybody who has joined us for this Ideas Matter. I look forward to continuing the conversation.

Meet the guest

Janet Rafner

Janet Rafner is an expert in hybrid intelligence and human–AI co-creativity. She is an assistant professor at the Centre for Integrative Innovation Management in the Department of Business and Management, and also at the Danish Institute for Advanced Studies, at the University of Southern Denmark in Odense.

Ideas Matter

A WashU Podcast for those seeking clarity in a fragmented world. Dr. Sandro Galea hosts thinkers and leaders to challenge assumptions and elevate public discourse.
