Innovations in Writing

AI’s evolving role in art, storytelling and marketing

In this episode, Ananya Bhargava interviews Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University. The discussion explores imagination in the context of technology, the potential for AI-generated marketing to connect with audiences, and the complex issues of copyright and ownership.

Transcript

[Intro Music]

Ananya Bhargava: Artificial intelligence is changing the way we create, consume, and connect with content. It’s generating images, writing scripts, composing music, and even shaping ad campaigns. As AI takes on a larger role in storytelling and marketing, its progress is sparking new conversations about process, participation, and creative control.  

I’m joined by Dr. Ed Finn, founding director of the Center for Science and the Imagination at Arizona State University. He’s an associate professor in both the School for the Future of Innovation in Society and the School of Arts, Media and Engineering. With a background in literature, technology, and storytelling, Dr. Finn brings a unique perspective to the conversation about creativity in the age of AI.

In this episode, we’ll talk about imagination in the context of technology, the potential for AI-generated marketing to connect with audiences, and the complicated issues of copyright and ownership. We’ll also discuss the social responsibilities AI companies face as they shape the future, and what that future might look like for both artists and the broader creative industry.

Bhargava: Just to start off, could you please talk a bit about your experience with the intersection of AI storytelling and creative collaboration?

Ed Finn: To answer that question, I guess I need to go back to my own education, because when I was applying to graduate school, and even when I was an undergraduate and even in high school, I was very interested in the intersection of technology and culture and all the ways that we are building these tools that are changing fundamental aspects of the human experience, including reading, writing, and what it means to interact with other people. If you think of culture as a verb, how do we do culture? How do we interact with one another? All of that is now filtered through these increasingly complex technological systems.

And when I started out, those systems were not really intelligent in any way, but they were already acting with a lot of agency through algorithms and recommendation systems and filters. I wrote a book called “What Algorithms Want,” which came out in 2017, that explored some of these questions of how we imagine what computers can do, how that shapes our interactions with them, and how, in some ways, it actually shapes what they can do, what they can accomplish. And AI has sort of bubbled up from within this long-running set of questions that I’ve been interested in. Now there are increasingly agentic and powerful systems that are shaping what we read and write and how we interact with each other, but that are also capable of creating their own fiction and creative outputs: creating films, creating images and artwork of different kinds. And so we’re at a real watershed moment where we have created tools that are challenging our boundaries and our notions of creativity and imagination, and forcing us to ask questions like, “Is creativity a human trait?” “Is imagination exclusively a human trait? Can machines do that too?” And so that’s where we are.

Bhargava: As you said, when we talk about the concept of imagination, it usually implies something that’s very human. So do you think that AI can truly imagine, or is it simply rearranging existing knowledge in ways that appear creative? 

Finn: That’s a really deep question, and I don’t know that I have a definitive answer to it. It also depends a lot on how you define imagination, and imagination is not that easy to define. But as I’ve pursued this area of research, I think about imagination as the ignition system for all this other stuff that we know and care about: anticipating the future, foresight, empathy and imagining what it’s like to be in somebody else’s shoes, resilience, problem solving, working through challenges. So that’s one definition: it’s this cognitive system that we don’t spend a lot of time directly analyzing, but that we rely on all the time in our everyday lives in all sorts of ways.

And another way of thinking about imagination is as, I don’t know if you’re a Star Trek fan, but the mental holodeck that we all have in our heads. So the imagination is a simulation system, and neuroscience, cognitive science, is showing us that when we remember something and when we try to imagine the future, a lot of the same equipment in the brain lights up. So we use the same parts of our brains to access the past and to access the future, and I think those are the simulation systems at work, right? You’re imagining something that isn’t here right now, and you’re thinking about what it would feel like to be in that situation, whether it’s a memory or something that might happen in the future. But I would go a little farther: I actually think that simulation is something we do all the time. I construct a model of the world that lives in my head. I get data from my senses. I get data from my own memories and my thoughts. All of it goes into the model, and I create this narrative, this story, about what I think is happening. And by the way, I’m also modeling myself, and I’m modeling you, thinking about who you are and what you’re thinking and what might happen next. All of this is baked in.

So if we think of imagination as a kind of simulation of the world, that’s not so different from the kinds of simulation and modeling that AI systems do. But there’s one really crucial difference, which is that we engage in imagination from a position of embodiment. We have all of this physical experience, embodied experience, and the substrate, the neurons and the brains that we’re using to imagine stuff, is completely different from the substrate of the computational systems doing the simulating. Think of something like a large language model. You’ve asked it to write a haiku about a duck. Is it imagining the duck? No, I really don’t think it is, but it is simulating something. It’s simulating this big, complicated model of language. And language is a subset of the imagination that humans do. It’s just that we have so much more. You know, we have all of that physical memory, we have sensory memory, we have all this other stuff. So the much shorter answer is no, machines can’t imagine in the way that humans can. I think the more interesting question is, how can we humans imagine better, imagine in more interesting and rich ways, with machines?

Bhargava: An argument that’s given in favor of integrating AI more into artistic fields is that it democratizes creativity, almost acting as a tool to make art more accessible. So is that the vision you see? Is that the direction you would want to take AI in creativity?

Finn: I’m not sure that I really subscribe to that. I see the argument for democratizing art and making it more accessible, but I think there’s a deep problem with that. Really, there are two big problems. The first is that it’s operating on a kind of bell curve. AI-generated art is only going to present you with this middling average of all of the data that was ingested into the system in the first place. So it’s much harder to get anything that’s genuinely novel and interesting. It’s much harder to push the boundaries, because you’re only recombining elements. Sometimes I use the metaphor of “freeze-dried fragments of human creativity.” You know, millions of freeze-dried fragments of human creativity are stirred together in a bowl, and you add some water, and you heat it up, and voila, you’ve got this oatmeal. But it’s not the same as an actual meal. So that’s the first problem.

The second problem is that you’re making art easy. You’re saying, “Look, you can create anything in the style of Studio Ghibli, right, anything in the style of your favorite artist.” So when you turn art into a button that you can push, you’re instrumentalizing it. You’re simplifying it, and you are taking away all of the context, all of the labor that is involved in actually making that art in the first place. And so you’re really devaluing art in some important ways when you do that. And maybe you could say, “Okay, well, let’s just not call it art.” What you’re doing is creating a kind of toy, or a consumer product, or a marketing tool. Those are all very good reasons to use AI, and there are some good arguments for why you might want to do things like that. But if you say, “Oh yeah, this is art, and so we don’t need human artists anymore, because we just have this button we can push,” I think that’s really problematic.

Bhargava: Another argument, on the other side, is that since AI lacks human experience, emotion, and intent, its output can’t be considered real art. So if we were to apply that to, I guess, a more marketing context, do you think that consumers will ever emotionally connect with AI-created stories in the same way we connect with human-created ones?

Finn: Oh, I think consumers will connect with AI-created stories. I think that’s inevitable. It’s probably happened already. Artificial intelligence is not something that only happens on computers. Artificial intelligence is the corporation. It’s the production of an advertisement by a committee or a network of 50 different people who all have different jobs. So we’ve been creating this kind of consumer product of advertising for a very long time, and it’s, I think, often quite divorced from what I would think of as art in the traditional sense, something that a person created to express a feeling or to capture an idea. Obviously there’s some blur, right? There are gray areas between these two poles. But in general, I think it’s entirely believable that AI will generate narratives, generate artwork, that people find really compelling, in part because it has all of this prior work, decades of human creativity and different kinds of outputs, to draw on. It doesn’t seem like it’s going to be very long from now, and it’s probably already happening, that people are feeding thousands and thousands of hours of television shows into AI systems, and they’re going to be able to create a new television show.

And you see that even setting aside AI, in the commercial process of creating mass entertainment. Let’s take a sitcom as an example. Sitcoms are familiar. They’ve been made in similar ways for a long time. People expect them to work in a certain way. They’re created by giant committees. They have directors and producers and so on, but there are lots and lots of people involved in that process, and there’s a real sense of caution, of playing to the middle and swimming in your lane, because that’s what the whole medium is, right? Ultimately, a sitcom is a vehicle for selling ads and getting people to watch the show, so that the advertisements will be purchased and the network, or whoever the platform is, will make money. That is a totally different model from a painter who decides to paint something and might not be thinking about money at all when they paint. Those are extremes, and of course there’s a lot of gray area in art and commerce, but I think it’s entirely plausible that AI will create content like that, because we already have these more-than-human or non-human socio-technical systems that have people, business operations, computers, and software, all of these different things in the mix, creating something that people want to buy.

Bhargava: It sounds like we’re separating those two different worlds, where something made for profit is a completely different thing from a purely artistic pursuit. I’m an artist, and one thing that really made me go into marketing was the fact that I could use the skills I learned in fine art, those principles of design and elements of art, and apply them to graphic design, so that I could make more money than a typical artist would make, and I think a lot of artists do that. So yes, maybe we can draw that line. But is it possible that this wave of AI-generated art could flood the internet with so much work that the over-saturation creates an environment where artists can’t find real work and don’t have any way to sustain themselves?

Finn: I think it’s a real concern. And your story about your own identity as an artist is, first of all, a good reminder that these things are very entangled, and that these two poles I’m identifying are, in some ways, ideals. Of course, anyone who wants to work as an artist needs to figure out a way to make money from their art. I think that one of the major problems with the rising tide of AI-generated content and creative output is exactly what you’re talking about. It’s taking away a lot of the vehicles for financial support that artists have, because they can do commissions, they can do gig work. And I don’t want to speak for you, but I would assume that if you’re getting paid to create a marketing campaign for somebody, that has some overlap in your mind with what it means to be an artist. You’re creating something, and you’re enjoying that, and it’s addressing that set of intrinsic goals, but you’re also doing it because somebody’s paying you. And it’s not like you were going to draw this, I don’t know, picture of a cheeseburger unless somebody was paying you to draw it, right? And so there are these extrinsic motivations, and you’re finding a balance or a compromise between your own interests and what you need to do to pay the rent.

What’s going to happen with all of these AI tools is that they’re devaluing art, because now you’re competing against systems that can do this for free and can instantly redo work. If you don’t like the first draft, you just edit your prompt, and you push the button, and 30 seconds later you get something completely different. So it’s completely changing the workflow of how this kind of content is created. I think it’s a major issue, and I don’t really see a solution within the paradigm that we’re in right now. I think the only solution is if people start to value human creativity in a different way, and companies start to do something like celebrating that they’re working with human artists.

Or think about the space of, say, musical creativity, where artists are coming back to the live show and live performance as a really important part of their economy of being an artist, and of the way that they connect with fans. We’re also not far from the time when you can push a button and instantly get a new song in the style of your favorite musician, but you can’t yet have that experience of actually connecting with that human artist, that human creator: hearing them perform live, creating the experience and the direct human connection. I don’t know if the answer looks like stamps that say “100% human content,” or different kinds of industry standards, or just a cultural shift that encourages people to embrace our shared humanity, but I think it’s a major challenge for the creative industries going forward.

Bhargava: I know we’ve talked about how AI uses existing works in order to create its own, so we have to talk about copyright issues, especially with the recent Ghiblification trend, as you mentioned previously. What are your thoughts on copyright issues surrounding AI-generated content, and who do you think should own AI-created works?

Finn: I think that we need much more transparency and collective access into the way that these systems work. When individual artists’ works are being tapped to create new content, I think there should be some kind of structure for compensating them, especially when we’re looking at something really stark, like the Studio Ghibli example. And I’ll point out again that Studio Ghibli is itself a kind of artificial intelligence, right? It’s not one person. It’s a big conglomeration of people. It’s a company that has to balance aesthetic goals and financial goals. It exists in the world, and the world is complicated. But setting all of that aside, it’s also a place that created a bunch of copyrighted content, and that has now all been absorbed into these systems without any form of compensation, as far as I’m aware. I do think that there should be some kind of renegotiation of that. I’m a writer. I know that a bunch of my work has been ingested into these systems without any kind of permission or feedback from me, and I don’t think there are a lot of people out there asking ChatGPT to write an essay in the style of Ed Finn, but if they did, I wouldn’t see a dime. And that doesn’t seem very fair, especially because it’s not like people aren’t going to be making money from these systems. I think the real problem is the way that this has been unfolding. There are ongoing lawsuits with companies like The New York Times, and it seems that this is going to turn into another funnel where a tiny sliver of humanity reaps huge rewards and benefits from the way these systems are set up, while the systems harvest the creative energy, the cultural output, of millions and millions of people without any kind of compensation. So it’s sort of like this giant Ponzi scheme of human energy and attention, and that’s not a sustainable way to organize society in the long term.

Bhargava: So what do you hope to see? How could our current system potentially be reorganized or redistributed?

Finn: I think we’re heading towards a wall. We’re heading towards a wall in terms of how we’re going to collectively grapple with the increasingly capable AI systems that we’re generating, how those are being used, and how we want to govern human behavior in that context. I think there needs to be a renegotiation around what kinds of uses are permissible and when people will get compensated for their work being used. That’s probably going to require some structural changes: creating more reference points or explainability, connecting outputs to inputs, so that people can say, “Oh, yes, all right, yesterday there were 30,000 images generated that were connected to this one artist’s work, so there needs to be some kind of royalty arrangement or something like that.”

Now, I think that’s what needs to happen. I am not at all confident that it is going to happen, because there’s a huge power imbalance between the major tech companies that are creating the most successful versions of these tools and the millions of artists who are usually not people with a lot of political power or collective agency. Because, of course, there’s a huge range of responses and reactions to these challenges, and it’s not like every artist agrees on what we should do about it, it’s a very difficult collective bargaining or collective negotiation problem. But I think that the consequences for human society are really grave, because it’s going to continue to disincentivize art. Or maybe it will just change artistic practice into a much more direct human-to-human kind of thing, because nobody’s going to try to work as a concept artist or a graphic designer in the corporate world anymore; there just won’t be any point. But that means that there will be fewer ways to make money as an artist, to define one’s life around art.

Bhargava: That brings me to sort of the broader implications. So we talked a bit about the recycling of styles, but a big argument against AI art and just AI content creation in general is that it encourages laziness instead of innovation, and eventually what we see in terms of brand image and things like that will become incredibly homogenized and inauthentic. So do you think that’s what’s going to happen?

Finn: Ironically, I think the positive outlook is that we are still going to have artists, and artists are going to play with all of these AI tools. Artists are going to be the ones who break them and mess with them and push the boundaries, as artists have always done, to create genuinely new content and to innovate and to experiment. That is a huge part of what art is all about. I think that the edges of human creativity will continue to expand. We’ll continue to do weird, new things, like we always have done as a species. And for all the negative things that we have been talking about, AI is incredibly empowering as a platform for experimentation, for rapid prototyping, for trying things out. If there were a better value structure around it, it would be even more empowering for artists to play with these tools and make new stuff. I do think that people will continue to push the envelope and create new things. I think that the flattening of creative industries through AI is probably going to have a polarized, or maybe idiosyncratic, effect. There will be fewer ways for artists to make money, but there will be a greater hunger for the genuine novelty of actual human art, for things that are not part of that homogeneous sameness. And so there will be this pulling in two different directions, I think.

Bhargava: I definitely agree. I think one of the most fun parts about learning art history is just seeing what was going on at the time and how artists responded, whether that’s the invention of the camera and then the rise of surrealism, or, hopefully, what we’re seeing now. I saw that you’ve annotated a version of Frankenstein, so I’m wondering if we can talk a bit about how Frankenstein explores those themes of ambition, creation, and unintended consequences.

Finn: Yeah, so Frankenstein is a great metaphor for AI, in part because, just as our current AI systems are assembled out of parts, the creature is assembled out of all of these human body parts – and actually other animal body parts too, it’s suggested in the novel – and the creativity that we see on the screen when we interact with AI systems is, as I said before, just the desiccated parts of actual people. When you look closely at AI, you can see all the people in there, right? The creativity of AI, the genius of AI, is actually human genius. It’s just been reconstituted for us in a new way. I think Frankenstein is a great metaphor in that way.

But our take on Frankenstein is not just that it’s a cautionary tale, a story about humans playing with fire, doing things that they shouldn’t do, and getting punished for it. It’s not just a story about hubris, though, of course, that’s a part of it. Our take on Frankenstein is that it’s really about scientific creativity and responsibility. What responsibilities do we take on when we become creators, when we make new things in the world? You can look at that through the lens of AI, which raises one of the most important questions facing humanity today: what are our responsibilities as the creators of these intelligent systems? But also, what kinds of responsibilities do we have to one another? In the novel, the creature is not born evil. The creature is highly intelligent and very eloquent. It teaches itself language, teaches itself to read. It reads all of this philosophy and has deep moral perspectives on its role in society, but the creature is shunned by humans. It’s shunned first of all by Victor Frankenstein, its creator, and then other people treat it badly, and the creature becomes this dangerous psychopath, right, this deadly killer.

You know, another moral of the story is that you have to love your monsters. You have to take responsibility for your creations; you have to be a parent. The idea of creativity, creation, making creatures, is all tangled up with the scientific version of that and the biological version of that, parenting. And this is something that Alan Turing wrote about in one of his most famous papers, where he describes the Imitation Game, the Turing test. He says we should think about intelligent machines, if we ever build them, like children. So these are some of the lessons I think we need to take away. There is that story of unintended consequences, of destroying things because we didn’t know what we were doing. That is happening. Industries are being disrupted. People are losing their jobs, and I don’t want to make light of that. But we also need to think about that idea of responsibility, and about what companies should do. There’s no unringing the bell. We’re not going to just package up AI and bury it deep in the ocean and pretend that it never happened. We have to deal with the consequences of this world that we’ve made, and so that means taking responsibility for our fellow humans, taking responsibility for shifting business models. How do we create a new world that is better or more hopeful than the one that we are burning in the fire?

Bhargava: I like that take a lot. So do you have any other connections you’d like to make between AI and Frankenstein? And what lessons do you think marketers should take away from this?

Finn: I think one important lesson marketers can take away from Frankenstein is to see the world clearly and be transparent about what you’re trying to accomplish and how you are dealing with people. A very simple version of that is to just be honest about when you’re using AI and how you’re using it. I have already had the experience, and I’m sure many others have too, of emailing or interacting with a customer service tool of some kind and growing increasingly suspicious that I’m just talking to a robot, because I’m not getting any useful answers, just that bland, middle-of-the-road, placating language that doesn’t actually address my question, because the AI doesn’t really understand my question. So that’s a very simple version of transparency, but I think a deeper version would be to reflect on and understand for yourself what these tools can and can’t do.

So Victor Frankenstein’s biggest mistake was that he didn’t really think through the stakes of what he was up to. He was so obsessed with solving this technical problem, achieving this goal, that he never paused to think about what would happen next. And I think that for marketers, for anyone in business, thinking ahead, playing through the chess game in your mind a little bit farther, thinking a couple of moves down the line, is incredibly important, especially when we’re talking about something like AI, which is changing so rapidly and having such a transformative impact. You really need to think through the potential consequences of these changes and understand how they’re going to shift the business, how they’re going to shift relationships with customers, what kinds of hopes and fears customers are going to have – because that’s essential to the whole business of marketing – and to take those head on, you know, and not try to run away from them like Victor Frankenstein did.

Bhargava: So, keeping that theme of social responsibility in mind, do we see a possible future where AI can continue to evolve at the pace that it is without jeopardizing the roles of creatives in the workforce?

Finn: I cannot imagine a future with AI that does not jeopardize the jobs of a lot of people in creative industries. I think the best we can hope for is a future where many new jobs are created, and there are transformations in roles and transformations in work. And I’m speaking to someone who is really the advertising expert here, I should just say that. But from my perspective on the periphery, advertising is already an industry with a huge amount of experimentation, redundancy, and guesswork. You show people the same image 50 times in the hope that one time it’s going to break through. You show a thousand people the same idea in the hope that 10 of them are going to buy the product. There’s all of this repetition and ambiguity and uncertainty in the business of advertising. And in another way, advertising is fascinating because it’s really this cultural id. It’s this ongoing dialog about what we think we want. It’s one of the places where the future is most frequently represented in our society. Ads are always showing us a possible future self, a future world: buy this product, and this is what you’re going to experience.

So in all of those ways, it’s a really important discourse on what kind of world we want to live in. We could turn some of those capacities for anticipation, for storytelling, to reflecting on what a positive and hopeful future for the industry would be, trying to model that future and show it to people. That’s really important. I think that there’s a tendency to race to the lowest common denominator, you know, the cheapest, fastest, simplest version, where whoever gets there first wins, and that, I think, is going to be really destructive for a lot of people in the context of AI. So, you know, we should try to think of better futures, rather than the cheapest future, the most efficient future. It’s easy for me to say, because I’m not in the business, and I understand that people feel a lot of pressure to stay competitive and to survive in a cutthroat world. But if we don’t pause to try to figure out what that better future is, we’re never going to get there.

I think we’ve said this implicitly, but AI is humans. It’s people all the way down. And if we can center that fact and make that knowledge more evident and more present in all of the work that we do with AI, I think we’ll be better off. There’s this fantasy of the omniscient, omnipotent machine, but the machines are just coasting on all the things that people have done in the past. And so if we can reflect on that and celebrate that, especially in creative industries, then, first of all, maybe we can recognize and compensate some of those people in better ways. And second, we can keep the focus on the ultimate goal, which is making human life better. It’s not about building better AI as its own goal. There are a few people who think that’s the goal, and they’re just trying to hand off civilization to the machines. I’m not among their number. I think we need to be building a better future for humanity. And so the starting point is to remember to see the humans, to see all the humans in the picture and not try to erase them.

[Outro Music]

