Listen:
Check out all episodes on the My Favorite Mistake main page.
My guest for Episode #327 of the My Favorite Mistake podcast is Dr. Maya Ackerman, AI pioneer, researcher, and CEO of WaveAI. She’s also an associate professor of Computer Science and Engineering at Santa Clara University and the author of the new book Creative Machines: AI, Art, and Us.
In this episode, Maya shares her favorite mistake — one that changed how she builds technology and thinks about creativity. Early in her journey as an entrepreneur, her team at WaveAI created an ambitious product called “ALYSIA,” designed to assist with every step of music creation. But in trying to help too much, they accidentally took freedom away from users. That experience inspired her concept of “humble AI” — systems that step back, listen, and support human creativity rather than take over.
Maya describes how that lesson led to their breakthrough success with Lyric Studio, an AI songwriting tool that empowers millions of artists by helping them create while staying true to their own voices. She also shares insights from her research on human-centered design, the philosophy behind generative models, and why we should build AI that’s more collaborative than competitive.
Together, we discuss why mistakes — whether made by people or machines — can spark innovation, and how being more forgiving toward imperfection can help both leaders and creators thrive.
“If AI is meant to be human-centric, it must be humble. Its job is to elevate people, not replace them.”
— Maya Ackerman
“Who decided machines have to be perfect? It’s a ridiculous expectation — and a limiting one.”
— Maya Ackerman
Questions and Topics:
- What was your favorite mistake — and what did you learn from it?
- What went wrong with your second product, “ALYSIA,” and how did that shape your later success?
- How did you discover the concept of “humble creative machines”?
- What makes Lyric Studio different from general AI tools like ChatGPT?
- How do you design AI that supports — rather than replaces — human creativity?
- What’s the real difference between AI and a traditional algorithm?
- How do you think about ethical concerns, like AI imitating living artists?
- What do you mean by human-centered AI — and how can we build it?
- Why do AI systems “hallucinate,” and can those mistakes actually be useful?
- How can embracing mistakes — human or machine — lead to more creativity and innovation?
- What are your thoughts on AI’s future — should we be hopeful or concerned?
Scroll down to find:
- Video version of the episode
- How to subscribe
- Quotes
- Full transcript
Find Maya on social media:
Video of the Episode:
Quotes:
Subscribe, Follow, Support, Rate, and Review!
Please follow, rate, and review via Apple Podcasts, Podchaser, or your favorite app—that helps others find this content, and you'll be sure to get future episodes as they are released.
Don't miss an episode! You can sign up to receive new episodes via email.
This podcast is part of the Lean Communicators network.

Other Ways to Subscribe or Follow — Apps & Email
Automated Transcript (May Contain Mistakes)
Mark Graban: Hi, welcome to My Favorite Mistake. I'm your host, Mark Graban. Our guest today is Dr. Maya Ackerman. She's a pioneer in generative AI and one of the leading voices at the intersection of technology and creativity. She is an associate professor of computer science and engineering at Santa Clara University, and the co-founder and CEO of WaveAI, one of the earliest generative AI startups, whose products have reached millions of creators worldwide. Maya's research and writing explore how AI can elevate, rather than replace, human creativity. And her brand new book, available now, is Creative Machines: AI, Art, and Us, which shares her vision for humanity and how we can thrive in the age of AI. Maya, welcome to the podcast. How are you?
Maya Ackerman: I'm doing well. Thank you so much for having me.
Mark Graban: It's great to have you here. We'll be really excited to talk about the book and some other AI topics, but let's start as we usually do here. I'm curious, in your career and in different fields between academia and business, what would you say is your favorite mistake?
Maya Ackerman: This question really gets me thinking because, to be frank, I've made so many mistakes.
Mark Graban: This is the place to be frank about that. It's okay.
Maya Ackerman: It's a whole mountain of them. But there's one that I think I learned the most from. They say that you get it right on your third product, which was the case for us at WaveAI. The second product that we built, the one that we put the most effort into, was following the advice of people who've been in the industry for a long time. It was called ALYSIA, and it was going to do everything. It was going to support you in your music-making journey in every possible way. There was a voice that could sing for you and something to help with lyrics and something to help with melodies—anything you want. And in the process, as we learned from our users, we had taken away some of their freedoms by accident. It was harder for them to express themselves in some ways, because we made sure that our system would be there to step in and help no matter what. And that was not the right way to do things. This is what eventually led to my model called Humble Creative Machines, which guided the rest of my work.
Mark Graban: What was the first product, since you brought up what your second was?
Maya Ackerman: The first one addressed the part that I struggled with the most, which is vocal melodies. You would give it a sentence like, “It's so great to be on your show today,” and it would give you 10 different ways you could sing it. And my world of possibilities just blew up. Suddenly, I could compose in different styles; I had freedom all of a sudden. We commercialized this first, but we made a lot of mistakes with that one. First of all, we didn't monetize it at all, so it was hard to tell how much value it was giving people. There were issues with the interface. Product number two partially solved some of that but introduced new issues, new challenges.
Mark Graban: It seems like the two main questions, coming out of Silicon Valley and Steve Blank and later adopted by the Lean Startup movement, are: number one, can we build this product? And number two, should we build the product, which is more about the business model. I see from the look on your face that that resonates with you.
Maya Ackerman: Yeah, for sure. Just because you can doesn't mean you should, because you can build so many different things and so many little variants. And the little variants can make a really big difference.
Mark Graban: And how much time went into building the second product, ALYSIA? When you decided to go that direction, how much time and effort was invested before you started getting that user feedback?
Maya Ackerman: We went all in. We were going to create this whole universe of music-making. It took about eight months, but eight months of evenings and weekends, and that's all we did. It was crazy.
Mark Graban: And that team was you, and are you writing code as a professor of computer science, or do you have others working with you?
Maya Ackerman: My co-founders were my former student, Chris Cassian, and our CTO was David Loker. They're the ones who were the most hands-on with the code.
Mark Graban: And do you have more of a background as a musician? You sang for us a little bit. Is that part of the inspiration for putting these interests together into a business?
Maya Ackerman: That was the entire inspiration. When I was getting my PhD in computer science at the University of Waterloo, I was also taking voice lessons from an opera singer. And to my shock, within about nine months, people were willing to pay me to perform. And I realized that I really wanted to write my own music. And all I could write were these folk songs that sounded like the stuff I listened to in my childhood, which is normal, but I didn't know that, and I was incredibly frustrated. And it wasn't until we built the research version of ALYSIA, the early version of that melody maker, that I was finally able to get more freedom in my songwriting. So it very directly came out of my own experiences.
Mark Graban: Was part of the issue with ALYSIA that it was trying to do too much, too broadly? Or was it more the other issue? I'd be curious to hear an example of where, as you said, it was taking away some of the control the artist wanted, instead of being more of an assistant.
Maya Ackerman: Exactly. Nobody wants to take anything away from their users. It's never on purpose. But that's what happens more often than not. The AI comes in and we're so excited about what it can offer that we don't realize we're infringing on the user's creative freedom and their ability to express themselves. If you have a really talented musician friend and they're going to write a song with you, they're going to be quiet some of the time. They're going to step back. They're going to hold back on their own genius if they're trying to help you. And AI needs to learn how to do that. We understood that what people wanted to do was express themselves. And it was critical to redirect the entire AI, reposition it fully, so that the human was not “human in the loop,” but the human was a full conductor, a driver behind the wheel. And that's what we learned to do for our third product, which actually succeeded.
Mark Graban: And tell us about that third product.
Maya Ackerman: So it's called Lyric Studio. That's our bread and butter, our most successful product; that's the one that has sold millions. And it's very simple on the surface. When you get in, there are no bells and whistles. It's not super complicated. There's a text box where you can write your lyrics, and whenever you're stuck, Lyric Studio is there to give you a suggestion for how you can move forward—a line of lyrics or several ideas. And the point is, first of all, to stay in your style. Lyric Studio gets where you're trying to go. And secondly, this is creativity. There's no right answer. So it shows you different possibilities, some of which will resonate with you and some of which won't. And it never takes over; it can't. The interface is designed so that you remember it's all about you, and people become better pen-and-paper songwriters from using it.
Mark Graban: As long as the software isn't connected through an API to Spotify, where it's publishing music under your name, the human is still in the loop in terms of reviewing lyrics, even with that second product. But I'm curious. And I'll preface this with the joke I've made here before: I'm not a musician; I'm a drummer, as they say. So I don't have an appreciation for writing lyrics or writing melodies; my job is to play along. I'm curious for an example. Since the artist can hear something and tweak it, or see a lyric and change it, was the issue more an emotional one: “Hey, this is doing too much for me as the artist”?
Maya Ackerman: With ALYSIA, the mistake that we're talking about was partially the interface, too. In an effort to make things simple for people, we had these background tracks that they would be composing on top of, and those would have a certain number of lines. It just wasn't flexible enough. You could technically use it in other ways, but the way it naturally worked pushed you in a direction that did not give people the full freedom they were looking for. And it's something you hear constantly from people today about the mainstream AI products. With text-to-image models, people say, “It doesn't get me. I can't iterate.” People want creative freedom. People are being creative because they want to express something, and they want full freedom; they don't want to compromise it. In Lyric Studio, they're not checking the lyrics and modifying them. They're writing the lyrics, and they're using Lyric Studio only to the extent that they need it, one line at a time. And they always choose; most people edit the suggestions most of the time, or even just use them as inspiration.
Mark Graban: And how is it different than somebody trying to use, let's say, a broader general-purpose tool like ChatGPT or even trying to build a custom GPT for that purpose? Is this trained on more of the artist's own direction or previous work, or is it a model that's also trained broadly, the way general AI tools might be trained?
Maya Ackerman: This is our own custom model. The one difference that I like to emphasize is that we built Lyric Studio to be a creative partner right from the beginning. Lyric Studio doesn't give you historical facts. It doesn't help you figure out what to make for dinner. It has one goal: to help you be creative in lyric writing. Now with the LLMs, ChatGPT in particular and all of its imitators, the use case that got them really excited early on was replacing search engines. Back in 2023, there was already a lot of focus on that, and there still is. And when you're building a search engine versus when you're building a creative partner, you're making a lot of opposite choices in the creation of the model. One is trying to be correct, convergent, and consistent. The other one is trying to be different, novel, wild, and free. Those are opposite things. And that's why LLMs, and pretty much any lyric system that relies on LLMs, don't work very well. It doesn't mean that it's useless. It can still inspire something, but it's not very well-suited for that use case.
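To make that convergent-versus-divergent distinction concrete, here is a minimal Python sketch. It is a toy illustration with made-up words and scores, not WaveAI's or any vendor's actual method; it only shows how a single sampling knob, temperature, pushes the same model toward consistency or toward variety:

```python
import math
import random

# Toy next-word distribution: scores a model might assign to candidate
# continuations of a lyric line. Words and scores are invented for
# illustration only.
scores = {"fire": 2.0, "desire": 1.6, "higher": 1.5, "wire": 0.4, "choir": 0.2}

def sample_word(scores, temperature):
    """Softmax sampling: low temperature converges on the top word,
    high temperature spreads probability across the whole space."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights)[0]

random.seed(0)
# A search-engine-style model wants the same safe answer every time.
print([sample_word(scores, temperature=0.1) for _ in range(5)])
# A creative partner wants different, plausible answers on each call.
print([sample_word(scores, temperature=1.5) for _ in range(5)])
```

At a temperature of 0.1 the sampler returns the top-scoring word almost every time, the consistency you want from a search engine; at 1.5 it wanders across the candidates, the variety you want from a creative partner.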
Mark Graban: I'm curious, whether with the ALYSIA product, which was more about melody, or with Lyric Studio: is that proprietary model designed in a way to prevent, let's say, inadvertent plagiarism? It seems like that's always a risk with using an AI tool.
Maya Ackerman: I think those two things are connected. If you're trying to be consistent, then you're going to give people the same stuff. If your AI is designed to explore the creative space of possibilities, even if you and I give it exactly the same input—let's say we give it the same five lines of lyrics and we ask it for the next idea—how many lines of lyrics can follow any verse? It's an incredibly large space. It's a massive space. The possibility of generating the same thing for two different users is practically zero if you built the system to be divergent. That's a lot less of an issue than it is with current LLMs.
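A quick back-of-the-envelope calculation shows why that collision probability is practically zero. The vocabulary size and line length below are illustrative assumptions, not Lyric Studio's actual numbers:

```python
# Rough illustration of why two users practically never get the same line.
# Assume a modest 10,000-word lyric vocabulary and 8-word lines; both
# numbers are assumptions chosen for illustration.
vocab_size = 10_000
words_per_line = 8

possible_lines = vocab_size ** words_per_line
print(f"{possible_lines:.2e} possible lines")  # 1.00e+32

# Even if only one line in a billion were grammatical and on-theme,
# a divergent sampler would still have ~1e23 candidates to draw from.
print(f"{possible_lines / 1e9:.2e} plausible candidates")
```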
Mark Graban: And are those models generally programmed, though, to not directly take something that was written before?
Maya Ackerman: They don't do this at all. Because again, they're not trying to give you the right answer; they're on purpose going to new places. When you are building a creative partner and not a search engine, a lot of these problems become much less relevant.
Mark Graban: To wrap up this part of the conversation about the story of WaveAI and Lyric Studio, before we talk about the book: it sounds like you did find that fit between the product and a market. It was not only about building a product that worked well, but also about finding a business model and value that people are willing to pay for. That's what we're all aiming for, right?
Mark Graban: Are there further iterations to come, or what do you hope comes next, whether it's as the technology improves or as you get more feedback from customers?
Maya Ackerman: This is our tech from beginning to end; we're not relying on OpenAI. There is a really cool new feature coming up that I hope to release sometime soon. It provides more direct, instructive support for people who are looking for that kind of help.
Mark Graban: Again, our guest is Maya Ackerman. The new book is Creative Machines: AI, Art, and Us. Before we get back into talking about supplementing human creativity and things in the art space, I have a general question. I wonder sometimes, when people are talking about a product, or a company is hyping its product, about the difference between AI and something that's just an algorithm. I feel sometimes I'm using software where I don't know if the AI label really applies. Where would you, or people in the industry, draw that distinction?
Maya Ackerman: That varies a little bit in different contexts; the word AI itself has evolved. It used to apply to expert systems. One of the earliest expert systems related to generative AI was a system called AARON, made by Harold Cohen, who taught it to paint in his own unique style. And he actually did it in order to understand his own creative process. It was really cool. It was amazing. I actually got to meet him, and that's what got me into the field. A good analogy is parenting. We used to tell our systems exactly what to do. Gradually, we let go a little bit. There was a wave of AI around recommendation engines, the traditional form of machine learning, where we would give a system a whole bunch of data: “Here are the movies people liked and the movies people didn't like; now predict what Mark is going to enjoy watching.” We're giving it some level of freedom. With LLMs today, and with large generative models in general, there's a lot of freedom. We give them the equivalent of the internet, these enormous amounts of data, and you really let them construct their own brain. You largely let them create the connections between the neurons in the machine brain, and that's where we see a lot of incredible emergent behavior. So when people say AI, they could mean all kinds of things. It's a shame when they apply it to stuff that really doesn't fall into the bucket at all. But the recent big wave is generative AI: systems that create.
Mark Graban: I use generative AI tools for a number of things, sometimes for creating images, whether something photorealistic (which has gotten a lot better) or something in more of a cartoon drawing style. I think that's interesting. But then there was controversy. I'm thinking about the artist whose particular style was very trendy and popular, and who was upset that generative AI tools were creating images very much in their style, as opposed to the person you mentioned earlier, who taught a system his own style. It brings up technological, business-model, and legal questions for people to navigate. Yes, it's literally a new image, but if it's too closely based on somebody else's work, there are issues.
Maya Ackerman: I think a really critical aspect of this conversation, the main one that I see missing, is the human intent behind it. These systems don't accidentally imitate anybody. The companies don't accidentally showcase a whole bunch of imitative examples. When a whole bunch of these models came out, LLMs and text-to-image models, the companies would literally show you how to write prompts that imitate real, living artists. And some of these artists got buried; there would be more generated works in their style than original works, which sometimes really hurt them. That's not okay. It's not okay for me to get out there and claim to be Taylor Swift. Not that anybody would believe me, but assuming there were some way to make the equivalent believable, it's not okay. By the same token, it's not okay for AI to go out there and imitate people. It shows it's important to have laws, and things are moving in that direction, in Europe in particular. But it's also basic ethics: don't go hurting specific individuals. What are you doing? And it's actually one of the things that really gave AI a bad name, because as a result of the companies doing it, some people believe that AI inherently does this, that it has to do it. And it just doesn't. It takes some effort, but it's not that hard. It's not an impossible mountain to overcome to remove these kinds of imitative use cases.
Mark Graban: As a writer, I can create a custom GPT, literally upload PDFs of my books and every blog post that I've ever written—good, bad, or terrible—over the last 20 years. And I think that allows that tool to be more of a creative partner for me in different ways—not to outsource writing to it, but to get ideas and to come up with sparks. And that seems like more of what you're getting at with the book and this approach to AI supporting humans.
Maya Ackerman: What you're describing here is one of the most beautiful use cases of humble creative machines. A machine that knows your style, not someone else's, and can help you move from that place, not taking over but helping you in a very collaborative way. That's why AI is wonderful. That's one of the best things about it.
Mark Graban: You mentioned humble creative machines. How would you define human-centered AI?
Maya Ackerman: My favorite analogy is to consider human-to-human interactions. We all have brilliant friends, right? We all have this one friend that's a genius. And that friend might come in and take over, take center stage every time she's there. And it might make us feel stupid and disempowered to be around her. So maybe we won't ask her for help all that often. And that's how AI ends up being very often. I don't know if its creators intentionally do it, but that's where we lean initially in academia and industry. The AI starts as this dominant, arrogant thing. And to make it human-centric, the AI needs to be humble. It needs to understand that its role is to elevate the human being in a way that lasts. And that's really the bar. After I stop using the AI, am I more capable, smarter, or more creative than I was before? And if the answer is no, if all it does is foster dependence, then it's not human-centric. And if I'm more capable after using it, even if the electricity goes off forever, then we've got something.
Mark Graban: So how do you train an AI model to be humble?
Maya Ackerman: It's really fascinating. Part of it is in the training. But what I actually really want to highlight is that sometimes it's not. Imagine you have a brain in a jar, or a person who is brilliant but doesn't know how to communicate with other people. We all know people like this. That's fine; there's still value in having them around. But if you have a brilliant person who knows how to interact with the world, who really can connect with other people, wow, then you've really got something. And by the way, for all of its shortcomings, that's what makes ChatGPT the most famous system of all, really the most successful product of all time. It's a brilliant brain that they've actually put in the effort to teach how to interact with us. So we need that: interactive mechanisms, this desire to understand what you want, in music models, in image models, in video models; that desire to connect with you and get you. And that's what's going to take us to the next level with this technology.
Mark Graban: I've read articles about ChatGPT users being upset about GPT-5 being less chatty. You've seen the same complaints: it's more terse, not as wordy, maybe not as personable. In the ways I'm using it, I'm not sensing that a whole lot. I use it probably most often to take a really big document and give me a shorter version of it, which is a very transactional use, as opposed to having an ongoing conversation. But I'm curious about your thoughts. Have you experienced the same thing? Do you think it's intentional on the part of the creator of an AI tool to change that tone?
Maya Ackerman: First of all, they've been adjusting GPT-5. They've been backpedaling a little bit. What they wanted to bring us is an all-knowing oracle. It's the science fiction idea of the AI that knows everything, that you turn to for all of your questions, and it gives you the truth. It's all nice and good, but that's not what anybody wants, it turns out. It turns out people want something to chat with, something to give us a little bit of advice about our lives, something that makes us feel good about who we are in this world. And GPT-4 was always really good at this. It was a sycophantic thing, and some people would believe it a little bit too much and ended up in difficult mental states because of that. But for a lot of us, it worked just fine: there is this thing that really admires you, and for a lot of people there's really no harm in it. And then suddenly it gets replaced with this cold, matter-of-fact thing that doesn't agree with you quite as much, and it didn't feel good. Honestly, I don't know if this is a popular opinion, but I think AI companies need to listen to their users. And yes, you need safeguards. Of course you need safeguards, but you need safeguards no matter what you do; you do a core change and you still need safeguards, because these systems are hallucinating all the time. So I would recommend to these builders: listen to what your user wants, within reason, build for them, and devise safeguards that still allow users to have a deep, meaningful experience.
Mark Graban: You talked about hallucinations; I was going to ask about that. We've all experienced them. One use case I've had was literally taking all of the transcripts of episodes of this podcast, My Favorite Mistake, uploading all of that into a custom GPT and then asking it questions about themes or quotes or top five moments, and it would, quite literally, even from a very limited data set, hallucinate the name of a guest who was not a guest and might not even be a real person. It would make up quotes as well. I've noticed it doing that. I have to double-check and ask, “Is that really a verbatim quote from so-and-so?” And sometimes it says, “Good catch. I'm glad you asked, because it's not.”
Maya Ackerman: And we feel offended because science fiction promised us all-knowing bots that are accurate and know everything. So, why is this happening? There's this indignation from society at large: “Why are you lying to me? So rude.” People say, “It's a bug.” It's not a bug; it's how the systems work. LLMs are built on top of generative models. The way they used to work, in 2022 and before, you would literally give one a sentence and it would make up how to continue it into a paragraph. It was literally making stuff up all the time, because that's what it does: it predicts the next word. It does a little bit more than that, but fundamentally, it predicts the next word. And so of course it's going to make up a ton of stuff. And so OpenAI, and then other companies, put in an inordinate amount of effort to try to constrain it, to get it to give something that's like the truth, whatever that even means. The idea of truth is very, very tricky if you really think about it. And they did make some progress, but underlying it all is an imagination engine. And it's going to keep hallucinating.
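The mechanism Maya describes is visible in the shape of the generation loop itself. Here is a stripped-down sketch in which a toy lookup table stands in for the neural network, purely for illustration:

```python
import random

# A toy autoregressive generator. A real LLM replaces this lookup table
# with a neural network, but the loop has the same shape.
next_word_model = {
    "the": ["guest", "quote", "episode"],
    "guest": ["said", "was"],
    "said": ["the"],
    "was": ["the"],
    "quote": ["was"],
    "episode": ["was"],
}

def generate(prompt, steps=6):
    words = prompt.split()
    for _ in range(steps):
        # The model's only job is to emit a plausible next word.
        # Nothing in this loop checks whether the resulting claim is
        # true, which is why fluent fabrication is the default behavior.
        candidates = next_word_model.get(words[-1], ["the"])
        words.append(random.choice(candidates))
    return " ".join(words)

random.seed(1)
print(generate("the"))  # fluent, confident, and entirely made up
```

Nothing in the loop consults a source of truth; constraining a model toward factuality, as Maya notes, is extra work layered on top of this fundamentally generative core.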
Mark Graban: This seems like as much of a philosophical question as a technology question. So in your field, in academia and working with AI, or even within a company, how much do you draw on the philosophers and social scientists? I did grad school at MIT, and the one famous lab there, the Media Lab, was really well known for bringing people of different scientific and artistic backgrounds together.
Maya Ackerman: I appreciate you bringing this up, but I do want to highlight that what I just said is entirely scientific. It's a cold, hard fact: these systems are built on top of hallucinatory engines. But to more directly answer your question, the field of computational creativity, which I've had the pleasure of being a part of over the past decade, is very interdisciplinary. We make a real effort to work with artists and with social scientists. There's so much juicy, wonderful stuff to be discovered at the intersections of different fields.
Mark Graban: Back to hallucinations for a second. I know one of the topics in your book is that there can be upsides. So how can we use these? I don't know if “mistake” is even the right word to describe the hallucination, or is this different? I think you have a different take on this.
Maya Ackerman: I like the word mistake. I like that your podcast focuses on mistakes. It's well known that as a creative individual, if you are trying to do anything creative or innovative, you need to make room for mistakes, because creativity is about going into the unknown and trying new things, most of which are going to fail terribly. And so if you're trying to use an AI for something creative, you need to give it freedom. If it's songwriting, some melodies are going to be off-tune and sound horrible. Even with songwriting, there are things that are so bad you can call them mistakes, though it's a little bit more flexible. And if you want to apply AI to scientific innovation or business innovation, mistakes are going to happen all over the place. If we insist that our machines make as few mistakes as possible, then they can't help us in creative endeavors that require this kind of risk. They're never going to be so good that they can help us with something creative without making mistakes; that's not how brains work. So mistakes are very important, and it's important to be open to them.
Mark Graban: I just did a quick Google search. This was not aided by AI, but as a drummer or more typically as a listener of music, I love jazz. There's that freedom and improvisation, especially in a jazz solo. And I wanted to look up the quote attributed to Miles Davis: “Once is a mistake, twice is jazz.”
Maya Ackerman: I love it. That's perfect.
Mark Graban: My favorite. And how do you resolve the mistake or how do you make the mistake make sense?
Maya Ackerman: One of my favorite early experiences with generative AI was a couple of years before the big gen AI boom, when I was at a conference playing with a jazz system called Impro-Visor, by the late Robert Keller, who was a professor at Harvey Mudd. He had his system set up with a little piano attached to it. And because the system wouldn't judge me, and no matter what I played it would respond to me, it was my first successful improvisation experience. I think it's important to bring machines into the space of creativity where mistakes are allowed, because that's where they shine. That's where they can elevate us. And maybe we can become a little bit more forgiving of our own mistakes and our own creativity if we get more comfortable with machines that are not… Who decided that machines have to be perfect? It's a ridiculous expectation, and a very limiting one.
Mark Graban: Well, that is a perfect connection to the long-running theme of this show, Maya: how can we be more kind to ourselves? How can we be more kind to others? And we're usually directing that at humans. But you raise a really interesting point here of being more kind toward the AI. If it's supposed to be an artificial version of human intelligence, it seems we can embrace those faults or at least be kind about it or have different expectations.
Maya Ackerman: That's exactly it. If we come with the expectation that it's going to be an all-knowing oracle and we believe everything it says, then it's a problem. Of course we're going to get upset. But if our expectations are more flexible, if we are willing to play with it, if we're willing to push it to where it can be creative but can also make mistakes, we can make much better use of these systems and also not fall into the trap of these false narratives. These systems are not perfect.
Mark Graban: You're making me wonder if I should go dig into the settings of my ChatGPT account and add an instruction about, “Hey, don't be so uptight about making mistakes. It happens. It's okay if you're not sure. Say, I'm not sure.” I don't know how it would behave.
Maya Ackerman: That's really powerful what you said right there. The fact that it always pretends to be perfectly confident is a design choice. And if it expressed uncertainty—and I'd be very curious what happens if you do this experiment—I think it would be a much more useful system.
Mark Graban: I'm going to jot this down because after our conversation, I'm going to play with that. I will try. I'll report back to you.
Maya Ackerman: There is a temperature setting that you might be able to play with. This whole attitude of building an all-knowing oracle is built into multiple layers of the system. Nevertheless, I'm super curious to hear what's going to happen.
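For anyone who wants to try Mark's experiment, here is a hedged sketch using OpenAI's Python client. The model name, temperature value, and instruction wording are illustrative assumptions, and, as Maya says, a system prompt and a sampling setting can only partially override behavior baked into deeper layers of the model:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",   # hypothetical choice; substitute your model
    temperature=1.2,  # higher temperature = more divergent output
    messages=[
        {
            "role": "system",
            # The custom instruction from the conversation above:
            # permission to be uncertain instead of confidently wrong.
            "content": (
                "Don't be so uptight about making mistakes. It's okay "
                "if you're not sure; when you aren't, say 'I'm not sure' "
                "instead of guessing confidently."
            ),
        },
        {"role": "user", "content": "What themes recur across these episodes?"},
    ],
)
print(response.choices[0].message.content)
```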
Mark Graban: You're really making me think now about the translation of these ideas to expectations of people. I've had a number of guests on the show who were entrepreneurs or executives, and they talk about trying to get past the pressure we put on ourselves as a leader to be all-knowing and infallible and always correct, and that modeling some vulnerability actually goes a long way to helping their team perform better.
Maya Ackerman: That's really interesting. I guess we look for all-knowing oracles, not just in machines, but also in our leaders. How dare the boss be flawed?
Mark Graban: I've even run across times where if I'm in an organization as a consultant, sometimes they get mad at me for not having an answer to something. I'm like, “Well, you didn't hire me to be an all-knowing, all-perfect oracle. I'm here to help you.” I am trying to help their creativity the way it sounds like you would want an AI tool to help us with our creativity.
Maya Ackerman: I think this might all be coming from the Industrial Revolution approach to life, where the machines were supposed to be perfect and the people were supposed to be the perfect repetitive machines. It's time to adapt how we view machines and ourselves.
Mark Graban: With AI as one of these machines. Well, fascinating topics here; I'm excited to get the book and dig into it. Before we wrap up, one more question. Again, the title of the book is Creative Machines: AI, Art, and Us. Who do you think is the target audience, or audiences, for this book, among all the books that I'm sure are out there about AI these days?
Maya Ackerman: It's for anyone who wants to understand what's happening from the perspective of someone who had been doing this for 10 years before it got hot, a perspective rooted in the research community and in running a business. I'm not very heavily influenced by the narratives told to us by the big companies, and my vision for the future is also coming from a deeper place. I think it has a lot of value for anybody curious about what's going on and where we go from here in this AI era.
Mark Graban: I'm getting a sense, and this is the big question to end on, I guess, that you're more positive about AI, as opposed to those who say, “Oh, it's just a matter of time before it kills us all.”
Maya Ackerman: Well, science fiction really did a number on us, telling us how this is going to play out. And the idea that it's going to kill us all, for some reason, really resonated with science fiction writers and moviemakers. I love the AI; I love what it can do. I look at it a bit like a parent looks at their child; it's something I've been building my entire career. I don't always love what other companies are choosing to do with it, and I can be quite critical of some of the stuff going on culturally, but I think there's some wonderful stuff coming. I think there is, not a perfect, but a bright future ahead for us.
Mark Graban: That would be a really nice, positive note to end on, but my curiosity is getting the best of me. What is one of those criticisms that you would level about what some other companies are doing with AI or how they're portraying it or promoting it?
Maya Ackerman: I have so many. Some of them we touched on. There's a long history, by most big AI players, of encouraging users to imitate living artists and existing art forms.
Mark Graban: Or even fake videos of somebody saying something they never said.
Maya Ackerman: That depends on how you use it. I think there are things companies can easily do to protect creators that they often only choose to do when heavily pressured to do the right thing. It's unfortunate; there's no need to behave like that. There's no need to give the field a bad name. And there is so much more. There's the fact that they have chosen to replace search engines instead of leaning into the AI's creative capability. There's the narrative they perpetuate that the AI is right, that it should be right, and that anything else is “oh, just a little bug that we're going to fix,” which is very misleading to the culture at large and is leading to slop. And perhaps my biggest bone to pick with the industry is unemployment: how much venture funding is moving specifically toward the goal of replacing human labor with AI, instead of toward building a world where AI is used to deeply and profoundly elevate us. Lots of challenges, but I still fundamentally believe in AI's capabilities, and ultimately I do believe in people, that we will figure this out together. In the book, I offer a vision for how to do that.
Mark Graban: Well, thank you for that. And Maya, thank you for being a guest here today. I appreciate you sharing your entrepreneurship journey, bouncing back from failures, iterating, and learning. And thank you for the conversation about AI, creativity, and people. I really appreciate it. Thank you very much.
Maya Ackerman: Thank you so much for having me. It was a blast.