Ethan Mollick is an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship and examines the effects of artificial intelligence on work and education. In addition to his research and teaching, Ethan leads Wharton Interactive, an effort to democratize education using games, simulations, and AI. Before his time in academia, Ethan co-founded a startup company, and he continues to advise entrepreneurs and other executives. Ethan's latest book is Co-Intelligence: Living and Working with AI. He is also the author of a popular blog, One Useful Thing, which has more than 134,000 followers.
In a recent interview, Mollick discussed his background in entrepreneurship, his transition to academia, the most important concepts to teach entrepreneurs, and the four rules of co-intelligence from his new book Co-Intelligence: Living and Working with AI. (This interview has been edited for length and clarity.)
Peter High: You teach entrepreneurship and are a former entrepreneur yourself. Could you discuss your entrepreneurial journey and your subsequent path into academia?
Ethan Mollick: I became an entrepreneur during the first internet boom, in the late '90s and early 2000s. I had a brilliant college roommate who was a technological genius and knew an industry really well, and he got me to join him in a startup, where we created the world's first paywall. I still feel bad about that; I feel like that's what I'm trying to work off by being in academia. Sorry, we created the paywall. I was the sales and marketing, outward-facing person, trying to convince 500-year-old companies, some of which literally had an original Gutenberg press, that they should go online and sell their stuff. It was pretty successful, but we made every mistake possible.
Every hiring mistake you could imagine, every management mistake you could imagine, equity mistakes, all kinds of things; we were successful despite ourselves. I thought, I have to figure out how to do this right. I decided to go do an MBA at MIT, realized nobody actually knew much about how to make entrepreneurship successful, and then decided to get a PhD and study this stuff. That's where I am and why I was interested.
High: What are some of the most important things that need to be taught or are most useful to learn in a setting like yours for those who aspire to start companies?
Mollick: If we start from where the data supports things, there are three or four key things that are definitely teachable. One is teaching people to do disciplined experimentation in entrepreneurship, that is, hypothesis-based testing: you hypothesize something relevant about your business, you test it, and you either pivot or continue. That turns out to be really important. People who practice disciplined hypothesis-based testing have dramatically higher revenues than people who don't engage in that process.
The second major thing is that there's a lot that actually makes a difference around how you pitch, how you explain, and how you raise financing. There's a complication there: a lot of people view raising venture capital as a prize, and it's not; it's a method of getting somewhere, with advantages and disadvantages.
The third is that management matters. Hiring is hugely important, and a lot of people don't know how to hire or how to make hiring work. The same goes for building an organizational structure so you can scale past 20 people, the point where your individual energy is no longer enough to cover an entire organization. There's also a whole bunch around mentoring and networking that all seems to make a big difference. Teaching people these skills helps a lot.
High: I want to get into some of the details from the book Co-Intelligence: Living and Working with AI. You talk about four rules for co-intelligence, and I wanted to cover each of the four with you. The first of those is to always invite artificial intelligence to the table. What do you mean by that?
Mollick: AI has what we call a jagged frontier, meaning it's good at some tasks and bad at others. If you ask the AI to give you a 25-word summary of a page, you might get 22 words or 28 words or some other number, because the AI doesn't see words the way we do. It sees tokens, which are words or parts of words, and a space can be part of a token, so the AI might miscount the words. But if you ask it to write a sonnet summarizing the work, it'll do a great sonnet for you. How do we deal with a system that can write an amazing sonnet but can't count out 25 words?
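To make the token point concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer (an assumption for illustration; the interview names no specific tool, and any BPE tokenizer would show the same effect). It compares the word count a human would produce with the token count a model actually operates on:

```python
# Minimal sketch: words vs. tokens, using the tiktoken library.
# Assumes `pip install tiktoken`; the encoding name below is the one
# used by GPT-4-era OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Give me a 25-word summary of this page."
tokens = enc.encode(text)

print("word count :", len(text.split()))  # how a human counts
print("token count:", len(tokens))        # how the model counts
# Decoding tokens one by one shows spaces folded into tokens,
# part of why the model miscounts words.
print([enc.decode([t]) for t in tokens])
```

Because the model's native unit is the token rather than the word, instructions pinned to an exact word count sit on the wrong side of the jagged frontier.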
The idea is that if you use AI enough, you understand what it's good or bad at, and that lets you navigate this frontier. It also lets you know what difference it makes. Nobody knows how well AI applies to your work as the author of Implementing World Class IT Strategy: How IT Can Drive Organizational Innovation. Nobody knows that. You can figure that out. What does it know that you don't? What do you know that it doesn't? The only way to find out is to bring it to a podcast like this one and see how it does summarizing our conversation, to have it do prep work for the podcast and compare that to the prep work you would do, and then to have it help you write your next post, help with your next consulting venture, or help with your next speech. That's how you figure out what it's good or bad for.
High: That leads nicely to the next of your rules for Co-Intelligence: be the human in the loop. It follows from what you've described, but please provide some detail on how best to be the human in the loop.
Mollick: This is an idea from control systems: you want a person involved in working with AI. It's a challenge because AI is already pretty good. In our studies at Boston Consulting Group, we found AI operating at the 80th percentile of BCG consultants in a lot of ways. Not every dimension, but many dimensions. That's tough; these are elite consultants who came from places like Wharton and are highly trained. You need to think about, as a person, what you want to do. Right now, the good news is that the AI is at the 80th percentile of human performance, but not the 100th.
Whatever you're best at in the world, you're probably in the top 1%, 5%, or 10% at it, and that's what you like to do. The AI is not going to be better than you at that, at least not right now. What that gives you is an opportunity to focus on what you do well and hand off the stuff you don't want to do. Being the human in the loop is about making AI part of your decision-making while still focusing on what you do best.
High: Your next rule for Co-Intelligence is to treat artificial intelligence like a person, but tell it what kind of a person it is. Explain that if you would.
Mollick: You make AI do things by prompting it, essentially by giving it a sentence, and it auto-completes everything afterwards. People make this very hard. There are all kinds of tricks in prompting, and as an experienced prompter I do all kinds of weird stuff, but the easiest way to work with AI is just to talk to it like it's a human being.
Treat it like a person, even though it's not a person. That's why managers are often so good at working with AI. Give it instructions the way you would give them to a person, correct it the way you would correct a person, but then also tell it what kind of person it is: "You are a marketing manager at an IT company." "You are an editor who has a preference for clear writing." You'll get better results that way.
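Here is a minimal sketch of that advice in code, using the OpenAI Python SDK (assumptions for illustration: the openai package is installed, an OPENAI_API_KEY environment variable is set, and the model name "gpt-4o" is available; the interview itself prescribes no particular API). The persona goes in the system message, and the task reads like an instruction to a colleague:

```python
# Minimal sketch: tell the AI what kind of person it is via a system
# message, then give it the task the way you would give it to a person.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        # The persona: who the AI should be while it works.
        {"role": "system",
         "content": "You are an editor with a strong preference for clear writing."},
        # The task, phrased as you would phrase it to a colleague.
        {"role": "user",
         "content": "Tighten this paragraph without changing its meaning: ..."},
    ],
)

print(response.choices[0].message.content)
```

The same move works in any chat interface with no code at all: open the conversation by stating the role, then give the instruction.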
High: The fourth of the four rules for Co-Intelligence is to assume this is the worst AI you’ll ever use. Again, describe that.
Mollick: Everything you're using right now is obsolete; there are better things being trained. One of the fascinating things about AI is that all these models are being released, and they're all sort of chatbots. Their interfaces are all slightly broken, and they're not optimized for any particular job. People think the story might be, "Oh, maybe we need to launch a startup that makes this better for our business," or something like that.
The reason that's happening is that every AI lab is spending all its time building the next generation of AI, and as soon as that levels off, they'll go back and figure out how to commercialize it more. They're all building new stuff. Whatever you think the capability limits of AI are today, those are not going to be the limits in the near future. Everything you're using today is obsolete.
Peter High is President of Metis Strategy, a business and IT advisory firm. He has written three bestselling books, including his latest, Getting to Nimble. He also moderates the Technovation podcast series and speaks at conferences around the world. Follow him on Twitter @PeterAHigh.