Learn to write ChatGPT prompts in 2 hours!
April 27, 2023

ChatGPT turns a history book into a game with 400 questions! Techniques: roleplaying, shot prompting

Lee Chazen used ChatGPT to turn an 800-page World History book into a game composed of 400+ questions. He's also using ChatGPT to write the social media posts, the website content, and a whole lot more.

Topics and techniques discussed:
- role playing
- step by step
- prompt iteration tool
- and more!

Prompt iteration tool: PromptEngineeringPodcast.com/ide

Promptbase sales analysis: https://gregschwartz.gumroad.com/l/prompt-engineers-promptbase-analysis-gpt Use coupon code "podcast" for 10% off!

To follow up with Lee:
Twitter: https://twitter.com/lee_chazen
Content Strategy Website: https://www.glidercell.com
Game to learn world history, Global Challenge 2.0: https://globalchallenge.mixo.io
Blog: https://rightbrainworld.blogspot.com/

Stay in touch on:
Youtube: youtube.com/@PromptEngineeringPodcast
Telegram: https://t.me/PromptEngineeringMastermind
LinkedIn: https://www.linkedin.com/groups/14231334/

Transcript

Lee:

So my name is Lee Chazen, and I was originally a teacher. I've worked in a lot of different professions. I got into content strategy because a friend told me, about seven or eight years ago, that what I was doing was actually content strategy, and I wasn't familiar with the term back in 2013 or '14. I did some work in Silicon Valley as a chief content strategist, and now I do consulting on my own. And when ChatGPT came around about two months ago, I said, oh my God, this is gonna change everything. So I pivoted pretty hard into prompt engineering. I think I read some article in the New York Times that let me know this was going to be huge, and that it was okay to be a creative, liberal-arts sort of person and do this work, like you didn't have to be technical or have a background in computer science or programming. And it's working just fine.

Greg:

That's awesome. What would you say were the first steps that you took to learn prompt engineering?

Lee:

I think it was two and a half months ago. I spent the next three or four days just experimenting with prompts, any type of prompt I could think of. I realized some people were going a weird direction with this: let's try to undermine the system in some way, let's try to fool it into being something that it's not. With me, I was mainly trying to get it to finish written products that I had started; I just needed that extra kind of boost, that personal editor. So my first few prompts were, I just need help with my content. And from there I just thought, wow, I don't have enough hours in the day. I wanna do this all day long.

Greg:

That's amazing. Just give us a little more sense of, when you say helping with content, what kind of content? Like books or podcasts or poems, song lyrics. I don't know.

Lee:

Yeah, all of that. And also scheduling. For example, today I wrote down all the major categories of things I need to get done in a day. And then I could easily just create a prompt saying, all right, I need some creative time, some administrative time, some networking time, and then some promotional time for my consulting operation. I went on and on, and I gave each one a percentage. I could create a prompt now that would say, divide my day and divide my week into segments according to these percentages, and tell me how I can accomplish all these things in a day, and I will get that as a response. So there's a lot of just real practical, functional stuff that I was doing initially. So if you're a content strategist, a teacher, anyone who puts up website content or social media content, those were the first things that I started doing. For example, as a former teacher who became what they call an edupreneur, I produce educational products and things, and I needed to finish a lot of that. And one day sitting there, and this is the prompt that I wanted to share with people, I thought, what is just the coolest, most amazing prompt I can come up with that will solve, like, a teacher's problems? And what I came up with was, now it's over 500 words, but initially it was 447 words. And it was designed around this game that I had invented with my students. Back when I was teaching social studies, we invented this game called Global Challenge. The idea was to learn the contents of an entire world history textbook, all 800 pages, in the form of a game. But in order to do that, you need game questions. So the prompt was, and I'm not gonna read the whole thing now because it would take too long, but it was: in this role, you're this omniscient author, master of all content related to world history, current events, high school curriculum. So I gave it a role, that meta prompt.
And then I got into: you're gonna design 20 questions, seven categories, six different levels of learning, to cover the span of recorded human history. At first it told me that it couldn't do this all in the character limit, but I would have to just keep hitting continue.
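For listeners who want to try this structure themselves, here is a minimal Python sketch of the role-plus-constraints prompt Lee describes. The role text and the seven categories come from the episode; the function, the exact wording, and the level tagging are illustrative assumptions, not Lee's actual 447-word prompt.

```python
# Sketch of the "role, then progressively narrower constraints" prompt structure.
# The role text and categories are from the episode; everything else is illustrative.
def build_game_prompt(role, questions_per_chapter, categories, levels):
    parts = [
        # 1. Role play: tell the model who it is before telling it what to do.
        f"In this role, you are {role}.",
        # 2. Top-level task.
        f"Design {questions_per_chapter} questions per chapter, moving "
        "progressively through the textbook from the first chapter to the last.",
        # 3. Output constraints: categories and levels of learning.
        "Divide each chapter's questions into these categories: "
        + ", ".join(categories) + ".",
        f"Tag every question with one of {levels} levels of learning "
        "and a matching point value.",
    ]
    return "\n".join(parts)

prompt = build_game_prompt(
    role=("an omniscient author, master of all content related to "
          "world history, current events, and high school curriculum"),
    questions_per_chapter=20,
    categories=["major events", "vocabulary", "people", "geography",
                "government", "current events", "trivia"],
    levels=6,
)
print(prompt)
```

The point of the shape, role first, then successively narrower constraints, is that each line answers "what do I need next in this process," which is exactly the drilling-down Lee describes later in the episode.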

Greg:

And let me pause you for a second. That is actually one helpful trick for people: when it runs out of character output, as it so often does, you can tell it 18 different ways, but the simplest one is just "continue," and it will usually, mind you, pick up where it left off.

Lee:

Yeah, and I just thought, how much time do I have to keep hitting continue? Because this is going to create 400 multiple choice and short answer questions, across the span of different intelligence types, cause I built that into the prompt too. So there'd be something for a math-logic kind of person, a visual-spatial sort of person. And this could revolutionize everything. I don't know what textbook publishers are thinking right now, but this could replace the textbook, because the corpus text, the body of all content that goes into ChatGPT, will cover pretty much everything. If it's digital, if it's put out there in some form on the internet, apparently ChatGPT will find it.

Greg:

Particularly with this prompt, have you run into problems with hallucination?

Lee:

Yeah, that can be a problem. And so you have to be somewhat knowledgeable to do this to begin with, because you're gonna need to fact-check things. I don't know what the accuracy percentage is, but my initial guess is like 95% or 97% accurate, just based on what I've seen so far. But here's kind of the flaw of being a human being: when something sounds authoritative and is using a certain kind of vocabulary, you just automatically think it is correct. And so there's a possibility that disinformation or misinformation can happen. You just gotta be on the lookout for it.

Greg:

That makes sense. Walk us through this output.

Lee:

So I had to create a point value based on the level of learning. There's this thing in education called Bloom's Taxonomy of Learning, where if you do the simplest thing, where you just recall some information, that's like a level one. When you're up at the top and synthesizing it, applying it, evaluating the information, manipulating it into something different, that should be the highest point value, because now you're really doing some heavy thinking. You'll see in the right column there, it'll say Bloom's level and the number of points they're gonna get. So automatically this thing has, I can't even tell you how long this would take a teacher to do. When I was initially doing this with the students, and students wrote the questions and we created the point values and the charts and everything, this was like a two-month-long project.

Greg:

Wow, two months.

Lee:

To go through and curate, or cull, whatever the word is, all of the knowledge from the textbook and turn it into questions. But here's the cool thing, and this is why I think teachers shouldn't be afraid of ChatGPT: I think the question is more important than the answer. And if students can form good questions and do this themselves and create great prompts, then that means they're learning. If you know what to ask about a subject, it means you know the subject. If you don't know what to ask, then you haven't read enough.

Greg:

That totally makes sense, and that's a nice way to flip that discussion people have. Every time people freak out about a technology in education, it's that the answers are provided. It's calculators, it's Wikipedia, it's ChatGPT. But thinking about it as, you need to be able to ask the right question, that's really a good point. Cuz with a calculator, yeah, it'll tell you whatever you want, and it'll do it more accurately than ChatGPT will, but if you don't know how to ask the right question, it doesn't matter.

Lee:

Yeah. And the effect that I think this is going to have on people using ChatGPT and other LLMs is that it's going to improve our thinking. It really will, because it'll make you ask better questions, which means you're gonna be thinking at a higher level. You're gonna want to be refining these prompts to get better answers. So you're not gonna be caught up in the technical side, the coding, the getting the technology to work for you. You're gonna be caught up in: how do I take advantage of this superior technology to ask the most fascinating, purposeful, meaningful, whatever question? And so I did another one like this for just general content, for anyone running a business, or a content strategist, or people working in social media. I wrote a prompt so that ChatGPT would ask the user a series of questions, and in getting all those answers, it will then create a prompt. So the purpose is, you're writing a prompt to create a prompt. And based on that, they're gonna have a prompt that they can use forever. Whenever their company or the idea changes, all they have to do is answer the questions that ChatGPT has generated from that original prompt, and it will give you all of the content you need for a website, for social media, for a book, whatever it is you're working on. The idea was, once you have all this knowledge, now create content for the website. Give me a list of the top 50 keywords for SEO, so I can use that in social media, use that on the site, or in the actual product itself, because I could have it build out all those different sections into the entire book that explains the whole concept. And so in about 30 days' time, that business is formed and ready to go, maybe sooner. Imagine if I had a team of three or four prompt engineers, we could finish any major product like this and just, all right guys, we've got two weeks to do a book, a website, and put a product out.
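The two-stage "prompt that creates a prompt" idea Lee describes can be sketched in a few lines of Python. The intake questions are paraphrased from the episode; the constant name, function, and template wording are my own illustration, not Lee's actual prompt.

```python
# Stage 1: a fixed intake prompt makes the model interview the user first.
INTAKE_PROMPT = (
    "Ask me a series of questions, one at a time, covering: "
    "my product or service, my intended audience, and my preferred writing "
    "style. When you have all the answers, write a reusable "
    "content-generation prompt based on them."
)

# Stage 2: once the answers exist, fold them into a reusable prompt that can
# be re-run whenever the company or the idea changes.
def build_reusable_prompt(answers):
    return (
        f"You are a content strategist for {answers['product']}, "
        f"writing for {answers['audience']} in a {answers['style']} style. "
        "Create website copy, social media posts, and a list of the top "
        "50 SEO keywords for this business."
    )

reusable = build_reusable_prompt({
    "product": "a world-history learning game",   # example answers, invented
    "audience": "high school teachers",
    "style": "friendly, slightly funny",
})
print(reusable)
```

The design point is that only the answers change between reuses; the intake prompt and the template stay fixed, which is what makes the resulting prompt "one they can use forever."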

Greg:

Very cool. So specifically, let's go back to this first prompt. Can you call out some of the techniques that you're using in this prompt, just so the audience can understand and see how they're being applied?

Lee:

I don't have to speak perfectly, as if I'm speaking to a person. I just have to get all of the information I want in there. So up top I say, this is for a world history game called Global Challenge 2.0 Metamorphosis. Then I explained what the game is. Then I said, all right, now we're gonna move on to the questions for this grade level, and they should move progressively, meaning I wanted to start at the beginning of the book and go 20 questions per chapter all the way through. So I gave it the order of operations. With each thing you're drilling down: what do I need next in this process? Now I need them to be divided into seven categories. And here's what those categories are: it needs to divide it into major events, vocabulary, people, geography, government, current events, and trivia. And then I said, within those 20 questions, so I keep drilling down, generate five questions in category one, three in categories two, three, and four. So I break it down: how do I want those 20? And the remarkable thing is that it did it perfectly on the first run. I did not expect that. It really blew my mind. I've had my mind blown so many times in the last two months by things. Oh, and then I gave it specific instructions regarding the current events, because it's not enough to just ask a student a current-event question about what's happening now. I wanted ChatGPT, wherever possible, to relate that question to something that was happening during that period of history that we're studying. And then I gave it an example, which surprisingly helps. If you give ChatGPT an example of what you want, it somehow models that.

Greg:

Yeah, so that technique is called shot prompting, and not doing it is called zero-shot prompting, cuz there's no shot. One-shot prompting is giving a single example, and few-shot, or n-shot, there's a lot of different terms for it, is when you give it multiple examples. And in this prompt, this is the example: I see where you said, connect the news to something that happened in the time period for each set of 20 questions, e.g., there was a conflict or territorial dispute similar to what is happening in today's world. Did you include a specific one?
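To make the terminology concrete, here is a tiny Python illustration of zero-shot versus one-shot prompting. The instruction and example text are paraphrased from the episode; the variable names are mine.

```python
# Zero-shot: the instruction alone, with no worked example to imitate.
task = ("Connect a current news event to something that happened "
        "in the historical period each set of 20 questions covers.")
zero_shot = task

# One-shot: the same instruction plus a single worked example for the
# model to imitate. Few-shot (n-shot) would append several such examples.
example = ("E.g., there was a conflict or territorial dispute similar "
           "to what is happening in today's world.")
one_shot = task + "\n" + example

print(one_shot)
```

The only difference between the two is the presence of the example; as Lee notes, even one example tends to make the model "model" the desired shape of the output.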

Lee:

I didn't, because I was in a hurry, number one. I also didn't wanna give a wrong example, because I'm not actively teaching right now and I may have forgotten some things. So I just gave it a general example: in history we see a lot of conflicts, so if you can relate a current conflict to something that happened in the past, that's gonna be really good. And then I asked for it. Now, when I initially did this on GPT-3.5 and I asked for the table, it said, I can't do a table, but I can give you the code so that you can create the table. In GPT-4 it produced the table, which I couldn't duplicate on this Google Doc, but it is awesome.

Greg:

Nice. Yes, table output is a very nice change. One other question, actually. I am not a history teacher or in that sort of branch of education. It looks like you didn't define Bloom's Taxonomy of Learning; just from the name, it was able to go, oh yes, I know what that is. Is that right, or was there prompting beforehand to tell it?

Lee:

No, I capitalized it to let it know that it was a proper term for something, which may have helped, I don't know. But when I researched it myself, I found there was a newer version, cuz Bloom's Taxonomy goes back to, I think, the 1950s. So I told it to use a more current version, which I just happened to find, by Anderson and Krathwohl, which reordered it into remembering, understanding, applying, analyzing, evaluating, and creating. So creating, if you create something with the knowledge you've been given, which is really what ChatGPT does, that's like the highest level of learning. It's synthesizing stuff for us. So it's a great student, if you wanna look at these LLMs in that way.
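The revised taxonomy maps naturally onto the point values Lee mentions earlier. Here is a sketch: the six level names are from Anderson and Krathwohl's revision, but the specific point formula is invented for illustration, not taken from Lee's game.

```python
# The six levels of the revised Bloom's Taxonomy (Anderson & Krathwohl),
# in order from simplest to most demanding. Point values are illustrative:
# plain recall scores lowest, creating scores highest.
BLOOMS_REVISED = ["remembering", "understanding", "applying",
                  "analyzing", "evaluating", "creating"]

def points_for(level):
    """Award points proportional to the level's position in the taxonomy."""
    return (BLOOMS_REVISED.index(level) + 1) * 10

for level in BLOOMS_REVISED:
    print(f"{level}: {points_for(level)} points")
```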

Greg:

That is very interesting. Okay. I'm seeing role playing as one technique, cuz you say: in this role, you are the omniscient author and master of all content related to world history, current events, and high school curriculum; you are wise, but you are also funny at times; things like that. The other thing that I'm seeing is output constraint. So for example: for each set of 20 questions, generate five questions in category one, three questions in categories 2, 3, 4, 5, et cetera. And then the shot prompting that I mentioned earlier. These are awesome examples of these techniques.

Lee:

Thank you. I think in future versions, I was just thinking about this, if something is undefined, this is going to be a major breakthrough when this finally happens, and I don't think it's happening yet, where it asks questions like, how many of these in each category did you want me to produce, if I did not define that? Now, that would be brilliant. I don't know when we're gonna see that. But if it helps you refine your question, that's gonna be amazing. Yeah.

Greg:

Yeah, definitely. So can you tell us more about the process you took of iterating on it?

Lee:

I came up with this idea of a multidisciplinary content matrix, so that all you have to do is answer the questions in each category, and that will tell you how to put your prompt together. Taking that even a step further, I asked ChatGPT, and this is something I'm probably gonna put on PromptBase once I get it done because it's really elaborate, to create the code for an app, so that this app prompts you in, say, 10 different categories. For example: what is your product idea or service? Who is the intended audience? What style of writing do you want to use? And then from that it will create your prompt. And I know there's prompt generators out there, so maybe this already exists. It's moving so quickly. So if you kind of go through a checklist, I think that will help people come up with really great prompts, because honestly, I think most people are just doing the one sentence: create a poem about McDonald's as though written by Shakespeare, or something.

Greg:

Yes. Yes. That's part of why I created this podcast and the Mastermind: for people to be able to go, oh wow, I've never thought of doing it that way. Or, oh yeah, let me go read through all of the different prompting techniques and things like that. That's also what learnprompting.org does. They're a great resource for this kind of stuff.

Lee:

I imagine you're probably one of the first 100 in this genre, right? Or on the market.

Greg:

Yep. Yeah, there's quite a few people selling "here are my 70 or 500 prompts," but actually teaching you how to do it and the techniques for it? Yeah, I think there's probably around 50 people.

Lee:

Yeah. This is super valuable, because that's one of the things I love about current culture. You go on Reddit, which is the way I found you and this group, and it's, let's all help each other learn this. It's not as dog-eat-dog as it could be. Because initially the thought is, oh, I think I've discovered something, there's no way I'm gonna share this. But everyone's sharing everything. I don't know how that's gonna play out exactly. But it's like what I was telling a friend the other day: this is simultaneously the scariest time ever, but also the most unimaginably amazing time ever, and those two things are kinda happening at once.

Greg:

What are some common pitfalls you've run into with building prompts?

Lee:

If anything, it's how to limit the ideas, so that you don't get a jumbled mess of a response. It's like paring things down. I think you have to know really exactly what the end product is, what you want. I've never seen more of a mirror of a tech product, where you're gonna get exactly out, maybe not exactly, but very close to exactly out, what you put in. Brilliant prompts are gonna get brilliant responses, but like you said before, hallucinations can occur.

Greg:

Have you found any good techniques for catching the hallucinations? Obviously you know a lot about history; you're gonna be like, no, Julius Caesar did not live in Africa, I don't know. But have you found any good techniques for that, if it's stuff you're not as familiar with?

Lee:

Yeah, I don't know. I think we need good techniques for that. I think there's probably apps and programs out there that are going to be designed to detect that. I just recommend to everyone: don't just create a document and then post it. Make sure, human beings still need to read through things, and there's gonna be a lot of garbage. Now, this is gonna date myself, definitely, but I remember in 2005 when I started my blog, and I quickly realized, wow, people are reading this and accepting pretty much whatever I'm saying here as the truth. And I had to self-censor. I had to tell myself, you gotta be careful what you're saying, because they're taking this to be true. You don't wanna be found out to be this guy who is a master manipulator, changing information around. And it's that same thing. I think it's going to be up to schools, or generally in places like Reddit: hey, let's monitor ourselves here, and let's try not to deceive each other, or ourselves.

Greg:

Definitely. Just teaching students critical thinking as well will be part of that, with: here's how you fact-check things, here's how you think about a source you've never heard of. Are they telling the truth? Are they accurate? Whether it's intentional or not.

Lee:

Like, people will be using this the way they have been using Google. Now here's the difference: Google's gonna give you a set of links. I think you can ask GPT-4 for a source of information. I'm pretty sure, I know Bard can, and I haven't tried.

Greg:

Bing provides that. I haven't seen that in GPT-4, but it's certainly possible, and I know I've seen it in systems that augment GPT-4.

Lee:

Yeah, that makes sense. So people with medical conditions are just going to inevitably ask, hey, what should I do about toenail fungus, or this repeated headache? If you're getting this ongoing headache, I'm not sure what you're gonna get as advice, but I would just say, do some follow-up questions, like, where is this information coming from? A lot of the responses I'm getting are, I am a language model and I don't have access to this or that, medical journals. So I think everything comes with this caveat.

Greg:

Which, especially at this stage, that caveat is a good thing. Yeah.

Lee:

Yeah.

Greg:

Can you share an example of a prompt that didn't work the way you wanted it to, and how you either learned from that or iterated on it to get it to do what you wanted?

Lee:

Yeah, that's a good one. I did not come prepared for that. I've gone back and forth, where I'll say, that's not what I want because it has this, and it says, oh, I apologize, perhaps this will work. And it's a weird kind of dialogue.

Greg:

Yes, I've seen one version of the "say what again" scene from Pulp Fiction, where Samuel L. Jackson is threatening them: say it again, say it again. But it's talking to ChatGPT, which keeps saying "I apologize" again and again. Cuz yeah, seeing that over and over, especially for me, I do a fair amount of code generation, and that's frequently what comes up. You're like, that code doesn't work, or here's an error, and it says, I'm sorry, let me try that again. Over and over.

Lee:

I think there's some self-correction going on already. I've seen this in a few places, where it will flag some potential errors in what it just produced, and then you'll check those, and you'll just go through this kind of iterative process until you get to what you want.

Greg:

Yeah, absolutely. I've definitely seen that in the code generation, where I'll ask it, fix this bug, I don't know, a missing semicolon, or it's outputting the wrong thing. It does say, I'm sorry, I apologize, let me give you that again. But then it fixes it, and it keeps the fix, at least for a while. That does sometimes roll out of the context window, though, so that can be a challenge. There's a tool that I'm building that actually allows you to do iterative testing on prompts, and I'll link to that in the show notes. It's called the Prompt IDE. The idea is, if you have a prompt that takes in variables, for example if you're selling it on PromptBase, and it's, give me suggestions for what to do on a vacation, maybe then you're gonna have a variable for where you're going, and another variable for how long, and probably some third variable of, I don't know, flying or driving, particularly if you're in the US. When you're testing that out, you tweak the prompt, and then if you want to test the actual output, you have to copy and paste it into ChatGPT, replace all the variable names with stuff, and then run it, and then do that again with your second test case and your third test case. So what this tool does is, you have a field for the prompt, and then you have some sections for each test case, with a value for each variable: location for test case number one is Paris, location for test case number two is New York City. And then every time you change the prompt, it automatically reruns all the test cases with these variables placed in it. I built it in a combination of Ruby and TypeScript. And I know both those languages, but I've been enjoying being able to try out the code generation. And it's had some really interesting things, because this is too complicated a program to keep entirely in memory, so I can only ask it for, give me a function to call ChatGPT's API and take in these things.
I can't just say, give me a page that does all of these features.
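The core of what Greg describes, re-rendering one prompt template against every saved test case whenever it changes, can be sketched in a few lines of Python. The template text, variable names, and test-case values here are invented examples for illustration, not the actual Prompt IDE (which is built in Ruby and TypeScript).

```python
import string

# Sketch of the Prompt IDE's core loop: one template, many test cases.
# Every edit to the template re-renders all the test cases automatically,
# instead of hand-pasting each filled-in variant into ChatGPT.
def render_all(template, test_cases):
    """Fill the template's variables with each test case's values."""
    return [string.Template(template).substitute(case) for case in test_cases]

template = ("Give me suggestions for what to do on a $days-day "
            "vacation in $location, traveling by $mode.")

test_cases = [
    {"location": "Paris", "days": "5", "mode": "flying"},
    {"location": "New York City", "days": "3", "mode": "driving"},
]

for filled in render_all(template, test_cases):
    print(filled)   # each filled prompt would then be sent to the model
```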

Lee:

right.

Greg:

But it has actually been pretty good at being able to do these things. Ironically, one of the places it has repeatedly failed is anytime I ask it to write an API call to OpenAI. It's wrong, because it's old. It tells me, here's what you want to use to connect, and the URL's wrong, a bunch of the parameters are wrong. So I have to go look those up. But that is exactly what people have talked about: that's out of the training window. The training window ended in 2021, I believe, and these are API changes that happened three to six months ago. So it doesn't surprise me that it's wrong, but it catches me sometimes, and I'm like, oh, that is a thing you will not be able to do, so let me go look up the correct API documentation and put it in myself. But for the most part, it's been pretty powerful.

Lee:

Yeah, absolutely.

Greg:

You mentioned selling on PromptBase earlier. Have you sold any prompts, and what do those prompts do? Or what are the ones you're thinking of selling, if you haven't yet?

Lee:

Yeah, that first one is something that I wanna refine first, get it to exactly what I need it to do, and then put that one up for sale. That's the one that generates all the questions to cover the span of world history. But you could do that for every subject area, and I might just start doing it. It's just that once I start going down that road, I don't know how long it'll be before I stop, especially if you sell that first prompt. I don't know how successful people have been on there, so I have to be careful, because there's wormholes out there that will just suck you in, and you'll think, wow, it was Monday when I started this, how is it Friday afternoon now? Did any of this generate income? I don't know.

Greg:

Yes, I totally understand the feeling about the rabbit hole. I can actually tell you. I don't know off the top of my head, but I did an analysis of PromptBase's text-output prompts, basically to categorize them a little better than they did, and also to look at how many sales each gets, how much revenue, based on price and audience. I'll put the link in the show notes as well. It's basically what I wanted when I said, I want to start selling more prompts on PromptBase: which categories, which niches are selling well? Which ones are selling well but are really overloaded? Which ones don't have more than one or two prompts, but those get a lot of revenue, et cetera. So that's what this analysis is basically looking at. It does actually have the titles and sales information and all that of 5,236 prompts, I think it is, that are available on there. So if you want to do a different analysis of your own, you can do that. I also did the analysis of, here are the niches I would say you probably wanna target, because they're high value, low competition, or maybe medium competition but still medium or high value, things like that. So yeah, that'll be in the show notes.

Lee:

Nice. Yeah, and I haven't even fully explored image and graphics and art creation, or even music creation, if it's out there. I also play music, but I have not done any composing. But I'm wondering, that's probably right around the corner too.

Greg:

Yeah, the music for this podcast, the intro and outro, is a very short clip, but it's actually AI generated.

Lee:

Oh, wow.

Greg:

People will have heard that a few minutes ago, and they'll hear it again in a few minutes when we wrap the episode.

Lee:

Okay.

Greg:

So thank you so much for coming. Where should people go to stay up to date on what you're building and these prompts, and when you start posting on PromptBase?

Lee:

Yeah. Twitter, at lee underscore chazen. GliderCell.com, I did a major pivot and now it's my prompt engineering and content strategy site. And my old tiny blog, the one that I mentioned before, it's rightbrainworld.blogspot.com, like the oldest name on the internet possible.

Greg:

That's awesome. And all of these links will be in the show notes. And you also mentioned globalchallenge.mixo.io.

Lee:

Yeah, that is the game, and people can get on the wait list. Once I get the list to enough people, I will generate the final products that they need to start running this, and then they can build their own versions of the game. And hopefully we'll create a platform where they can showcase all the different versions that are out there, so that students can learn across the entire curriculum by playing this game.

Greg:

That's awesome.

Lee:

And thanks, I really appreciate you having me on.

Greg:

You are very welcome. Thanks for coming to the Prompt Engineering Podcast, the podcast dedicated to helping you be a better prompt engineer. Episodes are released every Wednesday. I also host weekly masterminds, where you can collaborate with me and 50 other people live on Zoom to improve your prompts. Join us at PromptEngineeringMastermind.com for the schedule of the upcoming masterminds. Finally, please remember to like and subscribe. If you're listening to the audio podcast, rate us five stars; that helps us teach more people. And if you're listening to the podcast, you might want to join us on YouTube so you can actually see the prompts. You can do that by going to youtube.com/@PromptEngineeringPodcast. See you next week.