For the course collaboration with Galileo on AI for Marketing, we wanted to experiment with creating as much of the content with AI as possible. We committed to integrating AI into every aspect of course production:
- Creating learner personas
- Brainstorming and organizing concepts
- Writing scripts and comprehension questions
- Creating graphics and slides
- Recording and editing videos
Luckily, AI development has expanded into each of those areas!
ChatGPT as Thought Partner
Before we could do anything else, we needed to know what we were actually teaching. None of us–an instructional designer, an innovation portfolio manager, and a student intern studying mathematics–had any marketing expertise to share, so we turned to our trusty partner: ChatGPT.
We knew the basics of what we wanted to cover: the history and possibilities of AI technology in the marketing world, its direct applications, and its use as a creativity enhancer. With those goals in mind, we asked ChatGPT to outline the broad strokes of what three short courses focusing on those subjects might include. Once we had the outline, we narrowed down the list by creating learner personas. We asked both ourselves and the GPT why someone would be interested in taking a class on AI in Marketing and chose the most relevant topics from the pool of ideas. We then arranged those ideas into outlines for three online courses, each one week long and covering four to five concepts.
ChatGPT as Writer
Once we had the course outlines and four to five concepts that each course would cover, we could begin working on the scripts. Using our learner personas to inform the tone and our research into attention spans to determine a length of five minutes, we instructed ChatGPT to write scripts for each section. This is where iteration and our experience with prompt engineering became absolutely vital. Initially, ChatGPT wrote scripts that were styled after movie scripts, with lines for what the instructor should do at certain points and stock footage that would appear onscreen. We experimented with the prompt until we got a format that worked for us: a simple summary of the subject. We also had to fuss with the requested length of the summaries–a request for a five-minute summary didn’t typically go into much detail, but a request for approximately 600 words was often more in-depth.
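As a rough illustration of the length lesson above, the request could be encoded in a small prompt-building helper. This is a hypothetical sketch in Python, not our actual prompt text, which varied across iterations:

```python
def build_script_prompt(topic: str, audience: str, target_words: int = 600) -> str:
    """Compose a script-writing prompt for a chat model.

    Asking for "approximately 600 words" produced more depth than asking
    for "a five-minute script", so the length is expressed in words.
    The audience line keeps the tone aligned with the learner personas.
    """
    return (
        f"Write a plain prose summary of '{topic}' for {audience}. "
        f"Aim for approximately {target_words} words. "
        "Do not format it as a screenplay: no stage directions, "
        "no camera notes, no stock-footage cues."
    )

# Example usage (the topic here is invented for illustration):
prompt = build_script_prompt(
    "AI-driven audience segmentation",
    "marketing professionals new to AI",
)
print(prompt)
```

Keeping the "no screenplay formatting" constraint in the prompt itself is what eventually stopped the model from returning movie-style scripts.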
The scripts made it exceptionally easy to write comprehension questions. We asked the GPT to highlight the three biggest points in each video, then write reflection or multiple-choice questions about those points. We had it write several of each, then chose the two we believed were strongest to include in the course.
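The two-step flow above (extract the key points, then write questions about them) can be sketched as a pair of prompts. This is a hypothetical helper assuming a chat-style messages format; in a real exchange, the model's answer to the first message would be appended to the conversation before the second is sent:

```python
def question_prompts(script: str) -> list[dict]:
    """Build the two-turn prompt sequence used to derive comprehension
    questions from a finished video script."""
    return [
        # Turn 1: surface the three biggest points in the script.
        {"role": "user", "content": (
            "Identify the three most important points in this video "
            "script:\n\n" + script
        )},
        # Turn 2 (sent after the model replies to turn 1): generate
        # candidate questions to choose from.
        {"role": "user", "content": (
            "Write two reflection questions and two multiple-choice "
            "questions (four options each, one correct) about those points."
        )},
    ]

# Example usage with a placeholder script:
messages = question_prompts("AI can cluster customers by purchasing behavior.")
```

Generating several candidates per turn and hand-picking the strongest two kept quality high without much extra prompting effort.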
DALL-E as Graphics Magician
We knew our videos would need graphics and slides to illustrate the concepts within, so we used DALL-E to create comics, fake brand logos, and general illustrations. For each video, we took the three biggest points that ChatGPT had summarized for the comprehension questions and thought about ways to illustrate them. We then asked DALL-E (via ChatGPT) to create those illustrations, iterating on previous ones. Interestingly, we found that DALL-E had difficulty distinguishing and modifying skin tones and gender. It also tended to represent AI as a classic old-school robot, with a rectangular head and lots of wires–a beautiful self-portrait!
HeyGen as Film Crew
Now, for what might be the coolest part: video production and editing!
We have the scripts and the graphics, but how do we turn those into engaging videos without an instructor to stand in front of the camera? HeyGen is an AI tool that takes in text and puts out video and audio. Users can select a pre-recorded avatar or go to a recording studio and film themselves saying different phonemes and making gestures for the AI to compile into a fluid, natural-looking video.
Galileo had recorded one of their professors and used his avatar, but we chose a pre-recorded voice and avatar for our course and dubbed her Vanessa, our AI instructor. To edit the video and audio HeyGen produced and to integrate the graphics and slides, we used Adobe Premiere. Editing was quick and easy because, unlike a human instructor, the AI never fumbled words or needed to go back and re-record certain sections.
Thoughts
Our journey of creating an entire course using AI tools has been both enlightening and challenging. After completing this experiment with Galileo, we’ve gained several valuable insights worth sharing:
Strengths and Limitations
While AI excelled at providing broad overviews of marketing concepts, we noticed limitations when we needed deeper, more nuanced discussions. ChatGPT gave us solid foundational content, but the expertise and lived experience of a human marketing professional would have added valuable depth to specific examples and edge cases. This balance suggests that AI works best as a collaborative tool rather than a complete replacement for subject matter expertise.
The AI also tended to return similar responses to each prompt and was unable to “think” beyond stereotypes, especially in image generation. For instance, DALL-E defaulted to images of an old-fashioned robot any time we asked for an image of AI doing something. As the models behind these tools become more capable and nuanced, this will likely change, but for now, human creativity is still required to make the courses more engaging.
Workflow Improvements
The process revealed several opportunities for improvement in AI tools:
- Content Management: We found ourselves manually uploading and organizing content across platforms. An integrated solution that could automatically transfer scripts to video creation tools and then to learning management systems would dramatically streamline the workflow.
- Accessibility Features: HeyGen produced impressive videos, but we had to manually create subtitles. Native accessibility features like automatic captioning would make the content more inclusive and save significant production time.
- Consistency Challenges: Maintaining a consistent voice and style across multiple AI-generated pieces required careful prompt engineering and editing. Future AI systems could benefit from better “memory” of stylistic choices throughout a project.
Cost-Benefit Analysis
The time savings were substantial—what might have taken months of research, writing, and production was compressed into a few weeks. However, we did invest considerable time in prompt refinement and output curation. As these tools mature, we expect the efficiency gains to increase even further.
Ethical Considerations
Our experiment also raised important questions about transparency and attribution. We decided to be fully transparent with learners about Vanessa being an AI avatar and about the AI-generated content. This disclosure seemed important for building trust, though we wonder how perceptions might change as AI-created content becomes more commonplace in education.
Future Possibilities
Looking ahead, we’re excited about how AI tools might evolve to support more sophisticated course creation:
- AI that can generate interactive elements and assessments beyond simple multiple-choice questions
- Systems that can personalize content based on individual learner progress and preferences
- Tools that combine AI capabilities with easier human modification and oversight
This project has convinced us that AI-assisted course creation isn’t just a novelty—it’s potentially transformative for educational content development, especially in rapidly evolving fields like marketing where content needs frequent updating. While the technology isn’t perfect yet, it’s already changing how we approach instructional design and content creation.
What began as an experiment has evolved into a new methodology we plan to refine and expand in future projects. The most effective approach seems to be finding the right balance between AI efficiency and human creativity—leveraging technology to handle the routine aspects while focusing human effort on adding nuance, connection, and authenticity.