
Author
Justin Lambakis
Sep 23, 2024
As a designer and eLearning developer, I'm always on the lookout for ways to streamline my process without sacrificing creativity or quality. When I took on a murder-mystery-themed eLearning course recently, I decided to test the limits of AI, using it to generate 90% of the course's visuals: characters, backgrounds, objects—you name it. The results exceeded my expectations, both in creative possibilities and in time and cost savings. Here's how I made it work.
The Problem with Traditional Design Workflows
In any custom eLearning project, the process of developing characters, creating backgrounds, and ensuring stylistic consistency can eat up a lot of time and budget. I knew that if I approached this murder mystery course the traditional way—hiring an illustrator, working through rounds of revisions, getting assets rigged for animation—it would take weeks, maybe months, to finish. I needed something faster without sacrificing the quality my clients expect.
That’s where AI came in.
I’ve been experimenting with AI tools like ChatGPT and DALL-E 3 for a while, and it hit me: Why not try to generate nearly all the course visuals using AI? It wasn’t just an experiment—it became the foundation of the entire project.

Tackling Character Creation: The AI Way
The murder mystery needed a cast of characters: suspects, victims, scientists. Each character had to fit within a Victorian-era aesthetic, adding to the atmosphere of the mystery. Normally, I’d spend weeks finding the right stock characters or hiring someone to design them. But with DALL-E 3, I could generate these characters with specific prompts, defining every aspect of their appearance: clothing, hairstyles, even their facial expressions.
Take Dr. Adelle Oranto, for example—a 32-year-old scientist with an eerie, mysterious aura. By using DALL-E 3’s prompt system, I generated her appearance exactly the way I envisioned. Once I had her in a basic T-pose, I brought her into Photoshop for rigging. Rigging is essential when you need to animate characters, and the T-pose made it easy to break her body into layers—head, arms, legs—so I could animate her seamlessly in Adobe Character Animator.
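For anyone who prefers scripting to the ChatGPT interface, the same kind of prompt can be sent to DALL-E 3 through the OpenAI API. Here's a minimal sketch assuming the official openai Python SDK; the prompt text and file name are illustrative, not the exact prompt used for Dr. Oranto.

```python
# Minimal sketch: generating a Victorian-era character in a T-pose with DALL-E 3
# via the OpenAI Python SDK. The prompt and file name below are illustrative only.
from pathlib import Path

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Full-body illustration of a 32-year-old Victorian-era female scientist "
    "with an eerie, mysterious aura, standing in a neutral T-pose with arms "
    "straight out, plain background, even lighting, clean storybook style"
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1792",  # portrait orientation suits a full-body character
    quality="hd",
    n=1,
)

# Save the image locally so it can be opened in Photoshop for rigging.
image_url = result.data[0].url
Path("dr_oranto_tpose.png").write_bytes(requests.get(image_url).content)
```

Asking for the T-pose directly in the prompt is what makes the later rigging step painless, since the limbs come out already separated from the torso.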

Building a Victorian World: AI Takes Over Scenery and Backgrounds
Creating the right atmosphere for the murder mystery was just as important as the characters. The setting had to feel dark and suspenseful—think foggy labs, antique equipment, and shadows that keep the viewer on edge. Instead of sourcing stock images or hiring a background designer, I used DALL-E 3 to generate the scenery.
I crafted complex prompts, outlining not only the aesthetic but also the mood. I wanted soft lighting, with shadows and reflections that felt true to the Victorian period. DALL-E 3 allowed me to generate lab scenes with eerie atmospheres, filled with antique equipment and mysterious lighting. These backgrounds were crucial to setting the tone of the course and pulling learners into the narrative.
The ability to iterate quickly on these backgrounds made the biggest difference. In a traditional workflow, getting a cohesive set of backgrounds could take weeks. With AI, I was able to generate multiple options in hours, then tweak them until everything fit together stylistically.
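That iteration can be scripted, too. Here's a rough sketch of batching a few background variations through the same API, again assuming the openai Python SDK; the base prompt and mood variations are made-up examples rather than the real prompts.

```python
# Rough sketch: batch-generating Victorian lab background candidates with DALL-E 3.
# The base prompt and mood variations are made-up examples, not the real prompts.
from pathlib import Path

import requests
from openai import OpenAI

client = OpenAI()

base_prompt = (
    "Wide shot of a Victorian-era laboratory at night, antique brass instruments, "
    "glass flasks, heavy shadows and reflections, soft gas-lamp lighting"
)
moods = [
    "thick fog drifting across the floor",
    "moonlight through tall arched windows",
    "a single candle as the only light source",
]

for i, mood in enumerate(moods, start=1):
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"{base_prompt}, {mood}",
        size="1792x1024",  # landscape orientation for backgrounds
        n=1,
    )
    Path(f"lab_background_{i}.png").write_bytes(
        requests.get(result.data[0].url).content
    )
```

Swapping out the mood list and re-running is the kind of loop that turns weeks of back-and-forth into a morning of side-by-side comparisons.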


Rigging and Animation: Marrying AI with Traditional Tools
Once the characters and scenery were ready, it was time to bring everything to life. This is where the project blended traditional animation techniques with AI-generated visuals. Each character went through the same routine as Dr. Oranto: I separated the body parts into individual Photoshop layers—arms, legs, torso, head, eyes—so Adobe Character Animator could treat each one as a movable piece of the puppet.
Here’s Maxwell, pre-animation, getting rigged and ready for action in Photoshop!
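If you script any part of this step, checking the layer names before import is an easy win. Here's a quick sketch using the third-party psd-tools library (not part of Adobe's tooling); the file name and expected layer list are hypothetical.

```python
# Quick sketch: verifying that a rigged character PSD contains the expected
# body-part layers before import into Adobe Character Animator.
# The file name and layer names are hypothetical examples.
from psd_tools import PSDImage

EXPECTED_LAYERS = {"Head", "Torso", "Left Arm", "Right Arm", "Left Leg", "Right Leg"}

psd = PSDImage.open("maxwell_rigged.psd")
found = {layer.name for layer in psd.descendants()}

missing = EXPECTED_LAYERS - found
if missing:
    print("Missing layers:", ", ".join(sorted(missing)))
else:
    print("All expected body-part layers are present.")
```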


Why This Process is a Game-Changer
This AI-driven workflow didn’t just save me time—it saved me thousands of dollars in production costs. Traditionally, building a custom set of characters and backgrounds for a project like this would have involved hiring multiple people: illustrators, animators, and maybe even a motion designer. AI allowed me to do it all myself, and I was able to control every element of the design, from the smallest facial details to the way the light hit the background.
AI offers almost limitless creative possibilities. Instead of being limited by the characters or backgrounds available in stock libraries, I was able to design a completely custom world for my murder mystery eLearning course. And, unlike stock assets, everything I created with AI was unique to the project.
Looking Ahead: What AI Could Do Next
This experience opened my eyes to how quickly AI is evolving. In the future, I expect AI to not only generate static visuals but also rig and animate them for me. Imagine being able to ask AI for a fully rigged, animated character ready for immediate use in a project. We’re not there yet, but we’re getting close.
For now, using AI for 90% of the visuals in this eLearning project gave me full creative control, saved me weeks of production time, and delivered a product that my client was thrilled with.
Check out the sample course on YouTube below and see the results for yourself!
Final Thoughts: Don’t Wait to Embrace AI
AI is no longer the future—it's the present. And if you're still hesitant to incorporate it into your workflow, you're already behind. The tools are there, and they're getting better by the day. If you're a designer, animator, or eLearning developer, there's no reason not to start experimenting with AI today. It will save you time and money, and it will open up creative avenues you might not have even considered.
And if you have questions or want to learn more about how I’m using AI to transform my projects, feel free to reach out. I’m always happy to share what I’ve learned.