I’ve played with AI before, but not to this extent. This sketch seemed like a logical one to attempt; all the players are toys, which I can easily reference. After thinking about it, I decided to work as much as I could with actual clips from the show. The past projects I created all have the same defect: I can only produce the material in 5-second clips (that’s the default setting for the various AI sites I’ve logged onto). Then trying to make them talk becomes a hassle. First, because they’re “toys,” some of the faces aren’t apparent to the AI system, and the attempt at generating a video would fail. Sometimes it was just easier to create an AI voice and run it under some original footage. Towards the end, I was tired of caring whether the mouth movements matched up; half the time, AI couldn’t match them up anyway. Then I maxed out AI’s ability to manipulate characters. Tell AI to have a character exit the scene, and it would have a human hand come down and pick it up.
I was able to get the voices I needed from AI. Some sound very good; others sound like bad actors handling the part. I was able to create the variations of the Misfits I needed. Then I had to create some of the “new toys.” That took a couple of attempts. And that’s the problem: every attempt costs.
Then came the fun with the editing software. At five minutes, the software was grindingly slow; updates and corrections took forever to make, and even the slightest change would send me into buffering hell. I reached the point where, halfway in, I exported the working video of dozens of clips, audio, stills and music into a single video, then loaded that back into the software to edit, polish and add to it. I had to do that two more times for the ending, and then for the music and text. One issue was figuring out the chromakey. I had to create the images. Then I realized I had to create them on a green background. Then I had to deal with the different shades of green that the AI would create.
At least I created some of the clips at the end first, so as I was hitting wall after wall, I had some of the bits already in the queue. There was a lot of editing, copying, pasting, and altering speeds and playback to match the clips I needed to the audio I had. At some point I’ll have to make a post about my AI outtakes: creations so horribly wrong there was nothing to salvage from them.
I could have tightened it up a bit. When it was written as a stage piece, I had to have the characters introduce themselves to set up the bit and the place. But here, using the actual characters and footage, I could have easily begun the sketch with Charlie and Dolly bemoaning another Christmas Eve. Oh, and the Zimbabwe thing: that was based on actual events from about 10 years ago, when a dentist caught a lot of online flak for his trophy-hunting selfies. So maybe I could have dropped that, too. It is pretty grim. But, hey, if I don’t do it now, the way I wrote it, when will I ever?
As a showcase for my writing, that’s up in the air at the moment. But the fact that I can’t get AI at the level I can afford (which is minimal, or “free trial”) to keep a dolly looking the same between clips is a problem I’m sure the bigwigs at the Hollywood studios are battling, too.
But as a hobby? Hobbies are supposed to be fun, no?