Do we need another [HB]ollywood blockbuster? Apparently not, if the future of AI has its way:
...goal of having Benjamin [the AI] "write, direct, perform and score" this short film within 48 hours, without any human intervention...
It may not be perfection yet, but it looks like reality is slowly catching up with science fiction. https://arstechnica.com/gaming/2018/06/this-wild-ai-generated-film-is-the-next-step-in-whole-movie-puppetry
Two years ago, Ars Technica hosted the online premiere of a weird short film called Sunspring, which was mostly remarkable because its entire script was created by an AI. The film's human cast laughed at odd, computer-generated dialogue and stage direction before performing the results in particularly earnest fashion.
That film's production duo, Director Oscar Sharp and AI researcher Ross Goodwin, have returned with another AI-driven experiment that, on its face, looks decidedly worse. Blurry faces, computer-generated dialogue, and awkward scene changes fill out this year's Zone Out, a film created as an entry in the Sci-Fi-London 48-Hour Challenge—meaning, just like last time, it had to be produced in 48 hours and adhere to certain specific prompts.
The result is at once awful, funny, and impressive, especially with the background knowledge that it was made by an AI in just 48 hours with limited resources. Maybe we are on the path to robotic entertainment sooner rather than later. You'll know who's the boss when you start hearing arguments that the AI deserves copyright ownership of its own creations.
(Score: 4, Interesting) by takyon on Tuesday June 12 2018, @01:53PM (4 children)
No need to get hung up on the specifics of plot here.
This will probably look like a weird art project up until strong AI. However, cinematography and procedural generation of virtual worlds are much more down-to-earth tasks for the "AI director". If you train it, you'll get cuts and angles similar to those a normal film would use.
A randomized virtual world could be created in which to stage an animated film.
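The "randomized virtual world" idea boils down to seeded procedural generation: the same seed always reproduces the same set, so scenes can be re-staged exactly while other elements vary. A minimal sketch, assuming a hypothetical grid-based scene (the `PROPS` list and `generate_scene` function are illustrative, not from any real engine):

```python
import random

PROPS = ["desk", "lamp", "window", "door", "plant"]

def generate_scene(seed, width=10, height=10, n_props=4):
    """Deterministically place props on a grid given a seed."""
    rng = random.Random(seed)  # same seed -> same world, every time
    placements = []
    for prop in rng.sample(PROPS, n_props):
        x, y = rng.randrange(width), rng.randrange(height)
        placements.append((prop, x, y))
    return placements

# An AI director could keep the seed fixed to reuse a "set"
# while varying camera angles or dialogue between takes.
print(generate_scene(42))
```

Because placement is a pure function of the seed, the generator doubles as a compact scene description: shipping the seed is enough to rebuild the whole world.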
Voice synthesis is improving. Emotional speech could be generated by looking at how similar or identical lines have been spoken by real people. For example, there's a certain way to say "I'm on fire!" or "I love you."
AI-generated music has been around for many years. Background/orchestral music and music with lower BPM might be easier to take on. Matching music to on-screen action could be harder.
Machine Learning Used for Character Animation [soylentnews.org]
Algorithm Creates "Movies" From Text Descriptions [soylentnews.org]
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Tuesday June 12 2018, @03:45PM (2 children)
That one wouldn't be said so much as screamed.
(Score: 2) by takyon on Tuesday June 12 2018, @03:54PM (1 child)
Tell that to my mistress Alexa.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by JNCF on Wednesday June 13 2018, @06:10AM
She gets excited when she gets the upper hand on takyon during their late night NBA Jam [youtu.be] sessions.
(Score: 3, Interesting) by HiThere on Wednesday June 13 2018, @12:23AM
Just considering this as a data point (or pair): sharpen up the images, replace the actors with "3-D models", and they'd have excellent background material for any number of games. If the platform has enough spare cycles, you could even add lots of decision points that alter the background material.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.