Text-to-speech model can preserve a speaker's emotional tone and acoustic environment:
On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.
Its creators speculate that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be altered from a text transcript (making them say something they didn't originally say); and for audio content creation when combined with other generative AI models like GPT-3.
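To make the reported pipeline concrete, here is a loose, hypothetical sketch of how VALL-E-style zero-shot cloning is described as working: a neural codec turns the three-second enrollment clip into discrete tokens, a language model continues that token sequence conditioned on the target text, and the codec decoder turns the predicted tokens back into audio. The names ToyCodec, ToyTokenLM, and synthesize are invented stand-ins for illustration only; this is not Microsoft's code or any released API.

```python
# Hypothetical sketch of a VALL-E-style zero-shot TTS pipeline.
# All classes and functions here are toy placeholders, not real APIs.
import numpy as np


class ToyCodec:
    """Stand-in for a neural audio codec that tokenizes waveforms."""

    def encode(self, waveform: np.ndarray) -> np.ndarray:
        # Downsample and quantize the waveform into discrete tokens.
        return (np.clip(np.abs(waveform[::160]), 0, 1) * 1023).astype(np.int64)

    def decode(self, tokens: np.ndarray) -> np.ndarray:
        # Map tokens back to a (very rough) waveform.
        return tokens.astype(np.float32) / 1023.0


class ToyTokenLM:
    """Stand-in for the codec language model that does the cloning."""

    def continue_sequence(self, phonemes, prompt_tokens: np.ndarray) -> np.ndarray:
        # A real model would predict new codec tokens autoregressively,
        # conditioned on BOTH the text and the enrollment-clip tokens --
        # that conditioning is what carries over speaker timbre and tone.
        rng = np.random.default_rng(0)
        return rng.integers(0, 1024, size=len(phonemes) * 20)


def synthesize(text: str, enrollment_clip: np.ndarray) -> np.ndarray:
    codec, lm = ToyCodec(), ToyTokenLM()
    prompt_tokens = codec.encode(enrollment_clip)  # ~3 s of audio -> tokens
    phonemes = list(text.lower())                  # placeholder "phonemizer"
    new_tokens = lm.continue_sequence(phonemes, prompt_tokens)
    return codec.decode(new_tokens)                # tokens -> waveform


# Usage: a fake three-second, 16 kHz enrollment clip.
clip = np.random.default_rng(1).standard_normal(3 * 16000).astype(np.float32)
audio = synthesize("hello world", clip)
print(audio.shape)
```

The point of the sketch is only the data flow: the short clip is never "learned" offline; it is consumed at inference time as a prompt that the token-level language model continues.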
(Score: 1, Insightful) by Anonymous Coward on Monday January 16 2023, @01:03PM (1 child)
Simulate in the sense that it has a whole bunch of pretrained simulated voices and uses the three seconds to match your voice against its database. However, the stuff it says after that is based on the pretrained data, not somehow extracted from your short three-second clip.
(Score: 0) by Anonymous Coward on Monday January 16 2023, @03:26PM
Hmm. I wonder...if they don't already have Gilbert Gottfried's annoying voice, is it true that this new thing won't work so well for him?