A text-to-speech model can preserve a speaker's emotional tone and acoustic environment:
On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker's emotional tone.
Its creators speculate that VALL-E could be used for high-quality text-to-speech applications; for speech editing, where a recording of a person could be altered from a text transcript (making them say something they originally didn't); and for audio content creation when combined with other generative AI models like GPT-3.
(Score: 0) by Anonymous Coward on Tuesday January 17 2023, @12:42PM
https://arstechnica.com/information-technology/2016/11/adobe-voco-photoshop-for-audio-speech-editing/ [arstechnica.com]
So maybe the progress after 7 years is that Microsoft's method only needs 3 seconds.