Nvidia and Microsoft have teamed up to create the Megatron-Turing Natural Language Generation model, which the duo claims is the "most powerful monolithic transformer language model trained to date".
The AI model has 105 layers and 530 billion parameters, and runs on chunky supercomputer hardware like Nvidia's Selene. By comparison, the vaunted GPT-3 has 175 billion parameters.
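For scale, that parameter count lines up with a standard decoder-only transformer of that depth. A minimal back-of-the-envelope sketch in Python, assuming a hidden size of 20,480 (a width reported for the model but not stated in the excerpt above), using the rough 12·L·h² rule of thumb that ignores embeddings and biases:

```python
# Rough parameter-count estimate for a decoder-only transformer.
# hidden = 20480 is an assumption (reported for Megatron-Turing NLG,
# not stated in the excerpt). The 12 * L * h^2 rule of thumb counts
# attention (~4*h^2) plus a 4x-wide feed-forward block (~8*h^2) per
# layer, and ignores embedding tables and bias terms.
layers = 105
hidden = 20480  # assumed width

params = 12 * layers * hidden ** 2
print(f"~{params / 1e9:.0f}B parameters")  # ~528B, close to the quoted 530B
```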
"Each model replica spans 280 NVIDIA A100 GPUs, with 8-way tensor-slicing within a node, and 35-way pipeline parallelism across nodes," the pair said in a blog post.
[...] However, the need to work with real-world languages and samples meant an old problem with AI reappeared: bias. "While giant language models are advancing the state of the art on language generation, they also suffer from issues such as bias and toxicity," the duo said.
Related: OpenAI's New Language Generator GPT-3 is Shockingly Good
A College Student Used GPT-3 to Write a Fake Blog Post that Ended Up at the Top of Hacker News
A Robot Wrote This Entire Article. Are You Scared Yet, Human?
OpenAI's Text-Generating System GPT-3 Is Now Spewing Out 4.5 Billion Words a Day
(Score: 2) by DannyB on Wednesday October 13 2021, @05:25PM
In one corner we have Microsoft's language model, which can spew semi-coherent-sounding language it learned online. You just need a few starting words to trigger it.
In the other corner we have IBM's Watson, which analyzes documents for content and answers questions about that content.
Which will be the first to solve unsolvable problems that need solving?
Some people need assistants to hire some assistance.
Other people need assistance to hire some assistants.