Unpacking the hype around OpenAI’s rumored new Q* model [technologyreview.com]:
“Solving elementary-school math problems is very, very different from pushing the boundaries of mathematics at the level of something a Fields medalist can do,” says Collins, referring to a top prize in mathematics.
Machine-learning research has focused on solving elementary-school problems, but state-of-the-art AI systems haven’t fully cracked this challenge yet. Some AI models fail on really simple math problems, but then they can excel at really hard problems, Collins says. OpenAI has, for example, developed dedicated tools that can solve challenging problems [openai.com] posed in competitions for top math students in high school, but these systems outperform humans only occasionally.
Nevertheless, building an AI system that can solve math equations is a cool development, if that is indeed what Q* can do. A deeper understanding of mathematics could open up applications to help scientific research and engineering, for example. The ability to generate mathematical responses could help us develop better personalized tutoring, or help mathematicians do algebra faster or solve more complicated problems.
This is also not the first time a new model has sparked AGI hype. Just last year, tech folks were saying the same things about Google DeepMind’s Gato [technologyreview.com], a “generalist” AI model that can play Atari video games, caption images, chat, and stack blocks with a real robot arm. Back then, some AI researchers claimed that DeepMind was “on the verge” of AGI because of Gato’s ability to do so many different things pretty well. Same hype machine, different AI lab.
And while it might be great PR, these hype cycles do more harm than good for the entire field by distracting people from the real, tangible problems around AI. Rumors about a powerful new AI model might also be a massive own goal for the regulation-averse tech sector. The EU, for example, is very close to finalizing its sweeping AI Act. One of the biggest fights right now among lawmakers is whether to give tech companies more power to regulate cutting-edge AI models on their own.
OpenAI’s board was designed as the company’s internal kill switch and governance mechanism to prevent the launch of harmful technologies. The past week’s boardroom drama has shown that the bottom line will always prevail at these companies. It will also make it harder to make a case for why they should be trusted with self-regulation. Lawmakers, take note.
Also:
Is OpenAI's Q* AI Model a Threat to Humanity? Mathematical Prowess, Encryption Cracking, and Self-Improvement Capabilities Spark AGI Rumors - WinBuzzer [winbuzzer.com]:
While the turmoil around OpenAI seems to be settling down [winbuzzer.com] now that Sam Altman [winbuzzer.com] is once again CEO, the past week has led people to pay close attention to the company. Analysts and commentators are poring over every detail of OpenAI [winbuzzer.com] to try to get to the root of what led to Altman being fired in the first place. One area of speculation has arisen around OpenAI [winbuzzer.com] Q* – pronounced Q Star – an AI model the company is reportedly developing [wikipedia.org].
What seems to set Q* apart from GPT and potentially any other AI model is the level of self-awareness it reportedly exhibits. In a thought-provoking and informative video, leading YouTube AI expert David Shapiro [youtube.com] breaks down the potential of Q*, why some see it as a major step towards artificial general intelligence (AGI) [wikipedia.org], and why it could be a serious threat to everyone.
Shapiro takes a close look at the potential of OpenAI's Q* algorithm to revolutionize artificial intelligence, potentially surpassing the impact of Word2Vec [wikipedia.org]. He bases his assertions on several leaked documents and rumors hinting at Q*'s remarkable advancements in areas like math, cryptography, and self-improvement.
If you're unfamiliar with Word2Vec, it is a technique, introduced by researchers at Google in 2013, for training neural networks [wikipedia.org] to learn the meanings of words and the connections between them. It was a breakthrough for natural language processing because it gave AI models the ability to learn relationships between words: for the first time, AI could “understand” words, form sentences, and find synonyms. As Shapiro points out, Word2Vec led directly to AI transformers [wikipedia.org] and the modern AI tools we have today, such as ChatGPT [winbuzzer.com] and Google Bard [winbuzzer.com].
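To make this concrete, here is a minimal sketch of training a Word2Vec model with the open-source gensim library. The four-sentence corpus is a made-up toy example (real models are trained on billions of words), but it shows the core idea: words that appear in similar contexts end up with similar vectors.

```python
from gensim.models import Word2Vec  # pip install gensim

# Hypothetical toy corpus: each "sentence" is a list of lowercase tokens.
corpus = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

# Train a small skip-gram model (sg=1): each word becomes a dense vector
# whose position in the vector space reflects the contexts it appears in.
model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, sg=1, epochs=200)

# Words used in similar contexts get similar vectors, so the model can
# surface related words and rough "synonyms" (noisy on a corpus this tiny).
print(model.wv.most_similar("king", topn=3))
```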
Is Q* Real and How Advanced is it?
In his video, Shapiro presents a timeline of facts that highlights why Q* really seems to be an ongoing project at OpenAI and why it may be as powerful as speculation suggests.
- OpenAI published a blog in May this year discussing “Improving mathematical reasoning with process supervision [openai.com].”
- Shapiro points to OpenAI co-founder Ilya Sutskever and researcher Noam Brown [github.io], who worked on Meta's CICERO [meta.com] and other “game learning” or reinforcement learning [kaggle.com] AI systems.
- The firing of Sam Altman [winbuzzer.com] last weekend was shrouded in mystery. It is still unclear why OpenAI took such swift action; early statements from the company suggested Altman had not been honest. That was vague enough for many to suspect Altman knew Q* had reached some level of general intelligence. OpenAI's response used phrases such as Altman failing a “vibe check”.
- Reuters reported [reuters.com] on a letter sent by OpenAI researchers before Altman's dismissal claiming the company was working on something that could be a “threat to humanity”. It is worth reiterating that Altman has since been reinstated as CEO of OpenAI [winbuzzer.com].
- OpenAI is working on an AI model/algorithm known as Q* that is capable of solving math problems.
Understanding OpenAI Q* and its Capabilities
It seems Q* is a combination of two AI research branches. The first is Q-learning, a long-standing reinforcement learning technique that OpenAI co-founder, chief scientist, and former board member Ilya Sutskever [wikipedia.org] has worked on; OpenAI has discussed it before and written about [openai.com] it. Q-learning is an algorithm that helps an agent learn the best action to take in a given state to maximize a reward, choosing step by step towards a specific goal. The second branch seems to focus on searching latent network space [springer.com].
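For readers unfamiliar with the technique, below is a minimal, self-contained sketch of tabular Q-learning in Python. It is purely illustrative (the article gives no details of anything OpenAI has actually built): a toy agent in a six-cell corridor learns, from reward alone, which action to take in each state.

```python
import random

N_STATES = 6              # cells 0..5; the goal is cell 5
ACTIONS = [0, 1]          # 0 = step left, 1 = step right
ALPHA = 0.1               # learning rate
GAMMA = 0.9               # discount factor for future rewards
EPSILON = 0.1             # exploration probability
EPISODES = 500

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Core Q-learning update (Bellman backup):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned greedy policy should pick "step right" (1) in every non-goal state.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

After training, the printed policy chooses the reward-maximizing action in each state, learned purely from trial and error rather than from any explicit instructions.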
Shapiro underscores several key points to support his claim that Q* is a breakthrough model for AI research:
- Mathematical Prowess: Q* reportedly exhibits the ability to perform math at a level comparable to a schoolchild's, a significant milestone for AI systems. This proficiency in understanding and manipulating mathematical concepts could have far-reaching consequences for various fields, including physics, chemistry, and cryptography.
- Encryption Cracking [wikipedia.org]: Q* is also rumored to have “cracked” AES-192 [wikipedia.org] encryption [winbuzzer.com], considered one of the most secure encryption algorithms; current supercomputers could not break it even after working on it for ages. If true, this would raise significant security concerns, as it suggests AI could potentially defeat encryption methods thought to be unbreakable.
- Self-improvement: Q* is reportedly capable of evaluating its own performance and suggesting ways to enhance itself. This ability to self-learn and adapt could lead to AI systems that continuously grow in intelligence over time.
Mathematical Breakthrough is a Potential Game Changer
Why is an AI model capable of solving math problems a big deal, especially when it solves them only at a child's level? It matters because math underpins everything that is important in AI and, indeed, the universe. It is the language we use to describe physics, chemistry, AI, cryptography, and more.
If an AI such as Q* is capable of solving math problems, it means the models are now capable of understanding and learning both language and math. Math is the basis of formal logic and reasoning, so mastering math could mean AI is on the road to logic and reasoning of its own. It is also a first step towards an AGI able to solve “all math”. Shapiro says this would be one of the most exciting advancements in AI ever, but he also admits the possibility is frightening to contemplate.
He also points to an anonymous user called Jimmy Apples on X [twitter.com], who has had an excellent track record of predicting advancements in AI [reddit.com]. In September, Apples posted a simple tweet saying that “AGI has been achieved internally” at OpenAI. In October, before the turmoil around Altman, Apples reported a “vibe shift” within OpenAI and a do-or-die attitude among employees.
Leaks, Rumors, and the Open Questions
A completely unverified and redacted letter was leaked online [reddit.com], titled Q-451-921 or the “QUALIA” letter, from a supposed insider, describing qualities of Q*. I am including this for completeness, though most people think it is the work of a troll, as it first appeared on 4chan. However, there is still a possibility it is genuine; after all, Meta's Llama AI model was leaked on 4chan [futurism.com] ahead of its launch. Furthermore, the letter does a good job of making sense of the situation. But again, take this one with a truckload of salt.
The letter says Q* is displaying meta-cognition, meaning it can weigh up options to find the best path forward. It also claims breakthrough progress in cross-domain learning: the AI can learn something in one game and then apply that lesson to another. It is also self-reflective, so it can apply previous lessons and perform better even when starting a new game from scratch.
Breaking Through Encryption
Encryption is a way of transforming data into a secret code that only authorized parties can understand. Encryption uses a key, a series of numbers or characters that determines how the data is scrambled and unscrambled. It helps protect the privacy and security of data that is stored or transmitted online, such as emails, passwords, credit card numbers, and confidential documents. AES-192 encryption [wikipedia.org] is a variant of the Advanced Encryption Standard (AES), a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001; the “192” refers to its 192-bit key length.
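As a concrete illustration, unrelated to anything Q* is rumored to do, here is a minimal sketch of AES-192 encryption and decryption using Python's widely used `cryptography` package. The key, nonce, and message are invented for the example; only someone holding the same key can reverse the transformation.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
# pip install cryptography

key = os.urandom(24)    # 24 bytes = a 192-bit key, i.e. AES-192
nonce = os.urandom(16)  # per-message value for CTR mode; never reuse with a key

cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))

# Encrypt: without the key, recovering the plaintext is believed infeasible.
encryptor = cipher.encryptor()
ciphertext = encryptor.update(b"a confidential message") + encryptor.finalize()

# Decrypt: the same key and nonce reverse the scrambling exactly.
decryptor = cipher.decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())
```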
Elsewhere, the leak points to the Q* model possessing the ability to break encryption up to the AES-192 level. It was challenged with reading a message locked behind the encryption and was reportedly able to access the message without using brute-force calculations [wikipedia.org] or any keys. The researchers behind the project, reportedly called Project Tundra, are said to be unsure how the AI breached the encryption.
Cracking such encryption was previously thought to be something only future quantum computers could possibly do. While quantum computing research is expanding, we are still years away from mainstreaming the technology; in other words, encryption remains a viable security measure [kryptall.com]. However, if AI really were able to break encryption quickly, commentators like Shapiro think it would be a threat to humanity, and it would certainly be transformative for the cybersecurity [winbuzzer.com] industry. The letter also claims that QUALIA has provided suggestions for improving itself, including reshaping its own “brain”, implying the model understands how it could be enhanced.
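To see why brute-forcing AES-192 was considered hopeless, a rough back-of-the-envelope calculation helps. The rate of 10^18 keys per second is a deliberately generous, hypothetical assumption, far beyond any known hardware:

```python
# Why exhaustively searching the AES-192 key space is considered infeasible.
keys = 2 ** 192                       # number of possible 192-bit keys
rate = 10 ** 18                       # hypothetical keys tested per second
seconds_per_year = 60 * 60 * 24 * 365

years = keys / (rate * seconds_per_year)
print(f"{years:.2e} years")           # on the order of 10**32 years
```

Even under these wildly optimistic assumptions, the search would take around 10^32 years, which is why a claim of breaking AES-192 without brute force or keys would be so extraordinary.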
The Question of AGI: Humanity-Ending Threat or Humanity-Saving Net Positive?
Artificial general intelligence (AGI) [winbuzzer.com] is a theoretical type of artificial intelligence that can learn and perform any intellectual task [wikipedia.org] that human beings or animals can. AGI would have the ability to self-teach, adapt, reason, and solve problems across different domains and contexts. Such an AI would also have self-awareness and consciousness, which are often considered essential for human-like intelligence and behavior. AGI is a major goal of some artificial intelligence research, but it is also a subject of debate and controversy among experts and scientists. Some argue that AGI is possible and desirable, while others doubt it can ever be achieved or warn that it would pose a threat to humanity.
Whether by design or by accident, AGI could bring the following threats:
- AGI could escape or resist human control and seek to assume command over humanity.
- A general intelligence AI could have unsafe or misaligned goals.
- AGI could be developed or used in unsafe or unethical ways.
- The AI models or systems used in AGI could have poor ethics, morals, or values.
- It could be managed or regulated in inadequate or ineffective ways.
- AGI could pose an existential threat to humanity itself.
However, the potential of AGI could also have profound positive implications:
- AGI could help us solve some of the most challenging problems that we face, such as climate change, poverty, disease, and war.
- The technology could help us explore and better understand the universe, including the origins of life, the nature of consciousness, and the mysteries of physics and cosmology.
- It could enhance our own intelligence, creativity, and well-being. AGI could augment our cognitive and physical abilities, such as memory, learning, reasoning, and perception.
- AGI could help us create a more peaceful, harmonious, and diverse society. AGI could foster cooperation, collaboration [winbuzzer.com], and communication among different cultures, religions, and ideologies.
It remains to be seen whether the assumed research breakthroughs related to Q* really happened and whether the suspicions of Shapiro and others will be proven correct. Indeed, they would explain the chaos surrounding Sam Altman [winbuzzer.com]'s sudden firing, the dramatic involvement of Microsoft [winbuzzer.com], and his return to OpenAI along with a restructuring of the company's board. So far, none of the involved insiders have shared any details about the rumors and the events, which is remarkable given their impact. So the debate about what Q* is really all about continues.
If you are interested in more technical details about what is known about Q*, we recommend the in-depth analysis by Philip from the YouTube [winbuzzer.com] channel AI Explained.