GPT-3 and DALL·E
- Greg Brockman, OpenAI’s president, chairman, and co-founder, and Alexandr Wang, founder and CEO of Scale AI, discussed the development of OpenAI, a general-purpose artificial intelligence research and deployment organization.
- Brockman spoke about the similarities between his roles as CTO of Stripe and president of OpenAI, and about applying first-principles thinking, which entails re-examining what has already been done rather than staying bound to traditional ideas.
- After five years of research, OpenAI released its first product in 2020 and has since released the generative models GPT-3 and DALL·E, which have helped accelerate the progress of AI and its relevance to the world.
- GPT-3, though it looked like an overnight success, was the result of a five-year arc that began with the 2017 “Sentiment Neuron” paper, a very novel result at the time.
Strength of GPT-3
- GPT-3 is a Transformer-based AI model, developed by OpenAI, that is powering new breakthroughs in natural language processing.
- GPT-3 showed significant improvements over prior models in tasks like sentiment analysis and code generation.
- Because it is trained on a huge data set, GPT-3 can generalize and perform well on new tasks given only a few examples or instructions in its prompt (see the sketch after this list).
- OpenAI also built models capable of playing competitive video games, another result of a multi-year arc of work.
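As a rough illustration of how a model like GPT-3 can handle a new task from a handful of in-prompt examples (few-shot prompting), here is a minimal sketch; the review texts and the surrounding workflow are illustrative assumptions, not material from the conversation.

```python
# Minimal few-shot prompt construction (illustrative only). A few labeled
# examples are placed directly in the prompt, and the model is asked to
# continue the pattern for a new input -- no task-specific training needed.

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]
new_input = "The plot dragged, but the acting was superb."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {new_input}\nSentiment:"

# The assembled prompt would then be sent to a GPT-3-style completion
# endpoint, which finishes the last line with a label such as "positive".
print(prompt)
```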
Competing in Professional Esports with OpenAI
- OpenAI created software to compete in professional esports and eventually beat the top professionals at the game Dota 2.
- The team worked iteratively and creatively, finding different aspects of the game to focus on while making the model larger and more complex.
- OpenAI was both patient and organized, allocating time and resources so the project wasn’t rushed and the effort wasn’t wasted.
- They also encountered specific problems, such as the final competitor being on vacation, and had to solve them quickly to be ready for competition.
- Ultimately, their hard work paid off, with OpenAI beating the top pros in the world after an extra day of training.
Greg Brockman’s Optimism for AI Capability
- Greg Brockman was notably optimistic and confident about the future capabilities of AI algorithms back in 2016–2017, even though the algorithms were still relatively weak at the time.
- His confidence in the future of the technology came from the fact that neural networks had the right “form factor”: they could absorb data and compute, and learn efficiently.
- The field was reinvigorated in 2012 with the publication of AlexNet, a neural net that “crushed” the ImageNet image-classification task, leading to further optimism about what the approach could achieve.
Exponential Growth in AI
- AI technology has grown rapidly and consistently over the past few years, with many long-established research approaches being supplanted by AI models.
- This exponential growth has shown up in the consistent results at OpenAI, where bets repeatedly paid off as time and effort were invested in them.
- This exponential growth has been achieved by investing resources into building larger models, using more data, and developing smarter algorithms, as the sketch below illustrates.
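One way to picture how steadily invested resources translate into capability is a simple power-law relationship between training compute and loss. The sketch below is a toy model with made-up constants, assumed purely for illustration; it is not OpenAI’s published scaling-law fit.

```python
# Toy scaling-law model: loss falls smoothly as training compute grows.
# The constant and exponent are illustrative assumptions, not published
# coefficients.

def loss_from_compute(compute_petaflop_days: float,
                      constant: float = 2.5,
                      exponent: float = 0.05) -> float:
    """Hypothetical power law: loss = constant * compute^(-exponent)."""
    return constant * compute_petaflop_days ** (-exponent)

if __name__ == "__main__":
    for compute in (1, 10, 100, 1_000, 10_000):
        print(f"{compute:>6} petaflop/s-days -> loss ~ {loss_from_compute(compute):.3f}")
```

The point of the toy model is only that each order of magnitude of compute buys a predictable, smooth improvement rather than a sudden jump, which is why steadily scaling models, data, and algorithms keeps paying off.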
AI Doubts
- Even as the technology grows, doubts still arise from time to time, such as when OpenAI made mistakes in its understanding of AI scaling laws.
- However, these doubts can be seen as opportunities for progress, since the mistakes deepen scientific understanding and open the door to new conclusions that build on the existing research.
Future of AI
- The future of AI is both exciting and full of potential change.
- OpenAI’s mission is to help realize this potential in a positive way, carefully considering the implications and possibilities of AI.
Creation of Value with AI
- AI is now being seen as commercially useful, such as with the launch of GPT-3.
- Early customers are already experiencing success with these models, with one customer raising funding at a $1.5B valuation.
- AI models have the potential to create economic value for people, and they have already become an essential part of several distinct companies.
- AI makes new kinds of creative work possible, such as creating an image without having to draw it, or taking a picture that exists only in one’s head and turning it into a physical 3D sculpture.
Move Towards AGI (Artificial General Intelligence)
- AGI has been on OpenAI’s roadmap for a very long time.
- Over the past decade, neural networks have grown in both the amount of compute used to train them and the number of landmark results they have produced.
- A retrospective study looked back at previous results to show how far AI has come.
Moore’s Law and AI
- Moore’s Law states that the number of transistors on a chip tends to double roughly every two years (see the sketch after this list).
- This observation has held broadly across the tech world and became a popular benchmark for the industry’s rate of progress.
- AI is governed by a similar law, in that the better the algorithms and data, the more capable AI models become.
- The cost factor has changed since 2012, as greater amounts of money are being poured into the development of massive supercomputers.
- Nevertheless, the underlying curve of increased computer performance remains largely the same.
- Various specifics, such as which model to work on (e.g. GPT-3) or which data to use, might change, but the pattern of building better models with more compute holds true.
- With continuous progress, AI has the potential to solve harder challenges that were otherwise impossible.
- The development of AI is not limited by Moore’s Law alone; it is driven by the combination of better algorithms, more data, scaling, and alignment science.
- AI’s progress is not expected to slow anytime soon, as there are always ways to make advancements in the field.
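To make the comparison concrete, the sketch below contrasts Moore’s Law-style doubling every two years with the much faster growth OpenAI’s “AI and Compute” analysis reported for the compute used in the largest training runs (a doubling time of roughly 3.4 months between 2012 and 2018); that figure is an outside reference, assumed here for illustration rather than taken from the conversation.

```python
# Compare two exponential regimes: transistor counts doubling roughly every
# 24 months (Moore's Law) versus AI training compute doubling roughly every
# 3.4 months (the figure from OpenAI's "AI and Compute" analysis, assumed
# here for illustration).

def growth_factor(years: float, doubling_time_months: float) -> float:
    """How many times a quantity multiplies over `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_time_months)

if __name__ == "__main__":
    for years in (1, 2, 4, 6):
        moore = growth_factor(years, doubling_time_months=24)
        ai = growth_factor(years, doubling_time_months=3.4)
        print(f"{years} yr: transistors x{moore:.1f} vs. largest AI training runs x{ai:,.0f}")
```

Over six years, the two-year doubling gives roughly an 8x increase, while the 3.4-month doubling gives a factor in the millions, which is why the conversation treats AI progress as something faster than Moore’s Law rather than bounded by it.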
Fear and Technology
- It is common to feel fear about advanced AI technology, as evidenced by the popular view of AI ten years ago, which centered on Terminator-style scenarios.
- It is important to be aware of the potential of both positive and negative outcomes of AI technology.
- One of the main concerns with AI is that it can be extremely powerful and can be used for good and for bad.
- It is important to recognize the potential of advanced technology, but to also not be too optimistic in thinking that everything will work itself out.
Super-Powerful Technologies Require Careful Navigation
- New technologies have the potential to be the best thing we’ve ever created, and help us become the best versions of ourselves, but they must be carefully navigated.
- Because today’s powerful systems still carry plenty of potential for misuse, bias, and representation problems, their danger lies more in influencing minds than in taking direct action in the world.
- Even a fairly simple system, such as a code-writing system, can execute commands and act directly in the world. It is therefore vital that systems are aligned with our values, are not buggy, and do not write viruses.
Balancing Supercomputers with Open-Source Models
- The increasing number of higher-performance supercomputers can lead to a game-theoretic situation in which people compete to build the most powerful computers.
- However, even if the most powerful systems are in the hands of only a few, there is still a lot of value in the massive number of applications people will create from open-source models.
- Therefore, the AI technology of the future will be everywhere, but this should be balanced with careful consideration for the most capable systems.
Empowering Everyone with AI
- Greg believes that the goal of OpenAI is to empower everyone to go through the AI transition.
- The picture they have of their goal has changed as the technology has unfolded, and they are starting to get a sense of where it can go.
- It is exciting to see the energy of all the builders, indicating that people are starting to realize that AI can really work.
- Greg believes that it is time to build.