Applications of GPT-3
So we all know about the Generative Pre-trained Transformer 3, better known as GPT-3: the language model with 175 billion parameters, the successor of GPT-2 with its 1.5 billion parameters. GPT-3 was released in June 2020 by OpenAI, a San Francisco-based artificial intelligence research laboratory co-founded by Elon Musk and Sam Altman, among others.
Prior to the release of GPT-3, the largest language model was Microsoft’s Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters.
The quality of the text generated by GPT-3 is so high that it is difficult to distinguish from text written by a human, which has both benefits and risks. So now let's look at some applications of GPT-3.
1. Affordance Prediction: GPT-3 can be used in the field of robotics for affordance prediction. For example, if you give a robot an object, an affordance prediction model can work out what the robot can do with that object. A demo was prepared by Siddharth Karamcheti, which can be found here.
2. Question and Answer Generation: GPT-3 can be used to generate questions from given data and even answer those questions. A demo was prepared by McKay Wrigley, which can be found here.
3. Generation of Deep Learning Models: GPT-3 can be used to generate deep learning models from just a description of the dataset. A demo was made by Matt Shumer, which can be found here.
4. Text Generation: GPT-3 can generate text that is often hard to tell apart from human writing. For example, it generated an entire essay from a short prompt, one that many readers found indistinguishable from a human-written essay.
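To make the applications above more concrete, here is a minimal sketch of how a question-and-answer generator might prompt GPT-3. The few-shot prompt format and the `build_qa_prompt` helper are my own illustrative assumptions, not taken from any of the demos mentioned; the actual API call needs an API key and network access, so it is shown only in comments.

```python
# Sketch: few-shot prompting GPT-3 for question generation (illustrative only).

def build_qa_prompt(passage: str) -> str:
    """Assemble a few-shot prompt asking the model to write a question
    (and answer) about the given passage. The example pair below teaches
    the model the expected format."""
    example = (
        "Passage: Water boils at 100 degrees Celsius at sea level.\n"
        "Question: At what temperature does water boil at sea level?\n"
        "Answer: 100 degrees Celsius.\n\n"
    )
    # End with "Question:" so the model continues from there.
    return example + f"Passage: {passage}\nQuestion:"

prompt = build_qa_prompt("GPT-3 has 175 billion parameters.")
print(prompt)

# With an API key set, the prompt could be sent to GPT-3 roughly like this
# (left commented out because it requires the `openai` package and a key):
# import openai
# openai.api_key = "YOUR_KEY"
# response = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=64, temperature=0.7
# )
# print(response.choices[0].text)
```

The key idea is that GPT-3 is not fine-tuned for each task; a single worked example in the prompt is often enough to steer the model into the question-answering format.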
These are just some of the projects created using GPT-3. Now, some of you might think that GPT-3 could hurt developers and coders, but let me tell you, it won't. GPT-3 can generate some boilerplate code, but it can't replace coders, at least not yet.
Do clap if you like it!!