DALL·E: Creating Images from Text
"DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set …
The best kept secret about OpenAI’s GPT-3
When the first demos of GPT-3-generated content started to circulate, they showed the potential of a very capable language model to generate text and do impressive things …
GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about
Since OpenAI first described its new AI language-generating system called GPT-3 …
Introduction — Spinning Up documentation
Welcome to Spinning Up in Deep RL! This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL). …
PettingZoo-Team/PettingZoo: Multi-Agent Reinforcement Learning Environments
PettingZoo is a Python library for conducting research in multi-agent reinforcement learning. It's akin to a multi-agent version of OpenAI's Gym library. We model env …
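Since the entry above describes PettingZoo as a multi-agent analogue of Gym, here is a minimal sketch of its agent-environment-cycle (AEC) loop, assuming the rock-paper-scissors environment from pettingzoo.classic is installed; the module version suffix and the exact signature of env.last() vary between releases.

```python
# Minimal sketch of PettingZoo's AEC API (assumed environment: rps_v2).
from pettingzoo.classic import rps_v2

env = rps_v2.env()
env.reset()

# Agents take turns: read the last observation/reward, then act.
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None  # finished agents must step with None
    else:
        action = env.action_space(agent).sample()  # random policy for illustration
    env.step(action)

env.close()
```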
OpenAI debuts gigantic GPT-3 language model with 175 billion parameters
OpenAI's GPT-3 language model can generate convincing news articles and achieve state-of-the-art res …
OpenAI Microscope
We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organisms” which are often studied in interpretability. Source: openai.com/blo …
GPT-2: 1.5B Release
As the final model release of GPT-2’s staged release, we’re releasing the largest version (1.5B parameters) of GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models. …