Note
Gemini 2.0 Flash is in public preview and subject to change. The GitHub Pre-release License Terms apply to your use of this product.
About Gemini 2.0 Flash in GitHub Copilot
Gemini 2.0 Flash is a large language model (LLM) that you can use as an alternative to the default model used by Copilot Chat. Gemini 2.0 Flash is a responsive LLM that can empower you to build apps faster and more easily, so you can focus on great experiences for your users. For information about the capabilities of Gemini 2.0 Flash, see the Google for developers blog and the Google Cloud documentation. For details of Google's data handling policy, see Generative AI and data governance on the Google website.
Gemini 2.0 Flash is currently available in:
- Copilot Chat in Visual Studio Code
- Copilot Chat in Visual Studio 2022 version 17.12 or later
- Immersive mode in Copilot Chat in GitHub
GitHub Copilot uses Gemini 2.0 Flash hosted on Google Cloud Platform (GCP). When you use Gemini 2.0 Flash, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your prompts, or its responses, as data to train its models.
When using Gemini 2.0 Flash, input prompts and output completions continue to run through GitHub Copilot's content filters for matching public code (when that filter is enabled), as well as the filters for harmful, offensive, or off-topic content.
Configuring access
You must enable access to Gemini 2.0 Flash before you can use the model.
Setup for organization and enterprise use
As an enterprise or organization owner, you can enable or disable Gemini 2.0 Flash for everyone who has been assigned a Copilot Enterprise or Copilot Business seat through your enterprise or organization. See Managing Copilot policies in your organization and Managing policies and features for Copilot in your enterprise.
Using Gemini 2.0 Flash
For details on how to change the model that Copilot Chat uses, see Changing the AI model for Copilot Chat.