With GitHub Models extensions, you can call specific AI models from both Copilot Chat and GitHub CLI. These extensions integrate directly into your development workflow, allowing you to prompt models without context switching.
Using AI models in Copilot Chat
If you have a Copilot subscription, you can work with AI models in Copilot Chat in two different ways:
- Using the GitHub Models Copilot Extension. With this extension, you can ask for model recommendations based on certain criteria and chat with specific models. See "Using the GitHub Models Copilot Extension."
- Using multiple model support in Copilot Chat. With multi-model Copilot Chat, you can choose a specific model to use for a conversation, then prompt Copilot Chat as usual. See "Asking GitHub Copilot questions in GitHub" and "Asking GitHub Copilot questions in your IDE."
Using the GitHub Models Copilot Extension
Note
The GitHub Models Copilot Extension is in public preview and is subject to change.
- Install the GitHub Models Copilot Extension.
  - If you have a Copilot Individual subscription, you can install the extension on your personal account.
  - If you have access to Copilot through a Copilot Business or Copilot Enterprise subscription:
    - An organization owner or enterprise owner needs to enable the Copilot Extensions policy for your organization or enterprise.
    - An organization owner needs to install the extension for your organization.
- Open any implementation of Copilot Chat that supports GitHub Copilot Extensions. For a list of supported Copilot Chat implementations, see "Using extensions to integrate external tools with Copilot Chat."
- In the chat window, type `@models YOUR-PROMPT`, then send your prompt. There are several use cases for the GitHub Models Copilot Extension, including:
  - Recommending a particular model based on context and criteria you provide. For example, you can ask for a low-cost OpenAI model that supports function calling.
  - Executing prompts using a particular model. This is especially useful when you want to use a model that is not currently available in multi-model Copilot Chat.
  - Listing the models currently available through GitHub Models.
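For example, to ask the extension for a model recommendation, you might send a prompt like the one below. The wording is only illustrative; phrase the criteria you care about (cost, provider, capabilities) in your own words.

```text
@models Recommend a low-cost OpenAI model that supports function calling
```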
Using AI models from the command line
Note
The GitHub Models extension for GitHub CLI is in public preview and is subject to change.
You can use the GitHub Models extension for GitHub CLI to prompt AI models from the command line, and even pipe in the output of a command as context.
Prerequisites
To use the GitHub Models CLI extension, you need to have GitHub CLI installed. For installation instructions, see the GitHub CLI repository.
Installing the extension
- If you have not already authenticated to the GitHub CLI, run the following command in your terminal.

  ```shell
  gh auth login
  ```

- To install the GitHub Models extension, run the following command.

  ```shell
  gh extension install https://github.com/github/gh-models
  ```
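As an optional sanity check (not part of the official steps), you can list your installed GitHub CLI extensions and confirm that `gh-models` appears:

```shell
# Optional check: the gh-models extension should appear in this list
gh extension list
```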
Using the extension
To see a list of all available commands, run `gh models`.
There are a few key ways you can use the extension:
- To ask a model multiple questions using a chat experience, run `gh models run`. Select your model from the listed models, then send your prompts.
- To ask a model a single question, run `gh models run MODEL-NAME "QUESTION"` in your terminal. For example, to ask the `gpt-4o` model why the sky is blue, you can run `gh models run gpt-4o "why is the sky blue?"`.
- To provide the output of a command as context when you call a model, you can join a separate command and the call to the model with the pipe character (`|`). For example, to summarize the README file in the current directory using the `gpt-4o` model, you can run `cat README.md | gh models run gpt-4o "summarize this text"`. A combined sketch follows this list.
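Building on the piping example above, the sketch below sends the output of `git diff` to a model and asks for a short summary. The model name and prompt wording are placeholders, not requirements.

```shell
# Illustrative sketch: pipe a diff into a model and ask for a summary.
# Swap gpt-4o and the prompt text for whatever suits your workflow.
git diff | gh models run gpt-4o "summarize these changes in a few bullet points"
```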