Note
GitHub Copilot Extensions is in public preview and subject to change.
A skill within GitHub Copilot is a tool that the model calls to perform a specific task in response to a user query. A skillset is a collection of these skills (up to five per skillset). GitHub Copilot skillsets provide a streamlined way to extend Copilot's functionality, allowing builders to integrate external services or custom API endpoints into their Copilot workflow. With skillsets, builders can enable Copilot to perform tasks, such as retrieving data or executing actions in third-party services, without needing to manage complex workflows or architecture.
For a quickstart example of a skillset, see the "skillset-example" repository. For information on building a skillset, see "Building Copilot skillsets."
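Concretely, each skill in a skillset is backed by an HTTP endpoint that Copilot calls with JSON arguments and that responds with JSON the platform feeds back to the model. The sketch below is a minimal illustration in Python; the "get-weather" skill, its payload shape, and the handler names are hypothetical, not part of the Copilot API.

```python
import json
from http.server import BaseHTTPRequestHandler

def handle_skill_call(payload: dict) -> dict:
    """Business logic for a hypothetical "get-weather" skill: Copilot POSTs
    JSON arguments, and the returned JSON is fed back to the LLM."""
    city = payload.get("city", "unknown")
    # In a real skill this would call an external weather service.
    return {"city": city, "forecast": "sunny"}

class SkillHandler(BaseHTTPRequestHandler):
    """Minimal HTTP wiring: parse the POSTed arguments, return JSON."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_skill_call(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

Because the platform handles prompting and response generation, this single request/response handler is the whole surface a skillset builder implements.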
How skillsets and agents differ
Skillsets and agents are the two ways to extend Copilot's capabilities and context through the Copilot Extensibility Platform. Both let you integrate external services and APIs into Copilot Chat, but each serves different use cases and offers a different level of control and complexity:
- Skillsets are lightweight and streamlined, designed for developers who need Copilot to perform specific tasks (e.g., data retrieval or simple operations) with minimal setup. They handle routing, prompt crafting, function evaluation, and response generation automatically, making them ideal for quick and straightforward integrations.
- Agents are for complex integrations that need full control over how requests are processed and responses are generated. They let you implement custom logic, integrate with other LLMs and/or the Copilot API, manage conversation context, and handle all aspects of the user interaction. While agents require more engineering and maintenance, they offer maximum flexibility for sophisticated workflows. For more information about agents, see "About Copilot agents."
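To illustrate the extra responsibility an agent takes on, the fragment below sketches server-sent-event streaming, the kind of response layer an agent owns itself and a skillset never touches. The event payload shape is generic SSE and deliberately simplified; it is not the actual Copilot agent protocol.

```python
import json

def sse_stream(chunks):
    """Format response chunks as server-sent events (generic SSE sketch).
    An agent manages this whole response layer itself; with a skillset,
    the platform generates and streams the response for you."""
    for text in chunks:
        yield f"data: {json.dumps({'content': text})}\n\n"
    yield "data: [DONE]\n\n"
```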
The extensibility platform
Skillsets and agents both operate on the GitHub Copilot Extensibility Platform, which manages the flow of user requests and function evaluations. With Copilot skillsets, the platform handles routing, prompt crafting, function calls, and response generation.
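For the platform to route and call a skill, the builder supplies a small amount of metadata: a name, an inference description the model uses to decide when to invoke the skill, an endpoint URL, and a parameters schema. The field names below are illustrative assumptions, not the exact configuration schema GitHub uses.

```python
# Illustrative only: the information the platform needs to route a request
# to a skill and call it. Field names and schema are assumptions, and the
# "lookup-issues" skill and URL are hypothetical.
skill_definition = {
    "name": "lookup-issues",
    "inference_description": (
        "Searches open issues in a project tracker by keyword. "
        "Use when the user asks about bugs or outstanding tasks."
    ),
    "url": "https://example.com/skills/lookup-issues",  # endpoint Copilot POSTs to
    "parameters": {  # JSON Schema describing the arguments the LLM must supply
        "type": "object",
        "properties": {"keyword": {"type": "string"}},
        "required": ["keyword"],
    },
}
```

The inference description does the routing work: it is what the platform compares against the user's intent when deciding which skill to call.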
Workflow overview
The extensibility platform follows a structured workflow to process user requests and generate responses:
1. User request: A user issues a request in the Copilot Chat interface, such as asking for data or executing a specific action.
2. Routing: The request is routed to the appropriate extension. For skillsets, this means the platform agent identifies and invokes the corresponding skillset based on the user's intent. Each skill's inference description helps the platform determine which skill to call.
3. Dynamic prompt crafting: GitHub Copilot generates a prompt using:
   - The user's query.
   - Relevant thread history.
   - Available functions within the skillset.
   - Results from any prior function calls.
4. LLM completion: The large language model (LLM) processes the prompt and determines:
   - Whether the user's intent matches a skillset function.
   - Which function(s) to call and with what arguments.
   - If required, the LLM may send additional function calls to gather more context.
5. Function evaluation: The extension invokes the selected function(s), which may involve:
   - Gathering relevant context, such as repository or user metadata.
   - Making an API call to an external service to retrieve data or execute an action.
6. Response generation: The platform iteratively refines the output, looping through prompt crafting, LLM completion, and function evaluation as needed. Once the process is complete, Copilot streams a final response back to the user in the chat interface.
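The prompt crafting, LLM completion, and function evaluation loop described above can be sketched as follows. Here `llm_complete()` and the `skills` registry are hypothetical stand-ins for platform internals that skillset builders never implement themselves; only the invoked skill endpoints are theirs.

```python
# Sketch of the iterative workflow, under stated assumptions: llm_complete()
# and the `skills` registry are stand-ins for the platform's internals.
def run_turn(user_query, skills, llm_complete, max_rounds=5):
    history = [{"role": "user", "content": user_query}]
    for _ in range(max_rounds):
        # Dynamic prompt crafting + LLM completion: the model sees the query,
        # thread history, available functions, and prior function results.
        reply = llm_complete(history, tools=list(skills))
        if reply["tool_call"] is None:
            return reply["content"]  # final response, streamed to the chat UI
        # Function evaluation: invoke the selected skill with its arguments.
        name, args = reply["tool_call"]
        result = skills[name](args)
        history.append({"role": "tool", "name": name, "content": result})
    return None  # round limit reached without a final answer
```

The loop structure is the point: the platform keeps feeding function results back into prompt crafting until the model produces a final answer instead of another function call.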