Responsible use of GitHub Copilot in Windows Terminal

Learn how to use GitHub Copilot responsibly by understanding its purposes, capabilities, and limitations.

Who can use this feature?

If you have a GitHub Copilot Individual subscription, you have access to GitHub Copilot in Windows Terminal.

Owners of organizations or enterprises with a GitHub Copilot Business or GitHub Copilot Enterprise subscription can decide whether to grant access to GitHub Copilot in Windows Terminal for users in their organization or enterprise under the GitHub Copilot in the CLI policy.

About GitHub Copilot in Windows Terminal

GitHub Copilot in the Terminal Chat interface allows you to ask questions about the command line. You can ask GitHub Copilot to provide either command suggestions or explanations of given commands.

The primary supported language for GitHub Copilot is English.

GitHub Copilot works by using a combination of natural language processing and machine learning to understand your question and provide you with an answer. This process can be broken down into a number of steps.

Input processing

The input prompt from the user is pre-processed by Terminal Chat, combined with contextual information (the name of the active shell and the chat history), and sent to a GitHub service connected to a large language model, which then generates a response based on the context and prompt. User input can take the form of natural language prompts or questions. The system is only intended to respond to command line-related questions. For more information, see "Terminal Chat."
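The actual request format is internal to Terminal Chat and not documented publicly, but conceptually the prompt and its context travel together. The sketch below is purely illustrative: the field names (`prompt`, `activeShell`, `chatHistory`) are assumptions, not the real wire format.

```shell
# Illustrative only: the real request format is internal to Terminal Chat.
# Conceptually, your question is bundled with contextual information
# (active shell, chat history) before being sent to the GitHub service.
cat > request.json <<'EOF'
{
  "prompt": "How do I list hidden files?",
  "context": {
    "activeShell": "pwsh",
    "chatHistory": ["How do I change the current directory?"]
  }
}
EOF
cat request.json
```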

Language model analysis

The input prompt is then passed through the language model, which is a neural network that has been trained on a large body of text data. The language model analyzes the input prompt to find the command or command explanation most relevant to your query.

Response generation

The language model generates a response based on its analysis of the input prompt, in the form of either a suggested command or an explanation of the command you asked about. To run a suggested command, click it to insert it into your command line. The command never runs automatically; you must run it yourself.

Output formatting

The response generated by GitHub Copilot is formatted and presented to you. Terminal Chat and GitHub Copilot use syntax highlighting, indentation, and other formatting features to add clarity to the generated response.

GitHub Copilot is intended to provide you with the most relevant answer to your question. However, it may not always provide the answer you are looking for. Users of GitHub Copilot are responsible for reviewing and validating responses generated by the system to ensure they are accurate and appropriate.

Use cases for GitHub Copilot in Windows Terminal

GitHub Copilot in Terminal Chat can help you by providing either command suggestions or explanations of given commands.

Find the right command to perform a task

GitHub Copilot aims to suggest commands that help you perform the tasks you’re trying to complete. If the result isn’t quite what you’re looking for, you can keep revising your question until the returned command meets your expectations. Once you’ve generated the right command for your task, you can insert it into your command line and run it wherever you need.
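As a hypothetical example of this refinement loop, suppose you ask for a command to find large files and then narrow the request. The Copilot responses shown in comments are paraphrased, not actual output; the suggested `find` invocations are the kind of command you might receive.

```shell
# Sample data so the suggested commands have something to match:
mkdir -p demo
dd if=/dev/zero of=demo/big.log bs=1024 count=2048 2>/dev/null  # 2 MiB file
touch demo/small.log

# You ask: "find files larger than 1 MB in the demo directory"
# A first suggestion might be:
find demo -type f -size +1M

# You refine: "only .log files, and show their sizes"
# A revised suggestion might be:
find demo -type f -name '*.log' -size +1M -exec ls -l {} +
```

Each suggestion is inserted into your command line only when you click it, so you can inspect and adjust it before running.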

Explain an unfamiliar command

GitHub Copilot can help explain a command that you asked about by generating a natural language description of the command's functionality and purpose. This can be useful if you want to understand the command's behavior for the specific example provided without having to read or search through the command's documentation. The explanation can include information such as the command's input and output parameters and examples of how it could be used.
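For instance, you might paste an unfamiliar archive command into Terminal Chat and ask what it does. The flag-by-flag breakdown in the comments below mirrors the style of explanation you might receive; the wording is illustrative, not a verbatim Copilot response.

```shell
# Set up a directory to archive:
mkdir -p project && echo 'hello' > project/readme.txt

# A command you might ask Terminal Chat to explain:
tar -czf backup.tar.gz project

# A Copilot-style explanation breaks it down flag by flag:
#   -c  create a new archive
#   -z  compress the archive with gzip
#   -f  write the archive to the file named next (backup.tar.gz)

# Verify the result by listing the archive contents without extracting:
tar -tzf backup.tar.gz
```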

By generating explanations, GitHub Copilot may help you to understand the command better, leading to enhanced learning, improved productivity, and less context switching. However, it's important to note that the generated explanations may not always be accurate or complete, so you'll need to review, and occasionally correct, its output. You remain responsible for ensuring the accuracy and appropriateness of the commands you run in the command line.

Improving GitHub Copilot in Windows Terminal

To enhance the experience and address some of the limitations of GitHub Copilot, there are various measures that you can adopt. For more information about the limitations, see "Limitations of GitHub Copilot."

Use GitHub Copilot as a tool, not a replacement

While GitHub Copilot can be a powerful tool for enhancing understanding of commands and the command line, it is important to use it as a tool rather than a replacement for human judgment. You should always review and verify the commands generated by GitHub Copilot to ensure that they meet your requirements and are free of errors or security concerns.

Provide feedback

If you encounter any issues or limitations with GitHub Copilot in Windows Terminal, we recommend that you provide feedback by opening an issue in the Windows Terminal repository. This can help the developers to improve the tool and address any concerns or limitations.

Limitations of GitHub Copilot in Windows Terminal

Depending on factors such as your operating system and input data, you may encounter different levels of accuracy when using GitHub Copilot in the terminal. The following information is designed to help you understand system limitations and key concepts about performance as they apply to GitHub Copilot.

Limited scope

GitHub Copilot operates within defined boundaries and might struggle with intricate commands, less common ones, or more recently developed tools. The quality of suggestions it provides for each language can be influenced by the availability and diversity of training data. For instance, inquiries about well-documented commands and tools like Git may yield more accurate responses compared to questions about more obscure command line tools.

Potential biases and errors

GitHub Copilot's training data is sourced from existing online sources. It’s important to note that these sources may include biases and errors of the individuals who contributed to the training data. GitHub Copilot may inadvertently perpetuate these biases and errors. Additionally, GitHub Copilot might perform differently depending on the scripting languages or scripting styles, potentially resulting in suboptimal or incomplete command suggestions or explanations.

Inaccurate responses

GitHub Copilot may generate seemingly valid but syntactically or semantically incorrect commands. To avoid issues, always carefully review and verify suggestions, especially for critical or destructive tasks such as deleting content. Ensure generated commands align with best practices and fit your workflow.

Risk management and user accountability in command execution

The functionality to ask GitHub Copilot for commands to execute calls for additional caution, particularly because some suggested commands can be destructive. You may encounter commands for file deletion or hard drive formatting, which can cause problems if used incorrectly. While such commands may be necessary in certain scenarios, you need to be careful when accepting and running them.
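One practical safeguard is to preview exactly what a destructive suggestion would affect before running it. The sketch below assumes a hypothetical cleanup suggestion like `rm -rf workdir/build`; the directory names are invented for illustration.

```shell
# Simulate a build directory that a suggested cleanup command would remove:
mkdir -p workdir/build
touch workdir/build/artifact.o workdir/build/app.bin

# Before accepting a destructive suggestion such as "rm -rf workdir/build",
# preview exactly what it would delete:
find workdir/build -type f

# Only after reviewing that list, run the deletion yourself:
rm -rf workdir/build
```

The same habit applies to any command that formats, overwrites, or deletes: enumerate the targets first, then execute deliberately.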

Additionally, you are ultimately responsible for the commands executed by GitHub Copilot. It is entirely your decision whether to use commands generated by GitHub Copilot. Despite the presence of fail-safes and safety mechanisms, you must understand that executing commands carries inherent risks. GitHub Copilot provides a powerful tool set, but you should approach its recommendations with caution and ensure that commands align with your intentions and requirements.

Inaccurate responses to non-coding topics

GitHub Copilot in Windows Terminal is not designed to answer questions beyond the scope of command line-related tasks. As a result, its responses might not consistently offer accuracy or assistance when confronted with questions unrelated to coding or general command line use. When you inquire about non-coding topics, GitHub Copilot may express its inability to provide a meaningful response.

Differing performance based on natural language

GitHub Copilot has been trained on natural language content written predominantly in English. As a result, you may notice differing performance when providing GitHub Copilot with natural language input prompts in languages other than English.

Further reading