
Prototyping with AI models

Find and experiment with AI models for free.

If you want to develop a generative AI application, you can use GitHub Models to find and experiment with AI models for free. Once you are ready to bring your application to production, you can switch to a token from a paid Azure account. See the Azure AI documentation.

See also "Responsible use of GitHub Models."

Finding AI models

To find AI models, go to GitHub Marketplace, then click Models in the sidebar.

To view details about a model, click on the model's name.

Note

Access to OpenAI's o1 models is in public preview and subject to change.

Experimenting with AI models in the playground

Note

The playground is in public preview and subject to change. To request access, join the waitlist.

GitHub Marketplace provides a free playground where you can adjust model parameters and submit prompts to see how the model responds.

To open the playground, go to GitHub Marketplace, then click Models in the sidebar. Click on a model's name, then click Playground.

To adjust parameters for the model, select the Parameters tab in the sidebar. To see code that corresponds to the parameters that you selected, switch from the Chat tab to the Code tab.

You can also compare two models at once. In the menu bar for your model, click Compare, then select a model for comparison using the Model: MODEL-NAME dropdown menu in the second chat window. When you type a prompt in either chat window, the prompt will automatically be mirrored to the other window, and you can compare the responses from each model.

The playground is rate limited. See Rate limits below.

Experimenting with AI models using the API

Note

The free API usage is in public preview and subject to change. To request access, join the waitlist.

GitHub provides free API usage so that you can experiment with AI models in your own application.

To learn how to use a model in your application, go to GitHub Marketplace, then click Models in the sidebar. Click on a model's name, then click Playground. In the menu bar at the top of your chat window, click Code.

The steps to use each model are similar. In general, you will need to:

  1. Optionally, use the language dropdown to select the programming language.

  2. Optionally, use the SDK dropdown to select which SDK to use.

    All models can be used with the Azure AI Inference SDK, and some models support additional SDKs. If you want to easily switch between models, you should select "Azure AI Inference SDK". If you selected "REST" as the language, you won't use an SDK. Instead, you will use the API endpoint directly.

  3. Either open a codespace, or set up your local environment:

    • To run in a codespace, click Run codespace, then click Create new codespace.
    • To run locally:
      • Create a GitHub personal access token. The token should not have any scopes or permissions. See "Managing your personal access tokens."
      • Save your token as an environment variable.
      • Install the dependencies for the SDK, if required.
  4. Use the example code to make a request to the model.
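The Code tab shows ready-made samples for each language and SDK. As a rough sketch of what the "REST" option boils down to, the following uses only the Python standard library to send a chat-completions request authenticated with the personal access token from step 3. The endpoint URL and model name below are illustrative assumptions; copy the exact values from the Code tab for your chosen model.

```python
import json
import os
import urllib.request

# Endpoint and model name are illustrative; take the exact values from the
# Code tab for the model you selected in GitHub Marketplace.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Assemble a chat-completions POST request, authenticated with the
    personal access token stored in the GITHUB_TOKEN environment variable."""
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,  # presence of data makes this a POST
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
    )

if __name__ == "__main__":
    with urllib.request.urlopen(build_request("What is the capital of France?")) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

If you selected an SDK instead of "REST", the sample on the Code tab wraps this same request shape in the SDK's client object, so the message structure and token handling carry over directly.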

The free API usage is rate limited. See Rate limits below.

Saving and sharing your playground experiments

You can save and share your progress in the playground with presets. Presets save:

  • Your current state
  • Your parameters
  • Your chat history (optional)

To create a preset for your current context, select the Preset: PRESET-NAME dropdown menu, then click Create new preset. You need to name your preset, and you can also choose to provide a preset description, include your chat history, and allow your preset to be shared.

There are two ways to load a preset:

  • Select the Preset: PRESET-NAME dropdown menu, then click the preset you want to load.
  • Open a shared preset URL.

After you load a preset, you can edit, share, or delete the preset:

  • To edit the preset, change the parameters and prompt the model. Once you are satisfied with your changes, select the Preset: PRESET-NAME dropdown menu, then click Edit preset and save your updates.
  • To share the preset, select the Preset: PRESET-NAME dropdown menu, then click Share preset to get a shareable URL.
  • To delete the preset, select the Preset: PRESET-NAME dropdown menu, then click Delete preset and confirm the deletion.

Experimenting with AI models in Visual Studio Code

Note

The AI Toolkit extension for Visual Studio Code is in public preview and is subject to change.

If you prefer to experiment with AI models in your IDE, you can install the AI Toolkit extension for Visual Studio Code, then test models with adjustable parameters and context.

  1. In Visual Studio Code, install the pre-release version of the AI Toolkit for Visual Studio Code.

  2. To open the extension, click the AI Toolkit icon in the activity bar.

  3. Authorize the AI Toolkit to connect to your GitHub account.

  4. In the "My models" section of the AI Toolkit panel, click Open Model Catalog, then find a model to experiment with.

    • To use a model hosted remotely through GitHub Models, on the model card, click Try in playground.
    • To download and use a model locally, on the model card, click Download. Once the download is complete, on the same model card, click Load in playground.
  5. In the sidebar, provide any context instructions and inference parameters for the model, then send a prompt.

Going to production

The rate limits for the playground and free API usage are intended to help you experiment with models and develop your AI application. Once you are ready to bring your application to production, you can use a token from a paid Azure account instead of your GitHub personal access token. You don't need to change anything else in your code.

For more information, see the Azure AI documentation.
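One way to keep that switch down to a single configuration change is to resolve the endpoint and credential from the environment, so prototype code falls back to the free GitHub Models endpoint and production deployments supply Azure values instead. The Azure variable names below are hypothetical, and the GitHub Models endpoint URL should be confirmed against the samples on the Code tab:

```python
import os

# Free prototyping endpoint used by GitHub Models; confirm against the
# Code tab samples for your model.
GITHUB_MODELS_ENDPOINT = "https://models.inference.ai.azure.com"

def resolve_connection(env: dict) -> tuple[str, str]:
    """Pick (endpoint, credential): prefer a paid Azure deployment when its
    settings are present, otherwise fall back to the free GitHub Models
    endpoint and the GitHub personal access token.
    AZURE_INFERENCE_ENDPOINT / AZURE_INFERENCE_KEY are hypothetical names."""
    if "AZURE_INFERENCE_ENDPOINT" in env and "AZURE_INFERENCE_KEY" in env:
        return env["AZURE_INFERENCE_ENDPOINT"], env["AZURE_INFERENCE_KEY"]
    return GITHUB_MODELS_ENDPOINT, env["GITHUB_TOKEN"]
```

The rest of the request code stays identical either way, which is what makes the move to production a configuration change rather than a code change.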

Rate limits

The playground and free API usage are rate limited by requests per minute, requests per day, tokens per request, and concurrent requests. If you get rate limited, you will need to wait for the rate limit that you hit to reset before you can make more requests.

Low, high, and embedding models have different rate limits. To see which type of model you are using, refer to the model's information in GitHub Marketplace.

Rate limit tier          Rate limits              Free and Copilot Individual   Copilot Business     Copilot Enterprise
Low                      Requests per minute      15                            15                   20
                         Requests per day         150                           300                  450
                         Tokens per request       8000 in, 4000 out             8000 in, 4000 out    8000 in, 8000 out
                         Concurrent requests      5                             5                    8
High                     Requests per minute      10                            10                   15
                         Requests per day         50                            100                  150
                         Tokens per request       8000 in, 4000 out             8000 in, 4000 out    16000 in, 8000 out
                         Concurrent requests      2                             2                    4
Embedding                Requests per minute      15                            15                   20
                         Requests per day         150                           300                  450
                         Tokens per request       64000                         64000                64000
                         Concurrent requests      5                             5                    8
Azure OpenAI o1-preview  Requests per minute      1                             2                    2
                         Requests per day         8                             10                   12
                         Tokens per request       4000 in, 4000 out             4000 in, 4000 out    4000 in, 8000 out
                         Concurrent requests      1                             1                    1
Azure OpenAI o1-mini     Requests per minute      2                             3                    3
                         Requests per day         12                            15                   20
                         Tokens per request       4000 in, 4000 out             4000 in, 4000 out    4000 in, 4000 out
                         Concurrent requests      1                             1                    1

These limits are subject to change without notice.
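Because the limits can change without notice, application code should treat a rate-limit response as a signal to back off and retry rather than hard-coding the numbers above. A minimal exponential-backoff sketch (the `RateLimitError` class here is a stand-in for however your HTTP client surfaces a 429 response):

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limited) response."""

def call_with_backoff(make_request, max_retries=5, sleep=time.sleep):
    """Retry a rate-limited call with exponential backoff (1s, 2s, 4s, ...).
    `make_request` is any zero-argument callable that raises RateLimitError
    when the service rejects the request for rate limiting."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep(2 ** attempt)
```

If the service includes a Retry-After header on 429 responses, honoring that value instead of a fixed schedule is the better choice; the sketch above is the fallback when no such hint is available.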

Leaving feedback

To leave feedback about GitHub Models, start a new discussion or comment on an existing discussion in the GitHub Community.