About code scanning alerts
You can set up code scanning to check the code in a repository using the default CodeQL analysis, a third-party analysis, or multiple types of analysis. When the analysis is complete, the resulting alerts are displayed alongside each other in the security view of the repository. Results from third-party tools or from custom queries may not include all of the properties that you see for alerts detected by GitHub's default CodeQL analysis. For more information, see "Setting up code scanning for a repository."
By default, code scanning analyzes your code periodically on the default branch and during pull requests. For information about managing alerts on a pull request, see "Triaging code scanning alerts in pull requests."
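For example, a CodeQL workflow typically runs on pushes to the default branch, on pull requests that target it, and on a schedule. The following workflow is a minimal sketch, assuming a default branch named `main`, a JavaScript codebase, and an illustrative weekly schedule; adapt these values to your repository.

```yaml
name: "CodeQL"

on:
  push:
    branches: [ main ]          # assumes the default branch is named main
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '30 5 * * 1'        # illustrative weekly scan

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write    # required to upload code scanning results
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: javascript # illustrative; list the languages in your repository
      - uses: github/codeql-action/autobuild@v2
      - uses: github/codeql-action/analyze@v2
```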
About alert details
Each alert highlights a problem with the code and the name of the tool that identified it. You can see the line of code that triggered the alert, as well as properties of the alert, such as the alert severity, security severity, and the nature of the problem. Alerts also tell you when the issue was first introduced. For alerts identified by CodeQL analysis, you will also see information on how to fix the problem.
The status and details on the alert page only reflect the state of the alert on the default branch of the repository, even if the alert exists in other branches. You can see the status of the alert on non-default branches in the Affected branches section on the right-hand side of the alert page. If an alert doesn't exist in the default branch, the status of the alert will display as "in pull request" or "in branch" and will be colored grey.
If you set up code scanning using CodeQL, you can also find data-flow problems in your code. Data-flow analysis finds potential security issues in code, such as using data insecurely, passing dangerous arguments to functions, and leaking sensitive information.
When code scanning reports data-flow alerts, GitHub shows you how data moves through the code. Code scanning allows you to identify the areas of your code that leak sensitive information and that could be the entry point for attacks by malicious users.
About severity levels
Alert severity levels may be Error, Warning, or Note.
If code scanning is enabled as a pull request check, the check will fail if it detects any results with a severity of error. You can specify which severity level of code scanning alerts causes a check failure. For more information, see "Defining the severities causing pull request check failure."
About security severity levels
Code scanning displays security severity levels for alerts that are generated by security queries. Security severity levels can be Critical, High, Medium, or Low.
To calculate the security severity of an alert, we use Common Vulnerability Scoring System (CVSS) data. CVSS is an open framework for communicating the characteristics and severity of software vulnerabilities, and is commonly used by other security products to score alerts. For more information about how severity levels are calculated, see this blog post.
By default, any code scanning results with a security severity of Critical or High will cause a check failure. You can specify which security severity level for code scanning results should cause a check failure. For more information, see "Defining the severities causing pull request check failure."
About analysis origins
You can set up multiple configurations of code analysis on a repository, using different tools and targeting different languages or areas of the code. Each configuration of code scanning is the analysis origin for all the alerts it generates. For example, an alert generated using the default CodeQL analysis with GitHub Actions will have a different analysis origin from an alert generated externally and uploaded via the code scanning API.
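As an illustrative sketch of two analysis origins, the steps below show a CodeQL analysis and a third-party SARIF upload distinguished by the `category` input. The category values and SARIF file path are hypothetical, and in practice each configuration would normally live in its own workflow or job.

```yaml
# Configuration 1: default CodeQL analysis (one analysis origin)
- uses: github/codeql-action/analyze@v2
  with:
    category: "/language:javascript"        # illustrative category value

# Configuration 2: SARIF results from a third-party tool
# (a separate analysis origin for the same repository)
- uses: github/codeql-action/upload-sarif@v2
  with:
    sarif_file: results/third-party.sarif   # hypothetical path to the tool's output
    category: "third-party-tool"            # illustrative category value
```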
If you use multiple configurations to analyze a file, any problems detected by the same query are reported as alerts with multiple analysis origins. If an alert has more than one analysis origin, an icon will appear next to any relevant branch in the Affected branches section on the right-hand side of the alert page. You can hover over the icon to see the name of each analysis origin and the status of the alert for that analysis origin. You can also view the history of when alerts appeared in each analysis origin in the timeline on the alert page. If an alert has only one analysis origin, no information about analysis origins is displayed on the alert page.
Note: Sometimes a code scanning alert displays as fixed for one analysis origin but is still open for a second analysis origin. You can resolve this by re-running the second code scanning configuration to update the alert status for that analysis origin.
About labels for alerts that are not found in application code
GitHub Enterprise Cloud assigns a category label to alerts that are not found in application code. The label relates to the location of the alert.
- Generated: Code generated by the build process
- Test: Test code
- Library: Library or third-party code
- Documentation: Documentation
Code scanning categorizes files by file path. You cannot manually categorize source files.
Here is an example from the code scanning alert list of an alert marked as occurring in library code.
On the alert page, you can see that the file path is marked as library code (Library label).
About experimental alerts
Note: Experimental alerts for code scanning are created using experimental technology in the CodeQL action. This feature is currently available as a beta release for JavaScript code and is subject to change.
In repositories that run code scanning using the CodeQL action, you may see some alerts that are marked as experimental. These are alerts that were found using a machine learning model to extend the capabilities of an existing CodeQL query.
Benefits of using machine learning models to extend queries
Queries that use machine learning models are capable of finding vulnerabilities in code that was written using frameworks and libraries that the original query writer did not include.
Each of the security queries for CodeQL identifies code that's vulnerable to a specific type of attack. Security researchers write the queries and include the most common frameworks and libraries, so each existing query finds vulnerable uses of those common frameworks and libraries. However, developers use many different frameworks and libraries, and a manually maintained query cannot include them all. Consequently, manually maintained queries do not provide coverage for all frameworks and libraries.
CodeQL uses a machine learning model to extend an existing security query to cover a wider range of frameworks and libraries. The machine learning model is trained to detect problems in code it's never seen before. Queries that use the model will find results for frameworks and libraries that are not described in the original query.
Alerts identified using machine learning
Alerts found using a machine learning model are tagged as "Experimental alerts" to show that the technology is under active development. These alerts have a higher rate of false positive results than the queries they are based on. The machine learning model will improve based on user actions such as marking a poor result as a false positive or fixing a good result.
Enabling experimental alerts
The default CodeQL query suites do not include any queries that use machine learning to generate experimental alerts. To run machine learning queries during code scanning, you need to run the additional queries contained in one of the following query suites.
Query suite | Description |
---|---|
`security-extended` | Queries of lower severity and precision than the default queries |
`security-and-quality` | Queries from `security-extended`, plus maintainability and reliability queries |
When you update your workflow to run an additional query suite, the analysis time will increase.
```yaml
- uses: github/codeql-action/init@v2
  with:
    # Run extended queries including queries using machine learning
    queries: security-extended
```
For more information, see "Configuring code scanning."
Disabling experimental alerts
The simplest way to disable queries that use machine learning to generate experimental alerts is to stop running the `security-extended` or `security-and-quality` query suite. In the example above, you would comment out the `queries` line (see the sketch after the list below). If you need to continue to run the `security-extended` or `security-and-quality` suite and the machine learning queries are causing problems, then you can open a ticket with GitHub support with the following details.
- Ticket title: "code scanning: removal from experimental alerts beta"
- Specify details of the repositories or organizations that are affected
- Request an escalation to engineering
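As referenced above, the sketch below shows the earlier `init` step with the `queries` line commented out, which returns the analysis to the default query suite and stops the machine learning queries from running.

```yaml
- uses: github/codeql-action/init@v2
  with:
    # Commenting out the queries line disables the extended suite,
    # so the machine learning queries no longer run
    # queries: security-extended
```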