

About high availability configuration

In a high availability configuration, a fully redundant secondary GitHub Enterprise Server appliance is kept in sync with the primary appliance through replication of all major datastores.

When you configure high availability, one-way, asynchronous replication of all datastores (Git repositories, MySQL, Redis, and Elasticsearch) from the primary to the replica appliance is set up automatically. Most GitHub Enterprise Server configuration settings are also replicated, including the Management Console password. For more information, see "Accessing the management console."

GitHub Enterprise Server supports an active/passive configuration, where the replica appliance runs as a standby with database services running in replication mode but application services stopped.

After replication has been established, the Management Console is no longer accessible on replica appliances. If you navigate to the replica's IP address or hostname on port 8443, you'll see a "Server in replication mode" message, which indicates that the appliance is currently configured as a replica.

Note: GitHub Enterprise Server supports a maximum of 8 high availability replicas, counting both passive and active (geo) replicas.

Targeted failure scenarios

Use a high availability configuration for protection against:

  • Software crashes, whether due to operating system failure or unrecoverable application errors.
  • Hardware failures, including storage hardware, CPU, RAM, network interfaces, etc.
  • Virtualization host system failures, including unplanned and scheduled maintenance events on AWS.
  • A logically or physically severed network, if the failover appliance is on a separate network that is not impacted by the failure.

A high availability configuration is not a good solution for:

  • Scaling-out. While you can distribute traffic geographically using geo-replication, the performance of writes is limited to the speed and availability of the primary appliance. For more information, see "About geo-replication."
  • Backing up your primary appliance. A high availability replica does not replace off-site backups in your disaster recovery plan. Some forms of data corruption or loss may be replicated immediately from the primary to the replica. To ensure safe rollback to a stable past state, you must perform regular backups with historical snapshots (a scheduling sketch follows this list).
  • Zero downtime upgrades. To prevent data loss and split-brain situations in controlled promotion scenarios, place the primary appliance in maintenance mode and wait for all writes to complete before promoting the replica.
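
For example, a minimal sketch of scheduling regular snapshots with GitHub Enterprise Server Backup Utilities. The installation path and the four-hour interval here are assumptions for illustration, not defaults:

# Crontab entry on a separate backup host (never on the appliance itself):
# take a snapshot every four hours with ghe-backup from backup-utils.
0 */4 * * * /opt/backup-utils/bin/ghe-backup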

Network traffic failover strategies

During failover, you must separately configure and manage redirecting network traffic from the primary to the replica.

DNS failover

With DNS failover, use short TTL values in the DNS records that point to the primary GitHub Enterprise Server appliance. We recommend a TTL between 60 seconds and five minutes.

During failover, you must place the primary into maintenance mode and redirect its DNS records to the replica appliance's IP address. The time needed to redirect traffic from primary to replica will depend on the TTL configuration and time required to update the DNS records.
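
For example, a BIND-style zone file entry with a five-minute TTL; the hostname and addresses are placeholders:

; A 300-second TTL lets a failover DNS change take effect within
; about five minutes of updating the record.
github.example.com.    300    IN    A    192.0.2.10    ; primary
; During failover, repoint the record at the replica:
; github.example.com.  300    IN    A    192.0.2.20    ; replica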

If you are using geo-replication, you must configure Geo DNS to direct traffic to the nearest replica. For more information, see "About geo-replication."

Load balancer

A load balancer design uses a network device to direct Git and HTTP traffic to individual GitHub Enterprise Server appliances. You can use a load balancer to prevent clients from reaching the appliance directly for security purposes, or to redirect traffic when needed without DNS record changes. We strongly recommend using a TCP-based load balancer that supports the PROXY protocol. DNS lookups for the GitHub Enterprise Server hostname should resolve to the load balancer. We recommend that you enable subdomain isolation. If subdomain isolation is enabled, an additional wildcard record (*.HOSTNAME) should also resolve to the load balancer. For more information, see "Enabling subdomain isolation."
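
As one possible approach, a minimal HAProxy sketch for TCP passthrough with the PROXY protocol. The addresses are placeholders, and the appliance must separately be configured to expect PROXY protocol connections:

# TCP-mode frontend that forwards HTTPS traffic to the current primary,
# announcing client addresses with the PROXY protocol (send-proxy).
frontend ghes_https
    bind *:443
    mode tcp
    default_backend ghes_primary

backend ghes_primary
    mode tcp
    # Health check against /status; only an appliance serving user
    # traffic answers 200 there.
    option httpchk GET /status
    server primary 192.0.2.10:443 send-proxy check check-ssl verify none
    # During failover, switch the active server to the promoted replica:
    # server replica 192.0.2.20:443 send-proxy check check-ssl verify none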

During failover, you must place the primary appliance into maintenance mode. You can configure the load balancer to automatically detect when the replica has been promoted to primary, or it may require a manual configuration change. You must manually promote the replica to primary before it will respond to user traffic. For more information, see "Using GitHub Enterprise Server with a load balancer."

You can monitor the availability of GitHub Enterprise Server by checking the status code returned for the https://HOSTNAME/status URL. An appliance that can service user traffic will return status code 200 (OK). An appliance may return a 503 (Service Unavailable) for a few reasons, listed below; a sample probe follows the list.

  • The appliance is a passive replica, such as the replica in a two-node high availability configuration.
  • The appliance is in maintenance mode.
  • The appliance is part of a geo-replication configuration, but is an inactive replica.
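
For example, a quick availability probe with curl; HOSTNAME is a placeholder:

$ curl -s -o /dev/null -w '%{http_code}\n' https://HOSTNAME/status
200

A passive replica, an appliance in maintenance mode, or an inactive geo-replica returns 503 instead.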

You can also use the Replication overview dashboard available at:

https://HOSTNAME/setup/replication

Utilities for replication management

To manage replication on GitHub Enterprise Server, connect to the replica appliance over SSH and use the following command line utilities.

ghe-repl-setup

The ghe-repl-setup command puts a GitHub Enterprise Server appliance in replica standby mode:

  • An encrypted WireGuard VPN tunnel is configured for communication between the two appliances.
  • Database services are configured for replication and started.
  • Application services are disabled. Attempts to access the replica appliance over HTTP, Git, or other supported protocols will result in an "appliance in replica mode" maintenance page or error message.

admin@169-254-1-2:~$ ghe-repl-setup 169.254.1.1
Verifying ssh connectivity with 169.254.1.1 ...
Connection check succeeded.
Configuring database replication against primary ...
Success: Replica mode is configured against 169.254.1.1.
To disable replica mode and undo these changes, run `ghe-repl-teardown'.
Run `ghe-repl-start' to start replicating against the newly configured primary.

ghe-repl-start

The ghe-repl-start command turns on active replication of all datastores.

admin@169-254-1-2:~$ ghe-repl-start
Starting MySQL replication ...
Starting Redis replication ...
Starting Elasticsearch replication ...
Starting Pages replication ...
Starting Git replication ...
Success: replication is running for all services.
Use `ghe-repl-status' to monitor replication health and progress.

ghe-repl-status

The ghe-repl-status command returns an OK, WARNING, or CRITICAL status for each datastore replication stream. When any of the replication channels is in a WARNING state, the command exits with code 1. Similarly, when any of the channels is in a CRITICAL state, the command exits with code 2. These exit codes make the command straightforward to script against; see the monitoring sketch after the sample output below.

admin@169-254-1-2:~$ ghe-repl-status
OK: mysql replication in sync
OK: redis replication is in sync
OK: elasticsearch cluster is in sync
OK: git data is in sync (10 repos, 2 wikis, 5 gists)
OK: pages data is in sync
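
Because the exit code distinguishes WARNING from CRITICAL, ghe-repl-status is easy to wire into monitoring. A minimal sketch to run on the replica, for example from cron; the mail addresses are placeholders:

#!/bin/sh
# Check replication health and alert on any degraded channel.
ghe-repl-status -v > /tmp/repl-status.out 2>&1
case $? in
  0) ;;  # all channels OK
  1) mail -s "GHES replication WARNING"  admin@example.com < /tmp/repl-status.out ;;
  2) mail -s "GHES replication CRITICAL" admin@example.com < /tmp/repl-status.out ;;
esac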

The -v and -vv options give details about each datastore's replication state:

$ ghe-repl-status -v
OK: mysql replication in sync
  | IO running: Yes, SQL running: Yes, Delay: 0

OK: redis replication is in sync
  | master_host:169.254.1.1
  | master_port:6379
  | master_link_status:up
  | master_last_io_seconds_ago:3
  | master_sync_in_progress:0

OK: elasticsearch cluster is in sync
  | {
  |   "cluster_name" : "github-enterprise",
  |   "status" : "green",
  |   "timed_out" : false,
  |   "number_of_nodes" : 2,
  |   "number_of_data_nodes" : 2,
  |   "active_primary_shards" : 12,
  |   "active_shards" : 24,
  |   "relocating_shards" : 0,
  |   "initializing_shards" : 0,
  |   "unassigned_shards" : 0
  | }

OK: git data is in sync (366 repos, 31 wikis, 851 gists)
  |                   TOTAL         OK      FAULT    PENDING      DELAY
  | repositories        366        366          0          0        0.0
  |        wikis         31         31          0          0        0.0
  |        gists        851        851          0          0        0.0
  |        total       1248       1248          0          0        0.0

OK: pages data is in sync
  | Pages are in sync

ghe-repl-stop

The ghe-repl-stop command temporarily disables replication for all datastores and stops the replication services. To resume replication, use the ghe-repl-start command.

admin@169-254-1-2:~$ ghe-repl-stop
Stopping Pages replication ...
Stopping Git replication ...
Stopping MySQL replication ...
Stopping Redis replication ...
Stopping Elasticsearch replication ...
Success: replication was stopped for all services.

ghe-repl-promote

The ghe-repl-promote command disables replication and converts the replica appliance to a primary. The appliance is configured with the same settings as the original primary and all services are enabled.

Promoting a replica does not automatically set up replication for the existing appliances. After promoting a replica, you can, if desired, set up replication from the new primary to the existing appliances and the previous primary; see the sketch after the sample output below.

admin@169-254-1-2:~$ ghe-repl-promote
Enabling maintenance mode on the primary to prevent writes ...
Stopping replication ...
  | Stopping Pages replication ...
  | Stopping Git replication ...
  | Stopping MySQL replication ...
  | Stopping Redis replication ...
  | Stopping Elasticsearch replication ...
  | Success: replication was stopped for all services.
Switching out of replica mode ...
  | Success: Replication configuration has been removed.
  | Run `ghe-repl-setup' to re-enable replica mode.
Applying configuration and starting services ...
Success: Replica has been promoted to primary and is now accepting requests.
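
For example, to bring the former primary back into the high availability configuration as a replica of the new primary, you could run the utilities above on it once it is healthy again; the addresses match the placeholders used earlier:

admin@169-254-1-1:~$ ghe-repl-setup 169.254.1.2
admin@169-254-1-1:~$ ghe-repl-start
admin@169-254-1-1:~$ ghe-repl-status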

ghe-repl-teardown

The ghe-repl-teardown command disables replication mode completely, removing the replica configuration.
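
A minimal invocation sketch, following the pattern of the other utilities above (output omitted):

admin@169-254-1-2:~$ ghe-repl-teardown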
