
Configuring high availability replication for a cluster

You can configure a passive replica of your entire GitHub Enterprise Server cluster in a separate location, allowing your cluster to fail over to the redundant nodes.


About high availability replication for clusters

You can configure a cluster deployment of GitHub Enterprise Server for high availability, where an identical set of passive nodes sync with the nodes in your active cluster. If hardware or software failures affect the datacenter with your active cluster, you can manually fail over to the replica nodes and continue processing user requests, minimizing the impact of the outage.

In high availability mode, each active node syncs regularly with a corresponding passive node. The passive node runs in standby and does not serve applications or process user requests.

We recommend configuring high availability as a part of a comprehensive disaster recovery plan for GitHub Enterprise Server. We also recommend performing regular backups. For more information, see "Configuring backups on your appliance."
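
For example, backups are typically taken from a separate backup host on a schedule. A minimal sketch, assuming you use GitHub's backup-utils installed at a hypothetical path of /opt/backup-utils:

    # Hypothetical cron entry on the backup host: take a snapshot every hour
    # and append the output to a log file. Both paths are placeholders.
    0 * * * * /opt/backup-utils/bin/ghe-backup 1>> /opt/backup-utils/backup.log 2>&1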

Prerequisites

Hardware and software

For each existing node in your active cluster, you'll need to provision a second virtual machine with identical hardware resources. For example, if your cluster has 11 nodes and each node has 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage, you must provision 11 new virtual machines that each have 12 vCPUs, 96 GB of RAM, and 750 GB of attached storage.

On each new virtual machine, install the same version of GitHub Enterprise Server that runs on the nodes in your active cluster. You don't need to upload a license or perform any additional configuration. For more information, see "Setting up a GitHub Enterprise Server instance."

Note: The nodes that you plan to use for high availability replication must be standalone GitHub Enterprise Server instances. Don't initialize the passive nodes as a second cluster.

Network

You must assign a static IP address to each new node that you provision, and you must configure a load balancer to accept connections and direct them to the nodes in your cluster's front-end tier.

We don't recommend configuring a firewall between the network with your active cluster and the network with your passive cluster. The latency between the network with the active nodes and the network with the passive nodes must be less than 70 milliseconds. For more information about network connectivity between nodes in the passive cluster, see "Cluster network configuration."
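
You can check the latency requirement from a node in your active cluster before you start replication. A minimal sketch, using a placeholder IP address for one of the new passive nodes:

    # Measure round-trip latency from an active node to a passive node in the
    # secondary datacenter. The average round-trip time should be well under
    # 70 ms. 203.0.113.10 is a placeholder for one of your passive nodes.
    ping -c 10 203.0.113.10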

Creating a high availability replica for a cluster

Assigning active nodes to the primary datacenter

Before you define a secondary datacenter for your passive nodes, ensure that you assign your active nodes to the primary datacenter.

  1. SSH into any node in your cluster. For more information, see "Accessing the administrative shell (SSH)."

  2. Open the cluster configuration file at /data/user/common/cluster.conf in a text editor. For example, you can use Vim.

    sudo vim /data/user/common/cluster.conf
    
  3. Note the name of your cluster's primary datacenter. The [cluster] section at the top of the cluster configuration file defines the primary datacenter's name, using the primary-datacenter key-value pair. By default, the primary datacenter for your cluster is named default.

    [cluster]
      mysql-master = HOSTNAME
      redis-master = HOSTNAME
      primary-datacenter = default

    • Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of primary-datacenter.
  4. The cluster configuration file lists each node under a [cluster "HOSTNAME"] heading. Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as primary-datacenter from step 3 above. For example, if you want to use the default name (default), add the following key-value pair to the section for each node.

    datacenter = default
    

    When you're done, the section for each node in the cluster configuration file should look like the following example. The order of the key-value pairs doesn't matter.

    [cluster "HOSTNAME"]
      datacenter = default
      hostname = HOSTNAME
      ipv4 = IP ADDRESS
      ...
    ...

    Note: If you changed the name of the primary datacenter in step 3, find the consul-datacenter key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter primary, use the following key-value pair for each node.

    consul-datacenter = primary
    
  5. Apply the new configuration. This command can take some time to finish, so we recommend running the command in a terminal multiplexer like screen or tmux.

    ghe-cluster-config-apply
    
  6. After the configuration run finishes, GitHub Enterprise Server displays the following message.

    Finished cluster configuration

After GitHub Enterprise Server returns you to the prompt, you've finished assigning your nodes to the cluster's primary datacenter.
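
Optionally, you can read the assignments back to confirm them. The cluster configuration file uses the Git configuration format, so git config can query it, just as later steps in this guide rely on the same tooling. A minimal sketch:

    # Print the primary datacenter's name from the top-level [cluster] section.
    git config -f /data/user/common/cluster.conf cluster.primary-datacenter

    # Print the datacenter assigned to each node; every value should match
    # the primary datacenter's name (for example, "default").
    git config -f /data/user/common/cluster.conf --get-regexp '\.datacenter$'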

Adding passive nodes to the cluster configuration file

To configure high availability, you must define a corresponding passive node for every active node in your cluster. The following instructions create a new cluster configuration that defines both active and passive nodes. You will:

  • Create a copy of the active cluster configuration file.
  • Edit the copy to define passive nodes that correspond to the active nodes, adding the IP addresses of the new virtual machines that you provisioned.
  • Merge the modified copy of the cluster configuration back into your active configuration.
  • Apply the new configuration to start replication.

For an example configuration, see "Example configuration."

  1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of GitHub Enterprise Server. Note the IPv4 address and hostname for each new cluster node. For more information, see "Prerequisites."

    Note: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.

  2. SSH into any node in your cluster. For more information, see "Accessing the administrative shell (SSH)."

  3. Back up your existing cluster configuration.

    cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
    
  4. Create a copy of your existing cluster configuration file in a temporary location, like /home/admin/cluster-passive.conf. Delete unique key-value pairs for IP addresses (ipv*), UUIDs (uuid), VPN addresses (vpn), and public keys for WireGuard (wireguard-pubkey).

    grep -Ev "(ipv|uuid|vpn|wireguard-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
    
  5. Remove the [cluster] section from the temporary cluster configuration file that you copied in the previous step.

    git config -f ~/cluster-passive.conf --remove-section cluster
    
  6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace SECONDARY with the name you choose.

    sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
  7. Decide on a pattern for the passive nodes' hostnames.

    Warning: Hostnames for passive nodes must be unique and differ from the hostname for the corresponding active node.

  8. Open the temporary cluster configuration file from step 4 in a text editor. For example, you can use Vim.

    sudo vim ~/cluster-passive.conf
  9. In each section within the temporary cluster configuration file, update the node's configuration. The cluster configuration file lists each node under a [cluster "HOSTNAME"] heading.

    • Change the quoted hostname in the section heading and the value for hostname within the section to the passive node's hostname, per the pattern you chose in step 7 above.
    • Add a new key named ipv4, and set the value to the passive node's static IPv4 address.
    • Add a new key-value pair, replica = enabled.
    [cluster "NEW PASSIVE NODE HOSTNAME"]
      ...
      hostname = NEW PASSIVE NODE HOSTNAME
      ipv4 = NEW PASSIVE NODE IPV4 ADDRESS
      replica = enabled
      ...
    ...
  10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.

    cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
  11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace REPLICA MYSQL PRIMARY HOSTNAME and REPLICA REDIS PRIMARY HOSTNAME with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.

    git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA MYSQL PRIMARY HOSTNAME
    git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA REDIS PRIMARY HOSTNAME
  12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.

    git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true

    Warning: Review your cluster configuration file before proceeding. For a quick way to list the relevant values in one pass, see the sketch at the end of this procedure.

    • In the top-level [cluster] section, ensure that the values for mysql-master-replica and redis-master-replica are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
    • In each section for an active node named [cluster "ACTIVE NODE HOSTNAME"], double-check the following key-value pairs.
      • datacenter should match the value of primary-datacenter in the top-level [cluster] section.
      • consul-datacenter should match the value of datacenter, which should be the same as the value for primary-datacenter in the top-level [cluster] section.
    • Ensure that for each active node, the configuration has one corresponding section for one passive node with the same roles. In each section for a passive node, double-check each key-value pair.
      • datacenter should match all other passive nodes.
      • consul-datacenter should match all other passive nodes.
      • hostname should match the hostname in the section heading.
      • ipv4 should match the node's unique, static IPv4 address.
      • replica should be configured as enabled.
    • Take the opportunity to remove sections for offline nodes that are no longer in use.

    To review an example configuration, see "Example configuration."

  13. Initialize the new cluster configuration. This command can take some time to finish, so we recommend running the command in a terminal multiplexer like screen or tmux.

    ghe-cluster-config-init
  14. After the initialization finishes, GitHub Enterprise Server displays the following message.

    Finished cluster initialization
  15. Apply the new configuration. This command can take some time to finish, so we recommend running the command in a terminal multiplexer like screen or tmux.

    ghe-cluster-config-apply
    
  16. After the configuration run finishes, GitHub Enterprise Server displays the following message.

    Finished cluster configuration
  17. Configure a load balancer that will accept connections from users if you fail over to the passive nodes. For more information, see "Cluster network configuration."
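
To supplement the manual review in the warning at step 12, you can list every datacenter- and replica-related value in one pass before you apply the configuration. A hedged sketch, using the same git config reads as the steps above:

    # Print all datacenter, consul-datacenter, replica, mysql-master-replica,
    # and redis-master-replica values so you can eyeball the active/passive pairs.
    git config -f /data/user/common/cluster.conf --get-regexp '(datacenter|replica)$'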

You've finished configuring high availability replication for the nodes in your cluster. Each active node begins replicating configuration and data to its corresponding passive node, and you can direct traffic to the load balancer for the secondary datacenter in the event of a failure. For more information about failing over, see "Initiating a failover to your replica cluster."

Example configuration

The top-level [cluster] configuration should look like the following example.

[cluster]
  mysql-master = HOSTNAME OF ACTIVE MYSQL MASTER
  redis-master = HOSTNAME OF ACTIVE REDIS MASTER
  primary-datacenter = PRIMARY DATACENTER NAME
  mysql-master-replica = HOSTNAME OF PASSIVE MYSQL MASTER
  redis-master-replica = HOSTNAME OF PASSIVE REDIS MASTER
  mysql-auto-failover = true
...

The configuration for an active node in your cluster's storage tier should look like the following example.

...
[cluster "UNIQUE ACTIVE NODE HOSTNAME"]
  datacenter = default
  hostname = UNIQUE ACTIVE NODE HOSTNAME
  ipv4 = IPV4 ADDRESS
  consul-datacenter = default
  consul-server = true
  git-server = true
  pages-server = true
  mysql-server = true
  elasticsearch-server = true
  redis-server = true
  memcache-server = true
  metrics-server = true
  storage-server = true
  vpn = IPV4 ADDRESS SET AUTOMATICALLY
  uuid = UUID SET AUTOMATICALLY
  wireguard-pubkey = PUBLIC KEY SET AUTOMATICALLY
...

The configuration for the corresponding passive node in the storage tier should look like the following example.

  • Note the differences from the corresponding active node: the replica key is set to enabled, the datacenter and consul-datacenter values name the secondary datacenter, and the hostname and ipv4 values identify the passive node itself.
  • GitHub Enterprise Server automatically assigns values for vpn, uuid, and wireguard-pubkey, so don't define these values for the passive nodes you'll initialize.
  • The server roles, defined by *-server keys, match the corresponding active node.
...
[cluster "UNIQUE PASSIVE NODE HOSTNAME"]
  replica = enabled
  ipv4 = IPV4 ADDRESS OF NEW VM WITH IDENTICAL RESOURCES
  datacenter = SECONDARY DATACENTER NAME
  hostname = UNIQUE PASSIVE NODE HOSTNAME
  consul-datacenter = SECONDARY DATACENTER NAME
  consul-server = true
  git-server = true
  pages-server = true
  mysql-server = true
  elasticsearch-server = true
  redis-server = true
  memcache-server = true
  metrics-server = true
  storage-server = true
  vpn = DO NOT DEFINE
  uuid = DO NOT DEFINE
  wireguard-pubkey = DO NOT DEFINE
...

Monitoring replication between active and passive cluster nodes

Initial replication between the active and passive nodes in your cluster takes time. The amount of time depends on the amount of data to replicate and the activity levels for GitHub Enterprise Server.

You can monitor the progress on any node in the cluster, using command-line tools available via the GitHub Enterprise Server administrative shell. For more information about the administrative shell, see "Accessing the administrative shell (SSH)."

  • Monitor replication of databases:

    /usr/local/share/enterprise/ghe-cluster-status-mysql
    
  • Monitor replication of repository and Gist data:

    ghe-spokes status
    
  • Monitor replication of attachment and LFS data:

    ghe-storage replication-status
    
  • Monitor replication of Pages data:

    ghe-dpages replication-status
    

You can use ghe-cluster-status to review the overall health of your cluster. For more information, see "Command-line utilities."
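
If you want to watch the initial sync until it finishes, a simple loop over the utilities above works; the 60-second interval here is an arbitrary choice:

    # Re-run the replication status checks once a minute until you
    # interrupt the loop with Ctrl+C.
    while true; do
      ghe-spokes status
      ghe-storage replication-status
      ghe-dpages replication-status
      sleep 60
    done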

Reconfiguring high availability replication after a failover

After you fail over from the cluster's active nodes to the cluster's passive nodes, you can reconfigure high availability replication in two ways. The method you choose will depend on the reason that you failed over and the state of the original active nodes.

  1. Provision and configure a new set of passive nodes for each of the new active nodes in your secondary datacenter.

  2. Use the old active nodes as the new passive nodes.

The process for reconfiguring high availability is identical to the initial configuration of high availability. For more information, see "Creating a high availability replica for a cluster."

Disabling high availability replication for a cluster

You can stop replication to the passive nodes for a cluster deployment of GitHub Enterprise Server.

  1. SSH into any node in your cluster. For more information, see "Accessing the administrative shell (SSH)."

  2. Open the cluster configuration file at /data/user/common/cluster.conf in a text editor. For example, you can use Vim.

    sudo vim /data/user/common/cluster.conf
    
  3. In the top-level [cluster] section, delete the mysql-auto-failover, redis-master-replica, and mysql-master-replica key-value pairs.

  4. Delete each section for a passive node. For passive nodes, replica is configured as enabled. For a scripted alternative, see the sketch at the end of this section.

  5. Apply the new configuration. This command can take some time to finish, so we recommend running the command in a terminal multiplexer like screen or tmux.

    ghe-cluster-config-apply
    
  6. After the configuration run finishes, GitHub Enterprise Server displays the following message.

    Finished cluster configuration

After GitHub Enterprise Server returns you to the prompt, you've finished disabling high availability replication.
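
On a large cluster, deleting each passive node's section by hand in step 4 can be tedious. As a hedged alternative, the following sketch removes every section whose replica key is set to enabled, using the same git config tooling as the steps above; it assumes hostnames contain no whitespace, and you should still review the file before applying the configuration:

    # Find each node section with "replica = enabled" and remove it.
    # Assumes hostnames contain no whitespace.
    for section in $(git config -f /data/user/common/cluster.conf --get-regexp '\.replica$' |
        awk '$2 == "enabled" { sub(/\.replica$/, "", $1); print $1 }'); do
      git config -f /data/user/common/cluster.conf --remove-section "$section"
    done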
