
Advanced Cluster Management based installation

Advanced Cluster Management (ACM) is another method the dci-openshift-agent can use to install OpenShift clusters. If you are curious about it, please read Red Hat Advanced Cluster Management for Kubernetes.

This document focuses on how ACM can be used to install an OpenShift cluster through the DCI Agent.


Supported deployments

We are constantly adding new ways to deploy OCP. Currently, the agent supports:

  • SNO
  • Hypershift (experimental)

Requirements

  • An installed OCP cluster configured with the ACM operator and its dependencies. A default storage class is mandatory to save information about the clusters managed by ACM. This will act as the Hub Cluster.
  • A kubeconfig file to interact with the Hub Cluster. There's no need for a provisioning node or a dedicated jumphost when using ACM.
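Before launching any ACM job, it can be worth confirming that the Hub Cluster meets these requirements. A minimal sanity check, assuming your kubeconfig points at the Hub Cluster and that ACM exposes its MultiClusterHub resource (resource and namespace names may differ in your environment):

    export KUBECONFIG=/<hub_kubeconfig_path>
    # A default storage class must be present (look for "(default)" in the output)
    oc get storageclass
    # The ACM operator should report a MultiClusterHub instance in Running state
    oc get multiclusterhub -A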

SNO requirements

  • A target node whose Baseboard Management Controller (BMC) supports Virtual Media.
  • CPU: 6
  • RAM: 16 GB
  • Storage: at least 20 GB

Roles

The ACM integration with DCI uses the acm_setup role to deploy a cluster hub, and dedicated roles to deploy the different types of clusters through ACM.

Please read each role's documentation for more information.

SNO configuration

  1. Create a directory for the SNO instance under the path defined by dci_cluster_configs_dir. Name the directory after the target instance that will host the SNO deployment.

    mkdir ${dci_cluster_configs_dir}/clusterX

  2. Deploy a Hub cluster with support for ACM. This can be achieved by setting enable_acm=true during an OCP deployment. Please see the example ACM Hub pipeline below.

  3. Export the kubeconfig file of the Hub cluster as HUB_KUBECONFIG: export HUB_KUBECONFIG=/<kubeconfig_path>
  4. Define the inventory file with the information of the instance to be used for the SNO deployment. See the example SNO inventory file below.
    • To deploy in a disconnected environment, set dci_disconnected to true.
  5. Define the deployment settings for the new SNO instance. See the example ACM SNO pipeline below.
  6. Use dci-pipeline or the DCI Agent to initiate the deployment using the values defined in the acm-sno-pipeline, as shown in the sketch after the note below.

NOTE: Operators can be deployed on top of the SNO instance as described in Deploying operators. DCI will perform the proper operator mirroring and complete its deployment. Please take into consideration that not all operators may be suitable for SNO instances.
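A minimal sketch of steps 3 and 6 combined, assuming the deployment settings are saved as acm-sno-pipeline.yml and that the pipeline file is passed directly to dci-pipeline (file names and paths are illustrative):

    export HUB_KUBECONFIG=/<kubeconfig_path>
    dci-pipeline acm-sno-pipeline.yml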

SNO deployment process

  1. The process starts, and the agent creates a new job in the DCI Web UI.
  2. Some checks are performed to make sure the installation can proceed.
  3. If this is a disconnected/restricted network environment:
    • The OCP release artifacts are downloaded.
    • Container/operator images are mirrored to the local registry. The acm_local_repo variable can be used to set a local registry repository. Defaults to ocp-/
  4. The cluster hub is inspected to extract settings used in the SNO instance, e.g. pull secrets, registry host, web server, among others.
  5. The ACM installation is set up and started. The required ACM resources are created:
    • BMC secret.
    • The Agent Service Config is patched with information for the new requested cluster.
    • InfraEnv.
    • Cluster deployment.
    • Bare Metal Controller.
  6. The target node's BMC is provisioned by ACM. A base RHCOS image will be used to boot the server, start the ACM agents and complete the initial bootstrap.
  7. The node is discovered by ACM and auto-approved.
  8. Network settings and NTP are validated.
  9. A new cluster installation starts. Deployment should complete in around 60 minutes.
  10. If DNS is properly configured, the new instance is registered as a managed cluster in the ACM console.
  11. The KUBECONFIG and admin credentials are fetched and uploaded to DCI. Those files are also stored in the dci_cluster_configs_dir directory.
  12. The KUBECONFIG is used to interact with the new cluster and perform the deployment of the desired operators.
  13. The process ends and the job is completed in the DCI Web UI.
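Once the job completes, the result can be checked from both sides. A minimal sketch, assuming clusterX is the directory created during the configuration steps and that the fetched kubeconfig keeps that name (file names may differ in your environment):

    # On the Hub Cluster: the SNO instance should show up as a managed cluster
    export KUBECONFIG=/<hub_kubeconfig_path>
    oc get managedclusters
    # On the new SNO instance: use the kubeconfig fetched by the agent
    export KUBECONFIG=${dci_cluster_configs_dir}/clusterX/kubeconfig
    oc get nodes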

Hypershift configuration

⚠️ Currently, Hypershift only supports the "kvirt" hosted cluster type.

  1. Deploy a Hub cluster with support for ACM. This can be achieved by setting enable_acm=true during an OCP deployment. Please see the example ACM Hub pipeline below.
  2. The Hub cluster must have the CNV and MetalLB operators installed.
  3. The OCP release images for the HCP cluster will be mirrored to the same registry path as the Hub cluster images.
  4. Export the kubeconfig file of the Hub cluster as HUB_KUBECONFIG: export HUB_KUBECONFIG=/<kubeconfig_path>
  5. Define the deployment settings for the new Hosted Cluster instance. See the example ACM Hypershift pipeline below.
  6. As part of the installation, the agent deploys a MetalLB instance in L2 mode. The metallb_ipaddr_pool_l2 variable with the range of IPs for the LoadBalancer is required and can be defined in the pipeline or the inventory, as shown in the sketch after this list.
  7. The MetalLB setup can be skipped if there is one already running, but it will be validated during the Hypershift installation.
  8. Use dci-pipeline or the DCI Agent to initiate the deployment using the values defined in the acm-hypershift-pipeline.
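A minimal sketch of how metallb_ipaddr_pool_l2 could be provided through the pipeline's ansible_extravars (the address ranges are placeholders; use ranges that are routable in your lab):

ansible_extravars:
  install_type: acm
  acm_cluster_type: hypershift
  # L2 address pool handed to MetalLB for LoadBalancer services
  metallb_ipaddr_pool_l2:
    - <ipv4-start>-<ipv4-end>
    - <ipv6-start>-<ipv6-end>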

Pipeline Examples

ACM Hub pipeline

This pipeline includes NFS storage.

---
- name: openshift-acm-hub
  stage: ocp
  ansible_playbook: /usr/share/dci-openshift-agent/dci-openshift-agent.yml
  ansible_cfg: /var/lib/dci/pipelines/ansible.cfg
  ansible_inventory: /var/lib/dci/inventories/<lab>/<pool>/@RESOURCE
  dci_credentials: /etc/dci-openshift-agent/<dci_credentials.yml>
  pipeline_user: /etc/dci-openshift-agent/pipeline_user.yml
  ansible_extravars:
    dci_config_dirs:
      - /var/lib/dci/<lab>-config/dci-openshift-agent
    dci_local_log_dir: /var/lib/dci-pipeline/upload-errors
    dci_gits_to_components:
      - /var/lib/dci/<lab>-config/dci-openshift-agent
      - /var/lib/dci/inventories
      - /var/lib/dci/pipelines
    dci_tags: []
    dci_cache_dir: /var/lib/dci-pipeline
    dci_base_ip: "{{ ansible_default_ipv4.address }}"
    dci_baseurl: "http://{{ dci_base_ip }}"
    cnf_tests_mode: offline
    enable_acm: true
    enable_nfs_storage: true
    nfs_server: nfs.example.com
    nfs_path: /path/to/exports
    dci_teardown_on_success: false
  topic: OCP-4.14
  components:
    - ocp
  outputs:
    kubeconfig: "kubeconfig"
  success_tag: ocp-acm-hub-4.14-ok
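The outputs section makes the agent publish the Hub cluster's kubeconfig when the job finishes. That file is what the ACM SNO and Hypershift jobs below expect in HUB_KUBECONFIG; a hedged sketch, with the path left as a placeholder since it depends on where dci-pipeline stores the job outputs:

    export HUB_KUBECONFIG=/<path_to_hub_job_outputs>/kubeconfig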

ACM SNO pipeline

---
- name: openshift-acm-sno
  type: ocp
  ansible_playbook: /usr/share/dci-openshift-agent/dci-openshift-agent.yml
  ansible_cfg: /var/lib/dci/pipelines/ansible.cfg
  dci_credentials: /etc/dci-openshift-agent/<dci_credentials.yml>
  pipeline_user: /etc/dci-openshift-agent/pipeline_user.yml
  ansible_inventory: /home/dciteam/inventories/<lab>/sno/<inventory_file.yml>
  ansible_extravars:
    install_type: acm
    acm_cluster_type: sno
    dci_local_log_dir: /var/lib/dci-pipeline/upload-errors
    dci_gits_to_components:
      - /var/lib/dci/<lab>-config/dci-openshift-agent
      - /var/lib/dci/inventories
      - /var/lib/dci/pipelines
    dci_tags: []
    dci_cache_dir: /var/lib/dci-pipeline
    dci_base_ip: "{{ ansible_default_ipv4.address }}"
    dci_baseurl: "http://{{ dci_base_ip }}"
    dci_teardown_on_success: false
    enable_sriov: true
  topic: OCP-4.15
  components:
    - ocp
  success_tag: ocp-acm-sno-4.15-ok

ACM Hypershift pipeline

---
- name: openshift-acm-hypershift
  type: ocp
  ansible_playbook: /usr/share/dci-openshift-agent/dci-openshift-agent.yml
  ansible_cfg: /var/lib/dci/pipelines/ansible.cfg
  dci_credentials: /etc/dci-openshift-agent/<dci_credentials.yml>
  pipeline_user: /etc/dci-openshift-agent/pipeline_user.yml
  ansible_extravars:
    install_type: acm
    acm_cluster_type: hypershift
    dci_local_log_dir: /var/lib/dci-pipeline/upload-errors
    dci_gits_to_components:
      - /var/lib/dci/<lab>-config/dci-openshift-agent
      - /var/lib/dci/inventories
      - /var/lib/dci/pipelines
    dci_tags: []
    dci_cache_dir: /var/lib/dci-pipeline
    dci_base_ip: "{{ ansible_default_ipv4.address }}"
    dci_baseurl: "http://{{ dci_base_ip }}"
    dci_teardown_on_success: false
  topic: OCP-4.14
  components:
    - ocp
  success_tag: ocp-acm-hypershift-4.14-ok
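After the Hypershift job completes, the hosted control plane can be inspected from the Hub cluster. A minimal sketch, assuming the HyperShift CRDs were installed by ACM (resource names depend on the cluster name set in the pipeline or inventory):

    export KUBECONFIG=/<hub_kubeconfig_path>
    # Hosted control planes and their node pools created through ACM/Hypershift
    oc get hostedclusters -A
    oc get nodepools -A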

Inventory Examples

ACM hypershift kvirt Inventory file

all:
  hosts:
    jumphost:
      ansible_connection: local
    # All tasks for ACM run from localhost, so the provisioner is set to localhost
    provisioner:
      ansible_connection: local
      ansible_python_interpreter: "{{ansible_playbook_python}}"
      ansible_user: <ansible_user>
  vars:
    cluster: cluster<X>-hcp
    webserver_url: "http://webcache.<domain>.lab:8080"
    provision_cache_store: "/opt/cache"
    # MetalLB in L2 mode
    metallb_ipaddr_pool_l2:
      - <ipv4-start>-<ipv4-end>
      - <ipv6-start>-<ipv6-end>

SNO Inventory file

all:
  hosts:
    jumphost:
      ansible_connection: local
    # All tasks for ACM run from localhost, so the provisioner is set to localhost
    provisioner:
      ansible_connection: local
      ansible_user: <ansible_user>
  vars:
    cluster: clusterX-sno
    dci_disconnected: true
    acm_force_deploy: true
    acm_cluster_name: sno1
    acm_base_domain: sno.<mydomain>
    acm_bmc_address: 192.168.10.48
    acm_boot_mac_address: 3c:fd:fe:c2:0f:fx
    acm_machine_cidr: 192.168.82.0/24
    acm_bmc_user: REDACTED
    acm_bmc_pass: REDACTED
    provision_cache_store: "/opt/cache"
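Since ACM drives the target node through its BMC, it can be worth confirming that the BMC defined in acm_bmc_address is reachable before launching the job. A minimal sketch using the standard Redfish service root (the exact Redfish paths and credential handling depend on the hardware vendor):

    # Query the Redfish service root of the BMC from the SNO inventory above
    curl -k -u <acm_bmc_user>:<acm_bmc_pass> https://192.168.10.48/redfish/v1/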