How to initialise MicroCloud

The initialisation process bootstraps the MicroCloud cluster. You run the initialisation on one of the machines, and it configures the required services on all of the machines that join the cluster.

Pre-initialisation requirements

  • Complete the steps in How to install MicroCloud before initialisation.

  • If you intend to use full disk encryption (FDE) on any cluster member, that member must meet the prerequisites listed on this page: Full disk encryption.

    • Follow only the instructions in the Prerequisites section on that page. Skip its Usage section; the MicroCloud initialisation process handles the disk encryption.
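
For example, on a cluster member that should use FDE for its storage disks, connecting the MicroCeph snap's dm-crypt plug might look like this (a sketch only; follow the Prerequisites section of the Full disk encryption page for the exact steps):

    sudo snap connect microceph:dm-crypt
    sudo snap restart microceph.daemon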

Interactive configuration

If you run the initialisation process in interactive mode (the default), you are prompted for information about your machines and how you want to set them up. The questions that you are asked might differ depending on your setup. For example, if you do not have the MicroOVN snap installed, you will not be prompted to configure your network; if your machines don’t have local disks, you will not be prompted to set up storage.

The following instructions show the full initialisation process.

Tip

During initialisation, MicroCloud displays tables of entities to choose from.

To select specific entities, use the Up and Down keys to choose a table row and select it with the Space key. To select all rows, use the Right key. You can filter the table rows by typing one or more characters.

When you have selected the required entities, hit Enter to confirm.

Complete the following steps to initialise MicroCloud:

  1. On one of the machines, enter the following command:

    sudo microcloud init
    
  2. Select whether you want to set up more than one machine.

    Answering no allows you to create a MicroCloud with a single cluster member. In that case, the Trust establishment session is skipped because no other machines join the MicroCloud.

    Additional machines can always be added at a later point in time. See How to add a machine for more information.

  3. Select the IP address that you want to use for MicroCloud’s internal traffic (see Network interface for intra-cluster traffic). MicroCloud automatically detects the available addresses (IPv4 and IPv6) on the existing network interfaces and displays them in a table.

    You must select exactly one address.

  4. On all the other machines, enter the following command and repeat the address selection:

    sudo microcloud join
    

    The joining machines automatically detect the machine acting as the initiator. See Trust establishment session for more information, and Automatic server detection if your network does not support multicast.

  5. Select the machines that you want to add to the MicroCloud cluster.

    MicroCloud displays all machines that have reached out during the trust establishment session. Make sure that all machines that you select have the required snaps installed.

  6. Select whether you want to set up local storage.

    Note

    • To set up local storage, each machine must have a local disk.

    • The disks must not contain any partitions.

    • A disk used for local storage will not be available for distributed storage.

    If you choose yes, configure the local storage:

    1. Select the disks that you want to use for local storage.

      You must select exactly one disk from each machine.

    2. Select whether you want to wipe any of the disks. Wiping a disk will destroy all data on it.

  7. Select whether you want to set up distributed storage (using MicroCeph).

    Note

    • You can set up distributed storage on a single cluster member.

    • High availability requires a minimum of 3 cluster members, with 3 separate disks across 3 different cluster members.

    • The disks must not contain any partitions.

    • A disk that was previously selected for local storage will not be shown for distributed storage.

    If you choose yes, configure the distributed storage:

    1. Select the disks that you want to use for distributed storage.

      You must select at least one disk.

    2. Select whether you want to wipe any of the disks. Wiping a disk will destroy all data on it.

    3. Select whether you want to encrypt any of the disks. Encrypting a disk will store the encryption keys in the Ceph key ring inside the Ceph configuration folder.

      Warning

      Cluster members with disks to be encrypted require a kernel with dm-crypt enabled. The snap dm-crypt plug must also be connected. See the Prerequisites section of this page for more information: Full disk encryption.

      If you have not enabled and connected dm-crypt on any cluster member that you want to encrypt, do so now before you continue.

    4. Select whether you want to set up a CephFS distributed file system.

  8. Select either an IPv4 or IPv6 CIDR subnet for the Ceph internal traffic. You can leave it empty to use the default value, which is the MicroCloud internal network (see How to configure Ceph networking for how to configure it).

  9. Select either an IPv4 or IPv6 CIDR subnet for the Ceph public traffic. You can leave it empty to use the default value: the MicroCloud internal network if you also kept that default for the Ceph internal traffic, or the Ceph internal network if you set a custom internal network (see How to configure Ceph networking for how to configure it).

  10. Select whether you want to set up distributed networking (using MicroOVN).

    If you choose yes, configure the distributed networking:

    1. Select the network interfaces that you want to use (see Network interface to connect to the uplink network).

      You must select one network interface per machine.

    2. If you want to use IPv4, specify the IPv4 gateway on the uplink network (in CIDR notation) and the first and last IPv4 address in the range that you want to use with LXD.

    3. If you want to use IPv6, specify the IPv6 gateway on the uplink network (in CIDR notation).

    4. Select whether you want to set up an underlay network for the distributed networking.

      If you choose yes, configure the underlay network:

      1. Select the network interfaces that you want to use (see Network interface to connect to an OVN underlay network).

        You must select one network interface with an IP address per machine.

  11. MicroCloud now starts to bootstrap the cluster. Monitor the output to see whether all steps complete successfully. See Bootstrapping process for more information.

    Once the initialisation process is complete, you can start using MicroCloud.

See an example of the full initialisation process in the Get started with MicroCloud tutorial.
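
To verify the resulting cluster, you can list the members of each service. For example (skip the MicroCeph or MicroOVN command if you did not set up the corresponding service):

    sudo microcloud cluster list
    lxc cluster list
    sudo microceph cluster list
    sudo microovn cluster list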

Excluding MicroCeph or MicroOVN from MicroCloud

If the MicroOVN or MicroCeph snap is not installed on the system that runs microcloud init, you will be prompted with one of the following questions:

MicroCeph not found. Continue anyway? (yes/no) [default=yes]:

MicroOVN not found. Continue anyway? (yes/no) [default=yes]:

If you choose yes, only the services that are installed will be configured on all systems. If you choose no, the setup will be cancelled.

All other systems must have at least the same set of snaps installed as the system that runs microcloud init, otherwise they will not be available to select from the list of systems. Any questions associated with the missing services are skipped. For example, if MicroCeph is not installed, you will not be prompted for distributed storage configuration.
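
To check which of these snaps are installed on a system before you run the initialisation, you can list the installed snaps, for example:

    snap list | grep -E 'lxd|microcloud|microceph|microovn'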

Reusing an existing MicroCeph or MicroOVN with MicroCloud

If some of the systems are already part of a MicroCeph or MicroOVN cluster, you can choose to reuse that cluster when you are prompted with one of the following questions during the MicroCloud initialisation:

"micro01" is already part of a MicroCeph cluster. Do you want to add this cluster to MicroCloud? (add/skip) [default=add]:

"micro01" is already part of a MicroOVN cluster. Do you want to add this cluster to MicroCloud? (add/skip) [default=add]:

If you choose add, MicroCloud will add the remaining systems selected for initialisation to the pre-existing cluster. If you choose skip, the respective service will not be set up at all.

If more than one MicroCeph or MicroOVN cluster exists among the systems, the MicroCloud initialisation will be cancelled.
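
To find out whether a system is already part of a MicroCeph or MicroOVN cluster before you initialise MicroCloud, you can list the existing cluster members on that system, for example:

    sudo microceph cluster list
    sudo microovn cluster list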

Non-interactive configuration

If you want to automate the initialisation process, you can provide a preseed configuration in YAML format to the microcloud preseed command:

cat <preseed_file> | microcloud preseed

Make sure to distribute and run the same preseed configuration on all systems that should be part of the MicroCloud.
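
As a minimal sketch, assuming SSH access to the machines and using the host names from the example below, you could distribute and run a preseed file named preseed.yaml (an illustrative name) from a single machine like this:

    # copy the same preseed file to every machine and run it there
    for host in micro01 micro02 micro03 micro04; do
        scp preseed.yaml "$host":
        ssh "$host" 'cat preseed.yaml | microcloud preseed'
    done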

The preseed YAML file must use the following syntax:

# `initiator` defines which system takes over the role of the initiator during the trust establishment using multicast discovery.
# Make sure to also set `lookup_subnet`.
# The field cannot be set together with `initiator_address`.
# Required if `initiator_address` isn't specified.
initiator: micro01

# `initiator_address` defines which system takes over the role of the initiator during the trust establishment.
# It also allows joining systems to learn about the address they have to connect to.
# The field cannot be set together with `initiator`.
# Required if `initiator` isn't specified.
initiator_address: 10.0.0.1

# `lookup_subnet` is required and limits the subnet when looking up systems using multicast discovery.
# The first assigned address of this subnet is used for MicroCloud itself.
lookup_subnet: 10.0.0.0/24

# `lookup_timeout` is optional and configures how long the joining system will wait for a system to be discovered using multicast discovery.
# The value has to be provided in seconds.
# It defaults to 60 seconds.
lookup_timeout: 300

# `session_passphrase` is required and configures the passphrase used during the trust establishment session.
session_passphrase: 83P27XWKbDczUyE7xaX3pgVfaEacfQ2qiQ0r6gPb

# `session_timeout` is optional and configures how long the trust establishment session will last.
# The value has to be provided in seconds.
# It defaults to 60 minutes.
session_timeout: 300

# `systems` is required and lists the systems we expect to find by their host name.
#   `name` is required and represents the host name.
#   `address` sets the address used for MicroCloud and is required in case `initiator_address` is present.
#   `ovn_uplink_interface` is optional and represents the name of the interface reserved for use with OVN.
#   `ovn_underlay_ip` is optional and represents the Geneve Encap IP for each system.
#   `storage` is optional and represents explicit paths to disks for each system.
systems:
- name: micro01
  address: 10.0.0.1
  ovn_uplink_interface: eth1
  ovn_underlay_ip: 10.0.2.101
- name: micro02
  address: 10.0.0.2
  ovn_uplink_interface: eth1
  ovn_underlay_ip: 10.0.2.102
  storage:
    local:
      path: /dev/nvme5n1
      wipe: true
    ceph:
      - path: /dev/nvme4n1
        wipe: true
      - path: /dev/nvme3n1
        wipe: true
        encrypt: true
- name: micro03
  address: 10.0.0.3
  ovn_uplink_interface: eth1
  ovn_underlay_ip: 10.0.2.103
- name: micro04
  address: 10.0.0.4
  ovn_uplink_interface: eth1

# `ceph` is optional and represents the Ceph global configuration
# `cephfs: true` can be used to optionally set up a CephFS file system alongside Ceph distributed storage.
# `internal_network: subnet` optionally specifies the internal cluster network for the Ceph cluster. This network handles OSD heartbeats, object replication, and recovery traffic.
# `public_network: subnet` optionally specifies the public network for the Ceph cluster. This network conveys information regarding the management of your Ceph nodes. It is by default set to the MicroCloud lookup subnet.
ceph:
  cephfs: true
  internal_network: 10.0.1.0/24
  public_network: 10.0.0.0/24

# `ovn` is optional and represents the OVN & uplink network configuration for LXD.
ovn:
  ipv4_gateway: 192.0.2.1/24
  ipv4_range: 192.0.2.100-192.0.2.254
  ipv6_gateway: 2001:db8:d:200::1/64
  dns_servers: 192.0.2.1,2001:db8:d:200::1

# `storage` is optional and is used as basic filtering logic for finding disks across all systems.
# Filters will only apply to systems which do not have an explicitly defined disk above for the corresponding storage type.
# Filters are checked in order of appearance.
# The names and values of each key correspond to the YAML field names for the `api.ResourcesStorageDisk`
# struct here:
# https://github.com/canonical/lxd/blob/c86603236167a43836c2766647e2fac97d79f899/shared/api/resource.go#L591
# Supported operands: &&, ||, <, >, <=, >=, ==, !=, !
# String values must not be in quotes unless the string contains a space.
# Single quotes are fine, but double quotes must be escaped.
# `find_min` and `find_max` can be used to validate the number of disks each filter finds.
storage:
  local:
    - find: size > 10GiB && size < 50GiB && type == nvme
      find_min: 1
      find_max: 1
      wipe: true
    - find: size > 10GiB && size < 50GiB && type == hdd && block_size == 512 && model == 'Samsung %'
      find_min: 3
      find_max: 3
      wipe: false
  ceph:
    - find: size > 10GiB && size < 50GiB && type == nvme
      find_min: 1
      find_max: 2
      wipe: true
    - find: size > 10GiB && size < 50GiB && type == hdd && partitioned == false && block_size == 512 && model == 'Samsung %'
      find_min: 3
      find_max: 8
      wipe: false

Minimal preseed using multicast discovery

You can use the following minimal preseed file to initialise a MicroCloud across three machines. In this case micro01 takes over the role of the initiator. Multicast discovery is used to find the other machines on the network.

On each of the machines, eth1 is used as the uplink interface for the OVN network. The disk /dev/sdb is used for local storage, and /dev/sdc is used by MicroCeph for remote (distributed) storage:

lookup_subnet: 10.0.0.0/24
initiator: micro01
session_passphrase: foo
systems:
- name: micro01
  ovn_uplink_interface: eth1
  storage:
    local:
      path: /dev/sdb
    ceph:
      - path: /dev/sdc
- name: micro02
  ovn_uplink_interface: eth1
  storage:
    local:
      path: /dev/sdb
    ceph:
      - path: /dev/sdc
- name: micro03
  ovn_uplink_interface: eth1
  storage:
    local:
      path: /dev/sdb
    ceph:
      - path: /dev/sdc
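
To apply this configuration, save it on each of the three machines, for example as preseed.yaml (a file name chosen here for illustration), and run the preseed command on every machine:

    cat preseed.yaml | microcloud preseed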