About storage pools, volumes and buckets¶
LXD stores its data in storage pools, divided into storage volumes of different content types (like images or instances). You could think of a storage pool as the disk that is used to store data, while storage volumes are different partitions on this disk that are used for specific purposes.
In addition to storage volumes, there are storage buckets, which use the Amazon S3 protocol. Like storage volumes, storage buckets are part of a storage pool.
Storage pools¶
During initialization, LXD prompts you to create a first storage pool. If required, you can create additional storage pools later (see Create a storage pool).
Each storage pool uses a storage driver. The following storage drivers are supported: Directory (`dir`), Btrfs (`btrfs`), LVM (`lvm`), ZFS (`zfs`), Ceph RBD (`ceph`), CephFS (`cephfs`), Ceph Object (`cephobject`) and Dell PowerFlex (`powerflex`).
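As a quick, hedged illustration (the pool names below are examples, not required values), additional pools are created with `lxc storage create`, naming the driver to use:

```bash
# Create a ZFS-backed pool named "fast" (name is an example)
lxc storage create fast zfs

# Create a directory-backed pool named "shared-dir" (name is an example)
lxc storage create shared-dir dir

# List the configured storage pools and their drivers
lxc storage list
```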
See the following how-to guides for additional information:
Data storage location¶
Where the LXD data is stored depends on the configuration and the selected storage driver. Depending on the storage driver that is used, LXD can either share the file system with its host or keep its data separate.
| Storage location | Directory | Btrfs | LVM | ZFS | Ceph (all) | Dell PowerFlex |
| --- | --- | --- | --- | --- | --- | --- |
| Shared with the host | ✓ | ✓ | - | ✓ | - | - |
| Dedicated disk/partition | - | ✓ | ✓ | ✓ | - | - |
| Loop disk | - | ✓ | ✓ | ✓ | - | - |
| Remote storage | - | - | - | - | ✓ | ✓ |
Dedicated disk or partition¶
Having LXD use an empty partition on your main disk or a full dedicated disk keeps its storage completely independent from the host.
This option is supported for the `btrfs` driver, the `lvm` driver and the `zfs` driver.
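A hedged sketch of pointing a new pool at an empty disk or partition (the pool names and device paths are examples; double-check the device first, because its contents are overwritten):

```bash
# Create a ZFS pool on a dedicated block device (example device path)
lxc storage create data-zfs zfs source=/dev/sdb

# Create a Btrfs pool on an empty partition instead of a whole disk
lxc storage create data-btrfs btrfs source=/dev/sdc1
```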
Loop disk¶
LXD can create a loop file on your main drive and have the selected storage driver use that. This method is functionally similar to using a disk or partition, but it uses a large file on your main drive instead. This means that every write must go through the storage driver and your main drive’s file system, which leads to decreased performance.
The loop files reside in `/var/snap/lxd/common/lxd/disks/` if you are using the snap, or in `/var/lib/lxd/disks/` otherwise.
Loop files usually cannot be shrunk. They will grow up to the configured limit, but deleting instances or images will not cause the file to shrink. You can increase their size though; see Resize a storage pool.
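A hedged sketch of creating a loop-backed pool with an explicit size and growing it later (the pool name and sizes are examples):

```bash
# Create a Btrfs pool backed by a 30 GiB loop file (no source given)
lxc storage create pool1 btrfs size=30GiB

# Grow the loop file later; loop-backed pools can only grow, not shrink
lxc storage set pool1 size=40GiB
```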
Remote storage¶
The `ceph`, `cephfs` and `cephobject` drivers store the data in a completely independent Ceph storage cluster that must be set up separately. The same applies to the `powerflex` driver.
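For example, a hedged sketch of creating a Ceph RBD-backed pool (the pool name, OSD pool name and Ceph user are examples, and an existing Ceph cluster reachable from the host is assumed):

```bash
# Store data in the Ceph cluster's "lxd-rbd" OSD pool (example names)
lxc storage create remote ceph ceph.osd.pool_name=lxd-rbd ceph.user.name=admin
```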
Default storage pool¶
There is no concept of a default storage pool in LXD.
When you create a storage volume, you must specify the storage pool to use.
When LXD automatically creates a storage volume during instance creation, it uses the storage pool that is configured for the instance. This configuration can be set in either of the following ways:
- Directly on an instance: `lxc launch <image> <instance_name> --storage <storage_pool>`
- Through a profile: `lxc profile device add <profile_name> root disk path=/ pool=<storage_pool>` and `lxc launch <image> <instance_name> --profile <profile_name>`
- Through the default profile

In a profile, the storage pool to use is defined by the pool for the root disk device:

    root:
      type: disk
      path: /
      pool: default
In the default profile, this pool is set to the storage pool that was created during initialization.
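To check which pool the default profile currently points at, or to switch it, something like the following can be used (the pool name `my-pool` is an example):

```bash
# Show the default profile, including the pool of its root disk device
lxc profile show default

# Point the default profile's root disk at a different pool (example name)
lxc profile device set default root pool=my-pool
```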
Storage volumes¶
When you create an instance, LXD automatically creates the required storage volumes for it. You can create additional storage volumes.
See the following how-to guides for additional information:
Storage volume types¶
Storage volumes can be of the following types:
`container`/`virtual-machine`
LXD automatically creates one of these storage volumes when you launch an instance. It is used as the root disk for the instance, and it is destroyed when the instance is deleted. This storage volume is created in the storage pool that is specified in the profile used when launching the instance (or the default profile, if no profile is specified). The storage pool can be explicitly specified by providing the `--storage` flag to the launch command.

`image`
LXD automatically creates one of these storage volumes when it unpacks an image to launch one or more instances from it. You can delete it after the instance has been created. If you do not delete it manually, it is deleted automatically ten days after it was last used to launch an instance. The image storage volume is created in the same storage pool as the instance storage volume, and only for storage pools that use a storage driver that supports optimized image storage.

`custom`
You can add one or more custom storage volumes to hold data that you want to store separately from your instances. Custom storage volumes of content type `filesystem` or `iso` can be shared between instances, and they are retained until you delete them. You can also use custom storage volumes to hold your backups or images. You must specify the storage pool for the custom volume when you create it.
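A hedged sketch of creating a custom volume and attaching it to an instance (the pool, volume, instance and mount path names are examples):

```bash
# Create a custom filesystem volume in the "default" pool
lxc storage volume create default my-data

# Attach it to an instance at /data (instance name is an example)
lxc storage volume attach default my-data my-instance /data
```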
Content types¶
Each storage volume uses one of the following content types:
`filesystem`
This content type is used for containers and container images. It is the default content type for custom storage volumes. Custom storage volumes of content type `filesystem` can be attached to both containers and virtual machines, and they can be shared between instances.

`block`
This content type is used for virtual machines and virtual machine images. You can create a custom storage volume of type `block` by using the `--type=block` flag. Custom storage volumes of content type `block` can only be attached to virtual machines. By default, they can only be attached to one instance at a time, because simultaneous access can lead to data corruption. Sharing a custom storage volume of content type `block` is made possible by setting the `security.shared` configuration key.

`iso`
This content type is used for custom ISO volumes. A custom storage volume of type `iso` can only be created by importing an ISO file using `lxc storage volume import`. Custom storage volumes of content type `iso` can only be attached to virtual machines. They can be attached to multiple machines simultaneously as they are always read-only.
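A hedged sketch of the block and ISO cases (the pool, volume, VM and file names are examples):

```bash
# Create a block volume and attach it to a virtual machine
lxc storage volume create default my-disk --type=block
lxc storage volume attach default my-disk my-vm

# Allow the block volume to be attached to several VMs at once (use with care)
lxc storage volume set default my-disk security.shared=true

# Import an ISO file as a custom ISO volume and attach it to a VM
lxc storage volume import default ./image.iso my-iso --type=iso
lxc storage volume attach default my-iso my-vm
```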
Storage buckets¶
Storage buckets provide object storage functionality via the S3 protocol.
They can be used in a way that is similar to custom storage volumes. However, unlike storage volumes, storage buckets are not attached to an instance. Instead, applications can access a storage bucket directly using its URL.
Each storage bucket is assigned one or more access keys, which the applications must use to access it.
Storage buckets can be located on local storage (with `dir`, `btrfs`, `lvm` or `zfs` pools) or on remote storage (with `cephobject` pools).
To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the `core.storage_buckets_address` server setting.
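A hedged sketch of setting this up for a local pool (the listen address, pool, bucket and key names are examples):

```bash
# Expose the S3 endpoint on port 8555 (address is an example)
lxc config set core.storage_buckets_address :8555

# Create a bucket in an existing pool and an access key for it
lxc storage bucket create default my-bucket
lxc storage bucket key create default my-bucket my-app-key
```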
See the following how-to guide for additional information: