Migrate from OpenVZ to LXC/LXD


s4uliu5

Hi,

currently I'm running a Funtoo server with several OpenVZ containers.

Funtoo is moving to LXC/LXD, so I think it's time for me, too.

Upgrading the host system to the new kernel and to LXC/LXD is very straightforward; no questions there.

But what is the best way to migrate the OpenVZ containers to LXC/LXD?

Thank you,

--

Saulius


Sorry, I have more questions.

On https://www.funtoo.org/LXD/OpenVZ_migration there is this paragraph:

Quote

Now let's switch the rootfs. Go to your storage pool for LXD (default location: /var/lib/lxd/storage-pools/default/containers/) and locate our openvz-migrant directory. Delete the rootfs and replace it with the OpenVZ container's rootfs.

While the container is running, /var/lib/lxd/storage-pools/default/containers/<container> contains the files "backup.yaml" and "metadata.yaml" and the directories "rootfs" and "templates".

But when the container is stopped, the directory is empty.

So I'm wondering: how do I "replace" the rootfs?

And does "replace" mean just copying the files from /vz/private/<ctid>?
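In other words, do you mean something like the following? A minimal sketch of my understanding, tried on throwaway directories (the real source would be /vz/private/<ctid> and the real target the container's rootfs under the LXD storage pool; all paths here are hypothetical stand-ins):

```shell
# Stand-in directories for the real OpenVZ private area and LXD rootfs
SRC=$(mktemp -d)   # stands in for /vz/private/<ctid>
DST=$(mktemp -d)   # stands in for .../containers/<name>/rootfs
mkdir -p "$SRC/etc"
echo "ct-hostname" > "$SRC/etc/hostname"

# Wipe the old rootfs and copy the OpenVZ tree across, preserving
# permissions, ownership and timestamps:
rm -rf "$DST"
cp -a "$SRC" "$DST"

cat "$DST/etc/hostname"
```

(On a real migration, `rsync -aHAX` would additionally preserve hard links, ACLs and extended attributes.)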

On my system,

# lxc info

returns:

config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFPTCCAyWgAwIBAgIQTiCRM+vTvY28b70rgjuSlTANBgkqhkiG9w0BAQsFADAx
    ...
    vg==
    -----END CERTIFICATE-----
  certificate_fingerprint: a1d240ce836f964a92433d9df8441bdcd9584b61d6c0f928be71d0931d4ddb8f
  driver: lxc
  driver_version: 2.1.1
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.15.17-1
  server: lxd
  server_pid: 32662
  server_version: "2.21"
  storage: zfs
  storage_version: 0.7.6-r0-gentoo

Thank you,

--

Saulius


Thank you.

I successfully copied the files and was able to start the container.

Now I'm trying to set up networking for the migrated container so that it can be accessed from the network directly (no NAT).

According to the LXC documentation, the config file should be located in the container's directory, but I see no such file.

Where can I find the config file for the container?
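From what I've read so far (and I may be wrong), LXD keeps container configuration in its own database rather than in a per-container file, and exposes it through `lxc config show` / `lxc config edit`. For a directly bridged (no-NAT) NIC I would expect something like this in that YAML (the bridge name br0 is just an assumption):

```yaml
devices:
  eth0:
    name: eth0          # interface name inside the container
    nictype: bridged
    parent: br0         # host bridge attached to the physical NIC (assumption)
    type: nic
```

Is that the right direction?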

Thank you,

--

Saulius

