Delta File Restore - LVM

  • I see: you have an LV that spans 2 virtual disks at once (the samba_data volume group uses 2 physical volumes).

    We don't support this type of configuration: we mount one disk and then try to parse the content/partition type. I suppose we can't read the content when only one of the VHDs is mounted.

  • Yes,

    They are multi-disk VMs.

    Most are running with either two or even three disks.

    Is it possible to identify a multi-disk LVM setup and mount all of the required disks?

    Will this configuration prevent backup and restore in any way? (excluding file level restore)
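    Inside the guest, this kind of multi-PV volume group can be spotted with the standard LVM reporting tools. A read-only check (the commands are only run if the LVM tools are present; the output columns are illustrative):

    ```shell
    # A volume group whose #PV count is greater than 1 spans multiple
    # virtual disks -- the case file-level restore cannot currently handle.
    if command -v vgs >/dev/null 2>&1; then
        vgs -o vg_name,pv_count,vg_size   # e.g. "samba_data  2  ..."
        pvs -o pv_name,vg_name            # which device backs which VG
    else
        echo "LVM tools not installed on this host"
    fi
    checked=yes
    ```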

  • This won't affect restore at the VM level, because the LVM layout lives inside the VM and isn't something we are aware of.

    In general, because Xen Orchestra is agentless (you haven't installed anything in your VMs), there is no way to know/guess what's happening inside your VM (the disk setup, for example).

    So "multi-disk VMs", i.e. VMs with multiple disks, are not a problem at all. But if you have a specific setup INSIDE them, there is little we can do.

  • With an agent, would it be possible to support this configuration?
    If you have an advanced disk configuration you would need an agent; otherwise it could stay agentless?

    Is this something you have ever considered?

  • Maybe there is a solution to detect LVM settings while restoring, but it would probably involve a lot of hacks (if we could detect that another PV is needed, we'd try to find a VHD that matches; IDK if it's even possible).

    Can you explain why you are using this setup? I mean, in virtualization you can create virtual disks very easily, so why make a kind of "JBOD" of 2 or 3 disks instead of creating a single virtual disk?

    Anyway, in general, we don't want to move to an agent-based solution, especially for a unique case like this. Obviously, if we get more and more customers complaining about it, we could divert current resources to check whether it's possible and then do it if we can. Congrats, you are the first user to report this 😉

    edit: to be clear, an agent means a program you must install in your VM that will rsync your files to a remote storage (NFS or else). In XO we operate at the hypervisor level only.

  • They originally started life on ESXi free 5.5, which has no easy backup method. I put the critical data on a separate virtual disk so that if the VM failed to boot, etc., it would be easier to recover the data from that drive.

    Is it possible to merge the data back to one drive and then remove the disk?

  • You don't have a soft RAID: your LVM setup is like a JBOD, with 2 disks underneath the volume. So even if you lose only one virtual disk, your whole LV becomes unreadable.

    For now, your best option is to:

    1. create a new 200 GiB virtual disk in this VM
    2. format it however you like (ext4 or XFS)
    3. copy all data from your "samba" LV to this new disk
    4. when it's done, shut down samba
    5. unmount the "samba" LV
    6. mount the new disk at the previous mount point
    7. start samba

    If it works, remove the 2 old disks and add the new disk to your fstab. Then you are good to go 🙂
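    The steps above could be scripted roughly like this. All device names, mount points, and the service name (smbd) are assumptions to adapt to your VM, and the script only prints the commands unless you set DRY_RUN=0:

    ```shell
    #!/bin/sh
    # Sketch of the 7-step migration; every path/device below is hypothetical.
    DRY_RUN=${DRY_RUN:-1}
    run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

    run mkfs.ext4 /dev/xvdc                    # 2. format the new 200 GiB disk
    run mkdir -p /mnt/newdisk
    run mount /dev/xvdc /mnt/newdisk
    run rsync -aHAX /srv/samba/ /mnt/newdisk/  # 3. copy everything off the LV
    run systemctl stop smbd                    # 4. shut down samba
    run umount /mnt/newdisk
    run umount /srv/samba                      # 5. unmount the "samba" LV
    run mount /dev/xvdc /srv/samba             # 6. mount it at the old mount point
    run systemctl start smbd                   # 7. start samba
    ```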

  • Thanks for the information.

    There goes the weekend

  • haha sorry 😛

    In fact, having 2 physical disks in the same LVM volume group makes sense in a non-virtual scenario. LVM itself made sense back then: it brought the flexibility/abstraction that was needed before the rise of machine virtualization.

    Now you can create virtual disks in 2 clicks, grow them, and then adapt the FS. IMHO, LVM is less relevant today.

  • Hi,

    Just thought I would check the other VMs.

    This one only has a single virtual disk.

    If I try the restore on this one, I am offered Linux, Root and Swap_1.

    Selecting any of them results in an error:

      "remote": "b39d9031-bd3d-4ca5-b513-35a8d8b3dd9b",
      "disk": "vm_delta_Backup_59b8a473-aa4c-25ef-39c7-e381331f0bed/vdi_e748211b-3959-4506-a124-594469dd34a4/20170508T110035Z_delta.vhd",
      "path": "/",
      "partition": "000f0648-05/dc01-vg/root"
      "message": "Command failed: mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4
    mount: cannot mount /dev/loop1 read-only
      "stack": "Error: Command failed: mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4
    mount: cannot mount /dev/loop1 read-only
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: cannot mount /dev/loop1 read-only
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4",
      "timedOut": false
    Disk /dev/xvda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000f0648
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvda1   *        2048      499711      248832   83  Linux
    /dev/xvda2          501758    41940991    20719617    5  Extended
    /dev/xvda5          501760    41940991    20719616   8e  Linux LVM
    Disk /dev/mapper/dc01--vg-root: 20.1 GB, 20124270592 bytes
    255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/dc01--vg-root doesn't contain a valid partition table
    Disk /dev/mapper/dc01--vg-swap_1: 1069 MB, 1069547520 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/dc01--vg-swap_1 doesn't contain a valid partition table


     VG      #PV #LV #SN Attr   VSize  VFree
      dc01-vg   1   2   0 wz--n- 19.76g 20.00m


    LV     VG      Attr      LSize    Pool Origin Data%  Move Log Copy%  Convert
      root   dc01-vg -wi-ao---   18.74g
      swap_1 dc01-vg -wi-ao--- 1020.00m

  • As you can see, it's not the same error. This time, the mount command fails because your XFS has a dirty log.

    So in order to mount it, the XFS "driver" needs to replay the log and therefore needs to write to the filesystem. However, we mount it read-only, because we don't want to modify the data inside your VHD.

    That's why it doesn't work in this case.

    The real struggle on our side is detecting which FS exists on a disk and passing the right specific options to mount. In theory, mount can handle this itself, but there are some cases where it's not possible (like this one). Also, if we modified the VHD, we would need to create a new delta to avoid writing directly to the backup VHD. As you can see, it involves a LOT of stuff.

    BTW, we can avoid this issue on ext filesystems because we can bypass the error by passing an extra flag to the mount command. It seemed not doable with XFS.

    edit: maybe the norecovery flag can do it. See

    edit2: checking the code right now to see if a fix is doable quickly
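    A rough sketch of the detection described above, assuming blkid is available (the device path is the one from the failing mount in the log; nothing is mounted here, the chosen options are only printed):

    ```shell
    DEV=/dev/dc01-vg/root   # path taken from the log above
    if command -v blkid >/dev/null 2>&1 && [ -e "$DEV" ]; then
        FSTYPE=$(blkid -o value -s TYPE "$DEV")
    else
        FSTYPE=unknown
    fi
    case "$FSTYPE" in
        ext2|ext3|ext4) OPTS=loop,ro,noload ;;      # ext: skip journal replay
        xfs)            OPTS=loop,ro,norecovery ;;  # xfs: skip log recovery
        *)              OPTS=loop,ro ;;
    esac
    echo "would run: mount -o $OPTS $DEV /tmp/restore-mnt"
    ```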

  • Oh well another one to the list of tasks for a weekend.

  • Good news: we used noload for ext filesystems, but it seems the norecovery option works for both ext AND XFS.
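    With that change, the failing command from the log above would effectively become the following (source and target paths from the thread; the mount is only attempted if the device actually exists):

    ```shell
    SRC=/dev/dc01-vg/root
    TGT=$(mktemp -d)
    if [ -e "$SRC" ]; then
        # norecovery skips log replay, so a dirty FS can be mounted read-only
        mount --options=loop,ro,norecovery --source="$SRC" --target="$TGT"
    else
        echo "device $SRC not present; command shown for reference only"
    fi
    ```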

    If you are using it from sources @dlucas46 , please switch to the branch xfs-norecovery of xo-server (don't forget to yarn just after) and tell me if it works now.

  • What's the command to change to that release branch?

  • git checkout xfs-norecovery

  • That gave me:

    fatal: Not a git repository (or any of the parent directories): .git

  • @dlucas46 Were you in the correct subdirectory (/opt/xo-server)?

  • @Danp


    I can change to next release.

    but xfs-norecovery gives me this:

    root@xen-orc:/opt/xo-server# git checkout xfs-norecovery
    error: pathspec 'xfs-norecovery' did not match any file(s) known to git.

  • @dlucas46 Then git fetch origin first.

  • @dlucas46 Try issuing git fetch first.
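  • Putting the two answers together, the full sequence on the XO host would look like this (path from the thread; the commands run only if a git checkout actually exists there):

    ```shell
    if [ -d /opt/xo-server/.git ]; then
        cd /opt/xo-server
        git fetch origin              # make the remote branch known locally
        git checkout xfs-norecovery   # switch to the test branch
        yarn                          # reinstall/rebuild after switching
    else
        echo "no git checkout at /opt/xo-server; sequence shown for reference"
    fi
    done_flag=yes
    ```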
