Delta File Restore - LVM



  • Hi, I have a problem with delta file restore.
    I have three machines that originally started life on VMware, were exported as OVF files and then imported into XenServer. Two of them are multi-disk LVM. I am unable to restore anything from the file system on these backups. I have deleted all existing backups, but the result is always the same.
    I am able to restore from other backups.
    So far I have tried this on Debian 8, Ubuntu 16.04, 16.10 and 17.10.
    I have FUSE installed and libvhdi-utils is the latest version.
    Just in case this was an issue with compiling from source, I downloaded and activated the 15-day trial version. The result is the same.

    Any suggestions would be helpful.

    The error messages are below:

    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_44d46e03-84f2-410e-8e3f-f635a1515858/20170507T132717Z_delta.vhd",
      "path": "/",
      "partition": "000ddef0-05/fs01-vg/root"
    }
    {
      "message": "Command failed: mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-2504RMKJArhtHny2
    mount: cannot mount /dev/loop1 read-only
    ",
      "stack": "Error: Command failed: mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-2504RMKJArhtHny2
    mount: cannot mount /dev/loop1 read-only
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: cannot mount /dev/loop1 read-only
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-2504RMKJArhtHny2",
      "timedOut": false
    }
    
    

    Next error:

    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_0865d814-07b6-403d-9c74-c7a80aedee66/20170507T132717Z_delta.vhd",
      "path": "/",
      "partition": "000bf0fb-01"
    }
    {
      "message": "Cannot read property 'start' of undefined",
      "stack": "TypeError: Cannot read property 'start' of undefined
        at /opt/xo-server/dist/xo-mixins/xo-mixins/backups.js:211:12
        at run (/opt/xo-server/node_modules/core-js/library/modules/es6.promise.js:87:22)
        at /opt/xo-server/node_modules/core-js/library/modules/es6.promise.js:100:28
        at flush (/opt/xo-server/node_modules/core-js/library/modules/_microtask.js:18:9)
        at _combinedTickCallback (internal/process/next_tick.js:73:7)
        at process._tickCallback (internal/process/next_tick.js:104:9)"
    }
    

    Next:

    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_0865d814-07b6-403d-9c74-c7a80aedee66/20170507T132717Z_delta.vhd",
      "path": "/"
    }
    {
      "message": "Command failed: mount --options=loop,ro --source=/tmp/tmp-2504rd5fBemIUNpj/vhdi2 --target=/tmp/tmp-2504Ouii3yZ18Dm9
    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ",
      "stack": "Error: Command failed: mount --options=loop,ro --source=/tmp/tmp-2504rd5fBemIUNpj/vhdi2 --target=/tmp/tmp-2504Ouii3yZ18Dm9
    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/tmp/tmp-2504rd5fBemIUNpj/vhdi2 --target=/tmp/tmp-2504Ouii3yZ18Dm9",
      "timedOut": false
    }
    
    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_ebe50e94-785a-40c2-8f39-3f9302d4c0fb/20170507T132717Z_delta.vhd",
      "path": "/",
      "partition": "000bf0fb-01"
    }
    {
      "message": "Command failed: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-2504JXEO6NsKvBk4/vhdi2 --target=/tmp/tmp-2504erHEgl76DkON
    mount: unknown filesystem type 'LVM2_member'
    ",
      "stack": "Error: Command failed: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-2504JXEO6NsKvBk4/vhdi2 --target=/tmp/tmp-2504erHEgl76DkON
    mount: unknown filesystem type 'LVM2_member'
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: unknown filesystem type 'LVM2_member'
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-2504JXEO6NsKvBk4/vhdi2 --target=/tmp/tmp-2504erHEgl76DkON",
      "timedOut": false
    }
    
    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_b4d023bd-64e4-f4a9-9331-62b5c12b5373/vdi_27be479e-ed4f-4c09-9580-b58f3f408458/20170507T130017Z_delta.vhd",
      "path": "/",
      "partition": "00018563-05/peagusus-vg/root"
    }
    {
      "message": "Command failed: vgchange -ay peagusus-vg
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
      1 logical volume(s) in volume group \"peagusus-vg\" now active
    ",
      "stack": "Error: Command failed: vgchange -ay peagusus-vg
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
      1 logical volume(s) in volume group \"peagusus-vg\" now active
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 5,
      "killed": false,
      "stdout": "  1 logical volume(s) in volume group \"peagusus-vg\" now active
    ",
      "stderr": "File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
    ",
      "failed": true,
      "signal": null,
      "cmd": "vgchange -ay peagusus-vg",
      "timedOut": false
    }
    
    backup.scanFiles
    {
      "remote": "6c0929aa-7338-47b0-9212-f7c2440ca57a",
      "disk": "vm_delta_Backup_b4d023bd-64e4-f4a9-9331-62b5c12b5373/vdi_27be479e-ed4f-4c09-9580-b58f3f408458/20170507T130017Z_delta.vhd",
      "path": "/",
      "partition": "00018563-05/peagusus-vg/swap_1"
    }
    {
      "message": "Command failed: vgchange -ay peagusus-vg
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
      1 logical volume(s) in volume group \"peagusus-vg\" now active
    ",
      "stack": "Error: Command failed: vgchange -ay peagusus-vg
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
      1 logical volume(s) in volume group \"peagusus-vg\" now active
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 5,
      "killed": false,
      "stdout": "  1 logical volume(s) in volume group \"peagusus-vg\" now active
    ",
      "stderr": "File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 15 (/var/lib/xo-server/data/leveldb/LOCK) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000037.log) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
    File descriptor 18 (/var/lib/xo-server/data/leveldb/MANIFEST-000036) leaked on vgchange invocation. Parent PID 2504: /usr/local/bin/node
      Couldn't find device with uuid 09CPj5-t3qD-YvPG-HOlw-8UEJ-U75p-76aa8O.
      Refusing activation of partial LV peagusus-vg/root.  Use '--activationmode partial' to override.
    ",
      "failed": true,
      "signal": null,
      "cmd": "vgchange -ay peagusus-vg",
      "timedOut": false
    }
    
    

    Errors from Xen Orchestra premium trial:

    backup.scanFiles
    {
      "remote": "c2eb9cc3-d3ea-4c03-90c1-d60c357be68a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_0865d814-07b6-403d-9c74-c7a80aedee66/20170507T132717Z_delta.vhd",
      "path": "/",
      "partition": "000ddef0-05/fs01-vg/root"
    }
    {
      "message": "Command failed: pvs --noheading --nosuffix --nameprefixes --unbuffered --units b -o vg_name /dev/loop0
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 14 (/var/lib/xo-server/data/leveldb/LOCK) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000007.log) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 19 (/var/lib/xo-server/data/leveldb/MANIFEST-000006) leaked on pvs invocation. Parent PID 3344: node
      Failed to find device \"/dev/loop0\"
    ",
      "stack": "Error: Command failed: pvs --noheading --nosuffix --nameprefixes --unbuffered --units b -o vg_name /dev/loop0
    File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 14 (/var/lib/xo-server/data/leveldb/LOCK) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000007.log) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 19 (/var/lib/xo-server/data/leveldb/MANIFEST-000006) leaked on pvs invocation. Parent PID 3344: node
      Failed to find device \"/dev/loop0\"
    
        at Promise.all.then.arr (/usr/local/lib/node_modules/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 5,
      "killed": false,
      "stdout": "",
      "stderr": "File descriptor 12 (/var/lib/xo-server/data/leveldb/LOG) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 14 (/var/lib/xo-server/data/leveldb/LOCK) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 17 (/var/lib/xo-server/data/leveldb/000007.log) leaked on pvs invocation. Parent PID 3344: node
    File descriptor 19 (/var/lib/xo-server/data/leveldb/MANIFEST-000006) leaked on pvs invocation. Parent PID 3344: node
      Failed to find device \"/dev/loop0\"
    ",
      "failed": true,
      "signal": null,
      "cmd": "pvs --noheading --nosuffix --nameprefixes --unbuffered --units b -o vg_name /dev/loop0",
      "timedOut": false
    }
    
    backup.scanFiles
    {
      "remote": "c2eb9cc3-d3ea-4c03-90c1-d60c357be68a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_44d46e03-84f2-410e-8e3f-f635a1515858/20170507T132717Z_delta.vhd",
      "path": "/",
      "partition": "000ddef0-05/fs01-vg/root"
    }
    {
      "message": "Command failed: mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-3344Hj1ZbxtHkZpV
    mount: unknown filesystem type 'xfs'
    ",
      "stack": "Error: Command failed: mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-3344Hj1ZbxtHkZpV
    mount: unknown filesystem type 'xfs'
    
        at Promise.all.then.arr (/usr/local/lib/node_modules/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: unknown filesystem type 'xfs'
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/dev/fs01-vg/root --target=/tmp/tmp-3344Hj1ZbxtHkZpV",
      "timedOut": false
    }
    
    backup.scanFiles
    {
      "remote": "c2eb9cc3-d3ea-4c03-90c1-d60c357be68a",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_0865d814-07b6-403d-9c74-c7a80aedee66/20170507T132717Z_delta.vhd",
      "path": "/"
    }
    {
      "message": "Command failed: mount --options=loop,ro --source=/tmp/tmp-3344aaHabCIRzw2Z/vhdi2 --target=/tmp/tmp-3344FqQ1DPTrA6Jd
    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ",
      "stack": "Error: Command failed: mount --options=loop,ro --source=/tmp/tmp-3344aaHabCIRzw2Z/vhdi2 --target=/tmp/tmp-3344FqQ1DPTrA6Jd
    mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    
        at Promise.all.then.arr (/usr/local/lib/node_modules/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/usr/local/lib/node_modules/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: wrong fs type, bad option, bad superblock on /dev/loop0,
           missing codepage or helper program, or other error
    
           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/tmp/tmp-3344aaHabCIRzw2Z/vhdi2 --target=/tmp/tmp-3344FqQ1DPTrA6Jd",
      "timedOut": false
    }
    


  • You may want to review this thread where we previously discussed similar errors.

    @olivierlambert Your input please.



  • Hi,

    It seems mount cannot detect the file system in your LVM partition.

    edit: it can also be caused by a corrupted FS.



  • @olivierlambert Since there are multiple users experiencing the same issue, isn't it more likely that the issue originated in XO?



  • That's not what we hear from XOA customers, otherwise be sure we would already be working on it. We can't reproduce the issue either.

    edit: and we use the mount command, so as you can see it's not directly related to XO. This is a classical FS/corrupted-FS thing.

    edit 2: a good way to understand more about it is to try to mount it manually with mount and see if you can do it.
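
    For example, something along these lines from the XO server (a rough sketch only: paths are illustrative, vhdimount comes from the libvhdi-utils package you already have, and a delta VHD needs its parent chain reachable on the remote):

    mkdir -p /mnt/vhd /mnt/restore
    vhdimount /path/to/remote/<backup>/<vdi>/<timestamp>_delta.vhd /mnt/vhd
    fdisk -l /mnt/vhd/vhdi*      # inspect the exposed raw image(s); the vhdiN name depends on the chain
    # offset = partition start sector x sector size (2048 x 512 = 1048576, matching the offset in your error logs)
    mount -o loop,ro,offset=$((2048*512)) /mnt/vhd/vhdi2 /mnt/restore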



  • Hi,

    I have just deleted all of my backups and run a fresh backup from Ubuntu 17.10.

    Upon trying to mount the disk I get this error:

    backup.scanFiles
    {
      "remote": "b39d9031-bd3d-4ca5-b513-35a8d8b3dd9b",
      "disk": "vm_delta_Backup_d0f657d3-58fb-5892-e5c3-bdc2db0149e0/vdi_ebe50e94-785a-40c2-8f39-3f9302d4c0fb/20170508T110233Z_delta.vhd",
      "path": "/",
      "partition": "000bf0fb-01"
    }
    {
      "message": "Command failed: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-1546unTxKGmx7Q2T/vhdi2 --target=/tmp/tmp-1546cWzpV3eGVEIE
    mount: unknown filesystem type 'LVM2_member'
    ",
      "stack": "Error: Command failed: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-1546unTxKGmx7Q2T/vhdi2 --target=/tmp/tmp-1546cWzpV3eGVEIE
    mount: unknown filesystem type 'LVM2_member'
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: unknown filesystem type 'LVM2_member'
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-1546unTxKGmx7Q2T/vhdi2 --target=/tmp/tmp-1546cWzpV3eGVEIE",
      "timedOut": false
    }
    

    Could this be caused by the machine originally being a VMware virtual machine that was exported?

    I appear to be able to open machines that were created on XenServer without issue.

    If I wanted to mount the VHD export file from the Xen Orchestra VM, do I need to mount the remote first, or is it always mounted?



  • @dlucas46 said in Delta File Restore - LVM:

    LVM2_member

    It's as if your disk is a kind of LVM inside LVM. The root partition should contain a FS, not LVM. That's a bit strange.
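
    For reference, an LVM2_member partition is an LVM physical volume, not a filesystem, which is why mount refuses it: manually you would activate the volume group and then mount the logical volume. A rough sketch (names illustrative, assuming the raw partition is already exposed as a file, e.g. via vhdimount as above):

    LOOP=$(losetup --find --show --read-only --offset $((2048*512)) /mnt/vhd/vhdi2)
    pvs "$LOOP"                    # reports which VG this PV belongs to
    vgchange -ay <vg_name>         # activate it; the LVs appear under /dev/<vg_name>/
    mount -o ro /dev/<vg_name>/root /mnt/restore
    # cleanup afterwards: umount /mnt/restore; vgchange -an <vg_name>; losetup -d "$LOOP"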



  • I can run commands on the running host through SSH if you would like further information.



  • Inside the VM, as root:

    • fdisk -l
    • vgs
    • lvs


  • Output on one VM:

    Disk /dev/xvdb: 107.4 GB, 107374182400 bytes
    255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc4e74f77
    
        Device Boot      Start         End      Blocks   Id  System
    
    Disk /dev/xvda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000ddef0
    
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvda1   *        2048      499711      248832   83  Linux
    /dev/xvda2          501758    41940991    20719617    5  Extended
    /dev/xvda5          501760    41940991    20719616   8e  Linux LVM
    
    Disk /dev/xvdd: 107.4 GB, 107374182400 bytes
    255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000bf0fb
    
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvdd1            2048   209715199   104856576   83  Linux
    
    Disk /dev/mapper/samba_data-samba: 214.7 GB, 214739976192 bytes
    255 heads, 63 sectors/track, 26107 cylinders, total 419414016 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/samba_data-samba doesn't contain a valid partition table
    
    Disk /dev/mapper/fs01--vg-root: 20.1 GB, 20124270592 bytes
    255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/fs01--vg-root doesn't contain a valid partition table
    
    Disk /dev/mapper/fs01--vg-swap_1: 1069 MB, 1069547520 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/fs01--vg-swap_1 doesn't contain a valid partition table
    

    VGS:

      VG         #PV #LV #SN Attr   VSize   VFree
      fs01-vg      1   2   0 wz--n-  19.76g 20.00m
      samba_data   2   1   0 wz--n- 199.99g     0
    
    LVS:

      LV     VG         Attr      LSize    Pool Origin Data%  Move Log Copy%  Convert
      root   fs01-vg    -wi-ao---   18.74g                                          
      swap_1 fs01-vg    -wi-ao--- 1020.00m                                          
      samba  samba_data -wi-ao---  199.99g
    


  • I see: you have an LV that is in fact using 2 virtual disks at once (the samba_data volume group is using 2 physical volumes).

    We don't support this type of configuration: we mount one disk and then try to parse its content/partition type. I suppose we can't read the content if only one of the VHDs is mounted.
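
    For reference, that matches the earlier vgchange errors in your logs ("Couldn't find device with uuid ...", "Refusing activation of partial LV"): the VG declares more PVs than are present on the single mounted VHD. A rough way to check by hand (names taken from your output; the exact "missing PV" wording varies with the LVM version):

    vgs -o vg_name,pv_count,lv_count     # samba_data reports #PV = 2
    pvs -o pv_name,vg_name               # a missing PV shows up as "unknown device" / "[unknown]"
    # last resort, for inspection only: activate whatever is present anyway
    vgchange -ay --activationmode partial samba_data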



  • Yes,

    They are multi-disk VMs.

    Most are running with either two or even three disks.

    Is it possible to identify a multi-disk LVM and mount all of the required disks?

    Will this configuration prevent backup and restore in any way? (excluding file-level restore)



  • This won't affect restore at the VM level, because it's not something we are aware of (it lives inside the VM).

    In general, because Xen Orchestra is agentless (you haven't installed anything in your VMs), there is no way to know or guess what's happening inside your VM (the disk setup, for example).

    So "multi-disk VMs", i.e. a VM with multiple disks, are not a problem at all. But if you have a specific setup INSIDE the VM, there is little we can do.



  • Would it be possible to support this configuration with an agent?
    If you have an advanced disk configuration you would need to use an agent; otherwise it could stay agentless?

    Is this something you have ever considered?



  • Maybe there is a way to detect the LVM settings while restoring, but it would probably require a lot of hacks (if we could detect that a VG needs another PV, we would then have to find a VHD that could match; I don't know if it's possible).

    Can you explain why you are using this setup? I mean, in virtualization you can create virtual disks very easily, so why make a kind of "JBOD" of 2 or 3 disks instead of creating a single virtual disk?

    Anyway, in general we don't want to move to an agent-based solution, especially for a unique case like this. Obviously, if we get more and more customers complaining about it, we could divert current resources to check whether it's possible and then do it if we can. Congrats, you are the first user to report this 😉

    edit: to be clear, an agent means a program you must install in your VM that will rsync your files to a remote storage (NFS or otherwise). In XO we are at the hypervisor level only.



  • They originally started life on ESXi free 5.5, which has no easy backup method. I added the critical data to a separate virtual disk so that if the VM failed to boot, etc., it would be easier to recover the data from the separate drive.

    Is it possible to merge the data back onto one drive and then remove the extra disk?



  • You don't have a soft RAID: your LVM setup is like a JBOD, with 2 disks underneath the volume. So if you lose even one virtual disk, your whole LV will be unreadable.

    For now, your best option is to (rough command sketch below):

    1. create a new 200 GiB virtual disk in this VM
    2. format it with whatever you like (ext4 or XFS)
    3. copy all data from your "samba" LV to this new disk
    4. when it's done, shut down Samba
    5. unmount the "samba" LV
    6. mount the new disk at the previous mount point
    7. start Samba

    If it works, remove the 2 old disks and add the new disk to your fstab. Then you are good to go 🙂
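
    Something like this (device names, mount point and Samba service name are illustrative; adapt them to your VM before running anything):

    mkfs.ext4 /dev/xvde                       # steps 1-2: format the new 200 GiB virtual disk
    mkdir -p /mnt/newdisk
    mount /dev/xvde /mnt/newdisk
    rsync -aHAX /srv/samba/ /mnt/newdisk/     # step 3: copy everything off the "samba" LV
    service samba stop                        # step 4: stop Samba (service name may be smbd/nmbd)
    umount /srv/samba                         # step 5: unmount the old LV
    umount /mnt/newdisk
    mount /dev/xvde /srv/samba                # step 6: mount the new disk at the old mount point
    service samba start                       # step 7: start Samba again
    # If all is well: add the new disk to /etc/fstab, then detach the two old virtual disks.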



  • Thanks for the information.

    There goes the weekend.



  • haha sorry 😛

    In fact, having 2 physical disks in the same LVM volume group makes sense in a non-virtual scenario. LVM itself made more sense back then: it brought the flexibility/abstraction that was needed before the rise of machine virtualization.

    Now you can create virtual disks in 2 clicks, grow them and then adapt the FS. IMHO, LVM is less relevant today.



  • Hi,

    Just thought I would check the other VMs.

    This one has only a single virtual disk.

    If I try the restore on this one, I am offered Linux, Root and Swap_1.

    Selecting any of them results in an error:

    backup.scanFiles
    {
      "remote": "b39d9031-bd3d-4ca5-b513-35a8d8b3dd9b",
      "disk": "vm_delta_Backup_59b8a473-aa4c-25ef-39c7-e381331f0bed/vdi_e748211b-3959-4506-a124-594469dd34a4/20170508T110035Z_delta.vhd",
      "path": "/",
      "partition": "000f0648-05/dc01-vg/root"
    }
    {
      "message": "Command failed: mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4
    mount: cannot mount /dev/loop1 read-only
    ",
      "stack": "Error: Command failed: mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4
    mount: cannot mount /dev/loop1 read-only
    
        at Promise.all.then.arr (/opt/xo-server/node_modules/execa/index.js:210:11)
        at tryCatcher (/opt/xo-server/node_modules/bluebird/js/release/util.js:16:23)
        at Promise._settlePromiseFromHandler (/opt/xo-server/node_modules/bluebird/js/release/promise.js:512:31)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:569:18)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Promise._fulfill (/opt/xo-server/node_modules/bluebird/js/release/promise.js:638:18)
        at PromiseArray._resolve (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:126:19)
        at PromiseArray._promiseFulfilled (/opt/xo-server/node_modules/bluebird/js/release/promise_array.js:144:14)
        at Promise._settlePromise (/opt/xo-server/node_modules/bluebird/js/release/promise.js:574:26)
        at Promise._settlePromise0 (/opt/xo-server/node_modules/bluebird/js/release/promise.js:614:10)
        at Promise._settlePromises (/opt/xo-server/node_modules/bluebird/js/release/promise.js:693:18)
        at Async._drainQueue (/opt/xo-server/node_modules/bluebird/js/release/async.js:133:16)
        at Async._drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:143:10)
        at Immediate.Async.drainQueues (/opt/xo-server/node_modules/bluebird/js/release/async.js:17:14)",
      "code": 32,
      "killed": false,
      "stdout": "",
      "stderr": "mount: cannot mount /dev/loop1 read-only
    ",
      "failed": true,
      "signal": null,
      "cmd": "mount --options=loop,ro --source=/dev/dc01-vg/root --target=/tmp/tmp-1546Lxfq9I0wo0L4",
      "timedOut": false
    }
    
    fdisk -l:

    Disk /dev/xvda: 21.5 GB, 21474836480 bytes
    255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000f0648
    
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvda1   *        2048      499711      248832   83  Linux
    /dev/xvda2          501758    41940991    20719617    5  Extended
    /dev/xvda5          501760    41940991    20719616   8e  Linux LVM
    
    Disk /dev/mapper/dc01--vg-root: 20.1 GB, 20124270592 bytes
    255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/dc01--vg-root doesn't contain a valid partition table
    
    Disk /dev/mapper/dc01--vg-swap_1: 1069 MB, 1069547520 bytes
    255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/mapper/dc01--vg-swap_1 doesn't contain a valid partition table
    

    VGS:

     VG      #PV #LV #SN Attr   VSize  VFree
      dc01-vg   1   2   0 wz--n- 19.76g 20.00m
    

    LVS:

    LV     VG      Attr      LSize    Pool Origin Data%  Move Log Copy%  Convert
      root   dc01-vg -wi-ao---   18.74g
      swap_1 dc01-vg -wi-ao--- 1020.00m
    


  • As you can see, it's not the same error. This time, the mount command fails because your XFS has a dirty log.

    In order to mount it, the XFS "driver" needs to replay the log and therefore needs to write to it. However, we mount it read-only, because we don't want to modify the data inside your VHD.

    That's why it doesn't work in this case.

    The real struggle on our side is to detect which FS exists on a disk and to pass specific options to mount. In theory, mount can handle this itself, but there are some cases where it's not possible (like this one). Also, if we modify the VHD, we have to handle it by creating a new delta to avoid writing directly to the backup VHD. As you can see, it involves a LOT of stuff.

    BTW, we can avoid this issue with ext filesystems because we can discard this error by passing an extra flag to the mount command. It doesn't seem doable with XFS.

    edit: maybe the norecovery flag can do it. See https://serverfault.com/questions/839898/cannot-mount-block-device-dev-loop-read-only

    edit2: checking the code right now to see if a fix is doable quickly
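
    If you want to test it manually in the meantime (illustrative, on the LVs your backup exposes):

    # XFS: mount read-only without replaying the dirty log (norecovery requires ro)
    mount -o ro,norecovery /dev/dc01-vg/root /mnt/restore
    # ext3/ext4 equivalent: skip journal replay with "noload"
    mount -o ro,noload /dev/<vg>/root /mnt/restore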

