Error Continuous Replication 5.18.0



  • Yes, of course. On the development pool:
    xe pool-ha-disable

    Then I tried the Continuous Replication backup again (backup-legacy).

    I received the same "interrupted" error.

    The log (a quick way to verify the HA state is sketched after it):

    Apr  6 15:56:17 XCP1 xapi: [debug|XCP1|4367 INET :::80|handler:http/rrd_updates D:0b2544c2a79f|xmlrpc_client] stunnel pid: 16092 (cached = true) connected to 192.168.222.230:443
    Apr  6 15:56:17 XCP1 xapi: [debug|XCP1|4367 INET :::80|handler:http/rrd_updates D:0b2544c2a79f|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=16092
    Apr  6 15:56:17 XCP1 xapi: [debug|XCP1|4367 INET :::80|handler:http/rrd_updates D:0b2544c2a79f|xmlrpc_client] stunnel pid: 16092 (cached = true) returned stunnel to cache
    Apr  6 15:56:17 XCP1 xapi: [debug|XCP1|4367 INET :::80|Get RRD updates. D:df56aef76915|xapi] hand_over_connection GET /rrd_updates to /var/lib/xcp/xcp-rrdd.forwarded
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] session.login_with_password D:bd4561f87253 failed with exception Server_error(HOST_IS_SLAVE, [ 192.168.222.230 ])
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] Raised Server_error(HOST_IS_SLAVE, [ 192.168.222.230 ])
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 1/8 xapi @ XCP1 Raised at file ocaml/xapi/xapi_session.ml, line 383
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 2/8 xapi @ XCP1 Called from file ocaml/xapi/xapi_session.ml, line 39
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 3/8 xapi @ XCP1 Called from file ocaml/xapi/xapi_session.ml, line 39
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 4/8 xapi @ XCP1 Called from file ocaml/xapi/server_helpers.ml, line 69
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 5/8 xapi @ XCP1 Called from file ocaml/xapi/server_helpers.ml, line 91
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 6/8 xapi @ XCP1 Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 22
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 7/8 xapi @ XCP1 Called from file lib/xapi-stdext-pervasives/pervasiveext.ml, line 26
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace] 8/8 xapi @ XCP1 Called from file lib/backtrace.ml, line 177
    Apr  6 15:56:32 XCP1 xapi: [error|XCP1|4368 INET :::80|dispatch:session.login_with_password D:4f8aeaec976e|backtrace]
    Apr  6 15:56:35 XCP1 xapi: [debug|XCP1|63 heartbeat|Heartbeat D:b70ceab1b744|mscgen] xapi=>xapi [label="host.tickle_heartbeat"];
    Apr  6 15:56:35 XCP1 xapi: [debug|XCP1|63 heartbeat|Heartbeat D:b70ceab1b744|stunnel] stunnel start
    Apr  6 15:56:35 XCP1 xapi: [debug|XCP1|63 heartbeat|Heartbeat D:b70ceab1b744|xmlrpc_client] stunnel pid: 29155 (cached = false) connected to 192.168.222.230:443
    Apr  6 15:56:35 XCP1 xapi: [debug|XCP1|63 heartbeat|Heartbeat D:b70ceab1b744|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=29155
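
    For reference, a minimal sketch for confirming HA really is off before retrying (standard xe CLI run in dom0 on the pool master; the UUID lookup is just a convenience):

    ```
    # Disable HA pool-wide, then read back the pool's ha-enabled flag.
    xe pool-ha-disable
    POOL=$(xe pool-list --minimal)                      # there is a single pool object
    xe pool-param-get uuid=$POOL param-name=ha-enabled  # should print "false"
    ```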
    


  • Please use three backticks around your text, otherwise it's hard to read.



  • Sorry, I didn't know that. Corrected.



  • Can you double-check that you are connecting to this pool with only one server in XOA?



  • @olivierlambert hi. What do you mean? Only 1 XO? At the moment I have 2 XO instances (one created today; I began having problems 3 days ago when I first installed) in each pool. I also connect with XenCenter.

    Or do you mean in Settings -> Servers?
    In that view I have 2 servers... and I have 3 hosts in the pool, so I'm going to add this one and check.
    Or should only 1 server be connected?



  • No, I mean in the "Settings/Servers" view. You should have only 1 server added per pool.



  • @olivierlambert thanks for your patience. This is what I have there now: the three servers of the pool. Is only one needed here? Which one? (screenshot: 0_1523031460864_Selección_012.jpg)



  • You need to have ONLY the pool master added. Remove the others and restart xo-server.
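
    A quick way to confirm which host is the master (standard xe CLI, from any pool member's dom0):

    ```
    # The pool object records the master host's UUID.
    MASTER=$(xe pool-list params=master --minimal)
    # Name and IP of the master: this is the one address
    # to add in XO's Settings/Servers view.
    xe host-list uuid=$MASTER params=name-label,address
    ```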



  • OK, I removed all of them; only the master remained. I restarted EVERYTHING.
    I tested again, and the continuous replication fails.
    On one specific VM I can see that the export reaches 100% but the import stops at 42%, then it fails with an error. This time it's VDI_IO_ERROR(Device I/O errors). (A log check on the destination host is sketched below.)
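
    One way to dig further on the destination host, assuming the standard XenServer/XCP log locations, is a check like:

    ```
    # In the destination host's dom0, right after the failed import:
    # the storage manager log usually records the underlying I/O failure.
    grep -iE 'error|exception' /var/log/SMlog | tail -n 50
    # xapi's own view of the failed import task:
    tail -n 100 /var/log/xensource.log
    ```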



  • Well, maybe it's a problem on my side; I will keep trying.
    If I find it, I will tell you.
    Thanks for your time!



  • @bsastre Have you tried this where both hosts are running the same version of XS?



  • Hi @Danp, yes, I've tried a lot of combinations.

    Never mind; maybe in the summer I will reinstall everything from scratch and try again.

    Thanks again for your time and effort.



  • Well, I tried again.

    But this time with XS 7.1 instead of XCP 7.4.

    There are 3 servers:
    192.168.222.210
    192.168.222.220
    192.168.222.230

    192.168.222.210 is the master.
    192.168.222.220 is the host that holds the VM being continuously replicated.

    In XO there is only one server connected in Settings: 192.168.222.210 (the master).

     [{"message":"VDI_IO_ERROR(Device I/O errors)","stack":"XapiError: VDI_IO_ERROR(Device I/O errors)\n at wrapError (/opt/xen-orchestra/packages/xen-api/src/index.js:111:9)\n at getTaskResult (/opt/xen-orchestra/packages/xen-api/src/index.js:191:26)\n at Xapi._addObject (/opt/xen-orchestra/packages/xen-api/src/index.js:797:23)\n at /opt/xen-orchestra/packages/xen-api/src/index.js:835:13\n at arrayEach (/opt/xen-orchestra/node_modules/lodash/_arrayEach.js:15:9)\n at forEach (/opt/xen-orchestra/node_modules/lodash/forEach.js:38:10)\n at Xapi._processEvents (/opt/xen-orchestra/packages/xen-api/src/index.js:830:12)\n at onSuccess (/opt/xen-orchestra/packages/xen-api/src/index.js:853:11)\n at run (/opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:66:22)\n at /opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:83:30\n at flush (/opt/xen-orchestra/node_modules/core-js/modules/_microtask.js:18:9)\n at process._tickCallback (internal/process/next_tick.js:112:11)","code":"VDI_IO_ERROR","params":["Device I/O errors"],"url":"https://192.168.222.230/import_raw_vdi/?format=vhd&vdi=OpaqueRef%3Abedee949-add4-2d80-57b3-b525b17ae751&session_id=OpaqueRef%3Aa855be20-04e6-8b24-04d9-8133b0ec4bde&task_id=OpaqueRef%3A7c1febb8-07f8-e9e6-6b1e-56bdd174a324"}]
    


  • As I had seen that the problem seemed related to 192.168.222.230 (a pool host),
    I tried 2 things:
    1. Put all the VMs on it: error.
    2. Stopped that host: error.

     [{"message":"connect EHOSTUNREACH 192.168.222.230:443","stack":"Error: connect EHOSTUNREACH 192.168.222.230:443\n at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1173:14)","errno":"EHOSTUNREACH","code":"EHOSTUNREACH","syscall":"connect","address":"192.168.222.230","port":443,"url":"https://192.168.222.230/export_raw_vdi/?format=vhd&vdi=OpaqueRef%3A31a4a757-7c2a-a54e-78c1-42c154b3aeaf&session_id=OpaqueRef%3A3d4e433b-562c-3a09-55e0-58c20fd78611&task_id=OpaqueRef%3Ad530059f-4a3f-7326-6aa4-33fbf4203e93"}]```


  • With 230 still stopped,
    I tried the same CR but with the VM also stopped.
    Another error:
    Job canceled to protect the VDI chain



  • I removed 230 from the pool.

    Now the error is on 220:

        [{"message":"VDI_IO_ERROR(Device I/O errors)","stack":"XapiError: VDI_IO_ERROR(Device I/O errors)\n at wrapError (/opt/xen-orchestra/packages/xen-api/src/index.js:111:9)\n at getTaskResult (/opt/xen-orchestra/packages/xen-api/src/index.js:191:26)\n at Xapi._addObject (/opt/xen-orchestra/packages/xen-api/src/index.js:797:23)\n at /opt/xen-orchestra/packages/xen-api/src/index.js:835:13\n at arrayEach (/opt/xen-orchestra/node_modules/lodash/_arrayEach.js:15:9)\n at forEach (/opt/xen-orchestra/node_modules/lodash/forEach.js:38:10)\n at Xapi._processEvents (/opt/xen-orchestra/packages/xen-api/src/index.js:830:12)\n at onSuccess (/opt/xen-orchestra/packages/xen-api/src/index.js:853:11)\n at run (/opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:66:22)\n at /opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:83:30\n at flush (/opt/xen-orchestra/node_modules/core-js/modules/_microtask.js:18:9)\n at process._tickCallback (internal/process/next_tick.js:112:11)","code":"VDI_IO_ERROR","params":["Device I/O errors"],"url":"https://192.168.222.220/import_raw_vdi/?format=vhd&vdi=OpaqueRef%3A8c783ae0-1aec-93c6-3f7a-dfcff996eddc&session_id=OpaqueRef%3A3d4e433b-562c-3a09-55e0-58c20fd78611&task_id=OpaqueRef%3Abcec4201-e08d-be22-09f3-6f8b5c7b4a9d"}]
    
    


  • Only one server left in the pool.
    Same error, now pointing to 210:

     [{"message":"VDI_IO_ERROR(Device I/O errors)","stack":"XapiError: VDI_IO_ERROR(Device I/O errors)\n at wrapError (/opt/xen-orchestra/packages/xen-api/src/index.js:111:9)\n at getTaskResult (/opt/xen-orchestra/packages/xen-api/src/index.js:191:26)\n at Xapi._addObject (/opt/xen-orchestra/packages/xen-api/src/index.js:797:23)\n at /opt/xen-orchestra/packages/xen-api/src/index.js:835:13\n at arrayEach (/opt/xen-orchestra/node_modules/lodash/_arrayEach.js:15:9)\n at forEach (/opt/xen-orchestra/node_modules/lodash/forEach.js:38:10)\n at Xapi._processEvents (/opt/xen-orchestra/packages/xen-api/src/index.js:830:12)\n at onSuccess (/opt/xen-orchestra/packages/xen-api/src/index.js:853:11)\n at run (/opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:66:22)\n at /opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:83:30\n at flush (/opt/xen-orchestra/node_modules/core-js/modules/_microtask.js:18:9)\n at process._tickCallback (internal/process/next_tick.js:112:11)","code":"VDI_IO_ERROR","params":["Device I/O errors"],"url":"https://192.168.222.210/import_raw_vdi/?format=vhd&vdi=OpaqueRef%3Ab0bd681d-783e-4ae3-20fb-2be3a37d442e&session_id=OpaqueRef%3A3d4e433b-562c-3a09-55e0-58c20fd78611&task_id=OpaqueRef%3A4b975017-2b56-e68b-2099-ccb59d24dbd3"}]```


  • Tried with the legacy backup, only 1 host.

    host.listMissingPatches
    {
      "host": "d64f7775-2682-4c46-bb42-ab565fc66f37",
      "id": "d64f7775-2682-4c46-bb42-ab565fc66f37"
    }
    {
      "message": "no such object",
      "stack": "XoError: no such object
        at factory (/opt/xen-orchestra/packages/xo-common/src/api-errors.js:21:31)
        at Xo.getObject (/opt/xen-orchestra/packages/xo-server/src/xo.js:65:25)
        at /opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:121:24
        at /opt/xen-orchestra/node_modules/lodash/_createBaseFor.js:17:11
        at baseForOwn (/opt/xen-orchestra/node_modules/lodash/_baseForOwn.js:13:20)
        at /opt/xen-orchestra/node_modules/lodash/_createBaseEach.js:17:14
        at forEach (/opt/xen-orchestra/node_modules/lodash/forEach.js:38:10)
        at Xo.resolveParams (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:115:10)
        at /opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:277:49
        at Generator.next (<anonymous>)
        at step (/opt/xen-orchestra/packages/xo-server/dist/xo-mixins/api.js:40:221)
        at _next (/opt/xen-orchestra/packages/xo-server/dist/xo-mixins/api.js:40:409)
        at run (/opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:66:22)
        at /opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:83:30
        at flush (/opt/xen-orchestra/node_modules/core-js/modules/_microtask.js:18:9)
        at process._tickCallback (internal/process/next_tick.js:112:11)",
      "code": 1,
      "data": {
        "id": "d64f7775-2682-4c46-bb42-ab565fc66f37",
        "type": "host"
      }
    }
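
    For what it's worth, "no such object" here seems to mean that xo-server no longer has a host with that id in its object cache. A way to reproduce the same call outside the UI, assuming xo-cli is installed and registered against this XO (the URL and credentials below are placeholders):

    ```
    # Placeholders: point at your own XO instance.
    xo-cli --register https://xo.example.org admin@example.org
    # The same API call the UI issues; it fails with "no such object"
    # when the host id is stale (e.g. the host was removed from the pool).
    xo-cli host.listMissingPatches host=d64f7775-2682-4c46-bb42-ab565fc66f37
    ```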


  • Tried copying to another SR; 1 host, same error.

    host.listMissingPatches
    {
      "host": "d64f7775-2682-4c46-bb42-ab565fc66f37",
      "id": "d64f7775-2682-4c46-bb42-ab565fc66f37"
    }
    {
      "message": "no such object",
      "stack": "XoError: no such object
        at factory (/opt/xen-orchestra/packages/xo-common/src/api-errors.js:21:31)
        at Xo.getObject (/opt/xen-orchestra/packages/xo-server/src/xo.js:65:25)
        at /opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:121:24
        at /opt/xen-orchestra/node_modules/lodash/_createBaseFor.js:17:11
        at baseForOwn (/opt/xen-orchestra/node_modules/lodash/_baseForOwn.js:13:20)
        at /opt/xen-orchestra/node_modules/lodash/_createBaseEach.js:17:14
        at forEach (/opt/xen-orchestra/node_modules/lodash/forEach.js:38:10)
        at Xo.resolveParams (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:115:10)
        at /opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:277:49
        at Generator.next (<anonymous>)
        at step (/opt/xen-orchestra/packages/xo-server/dist/xo-mixins/api.js:40:221)
        at _next (/opt/xen-orchestra/packages/xo-server/dist/xo-mixins/api.js:40:409)
        at run (/opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:66:22)
        at /opt/xen-orchestra/node_modules/core-js/modules/es6.promise.js:83:30
        at flush (/opt/xen-orchestra/node_modules/core-js/modules/_microtask.js:18:9)
        at process._tickCallback (internal/process/next_tick.js:112:11)",
      "code": 1,
      "data": {
        "id": "d64f7775-2682-4c46-bb42-ab565fc66f37",
        "type": "host"
      }
    }


  • @txsastre said in Error Continuous Replication 5.18.0:

    Job canceled to protect the VDI chain

    "Job canceled to protect the VDI chain" is NOT an error. Please read the documentation related to this 😉
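
    For context, XO skips the job while the VM's VHD chain still needs coalescing, so the chain does not grow unbounded. A rough way to watch coalesce progress on a host, assuming the default log path, would be:

    ```
    # In the host's dom0: the garbage collector logs its coalesce work here.
    grep -i coalesce /var/log/SMlog | tail -n 20
    # Any SR scan / GC tasks still running:
    xe task-list
    ```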

