nfs backup



  • Connection failed
    Command failed: mount -t nfs 10.0.0.50:/home/backup01 /run/xo-server/mounts/690eacd3-8c0d-45d0-bceb-a315ec8320d6 -o vers=3
    mount.nfs: mount system call failed
    


  • Are you using XOA or Xen Orchestra from the sources?



  • Nope, I'm testing the paid version.
    I've set up the NFS share using this tutorial:
    link text



  • Check dmesg for more info on why the mount failed. Maybe you have a firewall somewhere; anyway, this is not XOA related.



  • No firewall, same switch, same network. What should I search for in dmesg?



  • Anything related to why the mount failed. This is really basic stuff. You can use sudo showmount -a <IP ADDRESS OF THE NFS SHARE> in your XOA to see if you can see them.
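    A hedged sketch of how to filter dmesg for relevant messages (standard util-linux/coreutils commands; the actual output depends entirely on your system, and dmesg may require root):

    ```shell
    # Show recent kernel messages mentioning NFS, RPC, or mount failures
    dmesg | grep -iE 'nfs|rpc|mount' | tail -n 20
    ```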



  • [05:49 11] xoa01:xoa$ showmount -a 10.0.0.50
    
    All mount points on 10.0.0.50:
    10.0.0.18:/home/backup01
    [05:52 11] xoa01:xoa$
    
    


  • Let's try to mount it manually:

    sudo mount -t nfs -o vers=3 10.0.0.50:/home/backup01 /mnt
    


  • There are some issues within the XOA VM, or between XCP and XOA. I think it's related to me being continuously disconnected from the XO servers in the XOA appliance.

    [06:34 11] xoa:xoa$ mount -t nfs -o vers=3 10.0.0.50:/home/backup01 /mnt
    
    
    mount.nfs: Connection timed out
    [06:38 11] xoa:xoa$
    [06:38 11] xoa:xoa$
    [06:38 11] xoa:xoa$ ^C
    [06:38 11] xoa:xoa$ ping 10.0.0.50
    PING 10.0.0.50 (10.0.0.50) 56(84) bytes of data.
    64 bytes from 10.0.0.50: icmp_seq=1 ttl=64 time=1.06 ms
    64 bytes from 10.0.0.50: icmp_seq=2 ttl=64 time=1.24 ms
    64 bytes from 10.0.0.50: icmp_seq=3 ttl=64 time=1.09 ms
    ^C
    --- 10.0.0.50 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2002ms
    rtt min/avg/max/mdev = 1.067/1.136/1.248/0.079 ms
    [06:38 11] xoa:xoa$
    
    

    .50 is the Debian host with the NFS share



  • Check your MTU settings.
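    One way to test this (a sketch, assuming a standard 1500-byte MTU: 1472 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header) is to ping the NFS server with fragmentation disallowed:

    ```shell
    # 1472 + 8 (ICMP) + 20 (IP) = 1500 bytes; -M do forbids fragmentation
    ping -M do -s 1472 -c 3 10.0.0.50
    ```

    If these pings fail while ordinary pings succeed, something on the path is dropping full-size frames.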





  • 10.0.0.50 (debian nfs) -> mtu 1500
    xoa vm -> 1500
    xcp host ->

    [root@XCP01 ~]# ifconfig | grep mtu
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    eth1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
    eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    eth3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    eth4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    eth5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
    vif13.0: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
    vif18.0: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
    xapi7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    xenbr1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    
    


  • So the problem is in the network between your XOA and the NFS share. Again, it's not a XOA issue.

    Don't forget to check your switch configuration too 🙂 Something is blocking the NFS mount, and as you said, the problem is probably bigger than that!
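    A quick, hedged way to check from the XOA VM whether the server's NFS-related RPC services are reachable at all (assuming `rpcinfo` and `nc` are installed; output varies by system):

    ```shell
    # List the RPC services (portmapper, mountd, nfs) advertised by the server
    rpcinfo -p 10.0.0.50
    # Probe the NFS TCP port (2049) directly
    nc -zv -w 3 10.0.0.50 2049
    ```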



  • @olivierlambert the switch has default values (no jumbo frames enabled)



  • @akurzawa said in nfs backup:

    mount.nfs: Connection timed out

    Until you find the reason for that, you'll have problems. Double check your network (physical side, cables etc.), configuration, firewalls, etc. The culprit is somewhere there 🙂



  • I think it's the XCP host itself. Remember when I couldn't upgrade XOA? The XOA host was not pingable from the XOA VM, but from the XCP host there was no problem.
    But I don't have the knowledge to debug the XCP host.



  • Olivier, every other network service works just fine. I only have problems with XOA on the XCP host...



  • I see. You can try to halt the XOA VM, remove the virtual NIC, recreate it (double-check it's on the right pool network), boot again and check. If you continue to have issues, feel free to open a post on the XCP-ng forum describing your network issues between this VM and an NFS server (it's a good start).
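    The steps above can be sketched with the `xe` CLI on the XCP-ng host (a sketch only: the UUIDs are placeholders, the VM name label is assumed to be `XOA`, and the MAC is left out so a random one is generated; double-check the network UUID matches the right pool network):

    ```shell
    # Find the XOA VM, its current VIF, and the available pool networks
    xe vm-list name-label=XOA params=uuid
    xe vif-list vm-name-label=XOA
    xe network-list params=uuid,name-label
    # With the VM halted: destroy the old VIF and recreate it on the right network
    xe vif-destroy uuid=<VIF-UUID>
    xe vif-create vm-uuid=<VM-UUID> network-uuid=<NETWORK-UUID> device=0
    ```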



  • Did this issue get resolved? I have the same problem. Thanks



  • @marcusrushy please open a support ticket; we can access your XOA and push a workaround on your current install until the next release 🙂

