It would seem there is something odd about the connection to the other DC.
I can see the server as "green", but I can't get a console or any stats, so something isn't happy.
Don't like to be a "me too", but I've just started mucking around with continuous replication and have found that I have 40 stuck tasks that all say:
[XO] Xapi#getResource /rrd_updates (on xcp-ng-sydney1)
I think this is to do with replicating from one DC to the other, although the status for the backups is OK. I've turned them off for the moment.
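In case it helps anyone in the same spot: stuck XAPI tasks can be listed and cancelled from the pool master with the xe CLI. A rough sketch (the cancel-everything loop is deliberately blunt; tighten the filter before using it for real):

```shell
# Sketch: list pending XAPI tasks and cancel the stuck ones.
# Run on the pool master; degrades gracefully where xe isn't available.
if command -v xe >/dev/null 2>&1; then
    # Show every pending task with its progress
    xe task-list status=pending params=uuid,name-label,progress
    # Cancel all pending tasks -- tighten this filter before running for real!
    for uuid in $(xe task-list status=pending --minimal | tr ',' ' '); do
        xe task-cancel uuid="$uuid"
    done
    xe_status="ran"
else
    xe_status="skipped (xe not found; run on an XCP-ng host)"
fi
echo "$xe_status"
```

Restarting the xo-server service afterwards also clears tasks that only exist on the XO side.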
This is the latest XO from sources on a patched XCP-ng.
XO is from sources with the latest code pulled in last night and recompiled; the version shows as 5.38.1.
XCP-ng is a fully patched installation of the latest release (as of a month ago).
Storage is NFS remote storage. Snapshots are working happily, backups to both this storage and a remote NFS system in another DC are working happily.
I have managed to spin up a snapshot in the other DC using a backup from this DC happily.
Everything else seems to be working happily, I just can't clone here.
After 5 hours the tasks view is just the way it was when I took the screenshot.
Similar to this question, I have just tried to clone a machine in my pool and the tasks just hang at 0%.
I tried creating a new VM from the snapshot and the same thing happened.
Task screen attached.
Any ideas on how to fix this? It's a bit of a show-stopper at the moment.
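One way to narrow down whether the hang is in XO or in XAPI itself is to try the same clone straight from the host CLI and watch the task there. A sketch, with a placeholder VM name:

```shell
# Sketch: clone a VM via xe, bypassing XO, then check the task queue.
# "my-test-vm" is a placeholder name -- substitute a real VM.
if command -v xe >/dev/null 2>&1; then
    src_uuid=$(xe vm-list name-label="my-test-vm" --minimal)
    clone_uuid=$(xe vm-clone uuid="$src_uuid" new-name-label="my-test-vm-clone")
    echo "clone created: $clone_uuid"
    # If this clone also sits at 0%, the problem is XAPI/SR-side, not XO:
    xe task-list status=pending params=name-label,progress
    clone_result="attempted"
else
    clone_result="skipped (xe not found; run on an XCP-ng host)"
fi
```

If the xe clone completes fine, the hang is likely on the XO side; if it also stalls, look at the SR (e.g. a stuck coalesce on the NFS storage).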
In our continued effort to get an XCP-NG and XOA setup fully tested before we jump into production, I am trying to set up a remote NFS storage location (in another DC) for backups.
I have tried adding the remote using XOA and am getting:
Command failed: mount -t nfs 192.168.xx.yyy:/thetank/nfs/backup /run/xo-server/mounts/21fd20c2-401c-4ed3-aee1-3cdb338ec9c4 -o vers=3
mount.nfs: mount to NFS server '192.168.xx.yyy:/thetank/nfs/backup' failed: RPC Error: Unable to receive
I can, however, mount the location manually on one of the hosts in the pool with:
mount -t nfs 192.168.xx.yyy:/thetank/nfs/backup /root/test -o vers=3
and can create files on there quite happily.
Any ideas what I'm missing here?
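One thing worth checking: XO performs that mount from the XO/XOA VM itself, not from the hosts, so a path that works from a host may still be blocked from the VM. That "RPC Error: Unable to receive" usually points at the portmapper (port 111) or mountd being unreachable. A quick probe sketch, run from wherever the mount actually happens (the server address is left as the placeholder from the error above):

```shell
# Sketch: check the NFS server's RPC services from the mounting machine
# (for an XO remote, that machine is the XO/XOA VM, not the host).
NFS_SERVER="${NFS_SERVER:-192.168.xx.yyy}"   # placeholder -- set your real server
if command -v rpcinfo >/dev/null 2>&1; then
    # Lists registered RPC programs (portmapper, mountd, nfs); failure here
    # usually means port 111 is blocked or not routed between the DCs.
    rpcinfo -p "$NFS_SERVER" || echo "portmapper unreachable from here"
    # Lists the exports the server is willing to share with this client:
    { command -v showmount >/dev/null 2>&1 && showmount -e "$NFS_SERVER"; } || true
    probe="ran"
else
    probe="skipped (rpcinfo not installed)"
fi
echo "$probe"
```

If rpcinfo works from a host but not from the XOA VM, the fix is on the network/firewall between the XOA VM and the NFS server rather than in XO itself.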
We will certainly use XOA when we get to the production rollout. We are close now; the only thing still holding us up is the networking, to be honest.
We just discovered that, because the test boxes we are using have different numbers of NICs, the pool-wide network configuration is really stuffed up. I think I need to get some more NICs into one of the hosts so they line up (or find a way to tell the pool which NICs on which hosts are for which job, if that makes sense), and then work out a way of easily making pool-wide private networks for groups of VMs to use.
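For the private-networks part, a couple of xe commands may help map out what's there; a sketch (the network name is made up). Note that a network created with no PIF attached is host-internal only; a truly pool-wide *private* network spanning hosts needs XO's SDN-controller plugin:

```shell
# Sketch: see how NIC device names line up across hosts, and create a
# simple private network for a group of VMs.
if command -v xe >/dev/null 2>&1; then
    # Which physical NIC (eth0, eth1, ...) backs which network on each host:
    xe pif-list params=host-name-label,device,network-name-label
    # A private network with no PIF (host-internal; "test-private-net" is
    # a made-up label):
    net_uuid=$(xe network-create name-label="test-private-net")
    echo "created network $net_uuid"
    net_state="created"
else
    net_state="skipped (xe not found; run on an XCP-ng host)"
fi
```

The pif-list output also shows quickly where the device numbering diverges between the mismatched boxes.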
The lack of "dynamic switches" seems to be the only thing that VMware has over XOA/XCP-NG at the moment, from our point of view.
I'm in the process of setting up a new virtualization environment for my company. We have set up two racks in two different DCs and are looking to have XCP-NG and XO handle everything for us.
Now, at the moment I have XO running as a VM in DC1 and another XO running as a VM in DC2. Each DC has a shared storage box and some hosts with VMs installed on them.
The DCs have a link between them that allows the storage boxes to see each other, but not the rest of the network (management etc.).
The idea is that we use XO to back up DC1 -> DC2's storage and vice versa, so that in the event of a plane crash we can spin up the backups in the opposite DC (if that makes sense).
I've been looking at the backup-ng capabilities and that all seems possible, but what I'm not sure about is where XO should sit in all this. Should there be one XO that can see the pools on both sides (that could be done, I guess, with some network changes), or can the XO in DC1 be made to see the backups from DC2 and spin them up if need be?
Or... am I barking up the wrong tree and should I be doing something else entirely?
Any thoughts welcome...