How fast should a backup run?
So it's very likely to be a XenServer bottleneck on export speed due to intensive I/O usage.
Cody0292 last edited by
@danp I believe the bottleneck was the storage backend. My source test pool storage is RAID10 (12-disk SATA), but only single-path 1Gb iSCSI, replicating as closely as I can the client's current environment. I have since discovered that Iometer's Default Access Spec is 100% read, 100% random, so certainly nothing close to a real-world workload. And I had 8 VMs running that.
And I believe I saw @olivierlambert state in another post that backup traffic has to go through the management NICs, which are 1Gb. If I had 10Gb management NICs in each pool, and 10Gb to the source storage, it certainly COULD go faster. But again, I think this comes back to disk I/O: a 12-disk RAID10 should be able to saturate 1Gb with ease if the data were sequential, but with what I had Iometer doing, the disks just couldn't keep up.
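A quick back-of-envelope calculation supports this. The per-disk figures below are illustrative assumptions for 7.2k SATA drives, not measurements from this setup, but they show why a 12-disk RAID10 easily saturates a 1Gb link with sequential reads yet falls far short under a 100% random 4K workload like Iometer's default spec:

```python
# Rough estimate: sequential vs. random throughput of a 12-disk RAID10
# against a 1 Gb/s network link. All per-disk numbers are assumptions.

DISKS = 12                   # RAID10: all spindles can serve reads
SEQ_MBPS_PER_DISK = 120      # assumed sequential MB/s per SATA disk
RANDOM_IOPS_PER_DISK = 80    # assumed random-read IOPS per 7.2k SATA disk
IO_SIZE_KB = 4               # small-block random I/O, as in Iometer's spec

seq_total = DISKS * SEQ_MBPS_PER_DISK                           # MB/s
rand_total = DISKS * RANDOM_IOPS_PER_DISK * IO_SIZE_KB / 1024   # MB/s

GBE_LIMIT = 125  # 1 Gb/s link is ~125 MB/s before protocol overhead

print(f"sequential: {seq_total} MB/s  (saturates 1GbE: {seq_total > GBE_LIMIT})")
print(f"random 4K : {rand_total:.2f} MB/s (saturates 1GbE: {rand_total > GBE_LIMIT})")
```

With these assumed numbers the sequential case is link-limited while the random case delivers only a few MB/s, which matches the "disks just couldn't keep up" observation.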
Cody0292 last edited by
Now that I go back and look at my test setup, I actually DO have the traffic going in and out on separate NICs. My LAN and MGMT are on different subnets, and XO has a NIC attached to both networks. XO connects to the source pool over the LAN NIC and the destination pool over the MGMT NIC. Backup traffic comes in to XO over the LAN at around 50-65 MB/s, and goes out on the MGMT NIC at the same speed.
Beginning to wonder about the Synology device. Even on a closed storage network, an export from the command line on the Xen host, directly to the Synology via NFS, is only around 25 MB/s write and maybe 10 read. The Synology seems to show really low utilization, though.
Is compression enabled on your test?
@olivierlambert I have not been trying to use compression, and at this point I haven't been using XO for it, just the command-line export/import commands. I do notice that CPU usage goes higher on the Xen host when exporting, and if other VMs ramp up, my write speed to the NFS volume slows a bit.
In the test environment (local in-house network), I attached a second 1Gb Ethernet cable, a switch, and a CentOS 7 machine serving NFS to a Xen host. I was able to max out the 1Gb connection easily (~100 MB/s), exporting directly to the NFS filesystem via the command line on the host.
In the production environment (at a co-location facility across town) I did the same thing; the only change is that the NFS volume is on a Synology 1815+ with 6 drives in RAID10. It was substantially slower: 25 MB/s writing the export, and around 10 MB/s reading it back to import on another host.
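For reference, the command-line export/import path being described looks roughly like this. The NFS server name, share path, and VM name are placeholders, and <vm-uuid> must be filled in from the vm-list output:

```shell
# Mount the NFS target on the host (server name and path are placeholders)
mkdir -p /mnt/backup
mount -t nfs synology:/volume1/backups /mnt/backup

# Find the VM's UUID, then export it to an XVA file on the NFS share
xe vm-list name-label="my-vm" params=uuid
xe vm-export vm=<vm-uuid> filename=/mnt/backup/my-vm.xva

# On the destination host (with the same share mounted), import it back
xe vm-import filename=/mnt/backup/my-vm.xva
```

With this setup the export stream never crosses the management network; the bottleneck is whatever is slowest among dom0 CPU, the local disks, and the NFS server.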
I was talking about XenServer/XCP-ng compression (the "compress" parameter, which may be on by default in the xe CLI). We don't do compression in XO.
Spent a Saturday and upgraded all our hosts to 7.1, from a mix of 6.2, 6.5, and a few already on 7.1.
I then happened to need to run an export on a newly upgraded host. What I think we are learning is that exports run at near full network speed once you get past 7.0.
LPJon last edited by
@olivierlambert Not trying to revive an old topic, but I have some questions about the XCP/XOA pairing. So essentially, the ideal setup is three networks: one for web access, one for XS->XOA communication, and one for NFS data transfers? Is that correct, 3 networks for optimal NFS throughput? I'm trying to figure out a problem I've had since updating XOA and then trying to use CR (Continuous Replication); I had never used CR until the latest update. iperf3 tests and NFS remote tests show 98-101 MiB/s, but when we move on to an NFS repository that drops to 38 MiB/s, and the 8 cores dedicated to the VM rocket to 100% and stay there the whole time a CR is running. Eventually the CR backup is "interrupted" and fails. What is going on under the hood to cause such a network speed difference? Is this a common thing? I have read and seen Backup-NG do this much better and faster, even on a 1Gb network; what changed? I'm not complaining, I have just been trying to troubleshoot this issue for a week now and need a fresh perspective to help refocus my efforts. Simple file copies over NFS from a standard NFS client work fine at 89 MiB/s. Maybe it's related to processor-intensive processing of the VDI? Anyway, I appreciate any help you or anyone else still listening to this topic might be able to offer.
XOA 5.52.1-server 5.52.0-web
The XO forum here is closed; it has migrated to https://xcp-ng.org/forum, in the Xen Orchestra section.
To give you quick answers:
- 2 networks in general is fine (web+management, and remote data share for the rest)
- CR doesn't use a remote; it streams from one host to another, passing through XO in the middle. There's backpressure, meaning the stream goes at the speed of the slowest element in the chain.
Don't dig too deep: the speed you're seeing is normal, due to how XS/XCP-ng exports VHDs.
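The "slowest element" behaviour can be sketched with a toy model (a hypothetical illustration, not XO's actual code): in a synchronous pull pipeline, each stage consumes from the previous one, so end-to-end throughput equals the minimum stage throughput. The stage names and rates below are made up for illustration:

```python
# Toy model of a pull-based streaming chain with backpressure:
# source host -> XO -> destination host. Each stage has a maximum
# throughput in MB/s; because stages pull synchronously, the whole
# chain runs at the speed of its slowest link.

def pipeline_throughput(stage_rates_mbps):
    """End-to-end rate of a synchronous pull pipeline (MB/s)."""
    return min(stage_rates_mbps)

stages = {
    "source disk read": 200,
    "source host 1GbE": 110,
    "XO VHD handling":  40,   # hypothetical CPU-bound stage
    "dest host 1GbE":   110,
    "dest disk write":  150,
}

rate = pipeline_throughput(stages.values())
print(f"effective throughput: {rate} MB/s")
```

In this model, upgrading the network changes nothing as long as the CPU-bound stage stays at 40 MB/s, which is consistent with iperf3 showing ~100 MiB/s while CR runs much slower with the XO VM's cores pegged.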