I've been tasked with moving about 6 TB of data from an aging SUSE 9.x box to a CIFS share on a NetApp. The data is currently shared out via Samba, but we want to retire the box because the Dell PowerEdge 220s trays it sits on are going out of support.
My initial thought was to mount the NetApp's CIFS share on the SUSE box and use rsync to do the data migration, figuring that moving the data across the network only once would give the best performance. I had a ton of trouble with this. I'm guessing the mount_smb in this old SUSE box just wasn't up to snuff: I got all kinds of permission problems and timeouts on the writes, and for the data that actually did get copied, throughput was only about 100-200 GB per day. I know the NetApp is capable of significantly more, so I looked for another way to move the data.
Since my normal tool of choice is Solaris, I looked for a Solaris solution; Solaris 10 Update 6 is the current version we use in my shop. I couldn't find an elegant answer on Solaris 10, but what I did come up with was mount_smbfs on OpenSolaris.
I downloaded a copy of OpenSolaris 2009.06 B108 and installed it in a VM. I mounted the old and new locations via mount_smbfs and am using rsync to do the copies. I don't have performance numbers yet, but after about four hours there have been no errors and the speed looks very good. Even though I added another network hop, I was able to remove the weakest link from the data path: mount_smb on SUSE 9.x.
If I can remember, I'll update with some performance numbers when I have them.