1. In the Backup > Settings tab set the client to backup to a local vault only (can be a USB drive or a UNC path if needed):
2. Start a backup from the Backup > Schedule tab.
3. Wait for the backup to finish.
4. On the client, at the same time you take the local vault, re-enable remote backups, and then either keep making local backups or remove the local backup:
4a. If you are keeping the local backup, make a copy of 'vault0000.db', which is found in the client install directory, and rename the copy 'vault0105.db'.
4b. If you are removing the local backup, rename 'vault0000.db' to 'vault0105.db', which is again found in the client install directory.
Note: If your backup client is connecting to Amazon S3 or Google Cloud Storage, then you should use 'vault0101.db' instead of 'vault0105.db'.
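The copy/rename in steps 4a and 4b can be sketched as shell commands. The directory below is a temporary stand-in, not your real client install directory, and the .db file is a dummy; on Windows you would perform the equivalent with copy/ren or in Explorer:

```shell
# Stand-in for the client install directory (illustrative only).
INSTALL_DIR=$(mktemp -d)
printf 'seed' > "$INSTALL_DIR/vault0000.db"   # dummy file for the sketch

# Step 4a: keeping the local backup -- copy the file, so the copy
# carries the new name and the original stays in place.
# (Use vault0101.db instead for Amazon S3 / Google Cloud Storage.)
cp "$INSTALL_DIR/vault0000.db" "$INSTALL_DIR/vault0105.db"

# Step 4b alternative: removing the local backup -- rename instead:
# mv "$INSTALL_DIR/vault0000.db" "$INSTALL_DIR/vault0105.db"

ls "$INSTALL_DIR"
```

After step 4a both files exist; after step 4b only the renamed file remains.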
5. Copy the folder that was chosen in step 1 to the backup server. It will have a number of subfolders (including vault, metadata, settings, logs). Copy the entire folder structure, including all subfolders, to the server.
For example, if the local backup is at C:\Program Files\DivinsaCloud\vault\, it will have subfolders "\DivinsaCloud\acct_name\" (full path of C:\Program Files\DivinsaCloud\vault\DivinsaCloud\acct_name\).
The entire contents of these folders should be copied to the server-side, which for this example is saved to E:\. On E:\ you will see a matching "\DivinsaCloud\acct_name\" folder, which should be the target for the copy.
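The recursive copy in step 5 can be sketched as follows. The paths are temporary stand-ins for the local vault and the server's E:\ drive; on Windows you would typically use robocopy /E or Explorer for the same recursive copy:

```shell
# Illustrative stand-ins; substitute your real local vault and server drive.
LOCAL_VAULT=$(mktemp -d)   # stands in for C:\Program Files\DivinsaCloud\vault
SERVER_ROOT=$(mktemp -d)   # stands in for E:\

# Build a stand-in seed with the subfolders described above.
for sub in vault metadata settings logs; do
    mkdir -p "$LOCAL_VAULT/DivinsaCloud/acct_name/$sub"
done

# Copy the entire folder structure so the server mirrors the local seed.
cp -R "$LOCAL_VAULT/DivinsaCloud" "$SERVER_ROOT/"

ls "$SERVER_ROOT/DivinsaCloud/acct_name"
```

The key point is that the whole "\DivinsaCloud\acct_name\" tree is copied, not just the files inside it, so the server-side layout matches the client's.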
Caution: When you re-enable remote backups on clients, do not enable rigorous sync on the client until their local seed has been successfully copied to their remote vault. Enabling rigorous sync before all the data is in place will cause the client to try to regenerate and resend all the "missing" blocks.
You are done! The next time the computer backs up, only the changes will be uploaded to the server!
Please see the following article for factors that affect performance and suggestions for optimizing: https://support.wholesalebackup.com/hc/en-us/articles/202092304
Importing into Amazon S3
The primary difference between seeding to your own hosted Windows server and to Amazon S3 is how you get the seeded data into the remote vaults. With your own hosted server, you just have to copy the seed to the appropriate folder on your server, as explained in step #5 above. With Amazon S3, you will need to get the data transferred to the correct bucket.
Amazon's AWS has recently begun offering their own hardware to assist with importing large amounts of data into S3; this device is named Snowball. If you have multiple large initial backups to send to your S3 storage, this is an option you can consider.
First, you will need to request a Snowball device from AWS; you can read about getting started with AWS Snowball here: https://aws.amazon.com/snowball/getting-started/
When the device arrives, you will set it up on your LAN, transfer data to it, and then ship it back to Amazon for upload. The process should take about a week to complete, after which the imported user data will need to be moved to the correct bucket and folder.
Alternately, if you have a fast internet connection, you can use an S3 browsing application to transfer the data from one of your own computers up to S3.
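As one possible approach, the transfer can also be scripted with the AWS CLI's `aws s3 sync` command, assuming the CLI is installed and configured with credentials that can write to your bucket. The bucket name and paths below are placeholders, not values from this product:

```shell
# Placeholder bucket and prefix -- substitute the bucket and folder
# that your backup server expects for this account.
# Assumes "aws configure" has already been run on this machine.
aws s3 sync "C:\Program Files\DivinsaCloud\vault\DivinsaCloud\acct_name" \
    s3://your-backup-bucket/DivinsaCloud/acct_name
```

`aws s3 sync` copies recursively and skips files that already exist unchanged at the destination, so it can be re-run safely if the upload is interrupted.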