1. In the Backup > Settings tab, set the client to back up to a local vault only (this can be a USB drive or a UNC path if needed).
2. Start a backup from the Backup > Schedule tab.
3. Wait for the backup to finish.
4. Once the backup completes, take the local vault off-site. On the backup client, make sure to re-enable remote backups, and either continue making local backups or remove the local backup from the Backup > Settings tab.
4a. If you are keeping the local on-site backup vault, make a copy of 'vault0000.db' (found in the client's install directory) and rename the copy to 'vault0105.db'.
4b. If you are removing the local on-site backup vault and only plan to perform offsite backups, rename 'vault0000.db' (again, in the client's install directory) to 'vault0105.db'.
Note: If your backup client is connecting to Amazon S3, Google Cloud Storage or Wasabi, then you should use 'vault0101.db' instead of 'vault0105.db'.
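As a sketch, steps 4a and 4b amount to a copy-or-rename of the vault database. The commands below demonstrate the pattern against a scratch directory; on a real client the file lives in the install directory (for example, C:\Program Files\DivinsaCloud\), and the /tmp paths here are illustrative only.

```shell
# Scratch directory standing in for the client's install directory.
mkdir -p /tmp/divinsa_demo
: > /tmp/divinsa_demo/vault0000.db   # stand-in for the client's vault DB

# 4a: keeping the local vault -- copy the DB, naming the copy vault0105.db
cp /tmp/divinsa_demo/vault0000.db /tmp/divinsa_demo/vault0105.db

# 4b (instead of 4a): removing the local vault -- rename rather than copy:
#   mv /tmp/divinsa_demo/vault0000.db /tmp/divinsa_demo/vault0105.db
# Per the note above, use 'vault0101.db' as the target name for
# Amazon S3, Google Cloud Storage, or Wasabi destinations.

ls /tmp/divinsa_demo
```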
5. Copy the folder chosen in step 1 onto your backup server. It will contain a number of subfolders (including vault, metadata, settings, logs). Copy the entire folder structure, including all subfolders, to the account folder set up for that computer on the server; the default location is generally the '/userdata' subfolder of the installation folder.
Once all of the files from your portable drive are in place under the user account folder on your server, the next backup should be incremental.
For example, if the local backup is at C:\Program Files\DivinsaCloud\vault\, it will contain subfolders "\DivinsaCloud\acct_name\" (full path: C:\Program Files\DivinsaCloud\vault\DivinsaCloud\acct_name\).
The entire contents of this folder should be copied to the server side, which in this example stores backups on E:\. On E:\ you will see a matching "\DivinsaCloud\acct_name\" folder, which is the target for the copy.
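To illustrate step 5, the sketch below copies a seed tree into a matching account folder. It uses scratch /tmp paths in place of the real ones (on a Windows server, robocopy with the /E switch from the vault folder to E:\ would do the same job); the folder names follow the example above, but the locations are assumptions.

```shell
# Build a stand-in for the local seed: vault\DivinsaCloud\acct_name\...
mkdir -p /tmp/seed_demo/vault/DivinsaCloud/acct_name/vault
echo "block data" > /tmp/seed_demo/vault/DivinsaCloud/acct_name/vault/blk0001.dat

# Stand-in for the server drive (E:\ in the example above).
mkdir -p /tmp/server_demo

# Copy the entire folder structure, subfolders included.
cp -R /tmp/seed_demo/vault/DivinsaCloud /tmp/server_demo/

ls /tmp/server_demo/DivinsaCloud/acct_name/vault
```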
Caution: When you re-enable remote backups on clients, do not enable rigorous sync on a client until its local seed has been successfully copied to its remote vault. Enabling rigorous sync before all the data is in place will cause the client to try to regenerate and resend all of the "missing" blocks.
You are done! The next time the computer backs up, only the changes will be uploaded to the server.
Please see the following article for factors that affect performance and suggestions for optimizing: https://support.wholesalebackup.com/hc/en-us/articles/202092304
Importing into Amazon S3
The primary difference between seeding to your own hosted Windows server and to Amazon S3 is how you get the seeded data into the remote vaults. With your own hosted server, you just have to copy the seed to the appropriate folder on your server, as explained in step 5 above. With Amazon S3, you will need to get the data transferred to the correct bucket.
Amazon's AWS has recently begun offering its own hardware to assist with importing large amounts of data into S3: a device named Snowball. If you have multiple large initial backups to send to your S3 storage, this is an option to consider.
First, you will need to request a Snowball device from AWS. You can read about getting started with AWS Snowball here: https://aws.amazon.com/snowball/getting-started/
When the device arrives, you will set it up on your LAN, transfer data to it, and then ship it back to Amazon for upload. The process should take about a week to complete, after which the imported user data will need to be moved to the correct bucket and folder.
Alternatively, if you have a fast internet connection, you can use an S3 browsing application to transfer the data from one of your own computers up to S3.
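If you go the direct-upload route, the AWS CLI's 'aws s3 sync' command is one way to push the seed folder to a bucket. The bucket name and paths below are placeholders, and the command assumes the AWS CLI is installed and configured with credentials that can write to the target bucket:

```shell
# Upload the local seed folder to the account's folder in the S3 bucket.
# 'my-backup-bucket' and both paths are placeholders -- substitute your own.
aws s3 sync "C:\Program Files\DivinsaCloud\vault\DivinsaCloud\acct_name" s3://my-backup-bucket/DivinsaCloud/acct_name/
```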