Mirror of https://github.com/jkl1337/duplicacy.git (synced 2026-01-02 11:44:45 -06:00)

Compare commits: 68 commits
| SHA1 |
|---|
| 799b040913 |
| 41e3843bfa |
| 9e1d2ac1e6 |
| bc40498d1b |
| 446bb4bcc8 |
| 150ea13a0d |
| 8c5b7d5f63 |
| 315dfff7d6 |
| 0bc475ca4d |
| a0fa0fe7da |
| 01db72080c |
| 22ddc04698 |
| 2aa3b2b737 |
| 76f75cb0cb |
| ea4c4339e6 |
| fa294eabf4 |
| 0ec262fd93 |
| db3e0946bb |
| c426bf5af2 |
| 823b82060c |
| 4308e3e6e9 |
| 0391ecf941 |
| 7ecf895d85 |
| a43114da99 |
| caaff6b4b2 |
| 18964e89a1 |
| 2d1ea86d8e |
| d881ac9169 |
| 1aee9bd6ef |
| f3447bb611 |
| 9be4927c87 |
| a0fcb8802b |
| 58cfeec6ab |
| 0d442e736d |
| b32bda162d |
| e6767bfad4 |
| 0b9e23fcd8 |
| 7f04a79111 |
| 211c6867d3 |
| 4a31fcfb68 |
| 6a4b1f2a3f |
| 483ae5e6eb |
| f8d879d414 |
| c2120ad3d5 |
| f8764a5a79 |
| 736b4da0c3 |
| 0aa122609a |
| 18462cf585 |
| e06283f0b3 |
| b4f3142275 |
| cdd1f26079 |
| 199e312bea |
| 88141216e9 |
| f9ede565ff |
| 93a61a6e49 |
| 7d31199631 |
| f2451911f2 |
| ac655c8780 |
| c31d2a30d9 |
| 83da36cae0 |
| 96e2f78096 |
| 593b409329 |
| 5334f45998 |
| b56baa80c3 |
| 74ab8d8c23 |
| a7613ab7d9 |
| 65127c7ab7 |
| 09f695b3e1 |
.gitignore (vendored): 3 changes

@@ -1,3 +0,0 @@
-.idea
-duplicacy_main
@@ -1,4 +1,4 @@
-Duplicacy is based on the following open source project:
+Duplicacy is based on the following open source projects:

 | Projects | License |
 |--------|:-------:|
@@ -8,7 +8,9 @@ Duplicacy is based on the following open source project:
 |https://github.com/Azure/azure-sdk-for-go | Apache-2.0 |
 |https://github.com/tj/go-dropbox | MIT |
 |https://github.com/aws/aws-sdk-go | Apache-2.0 |
 |https://github.com/goamz/goamz | LGPL with static link exception |
 |https://github.com/howeyc/gopass | ISC |
 |https://github.com/tmc/keyring | ISC |
 |https://github.com/pcwizz/xattr | BSD-2-Clause |
 |https://github.com/minio/blake2b-simd | Apache-2.0 |
 |https://github.com/go-ole/go-ole | MIT |
GUIDE.md: 31 changes

@@ -133,7 +133,7 @@ OPTIONS:
    -t <tag>                 list snapshots with the specified tag
    -files                   print the file list in each snapshot
    -chunks                  print chunks in each snapshot or all chunks if no snapshot specified
-   -reset-password          take passwords from input rather than keychain/keyring or env
+   -reset-passwords         take passwords from input rather than keychain/keyring or env
    -storage <storage name>  retrieve snapshots from the specified storage
 ```
@@ -456,11 +456,26 @@ destination storage and is required.

 ## Include/Exclude Patterns

-An include pattern starts with +, and an exclude pattern starts with -. Patterns may contain wildcard characters such as * and ? with their normal meaning.
+An include pattern starts with +, and an exclude pattern starts with -. Patterns may contain the wildcard characters * and ?: * matches a path string of any length, and ? matches a single character. Note that both * and ? will match any character, including the path separator /.
+
+The path separator is always /, even on Windows.

 When matching a path against a list of patterns, the path is compared with the part after + or -, one pattern at a time. Therefore, the order of the patterns is significant. If a match with an include pattern is found, the path is said to be included without further comparisons. If a match with an exclude pattern is found, the path is said to be excluded without further comparison. If a match is not found, the path will be excluded if all patterns are include patterns, but included otherwise.

-Patterns ending with a / apply to directories only, and patterns not ending with a / apply to files only. When a directory is excluded, all files and subdirectires under it will also be excluded. Note that the path separator is always /, even on Windows.
+Patterns ending with a / apply to directories only, and patterns not ending with a / apply to files only. Patterns ending with * and ?, however, apply to both directories and files. When a directory is excluded, all files and subdirectories under it will also be excluded. Therefore, to include a subdirectory, all parent directories must be explicitly included. For instance, the following pattern list doesn't do what is intended, since the `foo` directory will be excluded, so `foo/bar` will never be visited:
+
+```
++foo/bar/*
+-*
+```
+
+The correct way is to include `foo` as well:
+
+```
++foo/bar/*
++foo/
+-*
+```
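The first-match-wins rule described in the new text is compact enough to sketch. The following is an illustrative outline only, not Duplicacy's actual matcher; `matchPattern` is a hypothetical stand-in for the real wildcard comparison:

```go
package patterns

import "strings"

// matchPattern is a hypothetical stand-in for the wildcard comparison
// described above (* and ? matching any character, including /).
func matchPattern(path, pattern string) bool {
	return path == pattern // wildcard logic elided in this sketch
}

// isIncluded applies the first-match-wins rule: the first matching
// pattern decides the outcome; if nothing matches, the path is excluded
// only when every pattern is an include pattern.
func isIncluded(path string, patterns []string) bool {
	allIncludes := true
	for _, p := range patterns {
		include := strings.HasPrefix(p, "+")
		if !include {
			allIncludes = false
		}
		if matchPattern(path, p[1:]) {
			return include
		}
	}
	return !allIncludes
}
```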

The following pattern list includes only files under the directory foo/ but not files under the subdirectory foo/bar:
@@ -500,6 +515,16 @@ Duplicacy will attempt to retrieve in three ways the storage password and the st

 Note that the passwords stored in the environment variable and the preference need to be in plaintext and thus are insecure and should be avoided whenever possible.

+## Cache
+
+Duplicacy maintains a local cache under the `.duplicacy/cache` folder in the repository. Only snapshot chunks may be stored in this local cache; file chunks are never cached.
+
+At the end of a backup operation, Duplicacy will clean up the local cache in such a way that only chunks composing the snapshot file from the last backup will stay in the cache. All other chunks will be removed from the cache. However, if the *prune* command has been run before (which leaves the `.duplicacy/collection` folder in the repository), the *backup* command won't perform any cache cleanup and will instead defer that to the *prune* command.
+
+At the end of a prune operation, Duplicacy will remove all chunks from the local cache except those composing the snapshot file from the last backup (those that would be kept by the *backup* command), as well as chunks that contain information about chunks referenced by *all* backups from *all* repositories connected to the same storage url.
+
+Other commands, such as *list* and *check*, do not clean up the local cache at all, so the local cache may keep growing if many of these commands run consecutively. However, once a *backup* or *prune* command is invoked, the local cache should shrink back to its normal size.
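The backup-time cleanup rule can be pictured as a walk over the cache directory that keeps only the chunks referenced by the last snapshot. A minimal sketch under assumed names; `lastSnapshotChunks` and the flat cache layout are illustrative, not Duplicacy's internals:

```go
package cache

import (
	"os"
	"path/filepath"
)

// pruneSnapshotCache deletes every cached chunk file whose name is not
// in the set of chunks composing the most recent snapshot.
func pruneSnapshotCache(cacheDir string, lastSnapshotChunks map[string]bool) error {
	return filepath.Walk(cacheDir, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		if !lastSnapshotChunks[filepath.Base(p)] {
			return os.Remove(p)
		}
		return nil
	})
}
```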

 ## Scripts

 You can instruct Duplicacy to run a script before or after executing a command. For example, if you create a bash script with the name *pre-prune* under the *.duplicacy/scripts* directory, this bash script will be run before the *prune* command starts. A script named *post-prune* will be run after the *prune* command finishes. This rule applies to all commands except *init*.
LICENSE.md: 22 changes

@@ -1,20 +1,6 @@
 Copyright © 2017 Acrosync LLC

-Licensor: Acrosync LLC
-
-Software: Dulicacy
-
-Use Limitation: 5 users
-
-License Grant. Licensor hereby grants to each recipient of the Software (“you”) a non-exclusive, non-transferable, royalty-free and fully-paid-up license, under all of the Licensor’s copyright and patent rights, to use, copy, distribute, prepare derivative works of, publicly perform and display the Software, subject to the Use Limitation and the conditions set forth below.
-
-Use Limitation. The license granted above allows use by up to the number of users per entity set forth above (the “Use Limitation”). For determining the number of users, “you” includes all affiliates, meaning legal entities controlling, controlled by, or under common control with you. If you exceed the Use Limitation, your use is subject to payment of Licensor’s then-current list price for licenses.
-
-Conditions. Redistribution in source code or other forms must include a copy of this license document to be provided in a reasonable manner. Any redistribution of the Software is only allowed subject to this license.
-
-Trademarks. This license does not grant you any right in the trademarks, service marks, brand names or logos of Licensor.
-
-DISCLAIMER. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OR CONDITION, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. LICENSORS HEREBY DISCLAIM ALL LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE.
-
-Termination. If you violate the terms of this license, your rights will terminate automatically and will not be reinstated without the prior written consent of Licensor. Any such termination will not affect the right of others who may have received copies of the Software from you.
+* Free for personal use or commercial trial
+* Non-trial commercial use requires per-user licenses available from [duplicacy.com](https://duplicacy.com/customer) at a cost of $20 per year
+* Commercial licenses are not required to restore or manage backups; only the backup command requires a valid commercial license
+* Modification and redistribution are permitted, but commercial use of derivative works is subject to the same requirements of this license
README.md: 118 changes

@@ -28,6 +28,9 @@ The [design document](https://github.com/gilbertchen/duplicacy-cli/blob/master/D

 ## Getting Started

+<details>
+<summary>Installation</summary>
+
 Duplicacy is written in Go. You can run the following command to build the executable (which will be created under `$GOPATH/bin`):

 ```
@@ -36,6 +39,11 @@ go get -u github.com/gilbertchen/duplicacy/...

 You can also visit the [releases page](https://github.com/gilbertchen/duplicacy-cli/releases/latest) to download the pre-built binary suitable for your platform.

+</details>
+
+<details>
+<summary>Commands</summary>
+
 Once you have the Duplicacy executable on your path, you can change to the directory that you want to back up (called the *repository*) and run the *init* command:

 ```
@@ -51,8 +59,16 @@ You can now create snapshots of the repository by invoking the *backup* command.
 $ duplicacy backup -stats
 ```

+The *restore* command rolls back the repository to a previous revision:
+```sh
+$ duplicacy restore -r 1
+```
+
 Duplicacy provides a set of commands, such as list, check, diff, cat, and history, to manage snapshots:

 ```makefile
 $ duplicacy list            # List all snapshots
 $ duplicacy check           # Check integrity of snapshots
@@ -61,10 +77,6 @@ $ duplicacy cat             # Print a file in a snapshot
 $ duplicacy history         # Show how a file changes over time
 ```

-The *restore* command rolls back the repository to a previous revision:
-```sh
-$ duplicacy restore -r 1
-```
-
 The *prune* command removes snapshots by revisions, or tags, or retention policies:
@@ -102,21 +114,26 @@ $ duplicacy copy -r 1 -to s3 # Copy snapshot at revision 1 to the s3 storage
 $ duplicacy copy -to s3      # Copy every snapshot to the s3 storage
 ```

+</details>
+
 The [User Guide](https://github.com/gilbertchen/duplicacy-cli/blob/master/GUIDE.md) contains a complete reference to all commands and other features of Duplicacy.

 ## Storages

-Duplicacy currently supports local file storage, SFTP, and 5 cloud storage providers.
+Duplicacy currently supports local file storage, SFTP, and many cloud storage providers.

-#### Local disk
+<details> <summary>Local disk</summary>

 ```
 Storage URL: /path/to/storage (on Linux or Mac OS X)
              C:\path\to\storage (on Windows)
 ```
+</details>
-#### SFTP
+<details> <summary>SFTP</summary>

 ```
 Storage URL: sftp://username@server/path/to/storage
@@ -124,7 +141,9 @@ Storage URL: sftp://username@server/path/to/storage

 Login methods include password authentication and public key authentication. Due to a limitation of the underlying Go SSH library, the key pair for public key authentication must be generated without a passphrase. To work with a key that has a passphrase, you can set up SSH agent forwarding, which is also supported by Duplicacy.

-#### Dropbox
+</details>
+
+<details> <summary>Dropbox</summary>

 ```
 Storage URL: dropbox://path/to/storage
@@ -138,7 +157,9 @@ For Duplicacy to access your Dropbox storage, you must provide an access token t

 Dropbox has two advantages over other cloud providers. First, if you are already a paid user, using the unused space as backup storage is essentially free. Second, unlike other providers, Dropbox does not charge bandwidth or API usage fees.

-#### Amazon S3
+</details>
+
+<details> <summary>Amazon S3</summary>

 ```
 Storage URL: s3://amazon.com/bucket/path/to/storage (default region is us-east-1)
@@ -147,8 +168,31 @@ Storage URL: s3://amazon.com/bucket/path/to/storage (default region is us-east-

 You'll need to input an access key and a secret key to access your Amazon S3 storage.

+Minio-based S3-compatible storages are also supported by using the `minio` or `minios` backends:
+```
+Storage URL: minio://region@host/bucket/path/to/storage (without TLS)
+Storage URL: minios://region@host/bucket/path/to/storage (with TLS)
+```
+
-#### Google Cloud Storage
+There is another backend that works with S3-compatible storage providers that require V2 signing:
+```
+Storage URL: s3c://region@host/bucket/path/to/storage
+```
+
+</details>
+
+<details> <summary>Wasabi</summary>
+
+```
+Storage URL: s3://us-east-1@s3.wasabisys.com/bucket/path/to/storage
+```
+
+[Wasabi](https://wasabi.com) is a relatively new cloud storage service providing an S3-compatible API. It is well suited for storing backups, because it is much cheaper than Amazon S3, with a storage cost of $0.0039/GB/month, a download fee of $0.04/GB, and no additional charges for API calls.
+
+</details>
+
+<details> <summary>Google Cloud Storage</summary>

 ```
 Storage URL: gcs://bucket/path/to/storage
@@ -158,7 +202,9 @@ Starting from version 2.0.0, a new Google Cloud Storage backend is added which i

 You can also use the s3 protocol to access Google Cloud Storage. To do this, you must enable the [s3 interoperability](https://cloud.google.com/storage/docs/migrating#migration-simple) in your Google Cloud Storage settings and set the storage url as `s3://storage.googleapis.com/bucket/path/to/storage`.

-#### Microsoft Azure
+</details>
+
+<details> <summary>Microsoft Azure</summary>

 ```
 Storage URL: azure://account/container
@@ -166,7 +212,9 @@ Storage URL: azure://account/container

 You'll need to input the access key once prompted.

-#### Backblaze
+</details>
+
+<details> <summary>Backblaze B2</summary>

 ```
 Storage URL: b2://bucket
@@ -174,9 +222,11 @@ Storage URL: b2://bucket

 You'll need to input the account id and application key.
-Backblaze's B2 storage is not only the least expensive (at 0.5 cent per GB per month), but also the fastest. We have been working closely with their developers to leverage the full potentials provided by the B2 API in order to maximize the transfer speed.
+Backblaze's B2 storage is one of the least expensive (at 0.5 cent per GB per month, with a download fee of 2 cents per GB, plus additional charges for API calls).

-#### Google Drive
+</details>
+
+<details> <summary>Google Drive</summary>

 ```
 Storage URL: gcd://path/to/storage
@@ -185,7 +235,9 @@ Storage URL: gcd://path/to/storage

 To use Google Drive as the storage, you first need to download a token file from https://duplicacy.com/gcd_start by authorizing Duplicacy to access your Google Drive, and then enter the path to this token file to Duplicacy when prompted.

-#### Microsoft OneDrive
+</details>
+
+<details> <summary>Microsoft OneDrive</summary>

 ```
 Storage URL: one://path/to/storage
@@ -194,7 +246,9 @@ Storage URL: one://path/to/storage

 To use Microsoft OneDrive as the storage, you first need to download a token file from https://duplicacy.com/one_start by authorizing Duplicacy to access your OneDrive, and then enter the path to this token file to Duplicacy when prompted.

-#### Hubic
+</details>
+
+<details> <summary>Hubic</summary>

 ```
 Storage URL: hubic://path/to/storage
@@ -205,8 +259,9 @@ authorizing Duplicacy to access your Hubic drive, and then enter the path to thi

 Hubic offers the most free space (25GB) of all major cloud providers, and there is no bandwidth charge (same as Google Drive and OneDrive), so it may be worth a try.

+</details>
+
-## Comparison with Other Backup Tools
+## Feature Comparison with Other Backup Tools

 [duplicity](http://duplicity.nongnu.org) works by applying the rsync algorithm (or, more specifically, the [librsync](https://github.com/librsync/librsync) library) to find the differences from previous backups and only then uploading the differences. It is the only existing backup tool with extensive cloud support -- the [long list](http://duplicity.nongnu.org/duplicity.1.html#sect7) of storage backends covers almost every cloud provider one can think of. However, duplicity's biggest flaw lies in its incremental model -- a chain of dependent backups starts with a full backup followed by a number of incremental ones, and ends when another full backup is uploaded. Deleting one backup will render useless all the subsequent backups on the same chain. Periodic full backups are required, in order to make previous backups disposable.
@@ -242,9 +297,32 @@ The following table compares the feature lists of all these backup tools:
 | Snapshot Migration | No | No | No | No | No | **Yes** |

+## Performance Comparison with Other Backup Tools
+
+Duplicacy is not only more feature-rich but also faster than other backup tools. The following table lists the running times in seconds of backing up the [Linux code base](https://github.com/torvalds/linux) using Duplicacy and three other tools. Clearly Duplicacy is the fastest by a significant margin.
+
+|                | Duplicacy | restic | Attic | duplicity |
+|:--------------:|:---------:|:------:|:-----:|:---------:|
+| Initial backup | 13.7      | 20.7   | 26.9  | 44.2      |
+| 2nd backup     | 4.8       | 8.0    | 15.4  | 19.5      |
+| 3rd backup     | 6.9       | 11.9   | 19.6  | 29.8      |
+| 4th backup     | 3.3       | 7.0    | 13.7  | 18.6      |
+| 5th backup     | 9.9       | 11.4   | 19.9  | 28.0      |
+| 6th backup     | 3.8       | 8.0    | 16.8  | 22.0      |
+| 7th backup     | 5.1       | 7.8    | 14.3  | 21.6      |
+| 8th backup     | 9.5       | 13.5   | 18.3  | 35.0      |
+| 9th backup     | 4.3       | 9.0    | 15.7  | 24.9      |
+| 10th backup    | 7.9       | 20.2   | 32.2  | 35.0      |
+| 11th backup    | 4.6       | 9.1    | 16.8  | 28.1      |
+| 12th backup    | 7.4       | 12.0   | 21.7  | 37.4      |
+
+For more details and other speed comparison results, please visit https://github.com/gilbertchen/benchmarking. There you can also find test scripts that you can use to run your own experiments.

 ## License

-Duplicacy CLI is released under the [Fair Source 5 License](https://fair.io), which means it is free for individual users or any company or organization with less than 5 users. If your company or organization has 5 or more users, then a license for the actual number of users must be purchased from [duplicacy.com](https://duplicacy.com/customer).
-
-A user is defined as the owner of any files to be backed up by Duplicacy. If you are an IT administrator who uses Duplicacy to back up files for your colleagues, then each colleague will be counted in the user limit permitted by the license.
+* Free for personal use or commercial trial
+* Non-trial commercial use requires per-user licenses available from [duplicacy.com](https://duplicacy.com/customer) at a cost of $20 per year
+* Commercial licenses are not required to restore or manage backups; only the backup command requires a valid commercial license
+* Modification and redistribution are permitted, but commercial use of derivative works is subject to the same requirements of this license
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package main
@@ -534,7 +534,9 @@ func changePassword(context *cli.Context) {

     password := ""
     if preference.Encrypted {
-        password = duplicacy.GetPassword(*preference, "password", "Enter old password for storage %s:", false, true)
+        password = duplicacy.GetPassword(*preference, "password",
+                                         fmt.Sprintf("Enter old password for storage %s:", preference.StorageURL),
+                                         false, true)
     }

     config, _, err := duplicacy.DownloadConfig(storage, password)
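The bug fixed here is that the prompt contained a %s verb but was passed along without a corresponding argument, so the storage URL was never substituted. Pre-formatting with fmt.Sprintf resolves it; a small self-contained illustration (the URL value is made up):

```go
package main

import "fmt"

func main() {
	prompt := "Enter old password for storage %s:"
	// Used as a format string with no argument, the verb is left behind:
	fmt.Printf(prompt + "\n") // prints "...storage %!s(MISSING):"
	// Pre-formatting substitutes the value before the string is passed on:
	fmt.Println(fmt.Sprintf(prompt, "sftp://user@host/backups"))
}
```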
@@ -1683,7 +1685,7 @@ func main() {
     app.Name = "duplicacy"
     app.HelpName = "duplicacy"
     app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
-    app.Version = "2.0.5"
+    app.Version = "2.0.7"

     // If the program is interrupted, call the RunAtError function.
     c := make(chan os.Signal, 1)
@@ -8,11 +8,12 @@ fixture
 pushd ${TEST_REPO}
 ${DUPLICACY} init integration-tests $TEST_STORAGE -c 4

-# Create 10 20 files
+# Create 10 small files
 add_file file1 20
 add_file file2 20
 add_file file3 20
 rm file3; touch file3
 add_file file4 20
 chmod u-r file4
 add_file file5 20
 add_file file6 20
 add_file file7 20
@@ -25,6 +26,8 @@ env DUPLICACY_FAIL_CHUNK=10 ${DUPLICACY} backup

 # Try it again to test the multiple-resume case
 env DUPLICACY_FAIL_CHUNK=5 ${DUPLICACY} backup
+add_file file1 20
+add_file file2 20

 # Fail the backup before uploading the snapshot
 env DUPLICACY_FAIL_SNAPSHOT=true ${DUPLICACY} backup
integration_tests/sparse_test.sh: new executable file, 28 lines

@@ -0,0 +1,28 @@
#!/bin/bash

# Testing backup and restore of sparse files

. ./test_functions.sh

fixture

pushd ${TEST_REPO}
${DUPLICACY} init integration-tests $TEST_STORAGE -c 1m

for i in `seq 1 10`; do
    dd if=/dev/urandom of=file3 bs=1000 count=1000 seek=$((100000 * $i))
done

ls -lsh file3

${DUPLICACY} backup
${DUPLICACY} check --files -stats

rm file1 file3

${DUPLICACY} restore -r 1
${DUPLICACY} -v restore -r 1 -overwrite -stats -hash

ls -lsh file3

popd
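`ls -lsh` in the test above shows the allocated size next to the logical size, which is how you can confirm file3 is sparse. The same check can be done programmatically; a Unix-only sketch (it relies on the platform-specific syscall.Stat_t):

```go
//go:build linux || darwin

package sparsecheck

import (
	"os"
	"syscall"
)

// isSparse reports whether a file occupies fewer 512-byte blocks on
// disk than its logical size would require, i.e. it contains holes.
func isSparse(path string) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return false, nil
	}
	return st.Blocks*512 < info.Size(), nil
}
```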
@@ -10,6 +10,7 @@ backup
 add_file file3
 backup
 add_file file4
+chmod u-r ${TEST_REPO}/file4
 backup
 add_file file5
 restore
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -14,14 +14,13 @@ import (
 type AzureStorage struct {
     RateLimitedStorage

-    clients []*storage.BlobStorageClient
-    container string
+    containers []*storage.Container
 }

 func CreateAzureStorage(accountName string, accountKey string,
-                        container string, threads int) (azureStorage *AzureStorage, err error) {
+                        containerName string, threads int) (azureStorage *AzureStorage, err error) {

-    var clients []*storage.BlobStorageClient
+    var containers []*storage.Container
     for i := 0; i < threads; i++ {

         client, err := storage.NewBasicClient(accountName, accountKey)
@@ -31,21 +30,21 @@ func CreateAzureStorage(accountName string, accountKey string,
         }

         blobService := client.GetBlobService()
-        clients = append(clients, &blobService)
+        container := blobService.GetContainerReference(containerName)
+        containers = append(containers, container)
     }

-    exist, err := clients[0].ContainerExists(container)
+    exist, err := containers[0].Exists()
     if err != nil {
         return nil, err
     }

     if !exist {
-        return nil, fmt.Errorf("container %s does not exist", container)
+        return nil, fmt.Errorf("container %s does not exist", containerName)
     }

     azureStorage = &AzureStorage {
-        clients: clients,
-        container: container,
+        containers: containers,
     }

     return
@@ -77,7 +76,7 @@ func (azureStorage *AzureStorage) ListFiles(threadIndex int, dir string) (files

     for {

-        results, err := azureStorage.clients[threadIndex].ListBlobs(azureStorage.container, parameters)
+        results, err := azureStorage.containers[threadIndex].ListBlobs(parameters)
         if err != nil {
             return nil, nil, err
         }
@@ -115,14 +114,15 @@ func (azureStorage *AzureStorage) ListFiles(threadIndex int, dir string) (files

 // DeleteFile deletes the file or directory at 'filePath'.
 func (storage *AzureStorage) DeleteFile(threadIndex int, filePath string) (err error) {
-    _, err = storage.clients[threadIndex].DeleteBlobIfExists(storage.container, filePath)
+    _, err = storage.containers[threadIndex].GetBlobReference(filePath).DeleteIfExists(nil)
     return err
 }

 // MoveFile renames the file.
 func (storage *AzureStorage) MoveFile(threadIndex int, from string, to string) (err error) {
-    source := storage.clients[threadIndex].GetBlobURL(storage.container, from)
-    err = storage.clients[threadIndex].CopyBlob(storage.container, to, source)
+    source := storage.containers[threadIndex].GetBlobReference(from)
+    destination := storage.containers[threadIndex].GetBlobReference(to)
+    err = destination.Copy(source.GetURL(), nil)
     if err != nil {
         return err
     }
@@ -136,7 +136,8 @@ func (storage *AzureStorage) CreateDirectory(threadIndex int, dir string) (err e

 // GetFileInfo returns the information about the file or directory at 'filePath'.
 func (storage *AzureStorage) GetFileInfo(threadIndex int, filePath string) (exist bool, isDir bool, size int64, err error) {
-    properties, err := storage.clients[threadIndex].GetBlobProperties(storage.container, filePath)
+    blob := storage.containers[threadIndex].GetBlobReference(filePath)
+    err = blob.GetProperties(nil)
     if err != nil {
         if strings.Contains(err.Error(), "404") {
             return false, false, 0, nil
@@ -145,7 +146,7 @@ func (storage *AzureStorage) GetFileInfo(threadIndex int, filePath string) (exis
         }
     }

-    return true, false, properties.ContentLength, nil
+    return true, false, blob.Properties.ContentLength, nil
 }

 // FindChunk finds the chunk with the specified id. If 'isFossil' is true, it will search for chunk files with
@@ -167,21 +168,22 @@ func (storage *AzureStorage) FindChunk(threadIndex int, chunkID string, isFossil

 // DownloadFile reads the file at 'filePath' into the chunk.
 func (storage *AzureStorage) DownloadFile(threadIndex int, filePath string, chunk *Chunk) (err error) {
-    readCloser, err := storage.clients[threadIndex].GetBlob(storage.container, filePath)
+    readCloser, err := storage.containers[threadIndex].GetBlobReference(filePath).Get(nil)
     if err != nil {
         return err
     }

     defer readCloser.Close()

-    _, err = RateLimitedCopy(chunk, readCloser, storage.DownloadRateLimit / len(storage.clients))
+    _, err = RateLimitedCopy(chunk, readCloser, storage.DownloadRateLimit / len(storage.containers))
     return err
 }

 // UploadFile writes 'content' to the file at 'filePath'.
 func (storage *AzureStorage) UploadFile(threadIndex int, filePath string, content []byte) (err error) {
-    reader := CreateRateLimitedReader(content, storage.UploadRateLimit / len(storage.clients))
-    return storage.clients[threadIndex].CreateBlockBlobFromReader(storage.container, filePath, uint64(len(content)), reader, nil)
+    reader := CreateRateLimitedReader(content, storage.UploadRateLimit / len(storage.containers))
+    blob := storage.containers[threadIndex].GetBlobReference(filePath)
+    return blob.CreateBlockBlobFromReader(reader, nil)
 }
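The refactor above moves every operation from client-level calls (which took the container name on each call) to container and blob references. A condensed sketch of the new call pattern, using only the SDK calls that appear in this diff; the function and variable names are illustrative:

```go
package azuresketch

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/storage"
)

// blobSize shows the container-reference pattern: build a container
// reference once, then hang per-blob operations off blob references.
func blobSize(accountName, accountKey, containerName, filePath string) (int64, error) {
	client, err := storage.NewBasicClient(accountName, accountKey)
	if err != nil {
		return 0, err
	}
	blobService := client.GetBlobService()
	container := blobService.GetContainerReference(containerName)

	// The existence check moves from the client to the container.
	exist, err := container.Exists()
	if err != nil {
		return 0, err
	}
	if !exist {
		return 0, fmt.Errorf("container %s does not exist", containerName)
	}

	// Per-blob operations go through a blob reference.
	blob := container.GetBlobReference(filePath)
	if err := blob.GetProperties(nil); err != nil {
		return 0, err
	}
	return blob.Properties.ContentLength, nil
}
```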
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -76,7 +76,7 @@ func (manager *BackupManager) SetupSnapshotCache(storageName string) bool {
     preferencePath := GetDuplicacyPreferencePath()
     cacheDir := path.Join(preferencePath, "cache", storageName)

-    storage, err := CreateFileStorage(cacheDir, 1)
+    storage, err := CreateFileStorage(cacheDir, 2, false, 1)
     if err != nil {
         LOG_ERROR("BACKUP_CACHE", "Failed to create the snapshot cache dir: %v", err)
         return false
@@ -361,14 +361,16 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
     var uploadedChunkLengths []int
     var uploadedChunkLock = &sync.Mutex{}

-    // the file reader implements the Reader interface. When an EOF is encountered, it opens the next file unless it
-    // is the last file.
-    fileReader := CreateFileReader(shadowTop, modifiedEntries)
-    // Set all file sizes to -1 to indicate they haven't been processed
+    // Set all file sizes to -1 to indicate they haven't been processed. This must be done before creating the file
+    // reader because the file reader may skip inaccessible files on construction.
     for _, entry := range modifiedEntries {
         entry.Size = -1
     }

+    // the file reader implements the Reader interface. When an EOF is encountered, it opens the next file unless it
+    // is the last file.
+    fileReader := CreateFileReader(shadowTop, modifiedEntries)

     startUploadingTime := time.Now().Unix()

     lastUploadingTime := time.Now().Unix()
@@ -713,6 +715,11 @@ func (manager *BackupManager) Restore(top string, revision int, inPlace bool, qu
     LOG_DEBUG("RESTORE_PARAMETERS", "top: %s, revision: %d, in-place: %t, quick: %t, delete: %t",
               top, revision, inPlace, quickMode, deleteMode)

+    if !strings.HasPrefix(GetDuplicacyPreferencePath(), top) {
+        LOG_INFO("RESTORE_INPLACE", "Forcing in-place mode with a non-default preference path")
+        inPlace = true
+    }
+
     if len(patterns) > 0 {
         for _, pattern := range patterns {
             LOG_TRACE("RESTORE_PATTERN", "%s", pattern)
@@ -911,7 +918,9 @@ func (manager *BackupManager) Restore(top string, revision int, inPlace bool, qu

     if deleteMode && len(patterns) == 0 {
-        for _, file := range extraFiles {
+        // Reverse the order to make sure directories are empty before being deleted
+        for i := range extraFiles {
+            file := extraFiles[len(extraFiles) - 1 - i]
             fullPath := joinPath(top, file)
             os.Remove(fullPath)
             LOG_INFO("RESTORE_DELETE", "Deleted %s", file)
@@ -925,8 +934,6 @@ func (manager *BackupManager) Restore(top string, revision int, inPlace bool, qu
         }
     }

-    RemoveEmptyDirectories(top)
-
     if showStatistics {
         for _, file := range downloadedFiles {
             LOG_INFO("DOWNLOAD_DONE", "Downloaded %s (%d)", file.Path, file.Size)
@@ -1143,33 +1150,36 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
     var offset int64

     existingFile, err = os.Open(fullPath)
-    if err != nil && !os.IsNotExist(err) {
-        LOG_TRACE("DOWNLOAD_OPEN", "Can't open the existing file: %v", err)
-    }
-
-    fileHash := ""
-    if existingFile != nil {
-        // Break existing file into chunks.
-        chunkMaker.ForEachChunk(
-            existingFile,
-            func (chunk *Chunk, final bool) {
-                hash := chunk.GetHash()
-                chunkSize := chunk.GetLength()
-                existingChunks = append(existingChunks, hash)
-                existingLengths = append(existingLengths, chunkSize)
-                offsetMap[hash] = offset
-                lengthMap[hash] = chunkSize
-                offset += int64(chunkSize)
-            },
-            func (fileSize int64, hash string) (io.Reader, bool) {
-                fileHash = hash
-                return nil, false
-            })
-        if fileHash == entry.Hash && fileHash != "" {
-            LOG_TRACE("DOWNLOAD_SKIP", "File %s unchanged (by hash)", entry.Path)
-            return false
+    if err != nil {
+        if os.IsNotExist(err) {
+            if inPlace && entry.Size > 100 * 1024 * 1024 {
+                // Create an empty sparse file
+                existingFile, err = os.OpenFile(fullPath, os.O_WRONLY | os.O_CREATE | os.O_TRUNC, 0600)
+                if err != nil {
+                    LOG_ERROR("DOWNLOAD_CREATE", "Failed to create the file %s for in-place writing", fullPath)
+                    return false
+                }
+                _, err = existingFile.Seek(entry.Size - 1, 0)
+                if err != nil {
+                    LOG_ERROR("DOWNLOAD_CREATE", "Failed to resize the initial file %s for in-place writing", fullPath)
+                    return false
+                }
+                _, err = existingFile.Write([]byte("\x00"))
+                if err != nil {
+                    LOG_ERROR("DOWNLOAD_CREATE", "Failed to initialize the sparse file %s for in-place writing", fullPath)
+                    return false
+                }
+                existingFile.Close()
+                existingFile, err = os.Open(fullPath)
+                if err != nil {
+                    LOG_ERROR("DOWNLOAD_OPEN", "Can't reopen the initial file just created: %v", err)
+                    return false
+                }
+            }
+        } else {
+            LOG_TRACE("DOWNLOAD_OPEN", "Can't open the existing file: %v", err)
+        }
     } else {
         if !overwrite {
             LOG_ERROR("DOWNLOAD_OVERWRITE",
                       "File %s already exists. Please specify the -overwrite option to continue", entry.Path)
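The sparse-file trick used above (open, seek one byte short of the target size, write a single zero byte) works on its own; a minimal standalone example:

```go
package main

import (
	"log"
	"os"
)

// createSparse allocates a file of the given logical size without
// writing its contents: seeking past the end and writing one byte
// leaves a hole that most filesystems store as a sparse region.
func createSparse(path string, size int64) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.Seek(size-1, 0); err != nil {
		return err
	}
	_, err = f.Write([]byte{0})
	return err
}

func main() {
	// A 1 GiB file that occupies almost no disk space until written to.
	if err := createSparse("sparse.dat", 1<<30); err != nil {
		log.Fatal(err)
	}
}
```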
@@ -1177,9 +1187,83 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
         }
     }

-    if inPlace {
-        if existingFile == nil {
-            inPlace = false
+    fileHash := ""
+    if existingFile != nil {
+
+        if inPlace {
+            // In inplace mode, we only consider chunks in the existing file with the same offsets, so we
+            // break the original file at offsets retrieved from the backup
+            fileHasher := manager.config.NewFileHasher()
+            buffer := make([]byte, 64 * 1024)
+            err = nil
+            // We set to read one more byte so the file hash will be different if the file to be restored is a
+            // truncated portion of the existing file
+            for i := entry.StartChunk; i <= entry.EndChunk + 1; i++ {
+                hasher := manager.config.NewKeyedHasher(manager.config.HashKey)
+                chunkSize := 1 // the size of extra chunk beyond EndChunk
+                if i == entry.StartChunk {
+                    chunkSize = chunkDownloader.taskList[i].chunkLength - entry.StartOffset
+                } else if i == entry.EndChunk {
+                    chunkSize = entry.EndOffset
+                } else if i > entry.StartChunk && i < entry.EndChunk {
+                    chunkSize = chunkDownloader.taskList[i].chunkLength
+                }
+                count := 0
+                for count < chunkSize {
+                    n := chunkSize - count
+                    if n > cap(buffer) {
+                        n = cap(buffer)
+                    }
+                    n, err := existingFile.Read(buffer[:n])
+                    if n > 0 {
+                        hasher.Write(buffer[:n])
+                        fileHasher.Write(buffer[:n])
+                        count += n
+                    }
+                    if err == io.EOF {
+                        break
+                    }
+                    if err != nil {
+                        LOG_ERROR("DOWNLOAD_SPLIT", "Failed to read existing file: %v", err)
+                        return false
+                    }
+                }
+                if count > 0 {
+                    hash := string(hasher.Sum(nil))
+                    existingChunks = append(existingChunks, hash)
+                    existingLengths = append(existingLengths, chunkSize)
+                    offsetMap[hash] = offset
+                    lengthMap[hash] = chunkSize
+                    offset += int64(chunkSize)
+                }
+
+                if err == io.EOF {
+                    break
+                }
+            }
+            fileHash = hex.EncodeToString(fileHasher.Sum(nil))
+        } else {
+            // If it is not inplace, we want to reuse any chunks in the existing file regardless of their offsets, so
+            // we run the chunk maker to split the original file.
+            chunkMaker.ForEachChunk(
+                existingFile,
+                func (chunk *Chunk, final bool) {
+                    hash := chunk.GetHash()
+                    chunkSize := chunk.GetLength()
+                    existingChunks = append(existingChunks, hash)
+                    existingLengths = append(existingLengths, chunkSize)
+                    offsetMap[hash] = offset
+                    lengthMap[hash] = chunkSize
+                    offset += int64(chunkSize)
+                },
+                func (fileSize int64, hash string) (io.Reader, bool) {
+                    fileHash = hash
+                    return nil, false
+                })
+        }
+        if fileHash == entry.Hash && fileHash != "" {
+            LOG_TRACE("DOWNLOAD_SKIP", "File %s unchanged (by hash)", entry.Path)
+            return false
+        }
+    }
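The inner `for count < chunkSize` loop in the hunk above hand-rolls what io.ReadFull provides: keep reading until exactly chunkSize bytes have arrived, treating a short read at end of file as EOF. An equivalent sketch:

```go
package chunks

import "io"

// readExactly reads size bytes from r, tolerating a short read at EOF,
// which mirrors the manual count < chunkSize loop above.
func readExactly(r io.Reader, size int) ([]byte, error) {
	buf := make([]byte, size)
	n, err := io.ReadFull(r, buf)
	if err == io.ErrUnexpectedEOF {
		err = io.EOF // short read: report EOF like the original loop
	}
	return buf[:n], err
}
```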
@@ -1195,11 +1279,20 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun

     LOG_TRACE("DOWNLOAD_INPLACE", "Updating %s in place", fullPath)

-    existingFile.Close()
-    existingFile, err = os.OpenFile(fullPath, os.O_RDWR, 0)
-    if err != nil {
-        LOG_ERROR("DOWNLOAD_OPEN", "Failed to open the file %s for in-place writing", fullPath)
-        return false
+    if existingFile == nil {
+        // Create an empty file
+        existingFile, err = os.OpenFile(fullPath, os.O_WRONLY | os.O_CREATE | os.O_TRUNC, 0600)
+        if err != nil {
+            LOG_ERROR("DOWNLOAD_CREATE", "Failed to create the file %s for in-place writing", fullPath)
+        }
+    } else {
+        // Close and reopen in a different mode
+        existingFile.Close()
+        existingFile, err = os.OpenFile(fullPath, os.O_RDWR, 0)
+        if err != nil {
+            LOG_ERROR("DOWNLOAD_OPEN", "Failed to open the file %s for in-place writing", fullPath)
+            return false
+        }
     }

     existingFile.Seek(0, 0)
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -104,6 +104,27 @@ func modifyFile(path string, portion float32) {
     }
 }

+func checkExistence(t *testing.T, path string, exists bool, isDir bool) {
+    stat, err := os.Stat(path)
+    if exists {
+        if err != nil {
+            t.Errorf("%s does not exist: %v", path, err)
+        } else if isDir {
+            if !stat.Mode().IsDir() {
+                t.Errorf("%s is not a directory", path)
+            }
+        } else {
+            if stat.Mode().IsDir() {
+                t.Errorf("%s is not a file", path)
+            }
+        }
+    } else {
+        if err == nil || !os.IsNotExist(err) {
+            t.Errorf("%s may exist: %v", path, err)
+        }
+    }
+}
+
 func truncateFile(path string) {
     file, err := os.OpenFile(path, os.O_WRONLY, 0644)
     if err != nil {
@@ -173,6 +194,9 @@ func TestBackupManager(t *testing.T) {

     os.Mkdir(testDir + "/repository1", 0700)
     os.Mkdir(testDir + "/repository1/dir1", 0700)
+    os.Mkdir(testDir + "/repository1/.duplicacy", 0700)
+    os.Mkdir(testDir + "/repository2", 0700)
+    os.Mkdir(testDir + "/repository2/.duplicacy", 0700)

     maxFileSize := 1000000
     //maxFileSize := 200000
@@ -215,12 +239,14 @@ func TestBackupManager(t *testing.T) {

     time.Sleep(time.Duration(delay) * time.Second)

-    SetDuplicacyPreferencePath(testDir + "/repository1")
+    SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager := CreateBackupManager("host1", storage, testDir, password)
     backupManager.SetupSnapshotCache("default")

+    SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager.Backup(testDir + "/repository1", /*quickMode=*/true, threads, "first", false, false)
     time.Sleep(time.Duration(delay) * time.Second)
+    SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
     backupManager.Restore(testDir + "/repository2", threads, /*inPlace=*/false, /*quickMode=*/false, threads, /*overwrite=*/true,
                           /*deleteMode=*/false, /*showStatistics=*/false, /*patterns=*/nil)

@@ -241,8 +267,10 @@ func TestBackupManager(t *testing.T) {
     modifyFile(testDir + "/repository1/file2", 0.2)
     modifyFile(testDir + "/repository1/dir1/file3", 0.3)

+    SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager.Backup(testDir + "/repository1", /*quickMode=*/true, threads, "second", false, false)
     time.Sleep(time.Duration(delay) * time.Second)
+    SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
     backupManager.Restore(testDir + "/repository2", 2, /*inPlace=*/true, /*quickMode=*/true, threads, /*overwrite=*/true,
                           /*deleteMode=*/false, /*showStatistics=*/false, /*patterns=*/nil)

@@ -254,11 +282,25 @@ func TestBackupManager(t *testing.T) {
         }
     }

+    // Truncate file2 and add a few empty directories
     truncateFile(testDir + "/repository1/file2")
+    os.Mkdir(testDir + "/repository1/dir2", 0700)
+    os.Mkdir(testDir + "/repository1/dir2/dir3", 0700)
+    os.Mkdir(testDir + "/repository1/dir4", 0700)
+    SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager.Backup(testDir + "/repository1", /*quickMode=*/false, threads, "third", false, false)
     time.Sleep(time.Duration(delay) * time.Second)

+    // Create some directories and files under repository2 that will be deleted during restore
+    os.Mkdir(testDir + "/repository2/dir5", 0700)
+    os.Mkdir(testDir + "/repository2/dir5/dir6", 0700)
+    os.Mkdir(testDir + "/repository2/dir7", 0700)
+    createRandomFile(testDir + "/repository2/file4", 100)
+    createRandomFile(testDir + "/repository2/dir5/file5", 100)
+
+    SetDuplicacyPreferencePath(testDir + "/repository2/.duplicacy")
     backupManager.Restore(testDir + "/repository2", 3, /*inPlace=*/true, /*quickMode=*/false, threads, /*overwrite=*/true,
-                          /*deleteMode=*/false, /*showStatistics=*/false, /*patterns=*/nil)
+                          /*deleteMode=*/true, /*showStatistics=*/false, /*patterns=*/nil)

     for _, f := range []string{ "file1", "file2", "dir1/file3" } {
         hash1 := getFileHash(testDir + "/repository1/" + f)
@@ -268,9 +310,22 @@ func TestBackupManager(t *testing.T) {
         }
     }

+    // These files/dirs should not exist because deleteMode == true
+    checkExistence(t, testDir + "/repository2/dir5", false, false);
+    checkExistence(t, testDir + "/repository2/dir5/dir6", false, false);
+    checkExistence(t, testDir + "/repository2/dir7", false, false);
+    checkExistence(t, testDir + "/repository2/file4", false, false);
+    checkExistence(t, testDir + "/repository2/dir5/file5", false, false);
+
+    // These empty dirs should exist
+    checkExistence(t, testDir + "/repository2/dir2", true, true);
+    checkExistence(t, testDir + "/repository2/dir2/dir3", true, true);
+    checkExistence(t, testDir + "/repository2/dir4", true, true);
+
+    // Remove file2 and dir1/file3 and restore them from revision 3
     os.Remove(testDir + "/repository1/file2")
     os.Remove(testDir + "/repository1/dir1/file3")
+    SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
     backupManager.Restore(testDir + "/repository1", 3, /*inPlace=*/true, /*quickMode=*/false, threads, /*overwrite=*/true,
                           /*deleteMode=*/false, /*showStatistics=*/false, /*patterns=*/[]string{"+file2", "+dir1/file3", "-*"})
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -154,6 +154,18 @@ func (chunk *Chunk) GetID() string {
     return chunk.id
 }

+func (chunk *Chunk) VerifyID() {
+    hasher := chunk.config.NewKeyedHasher(chunk.config.HashKey)
+    hasher.Write(chunk.buffer.Bytes())
+    hash := hasher.Sum(nil)
+    hasher = chunk.config.NewKeyedHasher(chunk.config.IDKey)
+    hasher.Write([]byte(hash))
+    chunkID := hex.EncodeToString(hasher.Sum(nil))
+    if chunkID != chunk.GetID() {
+        LOG_ERROR("CHUNK_ID", "The chunk id should be %s instead of %s, length: %d", chunkID, chunk.GetID(), len(chunk.buffer.Bytes()))
+    }
+}

 // Encrypt encrypts the plain data stored in the chunk buffer. If derivationKey is not nil, the actual
 // encryption key will be HMAC-SHA256(encryptionKey, derivationKey).
 func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string) (err error) {
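VerifyID recomputes the two-stage derivation: the chunk content is hashed under HashKey, and the chunk id is the hex-encoded keyed hash (under IDKey) of that digest. A standalone illustration, with HMAC-SHA256 standing in for Duplicacy's configurable keyed hasher:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chunkID derives a chunk id the way VerifyID recomputes it: keyed hash
// of the content, then keyed hash of that digest, hex-encoded.
// HMAC-SHA256 is a stand-in for config.NewKeyedHasher here.
func chunkID(hashKey, idKey, content []byte) string {
	h := hmac.New(sha256.New, hashKey)
	h.Write(content)
	digest := h.Sum(nil)

	h = hmac.New(sha256.New, idKey)
	h.Write(digest)
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	fmt.Println(chunkID([]byte("hash key"), []byte("id key"), []byte("chunk data")))
}
```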
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -45,8 +45,8 @@ type ChunkDownloader struct {
     completionChannel chan ChunkDownloadCompletion // A downloading goroutine sends back the chunk via this channel after downloading

     startTime int64 // The time it starts downloading
-    totalFileSize int64 // Total file size
-    downloadedFileSize int64 // Downloaded file size
+    totalChunkSize int64 // Total chunk size
+    downloadedChunkSize int64 // Downloaded chunk size
     numberOfDownloadedChunks int // The number of chunks that have been downloaded
     numberOfDownloadingChunks int // The number of chunks still being downloaded
     numberOfActiveChunks int // The number of chunks that is being downloaded or has been downloaded but not reclaimed
@@ -95,7 +95,7 @@ func (downloader *ChunkDownloader) AddFiles(snapshot *Snapshot, files [] *Entry)
     downloader.taskList = nil
     lastChunkIndex := -1
     maximumChunks := 0
-    downloader.totalFileSize = 0
+    downloader.totalChunkSize = 0
     for _, file := range files {
         if file.Size == 0 {
             continue
@@ -109,6 +109,7 @@ func (downloader *ChunkDownloader) AddFiles(snapshot *Snapshot, files [] *Entry)
                 needed: false,
             }
             downloader.taskList = append(downloader.taskList, task)
+            downloader.totalChunkSize += int64(snapshot.ChunkLengths[i])
         } else {
             downloader.taskList[len(downloader.taskList) - 1].needed = true
         }
@@ -119,7 +120,6 @@ func (downloader *ChunkDownloader) AddFiles(snapshot *Snapshot, files [] *Entry)
         if file.EndChunk - file.StartChunk > maximumChunks {
             maximumChunks = file.EndChunk - file.StartChunk
         }
-        downloader.totalFileSize += file.Size
     }
 }

@@ -177,12 +177,6 @@ func (downloader *ChunkDownloader) Reclaim(chunkIndex int) {
         return
     }

-    for i := downloader.lastChunkIndex; i < chunkIndex; i++ {
-        if !downloader.taskList[i].isDownloading {
-            atomic.AddInt64(&downloader.downloadedFileSize, int64(downloader.taskList[i].chunkLength))
-        }
-    }
-
     for i, _ := range downloader.completedTasks {
         if i < chunkIndex && downloader.taskList[i].chunk != nil {
             downloader.config.PutChunk(downloader.taskList[i].chunk)
@@ -320,7 +314,11 @@ func (downloader *ChunkDownloader) Download(threadIndex int, task ChunkDownloadT

     if !exist {
         // A chunk is not found. This is a serious error and hopefully it will never happen.
-        LOG_FATAL("DOWNLOAD_CHUNK", "Chunk %s can't be found", chunkID)
+        if err != nil {
+            LOG_FATAL("DOWNLOAD_CHUNK", "Chunk %s can't be found: %v", chunkID, err)
+        } else {
+            LOG_FATAL("DOWNLOAD_CHUNK", "Chunk %s can't be found", chunkID)
+        }
         return false
     }
     LOG_DEBUG("CHUNK_FOSSIL", "Chunk %s has been marked as a fossil", chunkID)
@@ -353,21 +351,20 @@ func (downloader *ChunkDownloader) Download(threadIndex int, task ChunkDownloadT
         }
     }

-    if (downloader.showStatistics || IsTracing()) && downloader.totalFileSize > 0 {
-        atomic.AddInt64(&downloader.downloadedFileSize, int64(chunk.GetLength()))
-        downloadFileSize := atomic.LoadInt64(&downloader.downloadedFileSize)
+    if (downloader.showStatistics || IsTracing()) && downloader.totalChunkSize > 0 {
+        downloadedChunkSize := atomic.AddInt64(&downloader.downloadedChunkSize, int64(chunk.GetLength()))

         now := time.Now().Unix()
         if now <= downloader.startTime {
             now = downloader.startTime + 1
         }
-        speed := downloadFileSize / (now - downloader.startTime)
+        speed := downloadedChunkSize / (now - downloader.startTime)
         remainingTime := int64(0)
         if speed > 0 {
-            remainingTime = (downloader.totalFileSize - downloadFileSize) / speed + 1
+            remainingTime = (downloader.totalChunkSize - downloadedChunkSize) / speed + 1
        }
-        percentage := float32(downloadFileSize * 1000 / downloader.totalFileSize)
+        percentage := float32(downloadedChunkSize * 1000 / downloader.totalChunkSize)
         LOG_INFO("DOWNLOAD_PROGRESS", "Downloaded chunk %d size %d, %sB/s %s %.1f%%",
                  task.chunkIndex + 1, chunk.GetLength(),
                  PrettySize(speed), PrettyTime(remainingTime), percentage / 10)
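The progress arithmetic above stays in integers until the very end: scaling by 1000 before the division preserves one decimal place of the percentage. A quick illustration with made-up sizes:

```go
package main

import "fmt"

func main() {
	// Hypothetical: 350 MB downloaded out of 1 GB of chunk data.
	var downloadedChunkSize, totalChunkSize int64 = 350000000, 1000000000
	percentage := float32(downloadedChunkSize * 1000 / totalChunkSize)
	fmt.Printf("%.1f%%\n", percentage/10) // prints 35.0%
}
```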
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -178,6 +178,9 @@ func (maker *ChunkMaker) ForEachChunk(reader io.Reader, endOfChunk func(chunk *C
             fileHasher = maker.config.NewFileHasher()
             isEOF = false
         }
+    } else {
+        endOfChunk(chunk, false)
+        startNewChunk()
     }
 }
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -92,6 +92,11 @@ func (uploader *ChunkUploader) Upload(threadIndex int, task ChunkUploadTask) boo
     chunkSize := chunk.GetLength()
     chunkID := chunk.GetID()

+    // For a snapshot chunk, verify that its chunk id is correct
+    if uploader.snapshotCache != nil {
+        chunk.VerifyID()
+    }
+
     if uploader.snapshotCache != nil && uploader.storage.IsCacheNeeded() {
         // Save a copy to the local snapshot.
         chunkPath, exist, _, err := uploader.snapshotCache.FindChunk(threadIndex, chunkID, false)
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -104,7 +104,7 @@ func TestUploaderAndDownloader(t *testing.T) {

     chunkDownloader := CreateChunkDownloader(config, storage, nil, true, testThreads)
-    chunkDownloader.totalFileSize = int64(totalFileSize)
+    chunkDownloader.totalChunkSize = int64(totalFileSize)

     for _, chunk := range chunks {
         chunkDownloader.AddChunk(chunk.GetHash())
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -1,7 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
-
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com
 package duplicacy

 import (

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -18,19 +18,25 @@ import (
 type FileStorage struct {
     RateLimitedStorage

+    minimumLevel    int  // The minimum level of directories to dive into before searching for the chunk file.
+    isCacheNeeded   bool // Network storages require caching
     storageDir      string
     numberOfThreads int
 }

 // CreateFileStorage creates a file storage.
-func CreateFileStorage(storageDir string, threads int) (storage *FileStorage, err error) {
+func CreateFileStorage(storageDir string, minimumLevel int, isCacheNeeded bool, threads int) (storage *FileStorage, err error) {

     var stat os.FileInfo

     stat, err = os.Stat(storageDir)
-    if os.IsNotExist(err) {
-        err = os.MkdirAll(storageDir, 0744)
-        if err != nil {
+    if err != nil {
+        if os.IsNotExist(err) {
+            err = os.MkdirAll(storageDir, 0744)
+            if err != nil {
+                return nil, err
+            }
+        } else {
             return nil, err
         }
     } else {

@@ -45,6 +51,8 @@ func CreateFileStorage(storageDir string, threads int) (storage *FileStorage, er

     storage = &FileStorage {
         storageDir : storageDir,
+        minimumLevel: minimumLevel,
+        isCacheNeeded: isCacheNeeded,
         numberOfThreads: threads,
     }

@@ -128,16 +136,18 @@ func (storage *FileStorage) FindChunk(threadIndex int, chunkID string, isFossil
         suffix = ".fsl"
     }

-    // The minimum level of directories to dive into before searching for the chunk file.
-    minimumLevel := 2
-
     for level := 0; level * 2 < len(chunkID); level ++ {
-        if level >= minimumLevel {
+        if level >= storage.minimumLevel {
             filePath = path.Join(dir, chunkID[2 * level:]) + suffix
-            if stat, err := os.Stat(filePath); err == nil && !stat.IsDir() {
+            // Use Lstat() instead of Stat() since 1) Stat() doesn't work for deduplicated disks on Windows and 2) there isn't
+            // really a need to follow the link if filePath is a link.
+            stat, err := os.Lstat(filePath)
+            if err != nil {
+                LOG_DEBUG("FS_FIND", "File %s can't be found: %v", filePath, err)
+            } else if stat.IsDir() {
+                return filePath[len(storage.storageDir) + 1:], false, 0, fmt.Errorf("The path %s is a directory", filePath)
+            } else {
                 return filePath[len(storage.storageDir) + 1:], true, stat.Size(), nil
-            } else if err == nil && stat.IsDir() {
-                return filePath[len(storage.storageDir) + 1:], true, 0, fmt.Errorf("The path %s is a directory", filePath)
             }
         }

@@ -149,7 +159,7 @@ func (storage *FileStorage) FindChunk(threadIndex int, chunkID string, isFossil
             continue
         }

-        if level < minimumLevel {
+        if level < storage.minimumLevel {
             // Create the subdirectory if it doesn't exist.

             if err == nil && !stat.IsDir() {

@@ -164,7 +174,6 @@ func (storage *FileStorage) FindChunk(threadIndex int, chunkID string, isFossil
                 return "", false, 0, err
             }
         }
-
         dir = subDir
         continue
     }

@@ -174,9 +183,7 @@ func (storage *FileStorage) FindChunk(threadIndex int, chunkID string, isFossil

     }

-    LOG_FATAL("CHUNK_FIND", "Chunk %s is still not found after having searched a maximum level of directories",
-        chunkID)
-    return "", false, 0, nil
+    return "", false, 0, fmt.Errorf("The maximum level of directories searched")

 }

@@ -241,7 +248,7 @@ func (storage *FileStorage) UploadFile(threadIndex int, filePath string, content

 // If a local snapshot cache is needed for the storage to avoid downloading/uploading chunks too often when
 // managing snapshots.
-func (storage *FileStorage) IsCacheNeeded () (bool) { return false }
+func (storage *FileStorage) IsCacheNeeded () (bool) { return storage.isCacheNeeded }

 // If the 'MoveFile' method is implemented.
 func (storage *FileStorage) IsMoveFileImplemented() (bool) { return true }
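To make the new `minimumLevel` parameter concrete: each directory level consumes two hex characters of the chunk ID, and the chunk file sits at least `minimumLevel` directories deep. A hedged sketch of that mapping (a hypothetical helper, not Duplicacy's own function); with level 0 the layout is flat, and with level 2 it matches the usual `chunks/ab/cd/...` layout:

```go
package main

import (
	"fmt"
	"path"
)

// chunkPath nests a chunk file at least minimumLevel directories deep,
// consuming two hex characters of the ID per directory level.
func chunkPath(chunkID string, minimumLevel int) string {
	dir := "chunks"
	level := 0
	for ; level < minimumLevel && 2*(level+1) < len(chunkID); level++ {
		dir = path.Join(dir, chunkID[2*level:2*(level+1)])
	}
	return path.Join(dir, chunkID[2*level:])
}

func main() {
	fmt.Println(chunkPath("ab12cd34ef56", 2)) // chunks/ab/12/cd34ef56
	fmt.Println(chunkPath("ab12cd34ef56", 0)) // chunks/ab12cd34ef56
}
```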
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -1,11 +1,12 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

 import (
     "fmt"
+    "net"
     "time"
     "sync"
     "bytes"

@@ -64,7 +65,17 @@ func NewHubicClient(tokenFile string) (*HubicClient, error) {
     }

     client := &HubicClient{
-        HTTPClient: http.DefaultClient,
+        HTTPClient: &http.Client {
+            Transport: &http.Transport {
+                Dial: (&net.Dialer{
+                    Timeout:   30 * time.Second,
+                    KeepAlive: 30 * time.Second,
+                }).Dial,
+                TLSHandshakeTimeout:   60 * time.Second,
+                ResponseHeaderTimeout: 30 * time.Second,
+                ExpectContinueTimeout: 10 * time.Second,
+            },
+        },
         TokenFile: tokenFile,
         Token: token,
         TokenLock: &sync.Mutex{},
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 // +build !windows

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -9,6 +9,7 @@ import (
     "time"
     "sync"
     "bytes"
+    "strings"
     "io/ioutil"
     "encoding/json"
     "io"

@@ -41,6 +42,7 @@ type OneDriveClient struct {
     Token *oauth2.Token
     TokenLock *sync.Mutex

+    IsConnected bool
     TestMode bool
 }
@@ -115,9 +117,27 @@ func (client *OneDriveClient) call(url string, method string, input interface{},

     response, err = client.HTTPClient.Do(request)
     if err != nil {
+        if client.IsConnected {
+            if strings.Contains(err.Error(), "TLS handshake timeout") {
+                // Give a long timeout regardless of backoff when a TLS timeout happens, hoping that
+                // idle connections are not to be reused on reconnect.
+                retryAfter := time.Duration(rand.Float32() * 60000 + 180000)
+                LOG_INFO("ONEDRIVE_RETRY", "TLS handshake timeout; retry after %d milliseconds", retryAfter)
+                time.Sleep(retryAfter * time.Millisecond)
+            } else {
+                // For all other errors just blindly retry until the maximum is reached
+                retryAfter := time.Duration(rand.Float32() * 1000.0 * float32(backoff))
+                LOG_INFO("ONEDRIVE_RETRY", "%v; retry after %d milliseconds", err, retryAfter)
+                time.Sleep(retryAfter * time.Millisecond)
+            }
+            backoff *= 2
+            continue
+        }
         return nil, 0, err
     }

+    client.IsConnected = true
+
     if response.StatusCode < 400 {
         return response.Body, response.ContentLength, nil
     }

@@ -139,9 +159,9 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
             return nil, 0, err
         }
         continue
-    } else if response.StatusCode == 500 || response.StatusCode == 503 || response.StatusCode == 509 {
+    } else if response.StatusCode > 401 && response.StatusCode != 404 {
         retryAfter := time.Duration(rand.Float32() * 1000.0 * float32(backoff))
-        LOG_INFO("ONEDRIVE_RETRY", "Response status: %d; retry after %d milliseconds", response.StatusCode, retryAfter)
+        LOG_INFO("ONEDRIVE_RETRY", "Response code: %d; retry after %d milliseconds", response.StatusCode, retryAfter)
         time.Sleep(retryAfter * time.Millisecond)
         backoff *= 2
         continue

@@ -151,7 +171,6 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
     }

     errorResponse.Error.Status = response.StatusCode
-
     return nil, 0, errorResponse.Error
     }
 }

@@ -169,7 +188,7 @@ func (client *OneDriveClient) RefreshToken() (err error) {

     readCloser, _, err := client.call(OneDriveRefreshTokenURL, "POST", client.Token, "")
     if err != nil {
-        return err
+        return fmt.Errorf("failed to refresh the access token: %v", err)
     }

     defer readCloser.Close()
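The retry logic above combines two policies: randomized ("jittered") exponential backoff for ordinary failures, and a long fixed-range pause reserved for TLS handshake timeouts. A self-contained sketch of the delay schedule (a hypothetical helper; the client above keeps `backoff` as mutable state that the caller doubles on each retry):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryDelay reproduces the two delay schedules used in the hunk above.
func retryDelay(backoff int, tlsTimeout bool) time.Duration {
	if tlsTimeout {
		// 180-240 seconds, independent of the current backoff value
		return time.Duration(rand.Float32()*60000+180000) * time.Millisecond
	}
	// up to `backoff` seconds of jitter; backoff doubles after every retry
	return time.Duration(rand.Float32()*1000.0*float32(backoff)) * time.Millisecond
}

func main() {
	for backoff := 1; backoff <= 8; backoff *= 2 {
		fmt.Println(retryDelay(backoff, false))
	}
}
```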
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
src/duplicacy_s3cstorage.go (new file, 212 lines)
@@ -0,0 +1,212 @@
+// Copyright (c) Acrosync LLC. All rights reserved.
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com
+
+package duplicacy
+
+import (
+    "time"
+
+    "github.com/gilbertchen/goamz/aws"
+    "github.com/gilbertchen/goamz/s3"
+)
+
+// S3CStorage is a storage backend for S3-compatible storages that require V2 signing.
+type S3CStorage struct {
+    RateLimitedStorage
+
+    buckets    []*s3.Bucket
+    storageDir string
+}
+
+// CreateS3CStorage creates an S3-compatible storage object.
+func CreateS3CStorage(regionName string, endpoint string, bucketName string, storageDir string,
+                      accessKey string, secretKey string, threads int) (storage *S3CStorage, err error) {
+
+    var region aws.Region
+
+    if endpoint == "" {
+        if regionName == "" {
+            regionName = "us-east-1"
+        }
+        region = aws.Regions[regionName]
+    } else {
+        region = aws.Region { Name: regionName, S3Endpoint: "https://" + endpoint }
+    }
+
+    auth := aws.Auth{ AccessKey: accessKey, SecretKey: secretKey }
+
+    var buckets []*s3.Bucket
+    for i := 0; i < threads; i++ {
+        s3Client := s3.New(auth, region)
+        s3Client.AttemptStrategy = aws.AttemptStrategy{
+            Min:   8,
+            Total: 300 * time.Second,
+            Delay: 1000 * time.Millisecond,
+        }
+
+        bucket := s3Client.Bucket(bucketName)
+        buckets = append(buckets, bucket)
+    }
+
+    if len(storageDir) > 0 && storageDir[len(storageDir) - 1] != '/' {
+        storageDir += "/"
+    }
+
+    storage = &S3CStorage {
+        buckets: buckets,
+        storageDir: storageDir,
+    }
+
+    return storage, nil
+}
+
+// ListFiles returns the list of files and subdirectories under 'dir' (non-recursively).
+func (storage *S3CStorage) ListFiles(threadIndex int, dir string) (files []string, sizes []int64, err error) {
+    if len(dir) > 0 && dir[len(dir) - 1] != '/' {
+        dir += "/"
+    }
+
+    dirLength := len(storage.storageDir + dir)
+    if dir == "snapshots/" {
+        results, err := storage.buckets[threadIndex].List(storage.storageDir + dir, "/", "", 100)
+        if err != nil {
+            return nil, nil, err
+        }
+
+        for _, subDir := range results.CommonPrefixes {
+            files = append(files, subDir[dirLength:])
+        }
+        return files, nil, nil
+    } else if dir == "chunks/" {
+        marker := ""
+        for {
+            results, err := storage.buckets[threadIndex].List(storage.storageDir + dir, "", marker, 1000)
+            if err != nil {
+                return nil, nil, err
+            }
+
+            for _, object := range results.Contents {
+                files = append(files, object.Key[dirLength:])
+                sizes = append(sizes, object.Size)
+            }
+
+            if !results.IsTruncated {
+                break
+            }
+
+            marker = results.Contents[len(results.Contents) - 1].Key
+        }
+        return files, sizes, nil
+    } else {
+        results, err := storage.buckets[threadIndex].List(storage.storageDir + dir, "", "", 1000)
+        if err != nil {
+            return nil, nil, err
+        }
+
+        for _, object := range results.Contents {
+            files = append(files, object.Key[dirLength:])
+        }
+        return files, nil, nil
+    }
+}
+
+// DeleteFile deletes the file or directory at 'filePath'.
+func (storage *S3CStorage) DeleteFile(threadIndex int, filePath string) (err error) {
+    return storage.buckets[threadIndex].Del(storage.storageDir + filePath)
+}
+
+// MoveFile renames the file.
+func (storage *S3CStorage) MoveFile(threadIndex int, from string, to string) (err error) {
+    options := s3.CopyOptions { ContentType: "application/duplicacy" }
+    _, err = storage.buckets[threadIndex].PutCopy(storage.storageDir + to, s3.Private, options, storage.buckets[threadIndex].Name + "/" + storage.storageDir + from)
+    if err != nil {
+        return err
+    }
+
+    return storage.DeleteFile(threadIndex, from)
+}
+
+// CreateDirectory creates a new directory.
+func (storage *S3CStorage) CreateDirectory(threadIndex int, dir string) (err error) {
+    return nil
+}
+
+// GetFileInfo returns the information about the file or directory at 'filePath'.
+func (storage *S3CStorage) GetFileInfo(threadIndex int, filePath string) (exist bool, isDir bool, size int64, err error) {
+    response, err := storage.buckets[threadIndex].Head(storage.storageDir + filePath, nil)
+    if err != nil {
+        if e, ok := err.(*s3.Error); ok && (e.StatusCode == 403 || e.StatusCode == 404) {
+            return false, false, 0, nil
+        } else {
+            return false, false, 0, err
+        }
+    }
+
+    if response.StatusCode == 403 || response.StatusCode == 404 {
+        return false, false, 0, nil
+    } else {
+        return true, false, response.ContentLength, nil
+    }
+}
+
+// FindChunk finds the chunk with the specified id. If 'isFossil' is true, it will search for chunk files with
+// the suffix '.fsl'.
+func (storage *S3CStorage) FindChunk(threadIndex int, chunkID string, isFossil bool) (filePath string, exist bool, size int64, err error) {
+    filePath = "chunks/" + chunkID
+    if isFossil {
+        filePath += ".fsl"
+    }
+
+    exist, _, size, err = storage.GetFileInfo(threadIndex, filePath)
+    if err != nil {
+        return "", false, 0, err
+    } else {
+        return filePath, exist, size, err
+    }
+}
+
+// DownloadFile reads the file at 'filePath' into the chunk.
+func (storage *S3CStorage) DownloadFile(threadIndex int, filePath string, chunk *Chunk) (err error) {
+    readCloser, err := storage.buckets[threadIndex].GetReader(storage.storageDir + filePath)
+    if err != nil {
+        return err
+    }
+
+    defer readCloser.Close()
+
+    _, err = RateLimitedCopy(chunk, readCloser, storage.DownloadRateLimit / len(storage.buckets))
+    return err
+}
+
+// UploadFile writes 'content' to the file at 'filePath'.
+func (storage *S3CStorage) UploadFile(threadIndex int, filePath string, content []byte) (err error) {
+    options := s3.Options { }
+    reader := CreateRateLimitedReader(content, storage.UploadRateLimit / len(storage.buckets))
+    return storage.buckets[threadIndex].PutReader(storage.storageDir + filePath, reader, int64(len(content)), "application/duplicacy", s3.Private, options)
+}
+
+// If a local snapshot cache is needed for the storage to avoid downloading/uploading chunks too often when
+// managing snapshots.
+func (storage *S3CStorage) IsCacheNeeded () (bool) { return true }
+
+// If the 'MoveFile' method is implemented.
+func (storage *S3CStorage) IsMoveFileImplemented() (bool) { return true }
+
+// If the storage can guarantee strong consistency.
+func (storage *S3CStorage) IsStrongConsistent() (bool) { return false }
+
+// If the storage supports fast listing of file names.
+func (storage *S3CStorage) IsFastListing() (bool) { return true }
+
+// Enable the test mode.
+func (storage *S3CStorage) EnableTestMode() {}
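A hedged usage sketch for the new backend, assuming it sits in the same package as the file above; the endpoint, bucket, and directory are placeholders, and only `CreateS3CStorage` itself comes from the source. Passing an empty region with a non-empty endpoint pins V2 signing against that host:

```go
// openS3CStorage shows how CreateS3CStorage is typically wired up; the s3c://
// URL scheme in CreateStorage performs the same call after parsing the URL.
func openS3CStorage(accessKey string, secretKey string) (*S3CStorage, error) {
	return CreateS3CStorage(
		"",               // region name; ignored when an endpoint is given
		"s3.example.com", // V2-signing endpoint (placeholder)
		"my-bucket",      // bucket name (placeholder)
		"backups",        // directory inside the bucket; a trailing '/' is added
		accessKey, secretKey,
		4, // threads: one s3.Bucket handle is created per thread
	)
}
```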
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -23,7 +23,8 @@ type S3Storage struct {

 // CreateS3Storage creates an Amazon S3 storage object.
 func CreateS3Storage(regionName string, endpoint string, bucketName string, storageDir string,
-    accessKey string, secretKey string, threads int) (storage *S3Storage, err error) {
+    accessKey string, secretKey string, threads int,
+    isSSLSupported bool, isMinioCompatible bool) (storage *S3Storage, err error) {

     token := ""

@@ -53,7 +54,9 @@ func CreateS3Storage(regionName string, endpoint string, bucketName string, stor
         Region: aws.String(regionName),
         Credentials: auth,
         Endpoint: aws.String(endpoint),
-    }
+        S3ForcePathStyle: aws.Bool(isMinioCompatible),
+        DisableSSL: aws.Bool(!isSSLSupported),
+    }

     if len(storageDir) > 0 && storageDir[len(storageDir) - 1] != '/' {
         storageDir += "/"
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -285,7 +285,7 @@ func (storage *SFTPStorage) UploadFile(threadIndex int, filePath string, content
         storage.client.Remove(temporaryFile)
         return nil
     } else {
-        return fmt.Errorf("Uploaded file but failed to store it at %s", fullPath)
+        return fmt.Errorf("Uploaded file but failed to store it at %s: %v", fullPath, err)
     }
 }
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 // +build !windows

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -1092,15 +1092,18 @@ func (manager *SnapshotManager) RetrieveFile(snapshot *Snapshot, file *Entry, ou
 }

 // FindFile returns the file entry that has the given file name.
-func (manager *SnapshotManager) FindFile(snapshot *Snapshot, filePath string) (*Entry) {
+func (manager *SnapshotManager) FindFile(snapshot *Snapshot, filePath string, suppressError bool) (*Entry) {
     for _, entry := range snapshot.Files {
         if entry.Path == filePath {
             return entry
         }
     }

-    LOG_ERROR("SNAPSHOT_FIND", "No file %s found in snapshot %s at revision %d",
-        filePath, snapshot.ID, snapshot.Revision)
+    if !suppressError {
+        LOG_ERROR("SNAPSHOT_FIND", "No file %s found in snapshot %s at revision %d",
+            filePath, snapshot.ID, snapshot.Revision)
+    }

     return nil
 }

@@ -1139,7 +1142,7 @@ func (manager *SnapshotManager) PrintFile(snapshotID string, revision int, path
         return true
     }

-    file := manager.FindFile(snapshot, path)
+    file := manager.FindFile(snapshot, path, false)
     var content [] byte
     if !manager.RetrieveFile(snapshot, file, func(chunk []byte) { content = append(content, chunk...) }) {
         LOG_ERROR("SNAPSHOT_RETRIEVE", "File %s is corrupted in snapshot %s at revision %d",

@@ -1197,7 +1200,7 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []
     }

     var leftFile []byte
-    if !manager.RetrieveFile(leftSnapshot, manager.FindFile(leftSnapshot, filePath), func(content []byte) {
+    if !manager.RetrieveFile(leftSnapshot, manager.FindFile(leftSnapshot, filePath, false), func(content []byte) {
         leftFile = append(leftFile, content...)
     }) {
         LOG_ERROR("SNAPSHOT_DIFF", "File %s is corrupted in snapshot %s at revision %d",

@@ -1207,7 +1210,7 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []

     var rightFile []byte
     if rightSnapshot != nil {
-        if !manager.RetrieveFile(rightSnapshot, manager.FindFile(rightSnapshot, filePath), func(content []byte) {
+        if !manager.RetrieveFile(rightSnapshot, manager.FindFile(rightSnapshot, filePath, false), func(content []byte) {
             rightFile = append(rightFile, content...)
         }) {
             LOG_ERROR("SNAPSHOT_DIFF", "File %s is corrupted in snapshot %s at revision %d",

@@ -1376,7 +1379,7 @@ func (manager *SnapshotManager) ShowHistory(top string, snapshotID string, revis
     for _, revision := range revisions {
         snapshot := manager.DownloadSnapshot(snapshotID, revision)
         manager.DownloadSnapshotFileSequence(snapshot, nil)
-        file := manager.FindFile(snapshot, filePath)
+        file := manager.FindFile(snapshot, filePath, true)

         if file != nil {
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -95,14 +95,14 @@ func createTestSnapshotManager(testDir string) *SnapshotManager {
     os.RemoveAll(testDir)
     os.MkdirAll(testDir, 0700)

-    storage, _ := CreateFileStorage(testDir, 1)
+    storage, _ := CreateFileStorage(testDir, 2, false, 1)
     storage.CreateDirectory(0, "chunks")
     storage.CreateDirectory(0, "snapshots")
     config := CreateConfig()
     snapshotManager := CreateSnapshotManager(config, storage)

     cacheDir := path.Join(testDir, "cache")
-    snapshotCache, _ := CreateFileStorage(cacheDir, 1)
+    snapshotCache, _ := CreateFileStorage(cacheDir, 2, false, 1)
     snapshotCache.CreateDirectory(0, "chunks")
     snapshotCache.CreateDirectory(0, "snapshots")
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -127,6 +127,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
     storageURL := preference.StorageURL

     isFileStorage := false
+    isCacheNeeded := false

     if strings.HasPrefix(storageURL, "/") {
         isFileStorage = true

@@ -140,11 +141,30 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor

         if !isFileStorage && strings.HasPrefix(storageURL, `\\`) {
             isFileStorage = true
+            isCacheNeeded = true
         }
     }

     if isFileStorage {
-        fileStorage, err := CreateFileStorage(storageURL, threads)
+        fileStorage, err := CreateFileStorage(storageURL, 2, isCacheNeeded, threads)
         if err != nil {
             LOG_ERROR("STORAGE_CREATE", "Failed to load the file storage at %s: %v", storageURL, err)
             return nil
         }
         return fileStorage
     }

+    if strings.HasPrefix(storageURL, "flat://") {
+        fileStorage, err := CreateFileStorage(storageURL[7:], 0, false, threads)
+        if err != nil {
+            LOG_ERROR("STORAGE_CREATE", "Failed to load the file storage at %s: %v", storageURL, err)
+            return nil
+        }
+        return fileStorage
+    }
+
+    if strings.HasPrefix(storageURL, "samba://") {
+        fileStorage, err := CreateFileStorage(storageURL[8:], 2, true, threads)
+        if err != nil {
+            LOG_ERROR("STORAGE_CREATE", "Failed to load the file storage at %s: %v", storageURL, err)
+            return nil
@@ -293,7 +313,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
             SavePassword(preference, "ssh_password", password)
         }
         return sftpStorage
-    } else if matched[1] == "s3" {
+    } else if matched[1] == "s3" || matched[1] == "s3c" || matched[1] == "minio" || matched[1] == "minios" {

         // urlRegex := regexp.MustCompile(`^(\w+)://([\w\-]+@)?([^/]+)(/(.+))?`)

@@ -319,15 +339,27 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
         accessKey := GetPassword(preference, "s3_id", "Enter S3 Access Key ID:", true, resetPassword)
         secretKey := GetPassword(preference, "s3_secret", "Enter S3 Secret Access Key:", true, resetPassword)

-        s3Storage, err := CreateS3Storage(region, endpoint, bucket, storageDir, accessKey, secretKey, threads)
-        if err != nil {
-            LOG_ERROR("STORAGE_CREATE", "Failed to load the S3 storage at %s: %v", storageURL, err)
-            return nil
+        var err error
+
+        if matched[1] == "s3c" {
+            storage, err = CreateS3CStorage(region, endpoint, bucket, storageDir, accessKey, secretKey, threads)
+            if err != nil {
+                LOG_ERROR("STORAGE_CREATE", "Failed to load the S3C storage at %s: %v", storageURL, err)
+                return nil
+            }
+        } else {
+            isMinioCompatible := (matched[1] == "minio" || matched[1] == "minios")
+            isSSLSupported := (matched[1] == "s3" || matched[1] == "minios")
+            storage, err = CreateS3Storage(region, endpoint, bucket, storageDir, accessKey, secretKey, threads, isSSLSupported, isMinioCompatible)
+            if err != nil {
+                LOG_ERROR("STORAGE_CREATE", "Failed to load the S3 storage at %s: %v", storageURL, err)
+                return nil
+            }
         }
         SavePassword(preference, "s3_id", accessKey)
         SavePassword(preference, "s3_secret", secretKey)

-        return s3Storage
+        return storage
     } else if matched[1] == "dropbox" {
         storageDir := matched[3] + matched[5]
         token := GetPassword(preference, "dropbox_token", "Enter Dropbox access token:", true, resetPassword)
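The dispatch above encodes the backend choice entirely in the URL scheme. A condensed sketch of that mapping (a hypothetical helper summarizing the branches, not a function in the source):

```go
package main

import "fmt"

// s3SchemeFlags summarizes the dispatch: s3c selects the V2-signing
// S3CStorage backend, while the other schemes select S3Storage with
// different SSL and path-style (Minio-compatible) settings.
func s3SchemeFlags(scheme string) (useV2Backend, isSSLSupported, isMinioCompatible bool) {
	switch scheme {
	case "s3c":
		return true, false, false
	case "minio":
		return false, false, true
	case "minios":
		return false, true, true
	default: // "s3"
		return false, true, false
	}
}

func main() {
	fmt.Println(s3SchemeFlags("minios")) // false true true
}
```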
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -41,7 +41,7 @@ func init() {
 func loadStorage(localStoragePath string, threads int) (Storage, error) {

     if testStorageName == "" || testStorageName == "file" {
-        return CreateFileStorage(localStoragePath, threads)
+        return CreateFileStorage(localStoragePath, 2, false, threads)
     }

     config, err := ioutil.ReadFile("test_storage.conf")

@@ -61,17 +61,27 @@ func loadStorage(localStoragePath string, threads int) (Storage, error) {
         return nil, fmt.Errorf("No storage named '%s' found", testStorageName)
     }

-    if testStorageName == "sftp" {
+    if testStorageName == "flat" {
+        return CreateFileStorage(localStoragePath, 0, false, threads)
+    } else if testStorageName == "samba" {
+        return CreateFileStorage(localStoragePath, 2, true, threads)
+    } else if testStorageName == "sftp" {
         port, _ := strconv.Atoi(storage["port"])
         return CreateSFTPStorageWithPassword(storage["server"], port, storage["username"], storage["directory"], storage["password"], threads)
-    } else if testStorageName == "s3" {
-        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads)
+    } else if testStorageName == "s3" || testStorageName == "wasabi" {
+        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads, true, false)
+    } else if testStorageName == "s3c" {
+        return CreateS3CStorage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads)
+    } else if testStorageName == "minio" {
+        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads, false, true)
+    } else if testStorageName == "minios" {
+        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads, true, true)
     } else if testStorageName == "dropbox" {
         return CreateDropboxStorage(storage["token"], storage["directory"], threads)
     } else if testStorageName == "b2" {
         return CreateB2Storage(storage["account"], storage["key"], storage["bucket"], threads)
     } else if testStorageName == "gcs-s3" {
-        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads)
+        return CreateS3Storage(storage["region"], storage["endpoint"], storage["bucket"], storage["directory"], storage["access_key"], storage["secret_key"], threads, true, false)
     } else if testStorageName == "gcs" {
         return CreateGCSStorage(storage["token_file"], storage["bucket"], storage["directory"], threads)
     } else if testStorageName == "gcs-sa" {
@@ -448,3 +458,64 @@ func TestStorage(t *testing.T) {
     }

 }
+
+func TestCleanStorage(t *testing.T) {
+    setTestingT(t)
+    SetLoggingLevel(INFO)
+
+    defer func() {
+        if r := recover(); r != nil {
+            switch e := r.(type) {
+            case Exception:
+                t.Errorf("%s %s", e.LogID, e.Message)
+                debug.PrintStack()
+            default:
+                t.Errorf("%v", e)
+                debug.PrintStack()
+            }
+        }
+    } ()
+
+    testDir := path.Join(os.TempDir(), "duplicacy_test", "storage_test")
+    os.RemoveAll(testDir)
+    os.MkdirAll(testDir, 0700)
+
+    LOG_INFO("STORAGE_TEST", "storage: %s", testStorageName)
+
+    storage, err := loadStorage(testDir, 1)
+    if err != nil {
+        t.Errorf("Failed to create storage: %v", err)
+        return
+    }
+
+    directories := make([]string, 0, 1024)
+    directories = append(directories, "snapshots/")
+    directories = append(directories, "chunks/")
+
+    for len(directories) > 0 {
+
+        dir := directories[len(directories) - 1]
+        directories = directories[:len(directories) - 1]
+
+        LOG_INFO("LIST_FILES", "Listing %s", dir)
+
+        files, _, err := storage.ListFiles(0, dir)
+        if err != nil {
+            LOG_ERROR("LIST_FILES", "Failed to list the directory %s: %v", dir, err)
+            return
+        }
+
+        for _, file := range files {
+            if len(file) > 0 && file[len(file) - 1] == '/' {
+                directories = append(directories, dir + file)
+            } else {
+                storage.DeleteFile(0, dir + file)
+                LOG_INFO("DELETE_FILE", "Deleted file %s", file)
+            }
+        }
+    }
+
+    storage.DeleteFile(0, "config")
+    LOG_INFO("DELETE_FILE", "Deleted config")
+}
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy
@@ -9,7 +9,6 @@ import (
     "os"
     "bufio"
     "io"
-    "io/ioutil"
     "time"
     "path"
     "path/filepath"

@@ -190,54 +189,6 @@ func SavePassword(preference Preference, passwordType string, password string) {
     keyringSet(passwordID, password)
 }

-// RemoveEmptyDirectories removes all empty subdirectories under top.
-func RemoveEmptyDirectories(top string) {
-
-    stack := make([]string, 0, 256)
-
-    stack = append(stack, top)
-
-    for len(stack) > 0 {
-
-        dir := stack[len(stack) - 1]
-        stack = stack[:len(stack) - 1]
-
-        files, err := ioutil.ReadDir(dir)
-
-        if err != nil {
-            continue
-        }
-
-        for _, file := range files {
-            if file.IsDir() && file.Name()[0] != '.' {
-                stack = append(stack, path.Join(dir, file.Name()))
-            }
-        }
-
-        if len(files) == 0 {
-            if os.Remove(dir) != nil {
-                continue
-            }
-
-            dir = path.Dir(dir)
-            for (len(dir) > len(top)) {
-                files, err := ioutil.ReadDir(dir)
-                if err != nil {
-                    break
-                }
-
-                if len(files) == 0 {
-                    if os.Remove(dir) != nil {
-                        break
-                    }
-                }
-                dir = path.Dir(dir)
-            }
-        }
-    }
-}
-
 // The following code was modified from the online article 'Matching Wildcards: An Algorithm', by Kirk J. Krauss,
 // Dr. Dobb's, August 26, 2008. However, the version in the article doesn't handle cases like matching 'abcccd'
 // against '*ccd', and the version here fixes that issue.
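The comment above points at the classic pitfall in greedy wildcard matching: without backtracking, 'abcccd' fails against '*ccd' because the '*' commits to the first 'cc' it sees. A minimal backtracking matcher in the same spirit (a sketch, not the function from this file):

```go
package main

import "fmt"

// match reports whether name matches pattern, where '*' matches any run of
// characters and '?' matches exactly one. On a mismatch after a '*', it
// backtracks so the '*' absorbs one more character; that is what lets
// 'abcccd' match '*ccd'.
func match(pattern string, name string) bool {
	p, n := 0, 0
	star, mark := -1, 0
	for n < len(name) {
		switch {
		case p < len(pattern) && (pattern[p] == '?' || pattern[p] == name[n]):
			p++
			n++
		case p < len(pattern) && pattern[p] == '*':
			star, mark = p, n // remember the '*' and where it started matching
			p++
		case star >= 0:
			p = star + 1 // backtrack: the '*' absorbs one more character
			mark++
			n = mark
		default:
			return false
		}
	}
	for p < len(pattern) && pattern[p] == '*' {
		p++ // trailing '*'s match the empty string
	}
	return p == len(pattern)
}

func main() {
	fmt.Println(match("*ccd", "abcccd")) // true
}
```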
@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 // +build !windows

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy

@@ -1,6 +1,6 @@
 // Copyright (c) Acrosync LLC. All rights reserved.
-// Licensed under the Fair Source License 0.9 (https://fair.io/)
-// User Limitation: 5 users
+// Free for personal use and commercial trial
+// Commercial use requires per-user licenses available from https://duplicacy.com

 package duplicacy