Compare commits

...

30 Commits

Author SHA1 Message Date
Gilbert Chen
76f1274e13 Bump version to 2.5.2 2020-05-10 21:41:30 -04:00
Gilbert Chen
9c3122b814 Fixed a bug causing the OneDrive Business token file path not to be saved
The file path was saved under the name `one_token` in the Keychain/keyring, rather
than the correct name `odb_token`.
2020-05-10 20:22:49 -04:00
Gilbert Chen
6ca8b8dff0 Disable snapshot cache when checking chunks
Otherwise every chunk would be stored in the snapshot cache when the `-chunks`
option is specified.
2020-05-10 00:26:47 -04:00
Gilbert Chen
51cbf73caa Bump version to 2.5.1 2020-04-17 15:57:08 -04:00
Gilbert Chen
835af11334 Fixed a bug in ssh login with encrypted private key
Check the type of the returned error instead of the error message to
determine whether the private key file is encrypted with a passphrase.
2020-04-17 15:55:30 -04:00
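The actual change is in duplicacy_storage.go further down this page; as a standalone illustration, here is a minimal sketch of the error-type check, assuming only golang.org/x/crypto/ssh (the helper name parseKey and its return shape are made up for this example):

package sshkey

import (
	"io/ioutil"

	"golang.org/x/crypto/ssh"
)

// parseKey reports whether the private key file is passphrase-protected by
// checking the error type instead of matching the error message text.
func parseKey(path string) (signer ssh.Signer, needsPassphrase bool, err error) {
	content, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, false, err
	}
	signer, err = ssh.ParsePrivateKey(content)
	if err != nil {
		if _, ok := err.(*ssh.PassphraseMissingError); ok {
			// Encrypted key: the caller should prompt for a passphrase and
			// retry with ssh.ParsePrivateKeyWithPassphrase.
			return nil, true, nil
		}
		return nil, false, err
	}
	return signer, false, nil
}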
Gilbert Chen
4c3557eb80 Bump version to 2.5.0 2020-04-09 23:22:32 -04:00
Gilbert Chen
eebcece9e0 Update github.com/aws/aws-sdk-go and google.golang.org/api to the latest 2020-04-09 23:21:55 -04:00
gilbertchen
8c80470c29 Merge pull request #593 from freaksdotcom/readall
Call ReadAll() on the body io.ReadCloser to allow the http keepalive connection to be reused.
2020-04-09 21:12:41 -04:00
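The change itself is the one-line defer added to the Dropbox backend below; as a general illustration of the pattern, here is a minimal sketch using only the standard library (the function name and URL handling are made up for this example):

package httpexample

import (
	"io"
	"io/ioutil"
	"net/http"
)

// fetch copies a response body to w. Deferred calls run last-in-first-out, so
// the body is drained to EOF by ReadAll and then closed; a body that has been
// read to EOF lets net/http reuse the underlying keep-alive connection instead
// of opening a new one for the next request.
func fetch(url string, w io.Writer) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	defer ioutil.ReadAll(resp.Body)

	_, err = io.Copy(w, resp.Body)
	return err
}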
Gilbert Chen
bcb889272d Add test for Google Shared Drive 2020-04-09 00:04:30 -04:00
Gilbert Chen
79d8654a12 Allow the name of Google Shared Drive to be used in the storage url
The previous PR only accepted the id of the shared drive, which is not very
memorable.  This commit allows the drive to be specified by id or by name.
2020-04-08 23:59:15 -04:00
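For example, after this change both of the following point at the same storage folder on a shared drive (the drive id and the drive name "projects" below are placeholders):

gcd://0AExampleSharedDriveID@backups/duplicacy
gcd://projects@backups/duplicacy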
Brandon High
6bf0d2265c Call ReadAll() on the body io.ReadCloser to allow the http keepalive connection to be reused. 2020-04-07 22:07:41 -07:00
Gilbert Chen
749db78a1f Implemented a global option to suppress logs by ids
You can now use -suppress LOGID or -s LOGID to skip printing logs with the given
ids.  This is a global option, which means it applies to all commands.
It can be specified more than once.
2020-04-07 23:22:17 -04:00
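For example, to run commands without the retry or abuse-warning messages that appear elsewhere in this diff (ONEDRIVE_RETRY and GCD_STORAGE are just two of the ids Duplicacy prints; any log id can be suppressed):

duplicacy -suppress ONEDRIVE_RETRY backup
duplicacy -s GCD_STORAGE -s ONEDRIVE_RETRY check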
Gilbert Chen
0a51bd8d1a Make log.Printf print to Duplicacy's logging system 2020-04-07 13:52:54 -04:00
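With this change, a library that Duplicacy depends on can write, for example (the id and message below are made up for illustration):

log.Printf("[NET_RETRY] retrying request after timeout")

and the line is emitted through Duplicacy's own logger as an INFO message with the id NET_RETRY; output that does not match the "[ID] message" shape falls back to the LOG_DEFAULT id (see the duplicacy_log.go diff below).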
Gilbert Chen
7208adbce2 Access Google Drive via service account.
Our GCS backend already supports service accounts.  This just copies the
relevant code from there.
2020-04-06 23:25:47 -04:00
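Duplicacy detects a service account token file by its "type" field and hands the whole file to Google's JWT config helper (see the gcd storage diff below). A trimmed example of the standard Google service-account key format, with placeholder values:

{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "duplicacy@my-project.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}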
gilbertchen
e827662869 Merge pull request #579 from rsanger/master
Add support for Shared Google Drives
2020-04-06 22:55:46 -04:00
gilbertchen
57dd5ba927 Merge pull request #590 from fbarthez/macos_sync_error
Fix "Failed to upload the chunk ... sync ...: operation not supported"
2020-04-06 22:54:44 -04:00
Gilbert Chen
01a37b7828 Fixed a typo in command line arguments 2020-04-06 22:13:08 -04:00
Gilbert Chen
57cd20bb84 Fixed the condition to show 'chunks are encrypted' messages
The 'File/Metadata chunks are encrypted' messages were always shown even if the
storage wasn't encrypted.
2020-04-06 12:22:47 -04:00
Gilbert Chen
0e970da222 Fixed build errors in tests caused by snapshotManager.CheckSnapshots 2020-04-06 12:19:42 -04:00
Gilbert Chen
e880636502 Fixed test build errors caused by the prototype change in CheckSnapshots() 2020-03-30 17:49:26 -04:00
Gilbert Chen
810303ce25 Fail the backup if the repository can't be accessed or there are no files
This is mainly to avoid creating an empty snapshot when a drive/share is
not mounted, which would cause the subsequent backup to scan all files again.
2020-03-30 17:44:53 -04:00
Gilbert Chen
ffac83dd80 Assume the signed certificate of a ssh key file has the suffix '-cert.pub'.
So if the ssh key file is 'mykey' then Duplicacy will check if the signed
certificate can be loaded from the file 'mykey-cert.pub'.  This avoids the
use of another preference variable 'ssh_cert_file'.
2020-03-25 23:50:35 -04:00
gilbertchen
05674871fe Merge pull request #547 from philband/ssh_signed_certificate
Add option to use a ssh key signed with a certificate to authenticate
2020-03-25 23:14:30 -04:00
Gilbert Chen
22d6f3abfc Add a -chunks option to the check command to verify the integrity of chunks
This option will download and verify every chunk.  Unlike the -files option,
this option only downloads each chunk once.  There is also a new -threads
option to use multiple threads to download chunks.
2020-03-24 20:58:45 -04:00
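For example, a run that verifies every chunk using four download threads (both flags are the ones added in this commit):

duplicacy check -chunks -threads 4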
Gilbert Chen
d26ffe2cff Add support for OneDrive for Business
The new storage prefix for OneDrive for Business is odb://

The token file can be downloaded from https://duplicacy.com/odb_start

OneDrive for Business uses essentially the same set of API calls, just with
different endpoints.  However, one major difference is that for files larger
than 4MB, an upload session must be created first, which is then used to upload
the file content.  Other than that, there are a few minor differences, such as
how creating an already-existing directory or moving files to a non-existent
directory is handled.
2020-03-19 14:59:26 -04:00
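For example (the path below is a placeholder), a OneDrive for Business storage is referenced with a URL of the form:

odb://backups/duplicacy

whereas OneDrive Personal storages keep the existing one:// prefix.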
Fabian Peters
a35f6c27be Fix "Failed to upload the chunk ... sync ...: operation not supported" issue when using SMB on macOS. This is done by inspecting the error returned by file.Sync() and ignoring it only when its operation is "sync" and the underlying error is "operation not supported"; any other error is still returned.
Note: this change is my first ever foray into Go and is based simply on the information provided in: https://forum.duplicacy.com/t/failed-to-upload-the-chunk-operation-not-supported/2875/11
2020-03-16 15:56:31 +01:00
Richard Sanger
aa07feeac0 Fix bug in gcd: init fails to create directories
init would not create directories in the root of a drive as
it did not know the root drive's ID.
2020-01-11 17:25:21 +13:00
Richard Sanger
7719bb9f29 Fix: backup to shared drive root
Allows writing to the drive root using:
gcd://driveid@ or gcd://driveid@/

To write to the root of the default user's drive use the special
shared drive named 'root':
gcd://root@/
2019-11-27 00:00:40 +13:00
Richard Sanger
426110e961 Adds support for GDrive Shared Drives
A shared drive can be accessed via
gcd://sharedDriveId@path/to/storage

sharedDriveId is optional; if omitted, duplicacy stores to the user's drive.
This remains backwards compatible with existing storage urls, e.g.
gcd://path/to/storage

Note: Shared Drives were previously named Team Drives.
2019-11-06 00:51:12 +13:00
Philipp Bandow
a55ac1b7ad Add option to use a ssh key signed with a certificate to authenticate 2019-02-28 01:37:14 +01:00
19 changed files with 560 additions and 138 deletions

Gopkg.lock generated
View File

@@ -7,17 +7,11 @@
revision = "2d3a6656c17a60b0815b7e06ab0be04eacb6e613"
version = "v0.16.0"
[[projects]]
name = "github.com/Azure/azure-sdk-for-go"
packages = ["version"]
revision = "b7fadebe0e7f5c5720986080a01495bd8d27be37"
version = "v14.2.0"
[[projects]]
name = "github.com/Azure/go-autorest"
packages = ["autorest","autorest/adal","autorest/azure","autorest/date"]
revision = "0ae36a9e544696de46fdadb7b0d5fb38af48c063"
version = "v10.2.0"
packages = ["autorest","autorest/adal","autorest/azure","autorest/date","logger","version"]
revision = "9bc4033dd347c7f416fca46b2f42a043dc1fbdf6"
version = "v10.15.5"
[[projects]]
branch = "master"
@@ -27,9 +21,9 @@
[[projects]]
name = "github.com/aws/aws-sdk-go"
packages = ["aws","aws/awserr","aws/awsutil","aws/client","aws/client/metadata","aws/corehandlers","aws/credentials","aws/credentials/ec2rolecreds","aws/credentials/endpointcreds","aws/credentials/stscreds","aws/defaults","aws/ec2metadata","aws/endpoints","aws/request","aws/session","aws/signer/v4","internal/shareddefaults","private/protocol","private/protocol/query","private/protocol/query/queryutil","private/protocol/rest","private/protocol/restxml","private/protocol/xml/xmlutil","service/s3","service/sts"]
revision = "a32b1dcd091264b5dee7b386149b6cc3823395c9"
version = "v1.12.31"
packages = ["aws","aws/arn","aws/awserr","aws/awsutil","aws/client","aws/client/metadata","aws/corehandlers","aws/credentials","aws/credentials/ec2rolecreds","aws/credentials/endpointcreds","aws/credentials/processcreds","aws/credentials/stscreds","aws/csm","aws/defaults","aws/ec2metadata","aws/endpoints","aws/request","aws/session","aws/signer/v4","internal/context","internal/ini","internal/s3err","internal/sdkio","internal/sdkmath","internal/sdkrand","internal/sdkuri","internal/shareddefaults","internal/strings","internal/sync/singleflight","private/protocol","private/protocol/eventstream","private/protocol/eventstream/eventstreamapi","private/protocol/json/jsonutil","private/protocol/query","private/protocol/query/queryutil","private/protocol/rest","private/protocol/restxml","private/protocol/xml/xmlutil","service/s3","service/s3/internal/arn","service/sts","service/sts/stsiface"]
revision = "851d5ffb66720c2540cc68020d4d8708950686c8"
version = "v1.30.7"
[[projects]]
name = "github.com/bkaradzic/go-lz4"
@@ -40,14 +34,14 @@
[[projects]]
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
revision = "dbeaa9332f19a944acb5736b4456cfcc02140e29"
version = "v3.1.0"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
branch = "master"
name = "github.com/gilbertchen/azure-sdk-for-go"
packages = ["storage"]
revision = "bbf89bd4d716c184f158d1e1428c2dbef4a18307"
packages = ["storage","version"]
revision = "8fd4663cab7c7c1c46d00449291c92ad23b0d0d9"
[[projects]]
branch = "master"
@@ -59,7 +53,7 @@
branch = "master"
name = "github.com/gilbertchen/go-dropbox"
packages = ["."]
revision = "90711b603312b1f973f3a5da3793ac4f1e5c2f2a"
revision = "994e692c5061cefa14e4296600a773de7119aa15"
[[projects]]
name = "github.com/gilbertchen/go-ole"
@@ -98,33 +92,33 @@
revision = "68e7a6806b0137a396d7d05601d7403ae1abac58"
[[projects]]
name = "github.com/go-ini/ini"
packages = ["."]
revision = "32e4c1e6bc4e7d0d8451aa6b75200d19e37a536a"
version = "v1.32.0"
branch = "master"
name = "github.com/golang/groupcache"
packages = ["lru"]
revision = "8c9f03a8e57eb486e42badaed3fb287da51807ba"
[[projects]]
branch = "master"
name = "github.com/golang/protobuf"
packages = ["proto","protoc-gen-go/descriptor","ptypes","ptypes/any","ptypes/duration","ptypes/timestamp"]
revision = "1e59b77b52bf8e4b449a57e6f79f21226d571845"
packages = ["proto","protoc-gen-go","protoc-gen-go/descriptor","protoc-gen-go/generator","protoc-gen-go/generator/internal/remap","protoc-gen-go/grpc","protoc-gen-go/plugin","ptypes","ptypes/any","ptypes/duration","ptypes/timestamp"]
revision = "84668698ea25b64748563aa20726db66a6b8d299"
version = "v1.3.5"
[[projects]]
name = "github.com/googleapis/gax-go"
packages = ["."]
revision = "317e0006254c44a0ac427cc52a0e083ff0b9622f"
version = "v2.0.0"
packages = [".","v2"]
revision = "c8a15bac9b9fe955bd9f900272f9a306465d28cf"
version = "v2.0.3"
[[projects]]
name = "github.com/jmespath/go-jmespath"
packages = ["."]
revision = "0b12d6b5"
revision = "c2b33e84"
[[projects]]
branch = "master"
name = "github.com/kr/fs"
packages = ["."]
revision = "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
version = "v0.1.0"
[[projects]]
name = "github.com/marstr/guid"
@@ -139,22 +133,22 @@
revision = "3f5f724cb5b182a5c278d6d3d55b40e7f8c2efb4"
[[projects]]
branch = "master"
name = "github.com/ncw/swift"
packages = ["."]
revision = "ae9f0ea1605b9aa6434ed5c731ca35d83ba67c55"
revision = "3e1a09f21340e4828e7265aa89f4dc1495fa7ccc"
version = "v1.0.50"
[[projects]]
name = "github.com/pkg/errors"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
revision = "614d223910a179a466c1767a985424175c39b465"
version = "v0.9.1"
[[projects]]
name = "github.com/pkg/sftp"
packages = ["."]
revision = "3edd153f213d8d4191a0ee4577c61cca19436632"
version = "v1.10.1"
revision = "5616182052227b951e76d9c9b79a616c608bd91b"
version = "v1.11.0"
[[projects]]
name = "github.com/satori/go.uuid"
@@ -168,63 +162,92 @@
packages = ["."]
revision = "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1"
[[projects]]
name = "go.opencensus.io"
packages = [".","internal","internal/tagencoding","metric/metricdata","metric/metricproducer","plugin/ochttp","plugin/ochttp/propagation/b3","resource","stats","stats/internal","stats/view","tag","trace","trace/internal","trace/propagation","trace/tracestate"]
revision = "d835ff86be02193d324330acdb7d65546b05f814"
version = "v0.22.3"
[[projects]]
branch = "master"
name = "golang.org/x/crypto"
packages = ["curve25519","ed25519","ed25519/internal/edwards25519","pbkdf2","ssh","ssh/agent","ssh/terminal"]
revision = "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94"
packages = ["blowfish","chacha20","curve25519","ed25519","ed25519/internal/edwards25519","internal/subtle","pbkdf2","poly1305","ssh","ssh/agent","ssh/internal/bcrypt_pbkdf","ssh/terminal"]
revision = "056763e48d71961566155f089ac0f02f1dda9b5a"
[[projects]]
branch = "master"
name = "golang.org/x/exp"
packages = ["apidiff","cmd/apidiff"]
revision = "e8c3332aa8e5b8e6acb4707c3a7e5979052b20aa"
[[projects]]
name = "golang.org/x/mod"
packages = ["module","semver"]
revision = "ed3ec21bb8e252814c380df79a80f366440ddb2d"
version = "v0.2.0"
[[projects]]
branch = "master"
name = "golang.org/x/net"
packages = ["context","context/ctxhttp","http2","http2/hpack","idna","internal/timeseries","lex/httplex","trace"]
revision = "9dfe39835686865bff950a07b394c12a98ddc811"
packages = ["context","context/ctxhttp","http/httpguts","http2","http2/hpack","idna","internal/timeseries","trace"]
revision = "d3edc9973b7eb1fb302b0ff2c62357091cea9a30"
[[projects]]
branch = "master"
name = "golang.org/x/oauth2"
packages = [".","google","internal","jws","jwt"]
revision = "f95fa95eaa936d9d87489b15d1d18b97c1ba9c28"
revision = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = ["unix","windows"]
revision = "82aafbf43bf885069dc71b7e7c2f9d7a614d47da"
packages = ["cpu","unix","windows"]
revision = "59c9f1ba88faf592b225274f69c5ef1e4ebacf82"
[[projects]]
branch = "master"
name = "golang.org/x/text"
packages = ["collate","collate/build","internal/colltab","internal/gen","internal/tag","internal/triegen","internal/ucd","language","secure/bidirule","transform","unicode/bidi","unicode/cldr","unicode/norm","unicode/rangetable"]
revision = "88f656faf3f37f690df1a32515b479415e1a6769"
packages = ["collate","collate/build","internal/colltab","internal/gen","internal/language","internal/language/compact","internal/tag","internal/triegen","internal/ucd","language","secure/bidirule","transform","unicode/bidi","unicode/cldr","unicode/norm","unicode/rangetable"]
revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475"
version = "v0.3.2"
[[projects]]
branch = "master"
name = "golang.org/x/tools"
packages = ["cmd/goimports","go/ast/astutil","go/gcexportdata","go/internal/gcimporter","go/internal/packagesdriver","go/packages","go/types/typeutil","internal/fastwalk","internal/gocommand","internal/gopathwalk","internal/imports","internal/packagesinternal","internal/telemetry/event"]
revision = "700752c244080ed7ef6a61c3cfd73382cd334e57"
[[projects]]
branch = "master"
name = "golang.org/x/xerrors"
packages = [".","internal"]
revision = "9bdfabe68543c54f90421aeb9a60ef8061b5b544"
[[projects]]
name = "google.golang.org/api"
packages = ["drive/v3","gensupport","googleapi","googleapi/internal/uritemplates","googleapi/transport","internal","iterator","option","storage/v1","transport/http"]
revision = "17b5f22a248d6d3913171c1a557552ace0d9c806"
packages = ["drive/v3","googleapi","googleapi/transport","internal","internal/gensupport","internal/third_party/uritemplates","iterator","option","option/internaloption","storage/v1","transport/cert","transport/http","transport/http/internal/propagation"]
revision = "52f0532eadbcc6f6b82d6f5edf66e610d10bfde6"
version = "v0.21.0"
[[projects]]
name = "google.golang.org/appengine"
packages = [".","internal","internal/app_identity","internal/base","internal/datastore","internal/log","internal/modules","internal/remote_api","internal/urlfetch","urlfetch"]
revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a"
version = "v1.0.0"
revision = "971852bfffca25b069c31162ae8f247a3dba083b"
version = "v1.6.5"
[[projects]]
branch = "master"
name = "google.golang.org/genproto"
packages = ["googleapis/api/annotations","googleapis/iam/v1","googleapis/rpc/status"]
revision = "891aceb7c239e72692819142dfca057bdcbfcb96"
packages = ["googleapis/api/annotations","googleapis/iam/v1","googleapis/rpc/status","googleapis/type/expr"]
revision = "baae70f3302d3efdff74db41e48a5d476d036906"
[[projects]]
name = "google.golang.org/grpc"
packages = [".","balancer","balancer/roundrobin","codes","connectivity","credentials","encoding","grpclb/grpc_lb_v1/messages","grpclog","internal","keepalive","metadata","naming","peer","resolver","resolver/dns","resolver/passthrough","stats","status","tap","transport"]
revision = "5a9f7b402fe85096d2e1d0383435ee1876e863d0"
version = "v1.8.0"
packages = [".","attributes","backoff","balancer","balancer/base","balancer/roundrobin","binarylog/grpc_binarylog_v1","codes","connectivity","credentials","credentials/internal","encoding","encoding/proto","grpclog","internal","internal/backoff","internal/balancerload","internal/binarylog","internal/buffer","internal/channelz","internal/envconfig","internal/grpclog","internal/grpcrand","internal/grpcsync","internal/grpcutil","internal/resolver/dns","internal/resolver/passthrough","internal/syscall","internal/transport","keepalive","metadata","naming","peer","resolver","serviceconfig","stats","status","tap"]
revision = "ac54eec90516cee50fc6b9b113b34628a85f976f"
version = "v1.28.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "8636a9db1eb54be5374f9914687693122efdde511f11c47d10c22f9e245e7f70"
inputs-digest = "e124cf64f7f8770e51ae52ac89030d512da946e3fdc2666ebd3a604a624dd679"
solver-name = "gps-cdcl"
solver-version = 1

View File

@@ -31,7 +31,7 @@
[[constraint]]
name = "github.com/aws/aws-sdk-go"
version = "1.12.31"
version = "1.30.7"
[[constraint]]
name = "github.com/bkaradzic/go-lz4"
@@ -86,9 +86,13 @@
name = "golang.org/x/net"
[[constraint]]
branch = "master"
name = "golang.org/x/oauth2"
revision = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303"
[[constraint]]
branch = "master"
name = "google.golang.org/api"
version = "0.21.0"
[[constraint]]
name = "google.golang.org/grpc"
version = "1.28.0"

View File

@@ -159,6 +159,10 @@ func setGlobalOptions(context *cli.Context) {
}()
}
for _, logID := range context.GlobalStringSlice("suppress") {
duplicacy.SuppressLog(logID)
}
duplicacy.RunInBackground = context.GlobalBool("background")
}
@@ -894,7 +898,12 @@ func checkSnapshots(context *cli.Context) {
runScript(context, preference.Name, "pre")
storage := duplicacy.CreateStorage(*preference, false, 1)
threads := context.Int("threads")
if threads < 1 {
threads = 1
}
storage := duplicacy.CreateStorage(*preference, false, threads)
if storage == nil {
return
}
@@ -922,11 +931,12 @@ func checkSnapshots(context *cli.Context) {
showStatistics := context.Bool("stats")
showTabular := context.Bool("tabular")
checkFiles := context.Bool("files")
checkChunks := context.Bool("chunks")
searchFossils := context.Bool("fossils")
resurrect := context.Bool("resurrect")
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.CheckSnapshots(id, revisions, tag, showStatistics, showTabular, checkFiles, searchFossils, resurrect)
backupManager.SnapshotManager.CheckSnapshots(id, revisions, tag, showStatistics, showTabular, checkFiles, checkChunks, searchFossils, resurrect, threads)
runScript(context, preference.Name, "post")
}
@@ -1589,6 +1599,10 @@ func main() {
Name: "files",
Usage: "verify the integrity of every file",
},
cli.BoolFlag{
Name: "chunks",
Usage: "verify the integrity of every chunk",
},
cli.BoolFlag{
Name: "stats",
Usage: "show deduplication statistics (imply -all and all revisions)",
@@ -1607,6 +1621,12 @@ func main() {
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
cli.IntFlag{
Name: "threads",
Value: 1,
Usage: "number of threads used to verify chunks",
Argument: "<n>",
},
},
Usage: "Check the integrity of snapshots",
ArgsUsage: " ",
@@ -1943,7 +1963,7 @@ func main() {
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks from the source storage",
Argument: "<public key>",
Argument: "<private key>",
},
},
Usage: "Copy snapshots between compatible storages",
@@ -2053,13 +2073,18 @@ func main() {
Name: "comment",
Usage: "add a comment to identify the process",
},
cli.StringSliceFlag{
Name: "suppress, s",
Usage: "suppress logs with the specified id",
Argument: "<id>",
},
}
app.HideVersion = true
app.Name = "duplicacy"
app.HelpName = "duplicacy"
app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
app.Version = "2.4.1" + " (" + GitCommit + ")"
app.Version = "2.5.2" + " (" + GitCommit + ")"
// If the program is interrupted, call the RunAtError function.
c := make(chan os.Signal, 1)

View File

@@ -211,6 +211,11 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
return true
}
if len(localSnapshot.Files) == 0 {
LOG_ERROR("SNAPSHOT_EMPTY", "No files under the repository to be backed up")
return false
}
// This cache contains all chunks referenced by last snapshot. Any other chunks will lead to a call to
// UploadChunk.
chunkCache := make(map[string]bool)

View File

@@ -341,7 +341,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
}
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1, 2, 3} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false)
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, []int{1} /*tags*/, nil /*retentions*/, nil,
/*exhaustive*/ false /*exclusive=*/, false /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
numberOfSnapshots = backupManager.SnapshotManager.ListSnapshots( /*snapshotID*/ "host1" /*revisionsToList*/, nil /*tag*/, "" /*showFiles*/, false /*showChunks*/, false)
@@ -349,7 +349,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 2 snapshots but got %d", numberOfSnapshots)
}
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false)
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
backupManager.Backup(testDir+"/repository1" /*quickMode=*/, false, threads, "fourth", false, false, 0, false)
backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, nil /*tags*/, nil /*retentions*/, nil,
/*exhaustive*/ false /*exclusive=*/, true /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
@@ -358,7 +358,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
}
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3, 4} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false)
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
/*buf := make([]byte, 1<<16)
runtime.Stack(buf, true)

View File

@@ -126,6 +126,7 @@ func (downloader *ChunkDownloader) AddFiles(snapshot *Snapshot, files []*Entry)
// AddChunk adds a single chunk to the download list.
func (downloader *ChunkDownloader) AddChunk(chunkHash string) int {
task := ChunkDownloadTask{
chunkIndex: len(downloader.taskList),
chunkHash: chunkHash,
@@ -253,6 +254,47 @@ func (downloader *ChunkDownloader) WaitForChunk(chunkIndex int) (chunk *Chunk) {
return downloader.taskList[chunkIndex].chunk
}
// WaitForCompletion waits until all chunks have been downloaded
func (downloader *ChunkDownloader) WaitForCompletion() {
// Tasks in completedTasks have not been counted by numberOfActiveChunks
downloader.numberOfActiveChunks -= len(downloader.completedTasks)
// find the completed task with the largest index; we'll start from the next index
for index := range downloader.completedTasks {
if downloader.lastChunkIndex < index {
downloader.lastChunkIndex = index
}
}
// Looping until there isn't a download task in progress
for downloader.numberOfActiveChunks > 0 || downloader.lastChunkIndex + 1 < len(downloader.taskList) {
// Wait for a completion event first
if downloader.numberOfActiveChunks > 0 {
completion := <-downloader.completionChannel
downloader.config.PutChunk(completion.chunk)
downloader.numberOfActiveChunks--
downloader.numberOfDownloadedChunks++
downloader.numberOfDownloadingChunks--
}
// Pass the tasks one by one to the download queue
if downloader.lastChunkIndex + 1 < len(downloader.taskList) {
task := &downloader.taskList[downloader.lastChunkIndex + 1]
if task.isDownloading {
downloader.lastChunkIndex++
continue
}
downloader.taskQueue <- *task
task.isDownloading = true
downloader.numberOfDownloadingChunks++
downloader.numberOfActiveChunks++
downloader.lastChunkIndex++
}
}
}
// Stop terminates all downloading goroutines
func (downloader *ChunkDownloader) Stop() {
for downloader.numberOfDownloadingChunks > 0 {

View File

@@ -172,11 +172,11 @@ func (config *Config) Print() {
LOG_TRACE("CONFIG_INFO", "Hash key: %x", config.HashKey)
LOG_TRACE("CONFIG_INFO", "ID key: %x", config.IDKey)
if len(config.ChunkKey) >= 0 {
if len(config.ChunkKey) > 0 {
LOG_TRACE("CONFIG_INFO", "File chunks are encrypted")
}
if len(config.FileKey) >= 0 {
if len(config.FileKey) > 0 {
LOG_TRACE("CONFIG_INFO", "Metadata chunks are encrypted")
}

View File

@@ -6,6 +6,7 @@ package duplicacy
import (
"fmt"
"io/ioutil"
"strings"
"github.com/gilbertchen/go-dropbox"
@@ -199,6 +200,7 @@ func (storage *DropboxStorage) DownloadFile(threadIndex int, filePath string, ch
}
defer output.Body.Close()
defer ioutil.ReadAll(output.Body)
_, err = RateLimitedCopy(chunk, output.Body, storage.DownloadRateLimit/len(storage.clients))
return err

View File

@@ -12,6 +12,7 @@ import (
"os"
"path"
"strings"
"syscall"
"time"
)
@@ -190,10 +191,13 @@ func (storage *FileStorage) UploadFile(threadIndex int, filePath string, content
return err
}
err = file.Sync()
if err != nil {
file.Close()
return err
if err = file.Sync(); err != nil {
pathErr, ok := err.(*os.PathError)
isNotSupported := ok && pathErr.Op == "sync" && pathErr.Err == syscall.ENOTSUP
if !isNotSupported {
_ = file.Close()
return err
}
}
err = file.Close()

View File

@@ -20,13 +20,16 @@ import (
"golang.org/x/net/context"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
"google.golang.org/api/option"
)
var (
GCDFileMimeType = "application/octet-stream"
GCDDirectoryMimeType = "application/vnd.google-apps.folder"
GCDUserDrive = "root"
)
type GCDStorage struct {
@@ -37,6 +40,7 @@ type GCDStorage struct {
idCacheLock sync.Mutex
backoffs []int // desired backoff time in seconds for each thread
attempts []int // number of failed attempts since last success for each thread
driveID string // the ID of the shared drive or 'root' (GCDUserDrive) if the user's drive
createDirectoryLock sync.Mutex
isConnected bool
@@ -191,7 +195,11 @@ func (storage *GCDStorage) listFiles(threadIndex int, parentID string, listFiles
var err error
for {
fileList, err = storage.service.Files.List().Q(query).Fields("nextPageToken", "files(name, mimeType, id, size)").PageToken(startToken).PageSize(maxCount).Do()
q := storage.service.Files.List().Q(query).Fields("nextPageToken", "files(name, mimeType, id, size)").PageToken(startToken).PageSize(maxCount)
if storage.driveID != GCDUserDrive {
q = q.DriveId(storage.driveID).IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true)
}
fileList, err = q.Do()
if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry {
break
} else if retry {
@@ -219,7 +227,11 @@ func (storage *GCDStorage) listByName(threadIndex int, parentID string, name str
for {
query := "name = '" + name + "' and '" + parentID + "' in parents and trashed = false "
fileList, err = storage.service.Files.List().Q(query).Fields("files(name, mimeType, id, size)").Do()
q := storage.service.Files.List().Q(query).Fields("files(name, mimeType, id, size)")
if storage.driveID != GCDUserDrive {
q = q.DriveId(storage.driveID).IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true)
}
fileList, err = q.Do()
if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry {
break
@@ -248,7 +260,7 @@ func (storage *GCDStorage) getIDFromPath(threadIndex int, filePath string, creat
return fileID, nil
}
fileID := "root"
fileID := storage.driveID
if rootID, ok := storage.findPathID(""); ok {
fileID = rootID
@@ -303,37 +315,85 @@ func (storage *GCDStorage) getIDFromPath(threadIndex int, filePath string, creat
}
// CreateGCDStorage creates a GCD storage object.
func CreateGCDStorage(tokenFile string, storagePath string, threads int) (storage *GCDStorage, err error) {
func CreateGCDStorage(tokenFile string, driveID string, storagePath string, threads int) (storage *GCDStorage, err error) {
ctx := context.Background()
description, err := ioutil.ReadFile(tokenFile)
if err != nil {
return nil, err
}
gcdConfig := &GCDConfig{}
if err := json.Unmarshal(description, gcdConfig); err != nil {
return nil, err
}
var object map[string]interface{}
oauth2Config := oauth2.Config{
ClientID: gcdConfig.ClientID,
ClientSecret: gcdConfig.ClientSecret,
Endpoint: gcdConfig.Endpoint,
}
authClient := oauth2Config.Client(context.Background(), &gcdConfig.Token)
service, err := drive.New(authClient)
err = json.Unmarshal(description, &object)
if err != nil {
return nil, err
}
isServiceAccount := false
if value, ok := object["type"]; ok {
if authType, ok := value.(string); ok && authType == "service_account" {
isServiceAccount = true
}
}
var tokenSource oauth2.TokenSource
if isServiceAccount {
config, err := google.JWTConfigFromJSON(description, drive.DriveScope)
if err != nil {
return nil, err
}
tokenSource = config.TokenSource(ctx)
} else {
gcdConfig := &GCDConfig{}
if err := json.Unmarshal(description, gcdConfig); err != nil {
return nil, err
}
config := oauth2.Config{
ClientID: gcdConfig.ClientID,
ClientSecret: gcdConfig.ClientSecret,
Endpoint: gcdConfig.Endpoint,
}
tokenSource = config.TokenSource(ctx, &gcdConfig.Token)
}
service, err := drive.NewService(ctx, option.WithTokenSource(tokenSource))
if err != nil {
return nil, err
}
if len(driveID) == 0 {
driveID = GCDUserDrive
} else {
driveList, err := drive.NewTeamdrivesService(service).List().Do()
if err != nil {
return nil, fmt.Errorf("Failed to look up the drive id: %v", err)
}
found := false
for _, teamDrive := range driveList.TeamDrives {
if teamDrive.Id == driveID || teamDrive.Name == driveID {
driveID = teamDrive.Id
found = true
break
}
}
if !found {
return nil, fmt.Errorf("%s is not the id or name of a shared drive", driveID)
}
}
storage = &GCDStorage{
service: service,
numberOfThreads: threads,
idCache: make(map[string]string),
backoffs: make([]int, threads),
attempts: make([]int, threads),
driveID: driveID,
}
for i := range storage.backoffs {
@@ -341,6 +401,7 @@ func CreateGCDStorage(tokenFile string, storagePath string, threads int) (storag
storage.attempts[i] = 0
}
storage.savePathID("", driveID)
storagePathID, err := storage.getIDFromPath(0, storagePath, true)
if err != nil {
return nil, err
@@ -462,7 +523,7 @@ func (storage *GCDStorage) DeleteFile(threadIndex int, filePath string) (err err
}
for {
err = storage.service.Files.Delete(fileID).Fields("id").Do()
err = storage.service.Files.Delete(fileID).SupportsAllDrives(true).Fields("id").Do()
if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
storage.deletePathID(filePath)
return nil
@@ -508,7 +569,7 @@ func (storage *GCDStorage) MoveFile(threadIndex int, from string, to string) (er
}
for {
_, err = storage.service.Files.Update(fileID, nil).AddParents(toParentID).RemoveParents(fromParentID).Do()
_, err = storage.service.Files.Update(fileID, nil).SupportsAllDrives(true).AddParents(toParentID).RemoveParents(fromParentID).Do()
if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
break
} else if retry {
@@ -559,7 +620,7 @@ func (storage *GCDStorage) CreateDirectory(threadIndex int, dir string) (err err
Parents: []string{parentID},
}
file, err = storage.service.Files.Create(file).Fields("id").Do()
file, err = storage.service.Files.Create(file).SupportsAllDrives(true).Fields("id").Do()
if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
break
} else {
@@ -630,7 +691,7 @@ func (storage *GCDStorage) DownloadFile(threadIndex int, filePath string, chunk
for {
// AcknowledgeAbuse(true) lets the download proceed even if GCD thinks that it contains malware.
// TODO: Should this prompt the user or log a warning?
req := storage.service.Files.Get(fileID)
req := storage.service.Files.Get(fileID).SupportsAllDrives(true)
if e, ok := err.(*googleapi.Error); ok {
if strings.Contains(err.Error(), "cannotDownloadAbusiveFile") || len(e.Errors) > 0 && e.Errors[0].Reason == "cannotDownloadAbusiveFile" {
LOG_WARN("GCD_STORAGE", "%s is marked as abusive, will download anyway.", filePath)
@@ -676,7 +737,7 @@ func (storage *GCDStorage) UploadFile(threadIndex int, filePath string, content
for {
reader := CreateRateLimitedReader(content, storage.UploadRateLimit/storage.numberOfThreads)
_, err = storage.service.Files.Create(file).Media(reader).Fields("id").Do()
_, err = storage.service.Files.Create(file).SupportsAllDrives(true).Media(reader).Fields("id").Do()
if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
break
} else if retry {

View File

@@ -7,10 +7,12 @@ package duplicacy
import (
"fmt"
"os"
"log"
"runtime/debug"
"sync"
"testing"
"time"
"regexp"
)
const (
@@ -43,6 +45,13 @@ func setTestingT(t *testing.T) {
testingT = t
}
// Contains the ids of logs that won't be displayed
var suppressedLogs map[string]bool = map[string]bool{}
func SuppressLog(id string) {
suppressedLogs[id] = true
}
func getLevelName(level int) string {
switch level {
case DEBUG:
@@ -143,6 +152,12 @@ func logf(level int, logID string, format string, v ...interface{}) {
defer logMutex.Unlock()
if level >= loggingLevel {
if level <= ERROR && len(suppressedLogs) > 0 {
if _, found := suppressedLogs[logID]; found {
return
}
}
if printLogHeader {
fmt.Printf("%s %s %s %s\n",
now.Format("2006-01-02 15:04:05.000"), getLevelName(level), logID, message)
@@ -161,6 +176,32 @@ func logf(level int, logID string, format string, v ...interface{}) {
}
}
// Set up logging for libraries that Duplicacy depends on. They can call 'log.Printf("[ID] message")'
// to produce logs in Duplicacy's format
type Logger struct {
formatRegex *regexp.Regexp
}
func (logger *Logger) Write(line []byte) (n int, err error) {
n = len(line)
for len(line) > 0 && line[len(line) - 1] == '\n' {
line = line[:len(line) - 1]
}
matched := logger.formatRegex.FindStringSubmatch(string(line))
if matched != nil {
LOG_INFO(matched[1], "%s", matched[2])
} else {
LOG_INFO("LOG_DEFAULT", "%s", line)
}
return
}
func init() {
log.SetFlags(0)
log.SetOutput(&Logger{ formatRegex: regexp.MustCompile(`^\[(.+)\]\s*(.+)`) })
}
const (
duplicacyExitCode = 100
otherExitCode = 101

View File

@@ -15,6 +15,7 @@ import (
"strings"
"sync"
"time"
"path/filepath"
"golang.org/x/oauth2"
)
@@ -32,9 +33,6 @@ type OneDriveErrorResponse struct {
Error OneDriveError `json:"error"`
}
var OneDriveRefreshTokenURL = "https://duplicacy.com/one_refresh"
var OneDriveAPIURL = "https://api.onedrive.com/v1.0"
type OneDriveClient struct {
HTTPClient *http.Client
@@ -44,9 +42,13 @@ type OneDriveClient struct {
IsConnected bool
TestMode bool
IsBusiness bool
RefreshTokenURL string
APIURL string
}
func NewOneDriveClient(tokenFile string) (*OneDriveClient, error) {
func NewOneDriveClient(tokenFile string, isBusiness bool) (*OneDriveClient, error) {
description, err := ioutil.ReadFile(tokenFile)
if err != nil {
@@ -63,6 +65,15 @@ func NewOneDriveClient(tokenFile string) (*OneDriveClient, error) {
TokenFile: tokenFile,
Token: token,
TokenLock: &sync.Mutex{},
IsBusiness: isBusiness,
}
if isBusiness {
client.RefreshTokenURL = "https://duplicacy.com/odb_refresh"
client.APIURL = "https://graph.microsoft.com/v1.0/me"
} else {
client.RefreshTokenURL = "https://duplicacy.com/one_refresh"
client.APIURL = "https://api.onedrive.com/v1.0"
}
client.RefreshToken(false)
@@ -106,9 +117,10 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
if reader, ok := inputReader.(*RateLimitedReader); ok {
request.ContentLength = reader.Length()
request.Header.Set("Content-Range", fmt.Sprintf("bytes 0-%d/%d", reader.Length() - 1, reader.Length()))
}
if url != OneDriveRefreshTokenURL {
if url != client.RefreshTokenURL {
client.TokenLock.Lock()
request.Header.Set("Authorization", "Bearer "+client.Token.AccessToken)
client.TokenLock.Unlock()
@@ -152,7 +164,7 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
if response.StatusCode == 401 {
if url == OneDriveRefreshTokenURL {
if url == client.RefreshTokenURL {
return nil, 0, OneDriveError{Status: response.StatusCode, Message: "Authorization error when refreshing token"}
}
@@ -161,6 +173,8 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
return nil, 0, err
}
continue
} else if response.StatusCode == 409 {
return nil, 0, OneDriveError{Status: response.StatusCode, Message: "Conflict"}
} else if response.StatusCode > 401 && response.StatusCode != 404 {
retryAfter := time.Duration(rand.Float32() * 1000.0 * float32(backoff))
LOG_INFO("ONEDRIVE_RETRY", "Response code: %d; retry after %d milliseconds", response.StatusCode, retryAfter)
@@ -188,7 +202,7 @@ func (client *OneDriveClient) RefreshToken(force bool) (err error) {
return nil
}
readCloser, _, err := client.call(OneDriveRefreshTokenURL, "POST", client.Token, "")
readCloser, _, err := client.call(client.RefreshTokenURL, "POST", client.Token, "")
if err != nil {
return fmt.Errorf("failed to refresh the access token: %v", err)
}
@@ -228,9 +242,9 @@ func (client *OneDriveClient) ListEntries(path string) ([]OneDriveEntry, error)
entries := []OneDriveEntry{}
url := OneDriveAPIURL + "/drive/root:/" + path + ":/children"
url := client.APIURL + "/drive/root:/" + path + ":/children"
if path == "" {
url = OneDriveAPIURL + "/drive/root/children"
url = client.APIURL + "/drive/root/children"
}
if client.TestMode {
url += "?top=8"
@@ -266,7 +280,7 @@ func (client *OneDriveClient) ListEntries(path string) ([]OneDriveEntry, error)
func (client *OneDriveClient) GetFileInfo(path string) (string, bool, int64, error) {
url := OneDriveAPIURL + "/drive/root:/" + path
url := client.APIURL + "/drive/root:/" + path
url += "?select=id,name,size,folder"
readCloser, _, err := client.call(url, "GET", 0, "")
@@ -291,28 +305,95 @@ func (client *OneDriveClient) GetFileInfo(path string) (string, bool, int64, err
func (client *OneDriveClient) DownloadFile(path string) (io.ReadCloser, int64, error) {
url := OneDriveAPIURL + "/drive/items/root:/" + path + ":/content"
url := client.APIURL + "/drive/items/root:/" + path + ":/content"
return client.call(url, "GET", 0, "")
}
func (client *OneDriveClient) UploadFile(path string, content []byte, rateLimit int) (err error) {
url := OneDriveAPIURL + "/drive/root:/" + path + ":/content"
// Upload file using the simple method; this is only possible for OneDrive Personal or if the file
// is smaller than 4MB for OneDrive Business
if !client.IsBusiness || len(content) < 4 * 1024 * 1024 || (client.TestMode && rand.Int() % 2 == 0) {
url := client.APIURL + "/drive/root:/" + path + ":/content"
readCloser, _, err := client.call(url, "PUT", CreateRateLimitedReader(content, rateLimit), "application/octet-stream")
readCloser, _, err := client.call(url, "PUT", CreateRateLimitedReader(content, rateLimit), "application/octet-stream")
if err != nil {
return err
}
readCloser.Close()
return nil
}
// For large files, create an upload session first
uploadURL, err := client.CreateUploadSession(path)
if err != nil {
return err
}
return client.UploadFileSession(uploadURL, content, rateLimit)
}
func (client *OneDriveClient) CreateUploadSession(path string) (uploadURL string, err error) {
type CreateUploadSessionItem struct {
ConflictBehavior string `json:"@microsoft.graph.conflictBehavior"`
Name string `json:"name"`
}
input := map[string]interface{} {
"item": CreateUploadSessionItem {
ConflictBehavior: "replace",
Name: filepath.Base(path),
},
}
readCloser, _, err := client.call(client.APIURL + "/drive/root:/" + path + ":/createUploadSession", "POST", input, "application/json")
if err != nil {
return "", err
}
type CreateUploadSessionOutput struct {
UploadURL string `json:"uploadUrl"`
}
output := &CreateUploadSessionOutput{}
if err = json.NewDecoder(readCloser).Decode(&output); err != nil {
return "", err
}
readCloser.Close()
return output.UploadURL, nil
}
func (client *OneDriveClient) UploadFileSession(uploadURL string, content []byte, rateLimit int) (err error) {
readCloser, _, err := client.call(uploadURL, "PUT", CreateRateLimitedReader(content, rateLimit), "")
if err != nil {
return err
}
type UploadFileSessionOutput struct {
Size int `json:"size"`
}
output := &UploadFileSessionOutput{}
if err = json.NewDecoder(readCloser).Decode(&output); err != nil {
return fmt.Errorf("Failed to complete the file upload session: %v", err)
}
if output.Size != len(content) {
return fmt.Errorf("Uploaded %d bytes out of %d bytes", output.Size, len(content))
}
readCloser.Close()
return nil
}
func (client *OneDriveClient) DeleteFile(path string) error {
url := OneDriveAPIURL + "/drive/root:/" + path
url := client.APIURL + "/drive/root:/" + path
readCloser, _, err := client.call(url, "DELETE", 0, "")
if err != nil {
@@ -325,7 +406,7 @@ func (client *OneDriveClient) DeleteFile(path string) error {
func (client *OneDriveClient) MoveFile(path string, parent string) error {
url := OneDriveAPIURL + "/drive/root:/" + path
url := client.APIURL + "/drive/root:/" + path
parentReference := make(map[string]string)
parentReference["path"] = "/drive/root:/" + parent
@@ -335,6 +416,20 @@ func (client *OneDriveClient) MoveFile(path string, parent string) error {
readCloser, _, err := client.call(url, "PATCH", parameters, "application/json")
if err != nil {
if e, ok := err.(OneDriveError); ok && e.Status == 400 {
// The destination directory doesn't exist; trying to create it...
dir := filepath.Dir(parent)
if dir == "." {
dir = ""
}
client.CreateDirectory(dir, filepath.Base(parent))
readCloser, _, err = client.call(url, "PATCH", parameters, "application/json")
if err != nil {
return nil
}
}
return err
}
@@ -344,24 +439,29 @@ func (client *OneDriveClient) MoveFile(path string, parent string) error {
func (client *OneDriveClient) CreateDirectory(path string, name string) error {
url := OneDriveAPIURL + "/root/children"
url := client.APIURL + "/root/children"
if path != "" {
parentID, isDir, _, err := client.GetFileInfo(path)
pathID, isDir, _, err := client.GetFileInfo(path)
if err != nil {
return err
}
if parentID == "" {
return fmt.Errorf("The path '%s' does not exist", path)
if pathID == "" {
dir := filepath.Dir(path)
if dir != "." {
// The parent directory doesn't exist; trying to create it...
client.CreateDirectory(dir, filepath.Base(path))
isDir = true
}
}
if !isDir {
return fmt.Errorf("The path '%s' is not a directory", path)
}
url = OneDriveAPIURL + "/drive/items/" + parentID + "/children"
url = client.APIURL + "/drive/root:/" + path + ":/children"
}
parameters := make(map[string]interface{})
@@ -370,6 +470,11 @@ func (client *OneDriveClient) CreateDirectory(path string, name string) error {
readCloser, _, err := client.call(url, "POST", parameters, "application/json")
if err != nil {
if e, ok := err.(OneDriveError); ok && e.Status == 409 {
// This error usually means the directory already exists
LOG_TRACE("ONEDRIVE_MKDIR", "The directory '%s/%s' already exists", path, name)
return nil
}
return err
}

View File

@@ -17,7 +17,7 @@ import (
func TestOneDriveClient(t *testing.T) {
oneDriveClient, err := NewOneDriveClient("one-token.json")
oneDriveClient, err := NewOneDriveClient("one-token.json", false)
if err != nil {
t.Errorf("Failed to create the OneDrive client: %v", err)
return

View File

@@ -19,13 +19,13 @@ type OneDriveStorage struct {
}
// CreateOneDriveStorage creates an OneDrive storage object.
func CreateOneDriveStorage(tokenFile string, storagePath string, threads int) (storage *OneDriveStorage, err error) {
func CreateOneDriveStorage(tokenFile string, isBusiness bool, storagePath string, threads int) (storage *OneDriveStorage, err error) {
for len(storagePath) > 0 && storagePath[len(storagePath)-1] == '/' {
storagePath = storagePath[:len(storagePath)-1]
}
client, err := NewOneDriveClient(tokenFile)
client, err := NewOneDriveClient(tokenFile, isBusiness)
if err != nil {
return nil, err
}
@@ -80,6 +80,7 @@ func (storage *OneDriveStorage) convertFilePath(filePath string) string {
// ListFiles return the list of files and subdirectories under 'dir' (non-recursively)
func (storage *OneDriveStorage) ListFiles(threadIndex int, dir string) ([]string, []int64, error) {
for len(dir) > 0 && dir[len(dir)-1] == '/' {
dir = dir[:len(dir)-1]
}

View File

@@ -91,6 +91,10 @@ func CreateSnapshotFromDirectory(id string, top string, nobackupFile string, fil
snapshot.Files = append(snapshot.Files, directory)
subdirectories, skipped, err := ListEntries(top, directory.Path, &snapshot.Files, patterns, nobackupFile, snapshot.discardAttributes)
if err != nil {
if directory.Path == "" {
LOG_ERROR("LIST_FAILURE", "Failed to list the repository root: %v", err)
return nil, nil, nil, err
}
LOG_WARN("LIST_FAILURE", "Failed to list subdirectory: %v", err)
skippedDirectories = append(skippedDirectories, directory.Path)
continue

View File

@@ -653,6 +653,51 @@ func (manager *SnapshotManager) GetSnapshotChunks(snapshot *Snapshot, keepChunkH
return chunks
}
// GetSnapshotChunkHashes has an option to retrieve chunk hashes in addition to chunk ids.
func (manager *SnapshotManager) GetSnapshotChunkHashes(snapshot *Snapshot, chunkHashes *map[string]bool, chunkIDs map[string]bool) {
for _, chunkHash := range snapshot.FileSequence {
if chunkHashes != nil {
(*chunkHashes)[chunkHash] = true
}
chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
}
for _, chunkHash := range snapshot.ChunkSequence {
if chunkHashes != nil {
(*chunkHashes)[chunkHash] = true
}
chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
}
for _, chunkHash := range snapshot.LengthSequence {
if chunkHashes != nil {
(*chunkHashes)[chunkHash] = true
}
chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
}
if len(snapshot.ChunkHashes) == 0 {
description := manager.DownloadSequence(snapshot.ChunkSequence)
err := snapshot.LoadChunks(description)
if err != nil {
LOG_ERROR("SNAPSHOT_CHUNK", "Failed to load chunks for snapshot %s at revision %d: %v",
snapshot.ID, snapshot.Revision, err)
return
}
}
for _, chunkHash := range snapshot.ChunkHashes {
if chunkHashes != nil {
(*chunkHashes)[chunkHash] = true
}
chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
}
snapshot.ClearChunks()
}
// ListSnapshots shows the information about a snapshot.
func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList []int, tag string,
showFiles bool, showChunks bool) int {
@@ -757,7 +802,9 @@ func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList
// ListSnapshots shows the information about a snapshot.
func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToCheck []int, tag string, showStatistics bool, showTabular bool,
checkFiles bool, searchFossils bool, resurrect bool) bool {
checkFiles bool, checkChunks, searchFossils bool, resurrect bool, threads int) bool {
manager.chunkDownloader = CreateChunkDownloader(manager.config, manager.storage, manager.snapshotCache, false, threads)
LOG_DEBUG("LIST_PARAMETERS", "id: %s, revisions: %v, tag: %s, showStatistics: %t, showTabular: %t, checkFiles: %t, searchFossils: %t, resurrect: %t",
snapshotID, revisionsToCheck, tag, showStatistics, showTabular, checkFiles, searchFossils, resurrect)
@@ -839,6 +886,12 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
}
LOG_INFO("SNAPSHOT_CHECK", "Total chunk size is %s in %d chunks", PrettyNumber(totalChunkSize), len(chunkSizeMap))
var allChunkHashes *map[string]bool
if checkChunks && !checkFiles {
m := make(map[string]bool)
allChunkHashes = &m
}
for snapshotID = range snapshotMap {
for _, snapshot := range snapshotMap[snapshotID] {
@@ -850,9 +903,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
}
chunks := make(map[string]bool)
for _, chunkID := range manager.GetSnapshotChunks(snapshot, false) {
chunks[chunkID] = true
}
manager.GetSnapshotChunkHashes(snapshot, allChunkHashes, chunks)
missingChunks := 0
for chunkID := range chunks {
@@ -946,6 +997,15 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
manager.ShowStatistics(snapshotMap, chunkSizeMap, chunkUniqueMap, chunkSnapshotMap)
}
if checkChunks && !checkFiles {
manager.chunkDownloader.snapshotCache = nil
LOG_INFO("SNAPSHOT_VERIFY", "Verifying %d chunks", len(*allChunkHashes))
for chunkHash := range *allChunkHashes {
manager.chunkDownloader.AddChunk(chunkHash)
}
manager.chunkDownloader.WaitForCompletion()
LOG_INFO("SNAPSHOT_VERIFY", "All %d chunks have been successfully verified", len(*allChunkHashes))
}
return true
}

View File

@@ -620,7 +620,7 @@ func TestPruneNewSnapshots(t *testing.T) {
// Now chunkHash1 will be resurrected
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
checkTestSnapshots(snapshotManager, 4, 0)
snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3}, "", false, false, false, false, false)
snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3}, "", false, false, false, false, false, false, 1)
}
// A fossil collection left by an aborted prune should be ignored if any supposedly deleted snapshot exists
@@ -669,7 +669,7 @@ func TestPruneGhostSnapshots(t *testing.T) {
// Run the prune again but the fossil collection should be ignored, since revision 1 still exists
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
checkTestSnapshots(snapshotManager, 3, 2)
snapshotManager.CheckSnapshots("vm1@host1", []int{1, 2, 3}, "", false, false, false, true /*searchFossils*/, false)
snapshotManager.CheckSnapshots("vm1@host1", []int{1, 2, 3}, "", false, false, false, false, true /*searchFossils*/, false, 1)
// Prune snapshot 1 again
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
@@ -683,5 +683,5 @@ func TestPruneGhostSnapshots(t *testing.T) {
// Run the prune again and this time the fossil collection will be processed and the fossils removed
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
checkTestSnapshots(snapshotManager, 3, 0)
snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3, 4}, "", false, false, false, false, false)
snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3, 4}, "", false, false, false, false, false, false, 1)
}

View File

@@ -336,7 +336,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
keyFile = GetPassword(preference, "ssh_key_file", "Enter the path of the private key file:",
true, resetPassword)
var key ssh.Signer
var keySigner ssh.Signer
var err error
if keyFile == "" {
@@ -347,15 +347,15 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
if err != nil {
LOG_INFO("SSH_PUBLICKEY", "Failed to read the private key file: %v", err)
} else {
key, err = ssh.ParsePrivateKey(content)
keySigner, err = ssh.ParsePrivateKey(content)
if err != nil {
if strings.Contains(err.Error(), "cannot decode encrypted private keys") {
if _, ok := err.(*ssh.PassphraseMissingError); ok {
LOG_TRACE("SSH_PUBLICKEY", "The private key file is encrypted")
passphrase = GetPassword(preference, "ssh_passphrase", "Enter the passphrase to decrypt the private key file:", false, resetPassword)
if len(passphrase) == 0 {
LOG_INFO("SSH_PUBLICKEY", "No passphrase to descrypt the private key file %s", keyFile)
} else {
key, err = ssh.ParsePrivateKeyWithPassphrase(content, []byte(passphrase))
keySigner, err = ssh.ParsePrivateKeyWithPassphrase(content, []byte(passphrase))
if err != nil {
LOG_INFO("SSH_PUBLICKEY", "Failed to parse the encrypted private key file %s: %v", keyFile, err)
}
@@ -364,11 +364,35 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
LOG_INFO("SSH_PUBLICKEY", "Failed to parse the private key file %s: %v", keyFile, err)
}
}
if keySigner != nil {
certFile := keyFile + "-cert.pub"
if stat, err := os.Stat(certFile); err == nil && !stat.IsDir() {
LOG_DEBUG("SSH_CERTIFICATE", "Attempting to use ssh certificate from file %s", certFile)
var content []byte
content, err = ioutil.ReadFile(certFile)
if err != nil {
LOG_INFO("SSH_CERTIFICATE", "Failed to read ssh certificate file %s: %v", certFile, err)
} else {
pubKey, _, _, _, err := ssh.ParseAuthorizedKey(content)
if err != nil {
LOG_INFO("SSH_CERTIFICATE", "Failed parse ssh certificate file %s: %v", certFile, err)
} else {
certSigner, err := ssh.NewCertSigner(pubKey.(*ssh.Certificate), keySigner)
if err != nil {
LOG_INFO("SSH_CERTIFICATE", "Failed to create certificate signer: %v", err)
} else {
keySigner = certSigner
}
}
}
}
}
}
}
if key != nil {
signers = append(signers, key)
if keySigner != nil {
signers = append(signers, keySigner)
}
if len(signers) > 0 {
@@ -600,26 +624,35 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
SavePassword(preference, "gcs_token", tokenFile)
return gcsStorage
} else if matched[1] == "gcd" {
// Handle writing directly to the root of the drive
// For gcd://driveid@/, driveid@ is match[3] not match[2]
if matched[2] == "" && strings.HasSuffix(matched[3], "@") {
matched[2], matched[3] = matched[3], matched[2]
}
driveID := matched[2]
if driveID != "" {
driveID = driveID[:len(driveID)-1]
}
storagePath := matched[3] + matched[4]
prompt := fmt.Sprintf("Enter the path of the Google Drive token file (downloadable from https://duplicacy.com/gcd_start):")
tokenFile := GetPassword(preference, "gcd_token", prompt, true, resetPassword)
gcdStorage, err := CreateGCDStorage(tokenFile, storagePath, threads)
gcdStorage, err := CreateGCDStorage(tokenFile, driveID, storagePath, threads)
if err != nil {
LOG_ERROR("STORAGE_CREATE", "Failed to load the Google Drive storage at %s: %v", storageURL, err)
return nil
}
SavePassword(preference, "gcd_token", tokenFile)
return gcdStorage
} else if matched[1] == "one" {
} else if matched[1] == "one" || matched[1] == "odb" {
storagePath := matched[3] + matched[4]
prompt := fmt.Sprintf("Enter the path of the OneDrive token file (downloadable from https://duplicacy.com/one_start):")
tokenFile := GetPassword(preference, "one_token", prompt, true, resetPassword)
oneDriveStorage, err := CreateOneDriveStorage(tokenFile, storagePath, threads)
tokenFile := GetPassword(preference, matched[1] + "_token", prompt, true, resetPassword)
oneDriveStorage, err := CreateOneDriveStorage(tokenFile, matched[1] == "odb", storagePath, threads)
if err != nil {
LOG_ERROR("STORAGE_CREATE", "Failed to load the OneDrive storage at %s: %v", storageURL, err)
return nil
}
SavePassword(preference, "one_token", tokenFile)
SavePassword(preference, matched[1] + "_token", tokenFile)
return oneDriveStorage
} else if matched[1] == "hubic" {
storagePath := matched[3] + matched[4]

View File

@@ -133,11 +133,23 @@ func loadStorage(localStoragePath string, threads int) (Storage, error) {
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "gcd" {
storage, err := CreateGCDStorage(config["token_file"], config["storage_path"], threads)
storage, err := CreateGCDStorage(config["token_file"], "", config["storage_path"], threads)
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "gcd-shared" {
storage, err := CreateGCDStorage(config["token_file"], config["drive"], config["storage_path"], threads)
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "one" {
storage, err := CreateOneDriveStorage(config["token_file"], config["storage_path"], threads)
storage, err := CreateOneDriveStorage(config["token_file"], false, config["storage_path"], threads)
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "odb" {
storage, err := CreateOneDriveStorage(config["token_file"], true, config["storage_path"], threads)
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "one" {
storage, err := CreateOneDriveStorage(config["token_file"], false, config["storage_path"], threads)
storage.SetDefaultNestingLevels([]int{2, 3}, 2)
return storage, err
} else if testStorageName == "hubic" {