Compare commits


36 Commits

Author SHA1 Message Date
Gilbert Chen
4c3557eb80 Bump version to 2.5.0 2020-04-09 23:22:32 -04:00
Gilbert Chen
eebcece9e0 Update github.com/aws/aws-sdk-go and google.golang.org/api to the latest 2020-04-09 23:21:55 -04:00
gilbertchen
8c80470c29 Merge pull request #593 from freaksdotcom/readall
Call ReadAll() on the body io.ReadCloser to allow the http keepalive connection to be reused.
2020-04-09 21:12:41 -04:00
Gilbert Chen
bcb889272d Add test for Google Shared Drive 2020-04-09 00:04:30 -04:00
Gilbert Chen
79d8654a12 Allow the name of Google Shared Drive to be used in the storage url
The previous PR only accepted the id of the shared drive, which is not very
memorable.  This commit makes it possible to specify the drive by either id or name.
2020-04-08 23:59:15 -04:00
Brandon High
6bf0d2265c Call ReadAll() on the body io.ReadCloser to allow the http keepalive connection to be reused. 2020-04-07 22:07:41 -07:00
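The pattern behind this fix is a general net/http one: a keep-alive connection is only returned to the transport's pool if the response body has been read to the end before it is closed. A minimal sketch of the idea (Duplicacy's actual change is the one-line defer visible in the Dropbox storage diff further down):

    package main

    import (
        "io/ioutil"
        "net/http"
    )

    // fetch drains and closes the response body so the HTTP keep-alive
    // connection can be reused by subsequent requests.
    func fetch(client *http.Client, url string) error {
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        defer ioutil.ReadAll(resp.Body) // drain before Close (defers run LIFO)
        // ... consume resp.Body here ...
        return nil
    }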
Gilbert Chen
749db78a1f Implemented a global option to suppress logs by ids
You can now use -suppress LOGID or -s LOGID to not print logs with the given
ids.  This is a global option which means it is applicable to all commands.
It can be specified more than once.
2020-04-07 23:22:17 -04:00
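The changes implementing this live in duplicacy_log.go near the end of this compare (the suppressedLogs map and the SuppressLog function). A minimal sketch of how such a suppression set is typically consulted before a message is printed — the logf wiring shown here is an assumption, since the diff below is cut off before that part:

    package main

    import "fmt"

    // suppressedLogs holds the ids passed via -suppress/-s on the command line.
    var suppressedLogs = map[string]bool{}

    func SuppressLog(id string) { suppressedLogs[id] = true }

    // logf skips any message whose id was suppressed and prints the rest.
    func logf(logID string, format string, v ...interface{}) {
        if suppressedLogs[logID] {
            return
        }
        fmt.Printf("%s %s\n", logID, fmt.Sprintf(format, v...))
    }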
Gilbert Chen
0a51bd8d1a Make log.Printf print to Duplicacy's logging system 2020-04-07 13:52:54 -04:00
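Redirecting the standard library logger usually just means installing a custom io.Writer via log.SetOutput. A minimal sketch of that idea, assuming a forwarding writer (the names are illustrative, not necessarily what this commit does):

    package main

    import (
        "log"
        "strings"
    )

    // logWriter forwards anything written through the standard "log" package
    // to the application's own logging function.
    type logWriter struct{}

    func (logWriter) Write(p []byte) (int, error) {
        appLog("STDLOG", strings.TrimSpace(string(p)))
        return len(p), nil
    }

    func appLog(id string, message string) { /* hand off to the real logger */ }

    func init() {
        log.SetFlags(0) // timestamps come from the application logger instead
        log.SetOutput(logWriter{})
    }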
Gilbert Chen
7208adbce2 Access Google Drive via service account.
Our GCS backend already supports service accounts.  This just copies the relevant
code from there.
2020-04-06 23:25:47 -04:00
gilbertchen
e827662869 Merge pull request #579 from rsanger/master
Add support for Shared Google Drives
2020-04-06 22:55:46 -04:00
gilbertchen
57dd5ba927 Merge pull request #590 from fbarthez/macos_sync_error
Fix "Failed to upload the chunk ... sync ...: operation not supported"
2020-04-06 22:54:44 -04:00
Gilbert Chen
01a37b7828 Fixed a typo in command line arguments 2020-04-06 22:13:08 -04:00
Gilbert Chen
57cd20bb84 Fixed the condition to show 'chunks are encrypted' messages
The 'File/Metadata chunks are encrypted' messages were always shown even if the
storage wasn't encrypted.
2020-04-06 12:22:47 -04:00
Gilbert Chen
0e970da222 Fixed build errors in tests caused by snapshotManager.CheckSnapshots 2020-04-06 12:19:42 -04:00
Gilbert Chen
e880636502 Fixed test build errors caused by the prototype change in CheckSnapshots() 2020-03-30 17:49:26 -04:00
Gilbert Chen
810303ce25 Fail the backup if the repository can't be accessed or there are no files
This is mainly to avoid creating an empty snapshot when a drive/share is
not mounted which causes the subsequent backup to scan all files again.
2020-03-30 17:44:53 -04:00
Gilbert Chen
ffac83dd80 Assume the signed certificate of a ssh key file has the suffix '-cert.pub'.
So if the ssh key file is 'mykey' then Duplicacy will check if the signed
certificate can be loaded from the file 'mykey-cert.pub'.  This avoids the
use of another preference variable 'ssh_cert_file'.
2020-03-25 23:50:35 -04:00
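For reference, with golang.org/x/crypto/ssh a signed certificate is used by wrapping the private-key signer with ssh.NewCertSigner. A simplified sketch of the '-cert.pub' convention described above (not Duplicacy's exact code):

    package main

    import (
        "io/ioutil"

        "golang.org/x/crypto/ssh"
    )

    // loadSigner loads an ssh private key and, if a file named
    // '<keyFile>-cert.pub' exists, wraps it with the signed certificate.
    func loadSigner(keyFile string) (ssh.Signer, error) {
        keyBytes, err := ioutil.ReadFile(keyFile)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return nil, err
        }
        certBytes, err := ioutil.ReadFile(keyFile + "-cert.pub")
        if err != nil {
            return signer, nil // no certificate next to the key; use the plain key
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(certBytes)
        if err != nil {
            return nil, err
        }
        cert, ok := pub.(*ssh.Certificate)
        if !ok {
            return signer, nil
        }
        return ssh.NewCertSigner(cert, signer)
    }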
gilbertchen
05674871fe Merge pull request #547 from philband/ssh_signed_certificate
Add option to use a ssh key signed with a certificate to authenticate
2020-03-25 23:14:30 -04:00
Gilbert Chen
22d6f3abfc Add a -chunks option to the check command to verify the integrity of chunks
This option will download and verify every chunk.  Unlike the -files option,
this option only downloads each chunk once.  There is also a new -threads
option to use multiple threads to download chunks.
2020-03-24 20:58:45 -04:00
Gilbert Chen
d26ffe2cff Add support for OneDrive for Business
The new storage prefix for OneDrive for Business is odb://

The token file can be downloaded from https://duplicacy.com/odb_start

OneDrive for Business requires basically the same set of API calls, just with
different endpoints.  However, one major difference is that for files larger
than 4MB an upload session must be created first, which is then used to upload
the file content.  Other than that, there are a few minor behavioral differences,
such as when creating a directory that already exists or moving files to a
non-existent directory.
2020-03-19 14:59:26 -04:00
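The 4MB split is the usual OneDrive upload pattern: small files are written with a single PUT, larger files first create an upload session and then stream the content to the session's upload URL. A rough sketch of that branching, with endpoint paths that are assumptions for illustration only:

    package main

    import (
        "bytes"
        "net/http"
    )

    const uploadSessionThreshold = 4 * 1024 * 1024 // 4MB

    func uploadToOneDrive(client *http.Client, baseURL, filePath string, content []byte) error {
        if len(content) < uploadSessionThreshold {
            // Small file: a single PUT of the content is enough.
            req, err := http.NewRequest(http.MethodPut,
                baseURL+"/root:/"+filePath+":/content", bytes.NewReader(content))
            if err != nil {
                return err
            }
            resp, err := client.Do(req)
            if err != nil {
                return err
            }
            return resp.Body.Close()
        }
        // Large file: create an upload session first, then upload the content
        // (possibly in ranges) to the upload URL returned by the session.
        req, err := http.NewRequest(http.MethodPost,
            baseURL+"/root:/"+filePath+":/createUploadSession", nil)
        if err != nil {
            return err
        }
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        // ... parse the session's uploadUrl from resp and PUT the content to it ...
        return nil
    }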
Fabian Peters
a35f6c27be Fix "Failed to upload the chunk ... sync ...: operation not supported" issue when using SMB on MacOS. This is done by inspecting the error type and returning the error only if its operation is "sync" and the type is "operation not supported".
Note: this change is my first ever foray into go and based simply on the information provided in: https://forum.duplicacy.com/t/failed-to-upload-the-chunk-operation-not-supported/2875/11
2020-03-16 15:56:31 +01:00
Gilbert Chen
808ae4eb75 Bump version to 2.4.1 2020-03-13 20:44:00 -04:00
Gilbert Chen
6699e2f440 Fixed a bug that disabled RSA when copying from a non RSA-encrypted storage.
When copying to an RSA-encrypted storage, we relied on the RSA encryption
version to determine if a chunk is a snapshot chunk or a file chunk.  This is
wrong when the source storage is not encrypted or not RSA-encrypted.  There
is a more reliable way to determine if a chunk is a snapshot chunk or not.
2020-03-13 20:13:27 -04:00
Gilbert Chen
733b68be2c Do not take an RSA private key if the storage wasn't RSA encrypted. 2020-03-11 23:14:01 -04:00
Gilbert Chen
b61906c99e Bump version to 2.4.0 2020-03-05 22:06:24 -05:00
gilbertchen
a0a07d18cc Merge pull request #589 from fracai/b2_download_url
support downloading from a custom URL pointed at B2
2020-03-05 22:00:53 -05:00
Gilbert Chen
a6ce64e715 Fixed handling of repository ids with spaces in the b2 backend
Usually a repository id should not contain spaces or other non-alphanumeric
characters, but if it does we should be able to handle it correctly.  This
commit fixes the b2 backend to handle file name escaping properly.
2020-03-05 14:45:09 -05:00
Arno Hautala
499b612a0d moving download url config from a key to the storage url pattern 2020-02-25 20:53:19 -05:00
Arno Hautala
46ce0ba1fb support downloading from a custom URL pointed at B2 2020-02-22 22:12:36 -05:00
Gilbert Chen
cc88abd547 Fixed a bug that caused all copied chunks to be RSA encrypted
The field encryptionVersion in the Chunk struct is supposed to pass the status
of RSA encryption from a source chunk to a destination chunk in a copy command.
This field needs to be a 3-state boolean in order to pass the status correctly.
2020-02-13 14:03:07 -05:00
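To make the "3-state boolean" idea concrete: a plain Go bool can only say yes/no, so an "unknown" state has to be carried some other way — for example with a small enum like the hypothetical one below, or, as the CopySnapshots change later in this diff does, by using a map whose value records whether the chunk is a snapshot chunk.

    package main

    // chunkKind is a hypothetical three-state flag: the zero value means the
    // status hasn't been determined yet, the other two record what the chunk is.
    type chunkKind int

    const (
        chunkUnknown chunkKind = iota
        chunkSnapshot
        chunkFile
    )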
Gilbert Chen
e888b6d7e5 Fix bugs in sftp retrying
* Fixed a bug caused by nil sftp client during retry
* Simplify the rework logic in UploadFile
* Change the number of tries from 6 to 8
2020-01-13 16:26:43 -05:00
Richard Sanger
aa07feeac0 Fix bug in gcd: init fails to create directories
init would not create directories in the root of a drive as
it did not know the root drive's ID.
2020-01-11 17:25:21 +13:00
Gilbert Chen
d43fe1a282 Release the list of chunk hashes after processing each snapshot.
The chunk hash list isn't needed any more after being consolidated.
Releasing it immediately after use helps reduce the memory usage.
2019-12-09 22:45:16 -05:00
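In Go, releasing such a list just means dropping the reference so the garbage collector can reclaim it, which is what the snapshot.ChunkHashes = nil line in the CopySnapshots diff below does. A tiny illustration, with hypothetical names:

    package main

    type snapshot struct {
        ChunkHashes []string // can hold millions of entries for large backups
    }

    func consolidate(s *snapshot, into map[string]bool) {
        for _, hash := range s.ChunkHashes {
            into[hash] = true
        }
        // The hashes are now recorded in the map; drop the slice so the
        // garbage collector can free the memory right away.
        s.ChunkHashes = nil
    }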
Richard Sanger
7719bb9f29 Fix: backup to shared drive root
Allows writing to the drive root using:
gcd://driveid@ or gcd://driveid@/

To write to the root of the default user's drive use the special
shared drive named 'root':
gcd://root@/
2019-11-27 00:00:40 +13:00
Richard Sanger
426110e961 Adds support for GDrive Shared Drives
A shared drive can be accessed via
gcd://sharedDriveId@path/to/storage

sharedDriveId is optional; if omitted, Duplicacy stores to the user's own drive.
This remains backwards compatible with existing drives. E.g.
gcd://path/to/storage

Note: Shared Drives were previously named Team Drives.
2019-11-06 00:51:12 +13:00
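So a gcd:// URL now has an optional drive component before an '@'. A simplified sketch of how such a path could be split — purely illustrative, the real parsing happens where Duplicacy creates the storage backend:

    package main

    import "strings"

    // splitGCDPath splits "sharedDriveId@path/to/storage" into its drive id and
    // storage path; with no '@' the user's own drive is used (empty drive id).
    func splitGCDPath(storageURL string) (driveID, storagePath string) {
        if i := strings.Index(storageURL, "@"); i >= 0 {
            return storageURL[:i], strings.TrimPrefix(storageURL[i+1:], "/")
        }
        return "", storageURL
    }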
Philipp Bandow
a55ac1b7ad Add option to use a ssh key signed with a certificate to authenticate 2019-02-28 01:37:14 +01:00
24 changed files with 640 additions and 208 deletions

Gopkg.lock generated
View File

@@ -7,17 +7,11 @@
revision = "2d3a6656c17a60b0815b7e06ab0be04eacb6e613" revision = "2d3a6656c17a60b0815b7e06ab0be04eacb6e613"
version = "v0.16.0" version = "v0.16.0"
[[projects]]
name = "github.com/Azure/azure-sdk-for-go"
packages = ["version"]
revision = "b7fadebe0e7f5c5720986080a01495bd8d27be37"
version = "v14.2.0"
[[projects]] [[projects]]
name = "github.com/Azure/go-autorest" name = "github.com/Azure/go-autorest"
packages = ["autorest","autorest/adal","autorest/azure","autorest/date"] packages = ["autorest","autorest/adal","autorest/azure","autorest/date","logger","version"]
revision = "0ae36a9e544696de46fdadb7b0d5fb38af48c063" revision = "9bc4033dd347c7f416fca46b2f42a043dc1fbdf6"
version = "v10.2.0" version = "v10.15.5"
[[projects]] [[projects]]
branch = "master" branch = "master"
@@ -27,9 +21,9 @@
[[projects]] [[projects]]
name = "github.com/aws/aws-sdk-go" name = "github.com/aws/aws-sdk-go"
packages = ["aws","aws/awserr","aws/awsutil","aws/client","aws/client/metadata","aws/corehandlers","aws/credentials","aws/credentials/ec2rolecreds","aws/credentials/endpointcreds","aws/credentials/stscreds","aws/defaults","aws/ec2metadata","aws/endpoints","aws/request","aws/session","aws/signer/v4","internal/shareddefaults","private/protocol","private/protocol/query","private/protocol/query/queryutil","private/protocol/rest","private/protocol/restxml","private/protocol/xml/xmlutil","service/s3","service/sts"] packages = ["aws","aws/arn","aws/awserr","aws/awsutil","aws/client","aws/client/metadata","aws/corehandlers","aws/credentials","aws/credentials/ec2rolecreds","aws/credentials/endpointcreds","aws/credentials/processcreds","aws/credentials/stscreds","aws/csm","aws/defaults","aws/ec2metadata","aws/endpoints","aws/request","aws/session","aws/signer/v4","internal/context","internal/ini","internal/s3err","internal/sdkio","internal/sdkmath","internal/sdkrand","internal/sdkuri","internal/shareddefaults","internal/strings","internal/sync/singleflight","private/protocol","private/protocol/eventstream","private/protocol/eventstream/eventstreamapi","private/protocol/json/jsonutil","private/protocol/query","private/protocol/query/queryutil","private/protocol/rest","private/protocol/restxml","private/protocol/xml/xmlutil","service/s3","service/s3/internal/arn","service/sts","service/sts/stsiface"]
revision = "a32b1dcd091264b5dee7b386149b6cc3823395c9" revision = "851d5ffb66720c2540cc68020d4d8708950686c8"
version = "v1.12.31" version = "v1.30.7"
[[projects]] [[projects]]
name = "github.com/bkaradzic/go-lz4" name = "github.com/bkaradzic/go-lz4"
@@ -40,14 +34,14 @@
[[projects]] [[projects]]
name = "github.com/dgrijalva/jwt-go" name = "github.com/dgrijalva/jwt-go"
packages = ["."] packages = ["."]
revision = "dbeaa9332f19a944acb5736b4456cfcc02140e29" revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.1.0" version = "v3.2.0"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "github.com/gilbertchen/azure-sdk-for-go" name = "github.com/gilbertchen/azure-sdk-for-go"
packages = ["storage"] packages = ["storage","version"]
revision = "bbf89bd4d716c184f158d1e1428c2dbef4a18307" revision = "8fd4663cab7c7c1c46d00449291c92ad23b0d0d9"
[[projects]] [[projects]]
branch = "master" branch = "master"
@@ -59,7 +53,7 @@
branch = "master" branch = "master"
name = "github.com/gilbertchen/go-dropbox" name = "github.com/gilbertchen/go-dropbox"
packages = ["."] packages = ["."]
revision = "90711b603312b1f973f3a5da3793ac4f1e5c2f2a" revision = "994e692c5061cefa14e4296600a773de7119aa15"
[[projects]] [[projects]]
name = "github.com/gilbertchen/go-ole" name = "github.com/gilbertchen/go-ole"
@@ -98,33 +92,33 @@
revision = "68e7a6806b0137a396d7d05601d7403ae1abac58" revision = "68e7a6806b0137a396d7d05601d7403ae1abac58"
[[projects]] [[projects]]
name = "github.com/go-ini/ini" branch = "master"
packages = ["."] name = "github.com/golang/groupcache"
revision = "32e4c1e6bc4e7d0d8451aa6b75200d19e37a536a" packages = ["lru"]
version = "v1.32.0" revision = "8c9f03a8e57eb486e42badaed3fb287da51807ba"
[[projects]] [[projects]]
branch = "master"
name = "github.com/golang/protobuf" name = "github.com/golang/protobuf"
packages = ["proto","protoc-gen-go/descriptor","ptypes","ptypes/any","ptypes/duration","ptypes/timestamp"] packages = ["proto","protoc-gen-go","protoc-gen-go/descriptor","protoc-gen-go/generator","protoc-gen-go/generator/internal/remap","protoc-gen-go/grpc","protoc-gen-go/plugin","ptypes","ptypes/any","ptypes/duration","ptypes/timestamp"]
revision = "1e59b77b52bf8e4b449a57e6f79f21226d571845" revision = "84668698ea25b64748563aa20726db66a6b8d299"
version = "v1.3.5"
[[projects]] [[projects]]
name = "github.com/googleapis/gax-go" name = "github.com/googleapis/gax-go"
packages = ["."] packages = [".","v2"]
revision = "317e0006254c44a0ac427cc52a0e083ff0b9622f" revision = "c8a15bac9b9fe955bd9f900272f9a306465d28cf"
version = "v2.0.0" version = "v2.0.3"
[[projects]] [[projects]]
name = "github.com/jmespath/go-jmespath" name = "github.com/jmespath/go-jmespath"
packages = ["."] packages = ["."]
revision = "0b12d6b5" revision = "c2b33e84"
[[projects]] [[projects]]
branch = "master"
name = "github.com/kr/fs" name = "github.com/kr/fs"
packages = ["."] packages = ["."]
revision = "2788f0dbd16903de03cb8186e5c7d97b69ad387b" revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
version = "v0.1.0"
[[projects]] [[projects]]
name = "github.com/marstr/guid" name = "github.com/marstr/guid"
@@ -139,22 +133,22 @@
revision = "3f5f724cb5b182a5c278d6d3d55b40e7f8c2efb4" revision = "3f5f724cb5b182a5c278d6d3d55b40e7f8c2efb4"
[[projects]] [[projects]]
branch = "master"
name = "github.com/ncw/swift" name = "github.com/ncw/swift"
packages = ["."] packages = ["."]
revision = "ae9f0ea1605b9aa6434ed5c731ca35d83ba67c55" revision = "3e1a09f21340e4828e7265aa89f4dc1495fa7ccc"
version = "v1.0.50"
[[projects]] [[projects]]
name = "github.com/pkg/errors" name = "github.com/pkg/errors"
packages = ["."] packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d" revision = "614d223910a179a466c1767a985424175c39b465"
version = "v0.8.0" version = "v0.9.1"
[[projects]] [[projects]]
name = "github.com/pkg/sftp" name = "github.com/pkg/sftp"
packages = ["."] packages = ["."]
revision = "3edd153f213d8d4191a0ee4577c61cca19436632" revision = "5616182052227b951e76d9c9b79a616c608bd91b"
version = "v1.10.1" version = "v1.11.0"
[[projects]] [[projects]]
name = "github.com/satori/go.uuid" name = "github.com/satori/go.uuid"
@@ -168,63 +162,92 @@
packages = ["."] packages = ["."]
revision = "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1" revision = "a98ad7ee00ec53921f08832bc06ecf7fd600e6a1"
[[projects]]
name = "go.opencensus.io"
packages = [".","internal","internal/tagencoding","metric/metricdata","metric/metricproducer","plugin/ochttp","plugin/ochttp/propagation/b3","resource","stats","stats/internal","stats/view","tag","trace","trace/internal","trace/propagation","trace/tracestate"]
revision = "d835ff86be02193d324330acdb7d65546b05f814"
version = "v0.22.3"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "golang.org/x/crypto" name = "golang.org/x/crypto"
packages = ["curve25519","ed25519","ed25519/internal/edwards25519","pbkdf2","ssh","ssh/agent","ssh/terminal"] packages = ["blowfish","chacha20","curve25519","ed25519","ed25519/internal/edwards25519","internal/subtle","pbkdf2","poly1305","ssh","ssh/agent","ssh/internal/bcrypt_pbkdf","ssh/terminal"]
revision = "9f005a07e0d31d45e6656d241bb5c0f2efd4bc94" revision = "056763e48d71961566155f089ac0f02f1dda9b5a"
[[projects]]
branch = "master"
name = "golang.org/x/exp"
packages = ["apidiff","cmd/apidiff"]
revision = "e8c3332aa8e5b8e6acb4707c3a7e5979052b20aa"
[[projects]]
name = "golang.org/x/mod"
packages = ["module","semver"]
revision = "ed3ec21bb8e252814c380df79a80f366440ddb2d"
version = "v0.2.0"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "golang.org/x/net" name = "golang.org/x/net"
packages = ["context","context/ctxhttp","http2","http2/hpack","idna","internal/timeseries","lex/httplex","trace"] packages = ["context","context/ctxhttp","http/httpguts","http2","http2/hpack","idna","internal/timeseries","trace"]
revision = "9dfe39835686865bff950a07b394c12a98ddc811" revision = "d3edc9973b7eb1fb302b0ff2c62357091cea9a30"
[[projects]] [[projects]]
branch = "master"
name = "golang.org/x/oauth2" name = "golang.org/x/oauth2"
packages = [".","google","internal","jws","jwt"] packages = [".","google","internal","jws","jwt"]
revision = "f95fa95eaa936d9d87489b15d1d18b97c1ba9c28" revision = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "golang.org/x/sys" name = "golang.org/x/sys"
packages = ["unix","windows"] packages = ["cpu","unix","windows"]
revision = "82aafbf43bf885069dc71b7e7c2f9d7a614d47da" revision = "59c9f1ba88faf592b225274f69c5ef1e4ebacf82"
[[projects]] [[projects]]
branch = "master"
name = "golang.org/x/text" name = "golang.org/x/text"
packages = ["collate","collate/build","internal/colltab","internal/gen","internal/tag","internal/triegen","internal/ucd","language","secure/bidirule","transform","unicode/bidi","unicode/cldr","unicode/norm","unicode/rangetable"] packages = ["collate","collate/build","internal/colltab","internal/gen","internal/language","internal/language/compact","internal/tag","internal/triegen","internal/ucd","language","secure/bidirule","transform","unicode/bidi","unicode/cldr","unicode/norm","unicode/rangetable"]
revision = "88f656faf3f37f690df1a32515b479415e1a6769" revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475"
version = "v0.3.2"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "golang.org/x/tools"
packages = ["cmd/goimports","go/ast/astutil","go/gcexportdata","go/internal/gcimporter","go/internal/packagesdriver","go/packages","go/types/typeutil","internal/fastwalk","internal/gocommand","internal/gopathwalk","internal/imports","internal/packagesinternal","internal/telemetry/event"]
revision = "700752c244080ed7ef6a61c3cfd73382cd334e57"
[[projects]]
branch = "master"
name = "golang.org/x/xerrors"
packages = [".","internal"]
revision = "9bdfabe68543c54f90421aeb9a60ef8061b5b544"
[[projects]]
name = "google.golang.org/api" name = "google.golang.org/api"
packages = ["drive/v3","gensupport","googleapi","googleapi/internal/uritemplates","googleapi/transport","internal","iterator","option","storage/v1","transport/http"] packages = ["drive/v3","googleapi","googleapi/transport","internal","internal/gensupport","internal/third_party/uritemplates","iterator","option","option/internaloption","storage/v1","transport/cert","transport/http","transport/http/internal/propagation"]
revision = "17b5f22a248d6d3913171c1a557552ace0d9c806" revision = "52f0532eadbcc6f6b82d6f5edf66e610d10bfde6"
version = "v0.21.0"
[[projects]] [[projects]]
name = "google.golang.org/appengine" name = "google.golang.org/appengine"
packages = [".","internal","internal/app_identity","internal/base","internal/datastore","internal/log","internal/modules","internal/remote_api","internal/urlfetch","urlfetch"] packages = [".","internal","internal/app_identity","internal/base","internal/datastore","internal/log","internal/modules","internal/remote_api","internal/urlfetch","urlfetch"]
revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a" revision = "971852bfffca25b069c31162ae8f247a3dba083b"
version = "v1.0.0" version = "v1.6.5"
[[projects]] [[projects]]
branch = "master" branch = "master"
name = "google.golang.org/genproto" name = "google.golang.org/genproto"
packages = ["googleapis/api/annotations","googleapis/iam/v1","googleapis/rpc/status"] packages = ["googleapis/api/annotations","googleapis/iam/v1","googleapis/rpc/status","googleapis/type/expr"]
revision = "891aceb7c239e72692819142dfca057bdcbfcb96" revision = "baae70f3302d3efdff74db41e48a5d476d036906"
[[projects]] [[projects]]
name = "google.golang.org/grpc" name = "google.golang.org/grpc"
packages = [".","balancer","balancer/roundrobin","codes","connectivity","credentials","encoding","grpclb/grpc_lb_v1/messages","grpclog","internal","keepalive","metadata","naming","peer","resolver","resolver/dns","resolver/passthrough","stats","status","tap","transport"] packages = [".","attributes","backoff","balancer","balancer/base","balancer/roundrobin","binarylog/grpc_binarylog_v1","codes","connectivity","credentials","credentials/internal","encoding","encoding/proto","grpclog","internal","internal/backoff","internal/balancerload","internal/binarylog","internal/buffer","internal/channelz","internal/envconfig","internal/grpclog","internal/grpcrand","internal/grpcsync","internal/grpcutil","internal/resolver/dns","internal/resolver/passthrough","internal/syscall","internal/transport","keepalive","metadata","naming","peer","resolver","serviceconfig","stats","status","tap"]
revision = "5a9f7b402fe85096d2e1d0383435ee1876e863d0" revision = "ac54eec90516cee50fc6b9b113b34628a85f976f"
version = "v1.8.0" version = "v1.28.1"
[solve-meta] [solve-meta]
analyzer-name = "dep" analyzer-name = "dep"
analyzer-version = 1 analyzer-version = 1
inputs-digest = "8636a9db1eb54be5374f9914687693122efdde511f11c47d10c22f9e245e7f70" inputs-digest = "e124cf64f7f8770e51ae52ac89030d512da946e3fdc2666ebd3a604a624dd679"
solver-name = "gps-cdcl" solver-name = "gps-cdcl"
solver-version = 1 solver-version = 1

View File

@@ -31,7 +31,7 @@
[[constraint]] [[constraint]]
name = "github.com/aws/aws-sdk-go" name = "github.com/aws/aws-sdk-go"
version = "1.12.31" version = "1.30.7"
[[constraint]] [[constraint]]
name = "github.com/bkaradzic/go-lz4" name = "github.com/bkaradzic/go-lz4"
@@ -86,9 +86,13 @@
name = "golang.org/x/net" name = "golang.org/x/net"
[[constraint]] [[constraint]]
branch = "master"
name = "golang.org/x/oauth2" name = "golang.org/x/oauth2"
revision = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303"
[[constraint]] [[constraint]]
branch = "master"
name = "google.golang.org/api" name = "google.golang.org/api"
version = "0.21.0"
[[constraint]]
name = "google.golang.org/grpc"
version = "1.28.0"

View File

@@ -159,6 +159,10 @@ func setGlobalOptions(context *cli.Context) {
}() }()
} }
for _, logID := range context.GlobalStringSlice("suppress") {
duplicacy.SuppressLog(logID)
}
duplicacy.RunInBackground = context.GlobalBool("background") duplicacy.RunInBackground = context.GlobalBool("background")
} }
@@ -894,7 +898,12 @@ func checkSnapshots(context *cli.Context) {
runScript(context, preference.Name, "pre") runScript(context, preference.Name, "pre")
storage := duplicacy.CreateStorage(*preference, false, 1) threads := context.Int("threads")
if threads < 1 {
threads = 1
}
storage := duplicacy.CreateStorage(*preference, false, threads)
if storage == nil { if storage == nil {
return return
} }
@@ -922,11 +931,12 @@ func checkSnapshots(context *cli.Context) {
showStatistics := context.Bool("stats") showStatistics := context.Bool("stats")
showTabular := context.Bool("tabular") showTabular := context.Bool("tabular")
checkFiles := context.Bool("files") checkFiles := context.Bool("files")
checkChunks := context.Bool("chunks")
searchFossils := context.Bool("fossils") searchFossils := context.Bool("fossils")
resurrect := context.Bool("resurrect") resurrect := context.Bool("resurrect")
backupManager.SetupSnapshotCache(preference.Name) backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.CheckSnapshots(id, revisions, tag, showStatistics, showTabular, checkFiles, searchFossils, resurrect) backupManager.SnapshotManager.CheckSnapshots(id, revisions, tag, showStatistics, showTabular, checkFiles, checkChunks, searchFossils, resurrect, threads)
runScript(context, preference.Name, "post") runScript(context, preference.Name, "post")
} }
@@ -1589,6 +1599,10 @@ func main() {
Name: "files", Name: "files",
Usage: "verify the integrity of every file", Usage: "verify the integrity of every file",
}, },
cli.BoolFlag{
Name: "chunks",
Usage: "verify the integrity of every chunk",
},
cli.BoolFlag{ cli.BoolFlag{
Name: "stats", Name: "stats",
Usage: "show deduplication statistics (imply -all and all revisions)", Usage: "show deduplication statistics (imply -all and all revisions)",
@@ -1607,6 +1621,12 @@ func main() {
Usage: "the RSA private key to decrypt file chunks", Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>", Argument: "<private key>",
}, },
cli.IntFlag{
Name: "threads",
Value: 1,
Usage: "number of threads used to verify chunks",
Argument: "<n>",
},
}, },
Usage: "Check the integrity of snapshots", Usage: "Check the integrity of snapshots",
ArgsUsage: " ", ArgsUsage: " ",
@@ -1943,7 +1963,7 @@ func main() {
cli.StringFlag{ cli.StringFlag{
Name: "key", Name: "key",
Usage: "the RSA private key to decrypt file chunks from the source storage", Usage: "the RSA private key to decrypt file chunks from the source storage",
Argument: "<public key>", Argument: "<private key>",
}, },
}, },
Usage: "Copy snapshots between compatible storages", Usage: "Copy snapshots between compatible storages",
@@ -2053,13 +2073,18 @@ func main() {
Name: "comment", Name: "comment",
Usage: "add a comment to identify the process", Usage: "add a comment to identify the process",
}, },
cli.StringSliceFlag{
Name: "suppress, s",
Usage: "suppress logs with the specified id",
Argument: "<id>",
},
} }
app.HideVersion = true app.HideVersion = true
app.Name = "duplicacy" app.Name = "duplicacy"
app.HelpName = "duplicacy" app.HelpName = "duplicacy"
app.Usage = "A new generation cloud backup tool based on lock-free deduplication" app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
app.Version = "2.3.0" + " (" + GitCommit + ")" app.Version = "2.5.0" + " (" + GitCommit + ")"
// If the program is interrupted, call the RunAtError function. // If the program is interrupted, call the RunAtError function.
c := make(chan os.Signal, 1) c := make(chan os.Signal, 1)

View File

@@ -75,7 +75,7 @@ func B2Escape(path string) string {
return strings.Join(components, "/") return strings.Join(components, "/")
} }
func NewB2Client(applicationKeyID string, applicationKey string, storageDir string, threads int) *B2Client { func NewB2Client(applicationKeyID string, applicationKey string, downloadURL string, storageDir string, threads int) *B2Client {
for storageDir != "" && storageDir[0] == '/' { for storageDir != "" && storageDir[0] == '/' {
storageDir = storageDir[1:] storageDir = storageDir[1:]
@@ -95,6 +95,7 @@ func NewB2Client(applicationKeyID string, applicationKey string, storageDir stri
HTTPClient: http.DefaultClient, HTTPClient: http.DefaultClient,
ApplicationKeyID: applicationKeyID, ApplicationKeyID: applicationKeyID,
ApplicationKey: applicationKey, ApplicationKey: applicationKey,
DownloadURL: downloadURL,
StorageDir: storageDir, StorageDir: storageDir,
UploadURLs: make([]string, threads), UploadURLs: make([]string, threads),
UploadTokens: make([]string, threads), UploadTokens: make([]string, threads),
@@ -325,7 +326,10 @@ func (client *B2Client) AuthorizeAccount(threadIndex int) (err error, allowed bo
client.AuthorizationToken = output.AuthorizationToken client.AuthorizationToken = output.AuthorizationToken
client.APIURL = output.APIURL client.APIURL = output.APIURL
client.DownloadURL = output.DownloadURL if client.DownloadURL == "" {
client.DownloadURL = output.DownloadURL
}
LOG_INFO("BACKBLAZE_URL", "download URL is: %s", client.DownloadURL)
client.IsAuthorized = true client.IsAuthorized = true
client.LastAuthorizationTime = time.Now().Unix() client.LastAuthorizationTime = time.Now().Unix()
@@ -413,16 +417,16 @@ func (client *B2Client) ListFileNames(threadIndex int, startFileName string, sin
input["prefix"] = client.StorageDir input["prefix"] = client.StorageDir
for { for {
url := client.getAPIURL() + "/b2api/v1/b2_list_file_names" apiURL := client.getAPIURL() + "/b2api/v1/b2_list_file_names"
requestHeaders := map[string]string{} requestHeaders := map[string]string{}
requestMethod := http.MethodPost requestMethod := http.MethodPost
var requestInput interface{} var requestInput interface{}
requestInput = input requestInput = input
if includeVersions { if includeVersions {
url = client.getAPIURL() + "/b2api/v1/b2_list_file_versions" apiURL = client.getAPIURL() + "/b2api/v1/b2_list_file_versions"
} else if singleFile { } else if singleFile {
// handle a single file with no versions as a special case to download the last byte of the file // handle a single file with no versions as a special case to download the last byte of the file
url = client.getDownloadURL() + "/file/" + client.BucketName + "/" + B2Escape(client.StorageDir + startFileName) apiURL = client.getDownloadURL() + "/file/" + client.BucketName + "/" + B2Escape(client.StorageDir + startFileName)
// requesting byte -1 works for empty files where 0-0 fails with a 416 error // requesting byte -1 works for empty files where 0-0 fails with a 416 error
requestHeaders["Range"] = "bytes=-1" requestHeaders["Range"] = "bytes=-1"
// HEAD request // HEAD request
@@ -432,7 +436,7 @@ func (client *B2Client) ListFileNames(threadIndex int, startFileName string, sin
var readCloser io.ReadCloser var readCloser io.ReadCloser
var responseHeader http.Header var responseHeader http.Header
var err error var err error
readCloser, responseHeader, _, err = client.call(threadIndex, url, requestMethod, requestHeaders, requestInput) readCloser, responseHeader, _, err = client.call(threadIndex, apiURL, requestMethod, requestHeaders, requestInput)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -445,7 +449,7 @@ func (client *B2Client) ListFileNames(threadIndex int, startFileName string, sin
if singleFile && !includeVersions { if singleFile && !includeVersions {
if responseHeader == nil { if responseHeader == nil {
LOG_DEBUG("BACKBLAZE_LIST", "%s did not return headers", url) LOG_DEBUG("BACKBLAZE_LIST", "%s did not return headers", apiURL)
return []*B2Entry{}, nil return []*B2Entry{}, nil
} }
requiredHeaders := []string{ requiredHeaders := []string{
@@ -459,11 +463,17 @@ func (client *B2Client) ListFileNames(threadIndex int, startFileName string, sin
} }
} }
if len(missingKeys) > 0 { if len(missingKeys) > 0 {
return nil, fmt.Errorf("%s missing headers: %s", url, missingKeys) return nil, fmt.Errorf("%s missing headers: %s", apiURL, missingKeys)
} }
// construct the B2Entry from the response headers of the download request // construct the B2Entry from the response headers of the download request
fileID := responseHeader.Get("x-bz-file-id") fileID := responseHeader.Get("x-bz-file-id")
fileName := responseHeader.Get("x-bz-file-name") fileName := responseHeader.Get("x-bz-file-name")
unescapedFileName, err := url.QueryUnescape(fileName)
if err == nil {
fileName = unescapedFileName
} else {
LOG_WARN("BACKBLAZE_UNESCAPE", "Failed to unescape the file name %s", fileName)
}
fileAction := "upload" fileAction := "upload"
// byte range that is returned: "bytes #-#/# // byte range that is returned: "bytes #-#/#
rangeString := responseHeader.Get("Content-Range") rangeString := responseHeader.Get("Content-Range")
@@ -476,10 +486,10 @@ func (client *B2Client) ListFileNames(threadIndex int, startFileName string, sin
// this should only execute if the requested file is empty and the range request didn't result in a Content-Range header // this should only execute if the requested file is empty and the range request didn't result in a Content-Range header
fileSize, _ = strconv.ParseInt(lengthString, 0, 64) fileSize, _ = strconv.ParseInt(lengthString, 0, 64)
if fileSize != 0 { if fileSize != 0 {
return nil, fmt.Errorf("%s returned non-zero file length", url) return nil, fmt.Errorf("%s returned non-zero file length", apiURL)
} }
} else { } else {
return nil, fmt.Errorf("could not parse headers returned by %s", url) return nil, fmt.Errorf("could not parse headers returned by %s", apiURL)
} }
fileUploadTimestamp, _ := strconv.ParseInt(responseHeader.Get("X-Bz-Upload-Timestamp"), 0, 64) fileUploadTimestamp, _ := strconv.ParseInt(responseHeader.Get("X-Bz-Upload-Timestamp"), 0, 64)

View File

@@ -37,7 +37,7 @@ func createB2ClientForTest(t *testing.T) (*B2Client, string) {
return nil, "" return nil, ""
} }
return NewB2Client(b2["account"], b2["key"], b2["directory"], 1), b2["bucket"] return NewB2Client(b2["account"], b2["key"], "", b2["directory"], 1), b2["bucket"]
} }

View File

@@ -15,9 +15,9 @@ type B2Storage struct {
} }
// CreateB2Storage creates a B2 storage object. // CreateB2Storage creates a B2 storage object.
func CreateB2Storage(accountID string, applicationKey string, bucket string, storageDir string, threads int) (storage *B2Storage, err error) { func CreateB2Storage(accountID string, applicationKey string, downloadURL string, bucket string, storageDir string, threads int) (storage *B2Storage, err error) {
client := NewB2Client(accountID, applicationKey, storageDir, threads) client := NewB2Client(accountID, applicationKey, downloadURL, storageDir, threads)
err, _ = client.AuthorizeAccount(0) err, _ = client.AuthorizeAccount(0)
if err != nil { if err != nil {
@@ -204,7 +204,6 @@ func (storage *B2Storage) GetFileInfo(threadIndex int, filePath string) (exist b
// DownloadFile reads the file at 'filePath' into the chunk. // DownloadFile reads the file at 'filePath' into the chunk.
func (storage *B2Storage) DownloadFile(threadIndex int, filePath string, chunk *Chunk) (err error) { func (storage *B2Storage) DownloadFile(threadIndex int, filePath string, chunk *Chunk) (err error) {
filePath = strings.Replace(filePath, " ", "%20", -1)
readCloser, _, err := storage.client.DownloadFile(threadIndex, filePath) readCloser, _, err := storage.client.DownloadFile(threadIndex, filePath)
if err != nil { if err != nil {
return err return err
@@ -218,7 +217,6 @@ func (storage *B2Storage) DownloadFile(threadIndex int, filePath string, chunk *
// UploadFile writes 'content' to the file at 'filePath'. // UploadFile writes 'content' to the file at 'filePath'.
func (storage *B2Storage) UploadFile(threadIndex int, filePath string, content []byte) (err error) { func (storage *B2Storage) UploadFile(threadIndex int, filePath string, content []byte) (err error) {
filePath = strings.Replace(filePath, " ", "%20", -1)
return storage.client.UploadFile(threadIndex, filePath, content, storage.UploadRateLimit/storage.client.Threads) return storage.client.UploadFile(threadIndex, filePath, content, storage.UploadRateLimit/storage.client.Threads)
} }

View File

@@ -211,6 +211,11 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
return true return true
} }
if len(localSnapshot.Files) == 0 {
LOG_ERROR("SNAPSHOT_EMPTY", "No files under the repository to be backed up")
return false
}
// This cache contains all chunks referenced by last snasphot. Any other chunks will lead to a call to // This cache contains all chunks referenced by last snasphot. Any other chunks will lead to a call to
// UploadChunk. // UploadChunk.
chunkCache := make(map[string]bool) chunkCache := make(map[string]bool)
@@ -1626,6 +1631,9 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
return true return true
} }
// These two maps store hashes of chunks in the source and destination storages, respectively. Note that
// the value of 'chunks' is used to indicated if the chunk is a snapshot chunk, while the value of 'otherChunks'
// is not used.
chunks := make(map[string]bool) chunks := make(map[string]bool)
otherChunks := make(map[string]bool) otherChunks := make(map[string]bool)
@@ -1638,21 +1646,15 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
LOG_TRACE("SNAPSHOT_COPY", "Copying snapshot %s at revision %d", snapshot.ID, snapshot.Revision) LOG_TRACE("SNAPSHOT_COPY", "Copying snapshot %s at revision %d", snapshot.ID, snapshot.Revision)
for _, chunkHash := range snapshot.FileSequence { for _, chunkHash := range snapshot.FileSequence {
if _, found := chunks[chunkHash]; !found { chunks[chunkHash] = true // The chunk is a snapshot chunk
chunks[chunkHash] = true
}
} }
for _, chunkHash := range snapshot.ChunkSequence { for _, chunkHash := range snapshot.ChunkSequence {
if _, found := chunks[chunkHash]; !found { chunks[chunkHash] = true // The chunk is a snapshot chunk
chunks[chunkHash] = true
}
} }
for _, chunkHash := range snapshot.LengthSequence { for _, chunkHash := range snapshot.LengthSequence {
if _, found := chunks[chunkHash]; !found { chunks[chunkHash] = true // The chunk is a snapshot chunk
chunks[chunkHash] = true
}
} }
description := manager.SnapshotManager.DownloadSequence(snapshot.ChunkSequence) description := manager.SnapshotManager.DownloadSequence(snapshot.ChunkSequence)
@@ -1665,9 +1667,11 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
for _, chunkHash := range snapshot.ChunkHashes { for _, chunkHash := range snapshot.ChunkHashes {
if _, found := chunks[chunkHash]; !found { if _, found := chunks[chunkHash]; !found {
chunks[chunkHash] = true chunks[chunkHash] = false // The chunk is a file chunk
} }
} }
snapshot.ChunkHashes = nil
} }
otherChunkFiles, otherChunkSizes := otherManager.SnapshotManager.ListAllFiles(otherManager.storage, "chunks/") otherChunkFiles, otherChunkSizes := otherManager.SnapshotManager.ListAllFiles(otherManager.storage, "chunks/")
@@ -1719,7 +1723,7 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
totalSkipped := 0 totalSkipped := 0
chunkIndex := 0 chunkIndex := 0
for chunkHash := range chunks { for chunkHash, isSnapshot := range chunks {
chunkIndex++ chunkIndex++
chunkID := manager.config.GetChunkIDFromHash(chunkHash) chunkID := manager.config.GetChunkIDFromHash(chunkHash)
newChunkID := otherManager.config.GetChunkIDFromHash(chunkHash) newChunkID := otherManager.config.GetChunkIDFromHash(chunkHash)
@@ -1730,7 +1734,7 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
newChunk := otherManager.config.GetChunk() newChunk := otherManager.config.GetChunk()
newChunk.Reset(true) newChunk.Reset(true)
newChunk.Write(chunk.GetBytes()) newChunk.Write(chunk.GetBytes())
newChunk.encryptionVersion = chunk.encryptionVersion newChunk.isSnapshot = isSnapshot
chunkUploader.StartChunk(newChunk, chunkIndex) chunkUploader.StartChunk(newChunk, chunkIndex)
totalCopied++ totalCopied++
} else { } else {

View File

@@ -341,7 +341,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots) t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
} }
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1, 2, 3} /*tag*/, "", backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{1, 2, 3} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false) /*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, []int{1} /*tags*/, nil /*retentions*/, nil, backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, []int{1} /*tags*/, nil /*retentions*/, nil,
/*exhaustive*/ false /*exclusive=*/, false /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1) /*exhaustive*/ false /*exclusive=*/, false /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
numberOfSnapshots = backupManager.SnapshotManager.ListSnapshots( /*snapshotID*/ "host1" /*revisionsToList*/, nil /*tag*/, "" /*showFiles*/, false /*showChunks*/, false) numberOfSnapshots = backupManager.SnapshotManager.ListSnapshots( /*snapshotID*/ "host1" /*revisionsToList*/, nil /*tag*/, "" /*showFiles*/, false /*showChunks*/, false)
@@ -349,7 +349,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 2 snapshots but got %d", numberOfSnapshots) t.Errorf("Expected 2 snapshots but got %d", numberOfSnapshots)
} }
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3} /*tag*/, "", backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false) /*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
backupManager.Backup(testDir+"/repository1" /*quickMode=*/, false, threads, "fourth", false, false, 0, false) backupManager.Backup(testDir+"/repository1" /*quickMode=*/, false, threads, "fourth", false, false, 0, false)
backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, nil /*tags*/, nil /*retentions*/, nil, backupManager.SnapshotManager.PruneSnapshots("host1", "host1" /*revisions*/, nil /*tags*/, nil /*retentions*/, nil,
/*exhaustive*/ false /*exclusive=*/, true /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1) /*exhaustive*/ false /*exclusive=*/, true /*ignoredIDs*/, nil /*dryRun*/, false /*deleteOnly*/, false /*collectOnly*/, false, 1)
@@ -358,7 +358,7 @@ func TestBackupManager(t *testing.T) {
t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots) t.Errorf("Expected 3 snapshots but got %d", numberOfSnapshots)
} }
backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3, 4} /*tag*/, "", backupManager.SnapshotManager.CheckSnapshots( /*snapshotID*/ "host1" /*revisions*/, []int{2, 3, 4} /*tag*/, "",
/*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*searchFossils*/, false /*resurrect*/, false) /*showStatistics*/ false /*showTabular*/, false /*checkFiles*/, false /*checkChunks*/, false /*searchFossils*/, false /*resurrect*/, false, 1)
/*buf := make([]byte, 1<<16) /*buf := make([]byte, 1<<16)
runtime.Stack(buf, true) runtime.Stack(buf, true)

View File

@@ -63,12 +63,14 @@ type Chunk struct {
config *Config // Every chunk is associated with a Config object. Which hashing algorithm to use is determined config *Config // Every chunk is associated with a Config object. Which hashing algorithm to use is determined
// by the config // by the config
encryptionVersion byte // The version type in the encrytion header isSnapshot bool // Indicates if the chunk is a snapshot chunk (instead of a file chunk). This is only used by RSA
// encryption, where a snapshot chunk is not encrypted by RSA
} }
// Magic word to identify a duplicacy format encrypted file, plus a version number. // Magic word to identify a duplicacy format encrypted file, plus a version number.
var ENCRYPTION_HEADER = "duplicacy\000" var ENCRYPTION_HEADER = "duplicacy\000"
// RSA encrypted chunks start with "duplicacy\002"
var ENCRYPTION_VERSION_RSA byte = 2 var ENCRYPTION_VERSION_RSA byte = 2
// CreateChunk creates a new chunk. // CreateChunk creates a new chunk.
@@ -119,6 +121,7 @@ func (chunk *Chunk) Reset(hashNeeded bool) {
chunk.hash = nil chunk.hash = nil
chunk.id = "" chunk.id = ""
chunk.size = 0 chunk.size = 0
chunk.isSnapshot = false
} }
// Write implements the Writer interface. // Write implements the Writer interface.
@@ -193,8 +196,8 @@ func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string, isSnapsh
key := encryptionKey key := encryptionKey
usingRSA := false usingRSA := false
if chunk.config.rsaPublicKey != nil && (!isSnapshot || chunk.encryptionVersion == ENCRYPTION_VERSION_RSA) { // Enable RSA encryption only when the chunk is not a snapshot chunk
// If the chunk is not a snpashot chunk, we attempt to encrypt it with the RSA publick key if there is one if chunk.config.rsaPublicKey != nil && !isSnapshot && !chunk.isSnapshot {
randomKey := make([]byte, 32) randomKey := make([]byte, 32)
_, err := rand.Read(randomKey) _, err := rand.Read(randomKey)
if err != nil { if err != nil {
@@ -321,8 +324,6 @@ func (chunk *Chunk) Decrypt(encryptionKey []byte, derivationKey string) (err err
chunk.buffer, encryptedBuffer = encryptedBuffer, chunk.buffer chunk.buffer, encryptedBuffer = encryptedBuffer, chunk.buffer
headerLength := len(ENCRYPTION_HEADER) headerLength := len(ENCRYPTION_HEADER)
chunk.encryptionVersion = 0
if len(encryptionKey) > 0 { if len(encryptionKey) > 0 {
key := encryptionKey key := encryptionKey
@@ -347,12 +348,12 @@ func (chunk *Chunk) Decrypt(encryptionKey []byte, derivationKey string) (err err
return fmt.Errorf("The storage doesn't seem to be encrypted") return fmt.Errorf("The storage doesn't seem to be encrypted")
} }
chunk.encryptionVersion = encryptedBuffer.Bytes()[headerLength-1] encryptionVersion := encryptedBuffer.Bytes()[headerLength-1]
if chunk.encryptionVersion != 0 && chunk.encryptionVersion != ENCRYPTION_VERSION_RSA { if encryptionVersion != 0 && encryptionVersion != ENCRYPTION_VERSION_RSA {
return fmt.Errorf("Unsupported encryption version %d", chunk.encryptionVersion) return fmt.Errorf("Unsupported encryption version %d", encryptionVersion)
} }
if chunk.encryptionVersion == ENCRYPTION_VERSION_RSA { if encryptionVersion == ENCRYPTION_VERSION_RSA {
if chunk.config.rsaPrivateKey == nil { if chunk.config.rsaPrivateKey == nil {
LOG_ERROR("CHUNK_DECRYPT", "An RSA private key is required to decrypt the chunk") LOG_ERROR("CHUNK_DECRYPT", "An RSA private key is required to decrypt the chunk")
return fmt.Errorf("An RSA private key is required to decrypt the chunk") return fmt.Errorf("An RSA private key is required to decrypt the chunk")

View File

@@ -126,6 +126,7 @@ func (downloader *ChunkDownloader) AddFiles(snapshot *Snapshot, files []*Entry)
// AddChunk adds a single chunk the download list. // AddChunk adds a single chunk the download list.
func (downloader *ChunkDownloader) AddChunk(chunkHash string) int { func (downloader *ChunkDownloader) AddChunk(chunkHash string) int {
task := ChunkDownloadTask{ task := ChunkDownloadTask{
chunkIndex: len(downloader.taskList), chunkIndex: len(downloader.taskList),
chunkHash: chunkHash, chunkHash: chunkHash,
@@ -253,6 +254,47 @@ func (downloader *ChunkDownloader) WaitForChunk(chunkIndex int) (chunk *Chunk) {
return downloader.taskList[chunkIndex].chunk return downloader.taskList[chunkIndex].chunk
} }
// WaitForCompletion waits until all chunks have been downloaded
func (downloader *ChunkDownloader) WaitForCompletion() {
// Tasks in completedTasks have not been counted by numberOfActiveChunks
downloader.numberOfActiveChunks -= len(downloader.completedTasks)
// find the completed task with the largest index; we'll start from the next index
for index := range downloader.completedTasks {
if downloader.lastChunkIndex < index {
downloader.lastChunkIndex = index
}
}
// Looping until there isn't a download task in progress
for downloader.numberOfActiveChunks > 0 || downloader.lastChunkIndex + 1 < len(downloader.taskList) {
// Wait for a completion event first
if downloader.numberOfActiveChunks > 0 {
completion := <-downloader.completionChannel
downloader.config.PutChunk(completion.chunk)
downloader.numberOfActiveChunks--
downloader.numberOfDownloadedChunks++
downloader.numberOfDownloadingChunks--
}
// Pass the tasks one by one to the download queue
if downloader.lastChunkIndex + 1 < len(downloader.taskList) {
task := &downloader.taskList[downloader.lastChunkIndex + 1]
if task.isDownloading {
downloader.lastChunkIndex++
continue
}
downloader.taskQueue <- *task
task.isDownloading = true
downloader.numberOfDownloadingChunks++
downloader.numberOfActiveChunks++
downloader.lastChunkIndex++
}
}
}
// Stop terminates all downloading goroutines // Stop terminates all downloading goroutines
func (downloader *ChunkDownloader) Stop() { func (downloader *ChunkDownloader) Stop() {
for downloader.numberOfDownloadingChunks > 0 { for downloader.numberOfDownloadingChunks > 0 {

View File

@@ -172,11 +172,11 @@ func (config *Config) Print() {
LOG_TRACE("CONFIG_INFO", "Hash key: %x", config.HashKey) LOG_TRACE("CONFIG_INFO", "Hash key: %x", config.HashKey)
LOG_TRACE("CONFIG_INFO", "ID key: %x", config.IDKey) LOG_TRACE("CONFIG_INFO", "ID key: %x", config.IDKey)
if len(config.ChunkKey) >= 0 { if len(config.ChunkKey) > 0 {
LOG_TRACE("CONFIG_INFO", "File chunks are encrypted") LOG_TRACE("CONFIG_INFO", "File chunks are encrypted")
} }
if len(config.FileKey) >= 0 { if len(config.FileKey) > 0 {
LOG_TRACE("CONFIG_INFO", "Metadata chunks are encrypted") LOG_TRACE("CONFIG_INFO", "Metadata chunks are encrypted")
} }
@@ -588,6 +588,11 @@ func (config *Config) loadRSAPublicKey(keyFile string) {
// loadRSAPrivateKey loads the specifed private key file for decrypting file chunks // loadRSAPrivateKey loads the specifed private key file for decrypting file chunks
func (config *Config) loadRSAPrivateKey(keyFile string, passphrase string) { func (config *Config) loadRSAPrivateKey(keyFile string, passphrase string) {
if config.rsaPublicKey == nil {
LOG_ERROR("RSA_PUBLIC", "The storage was not encrypted by an RSA key")
return
}
encodedKey, err := ioutil.ReadFile(keyFile) encodedKey, err := ioutil.ReadFile(keyFile)
if err != nil { if err != nil {
LOG_ERROR("RSA_PRIVATE", "Failed to read the private key file: %v", err) LOG_ERROR("RSA_PRIVATE", "Failed to read the private key file: %v", err)

View File

@@ -6,6 +6,7 @@ package duplicacy
import ( import (
"fmt" "fmt"
"io/ioutil"
"strings" "strings"
"github.com/gilbertchen/go-dropbox" "github.com/gilbertchen/go-dropbox"
@@ -199,6 +200,7 @@ func (storage *DropboxStorage) DownloadFile(threadIndex int, filePath string, ch
} }
defer output.Body.Close() defer output.Body.Close()
defer ioutil.ReadAll(output.Body)
_, err = RateLimitedCopy(chunk, output.Body, storage.DownloadRateLimit/len(storage.clients)) _, err = RateLimitedCopy(chunk, output.Body, storage.DownloadRateLimit/len(storage.clients))
return err return err

View File

@@ -12,6 +12,7 @@ import (
"os" "os"
"path" "path"
"strings" "strings"
"syscall"
"time" "time"
) )
@@ -190,10 +191,13 @@ func (storage *FileStorage) UploadFile(threadIndex int, filePath string, content
return err return err
} }
err = file.Sync() if err = file.Sync(); err != nil {
if err != nil { pathErr, ok := err.(*os.PathError)
file.Close() isNotSupported := ok && pathErr.Op == "sync" && pathErr.Err == syscall.ENOTSUP
return err if !isNotSupported {
_ = file.Close()
return err
}
} }
err = file.Close() err = file.Close()

View File

@@ -20,13 +20,16 @@ import (
"golang.org/x/net/context" "golang.org/x/net/context"
"golang.org/x/oauth2" "golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/drive/v3" "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi" "google.golang.org/api/googleapi"
"google.golang.org/api/option"
) )
var ( var (
GCDFileMimeType = "application/octet-stream" GCDFileMimeType = "application/octet-stream"
GCDDirectoryMimeType = "application/vnd.google-apps.folder" GCDDirectoryMimeType = "application/vnd.google-apps.folder"
GCDUserDrive = "root"
) )
type GCDStorage struct { type GCDStorage struct {
@@ -37,6 +40,7 @@ type GCDStorage struct {
idCacheLock sync.Mutex idCacheLock sync.Mutex
backoffs []int // desired backoff time in seconds for each thread backoffs []int // desired backoff time in seconds for each thread
attempts []int // number of failed attempts since last success for each thread attempts []int // number of failed attempts since last success for each thread
driveID string // the ID of the shared drive or 'root' (GCDUserDrive) if the user's drive
createDirectoryLock sync.Mutex createDirectoryLock sync.Mutex
isConnected bool isConnected bool
@@ -191,7 +195,11 @@ func (storage *GCDStorage) listFiles(threadIndex int, parentID string, listFiles
var err error var err error
for { for {
fileList, err = storage.service.Files.List().Q(query).Fields("nextPageToken", "files(name, mimeType, id, size)").PageToken(startToken).PageSize(maxCount).Do() q := storage.service.Files.List().Q(query).Fields("nextPageToken", "files(name, mimeType, id, size)").PageToken(startToken).PageSize(maxCount)
if storage.driveID != GCDUserDrive {
q = q.DriveId(storage.driveID).IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true)
}
fileList, err = q.Do()
if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry { if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry {
break break
} else if retry { } else if retry {
@@ -219,7 +227,11 @@ func (storage *GCDStorage) listByName(threadIndex int, parentID string, name str
for { for {
query := "name = '" + name + "' and '" + parentID + "' in parents and trashed = false " query := "name = '" + name + "' and '" + parentID + "' in parents and trashed = false "
fileList, err = storage.service.Files.List().Q(query).Fields("files(name, mimeType, id, size)").Do() q := storage.service.Files.List().Q(query).Fields("files(name, mimeType, id, size)")
if storage.driveID != GCDUserDrive {
q = q.DriveId(storage.driveID).IncludeItemsFromAllDrives(true).Corpora("drive").SupportsAllDrives(true)
}
fileList, err = q.Do()
if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry { if retry, e := storage.shouldRetry(threadIndex, err); e == nil && !retry {
break break
@@ -248,7 +260,7 @@ func (storage *GCDStorage) getIDFromPath(threadIndex int, filePath string, creat
return fileID, nil return fileID, nil
} }
fileID := "root" fileID := storage.driveID
if rootID, ok := storage.findPathID(""); ok { if rootID, ok := storage.findPathID(""); ok {
fileID = rootID fileID = rootID
@@ -303,37 +315,85 @@ func (storage *GCDStorage) getIDFromPath(threadIndex int, filePath string, creat
 }
 // CreateGCDStorage creates a GCD storage object.
-func CreateGCDStorage(tokenFile string, storagePath string, threads int) (storage *GCDStorage, err error) {
+func CreateGCDStorage(tokenFile string, driveID string, storagePath string, threads int) (storage *GCDStorage, err error) {
+ctx := context.Background()
 description, err := ioutil.ReadFile(tokenFile)
 if err != nil {
 return nil, err
 }
-gcdConfig := &GCDConfig{}
-if err := json.Unmarshal(description, gcdConfig); err != nil {
-return nil, err
-}
-oauth2Config := oauth2.Config{
-ClientID: gcdConfig.ClientID,
-ClientSecret: gcdConfig.ClientSecret,
-Endpoint: gcdConfig.Endpoint,
-}
-authClient := oauth2Config.Client(context.Background(), &gcdConfig.Token)
-service, err := drive.New(authClient)
+var object map[string]interface{}
+err = json.Unmarshal(description, &object)
 if err != nil {
 return nil, err
 }
+isServiceAccount := false
+if value, ok := object["type"]; ok {
+if authType, ok := value.(string); ok && authType == "service_account" {
+isServiceAccount = true
+}
+}
+var tokenSource oauth2.TokenSource
+if isServiceAccount {
+config, err := google.JWTConfigFromJSON(description, drive.DriveScope)
+if err != nil {
+return nil, err
+}
+tokenSource = config.TokenSource(ctx)
+} else {
+gcdConfig := &GCDConfig{}
+if err := json.Unmarshal(description, gcdConfig); err != nil {
+return nil, err
+}
+config := oauth2.Config{
+ClientID: gcdConfig.ClientID,
+ClientSecret: gcdConfig.ClientSecret,
+Endpoint: gcdConfig.Endpoint,
+}
+tokenSource = config.TokenSource(ctx, &gcdConfig.Token)
+}
+service, err := drive.NewService(ctx, option.WithTokenSource(tokenSource))
+if err != nil {
+return nil, err
+}
+if len(driveID) == 0 {
+driveID = GCDUserDrive
+} else {
+driveList, err := drive.NewTeamdrivesService(service).List().Do()
+if err != nil {
+return nil, fmt.Errorf("Failed to look up the drive id: %v", err)
+}
+found := false
+for _, teamDrive := range driveList.TeamDrives {
+if teamDrive.Id == driveID || teamDrive.Name == driveID {
+driveID = teamDrive.Id
+found = true
+break
+}
+}
+if !found {
+return nil, fmt.Errorf("%s is not the id or name of a shared drive", driveID)
+}
+}
 storage = &GCDStorage{
 service: service,
 numberOfThreads: threads,
 idCache: make(map[string]string),
 backoffs: make([]int, threads),
 attempts: make([]int, threads),
+driveID: driveID,
 }
 for i := range storage.backoffs {
@@ -341,6 +401,7 @@ func CreateGCDStorage(tokenFile string, storagePath string, threads int) (storag
 storage.attempts[i] = 0
 }
+storage.savePathID("", driveID)
 storagePathID, err := storage.getIDFromPath(0, storagePath, true)
 if err != nil {
 return nil, err
@@ -462,7 +523,7 @@ func (storage *GCDStorage) DeleteFile(threadIndex int, filePath string) (err err
 }
 for {
-err = storage.service.Files.Delete(fileID).Fields("id").Do()
+err = storage.service.Files.Delete(fileID).SupportsAllDrives(true).Fields("id").Do()
 if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
 storage.deletePathID(filePath)
 return nil
@@ -508,7 +569,7 @@ func (storage *GCDStorage) MoveFile(threadIndex int, from string, to string) (er
 }
 for {
-_, err = storage.service.Files.Update(fileID, nil).AddParents(toParentID).RemoveParents(fromParentID).Do()
+_, err = storage.service.Files.Update(fileID, nil).SupportsAllDrives(true).AddParents(toParentID).RemoveParents(fromParentID).Do()
 if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
 break
 } else if retry {
@@ -559,7 +620,7 @@ func (storage *GCDStorage) CreateDirectory(threadIndex int, dir string) (err err
 Parents: []string{parentID},
 }
-file, err = storage.service.Files.Create(file).Fields("id").Do()
+file, err = storage.service.Files.Create(file).SupportsAllDrives(true).Fields("id").Do()
 if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
 break
 } else {
@@ -630,7 +691,7 @@ func (storage *GCDStorage) DownloadFile(threadIndex int, filePath string, chunk
 for {
 // AcknowledgeAbuse(true) lets the download proceed even if GCD thinks that it contains malware.
 // TODO: Should this prompt the user or log a warning?
-req := storage.service.Files.Get(fileID)
+req := storage.service.Files.Get(fileID).SupportsAllDrives(true)
 if e, ok := err.(*googleapi.Error); ok {
 if strings.Contains(err.Error(), "cannotDownloadAbusiveFile") || len(e.Errors) > 0 && e.Errors[0].Reason == "cannotDownloadAbusiveFile" {
 LOG_WARN("GCD_STORAGE", "%s is marked as abusive, will download anyway.", filePath)
@@ -676,7 +737,7 @@ func (storage *GCDStorage) UploadFile(threadIndex int, filePath string, content
 for {
 reader := CreateRateLimitedReader(content, storage.UploadRateLimit/storage.numberOfThreads)
-_, err = storage.service.Files.Create(file).Media(reader).Fields("id").Do()
+_, err = storage.service.Files.Create(file).SupportsAllDrives(true).Media(reader).Fields("id").Do()
 if retry, err := storage.shouldRetry(threadIndex, err); err == nil && !retry {
 break
 } else if retry {
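The service-account support above turns on a single property of the token file: a Google service account key has a top-level "type" field set to "service_account", while the OAuth token downloaded from duplicacy.com does not. A minimal standalone sketch of that detection (the token path is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
)

func main() {
	description, err := ioutil.ReadFile("gcd-token.json") // hypothetical token file
	if err != nil {
		panic(err)
	}
	var object map[string]interface{}
	if err := json.Unmarshal(description, &object); err != nil {
		panic(err)
	}
	// A service account key carries "type": "service_account"; an OAuth token does not.
	isServiceAccount := false
	if value, ok := object["type"]; ok {
		if authType, ok := value.(string); ok && authType == "service_account" {
			isServiceAccount = true
		}
	}
	fmt.Println("service account:", isServiceAccount)
}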

View File

@@ -7,10 +7,12 @@ package duplicacy
 import (
 "fmt"
 "os"
+"log"
 "runtime/debug"
 "sync"
 "testing"
 "time"
+"regexp"
 )
 const (
@@ -43,6 +45,13 @@ func setTestingT(t *testing.T) {
 testingT = t
 }
+// Contains the ids of logs that won't be displayed
+var suppressedLogs map[string]bool = map[string]bool{}
+func SuppressLog(id string) {
+suppressedLogs[id] = true
+}
 func getLevelName(level int) string {
 switch level {
 case DEBUG:
@@ -143,6 +152,12 @@ func logf(level int, logID string, format string, v ...interface{}) {
 defer logMutex.Unlock()
 if level >= loggingLevel {
+if level <= ERROR && len(suppressedLogs) > 0 {
+if _, found := suppressedLogs[logID]; found {
+return
+}
+}
 if printLogHeader {
 fmt.Printf("%s %s %s %s\n",
 now.Format("2006-01-02 15:04:05.000"), getLevelName(level), logID, message)
@@ -161,6 +176,32 @@ func logf(level int, logID string, format string, v ...interface{}) {
 }
 }
+// Set up logging for libraries that Duplicacy depends on. They can call 'log.Printf("[ID] message")'
+// to produce logs in Duplicacy's format
+type Logger struct {
+formatRegex *regexp.Regexp
+}
+func (logger *Logger) Write(line []byte) (n int, err error) {
+n = len(line)
+for len(line) > 0 && line[len(line) - 1] == '\n' {
+line = line[:len(line) - 1]
+}
+matched := logger.formatRegex.FindStringSubmatch(string(line))
+if matched != nil {
+LOG_INFO(matched[1], "%s", matched[2])
+} else {
+LOG_INFO("LOG_DEFAULT", "%s", line)
+}
+return
+}
+func init() {
+log.SetFlags(0)
+log.SetOutput(&Logger{ formatRegex: regexp.MustCompile(`^\[(.+)\]\s*(.+)`) })
+}
 const (
 duplicacyExitCode = 100
 otherExitCode = 101
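The Logger type above lets any dependency that writes through the standard log package ('log.Printf("[ID] message")') surface in Duplicacy's own log format, with the bracketed prefix becoming the log id. A standalone sketch of the same parsing, with LOG_INFO swapped for fmt.Printf so it runs outside Duplicacy:

package main

import (
	"fmt"
	"log"
	"regexp"
)

type sketchLogger struct {
	formatRegex *regexp.Regexp
}

func (l *sketchLogger) Write(line []byte) (n int, err error) {
	n = len(line)
	// strip the trailing newline added by log.Printf
	for len(line) > 0 && line[len(line)-1] == '\n' {
		line = line[:len(line)-1]
	}
	if matched := l.formatRegex.FindStringSubmatch(string(line)); matched != nil {
		fmt.Printf("INFO %s %s\n", matched[1], matched[2]) // bracketed prefix becomes the log id
	} else {
		fmt.Printf("INFO LOG_DEFAULT %s\n", line) // fallback id for unprefixed lines
	}
	return
}

func main() {
	log.SetFlags(0)
	log.SetOutput(&sketchLogger{formatRegex: regexp.MustCompile(`^\[(.+)\]\s*(.+)`)})
	log.Printf("[GCD_STORAGE] chunk uploaded") // -> INFO GCD_STORAGE chunk uploaded
	log.Printf("plain message with no id")     // -> INFO LOG_DEFAULT plain message with no id
}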

View File

@@ -15,6 +15,7 @@ import (
"strings" "strings"
"sync" "sync"
"time" "time"
"path/filepath"
"golang.org/x/oauth2" "golang.org/x/oauth2"
) )
@@ -32,9 +33,6 @@ type OneDriveErrorResponse struct {
Error OneDriveError `json:"error"` Error OneDriveError `json:"error"`
} }
var OneDriveRefreshTokenURL = "https://duplicacy.com/one_refresh"
var OneDriveAPIURL = "https://api.onedrive.com/v1.0"
type OneDriveClient struct { type OneDriveClient struct {
HTTPClient *http.Client HTTPClient *http.Client
@@ -44,9 +42,13 @@ type OneDriveClient struct {
IsConnected bool IsConnected bool
TestMode bool TestMode bool
IsBusiness bool
RefreshTokenURL string
APIURL string
} }
func NewOneDriveClient(tokenFile string) (*OneDriveClient, error) { func NewOneDriveClient(tokenFile string, isBusiness bool) (*OneDriveClient, error) {
description, err := ioutil.ReadFile(tokenFile) description, err := ioutil.ReadFile(tokenFile)
if err != nil { if err != nil {
@@ -63,6 +65,15 @@ func NewOneDriveClient(tokenFile string) (*OneDriveClient, error) {
TokenFile: tokenFile, TokenFile: tokenFile,
Token: token, Token: token,
TokenLock: &sync.Mutex{}, TokenLock: &sync.Mutex{},
IsBusiness: isBusiness,
}
if isBusiness {
client.RefreshTokenURL = "https://duplicacy.com/odb_refresh"
client.APIURL = "https://graph.microsoft.com/v1.0/me"
} else {
client.RefreshTokenURL = "https://duplicacy.com/one_refresh"
client.APIURL = "https://api.onedrive.com/v1.0"
} }
client.RefreshToken(false) client.RefreshToken(false)
@@ -106,9 +117,10 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
if reader, ok := inputReader.(*RateLimitedReader); ok { if reader, ok := inputReader.(*RateLimitedReader); ok {
request.ContentLength = reader.Length() request.ContentLength = reader.Length()
request.Header.Set("Content-Range", fmt.Sprintf("bytes 0-%d/%d", reader.Length() - 1, reader.Length()))
} }
if url != OneDriveRefreshTokenURL { if url != client.RefreshTokenURL {
client.TokenLock.Lock() client.TokenLock.Lock()
request.Header.Set("Authorization", "Bearer "+client.Token.AccessToken) request.Header.Set("Authorization", "Bearer "+client.Token.AccessToken)
client.TokenLock.Unlock() client.TokenLock.Unlock()
@@ -152,7 +164,7 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
if response.StatusCode == 401 { if response.StatusCode == 401 {
if url == OneDriveRefreshTokenURL { if url == client.RefreshTokenURL {
return nil, 0, OneDriveError{Status: response.StatusCode, Message: "Authorization error when refreshing token"} return nil, 0, OneDriveError{Status: response.StatusCode, Message: "Authorization error when refreshing token"}
} }
@@ -161,6 +173,8 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
return nil, 0, err return nil, 0, err
} }
continue continue
} else if response.StatusCode == 409 {
return nil, 0, OneDriveError{Status: response.StatusCode, Message: "Conflict"}
} else if response.StatusCode > 401 && response.StatusCode != 404 { } else if response.StatusCode > 401 && response.StatusCode != 404 {
retryAfter := time.Duration(rand.Float32() * 1000.0 * float32(backoff)) retryAfter := time.Duration(rand.Float32() * 1000.0 * float32(backoff))
LOG_INFO("ONEDRIVE_RETRY", "Response code: %d; retry after %d milliseconds", response.StatusCode, retryAfter) LOG_INFO("ONEDRIVE_RETRY", "Response code: %d; retry after %d milliseconds", response.StatusCode, retryAfter)
@@ -188,7 +202,7 @@ func (client *OneDriveClient) RefreshToken(force bool) (err error) {
return nil return nil
} }
readCloser, _, err := client.call(OneDriveRefreshTokenURL, "POST", client.Token, "") readCloser, _, err := client.call(client.RefreshTokenURL, "POST", client.Token, "")
if err != nil { if err != nil {
return fmt.Errorf("failed to refresh the access token: %v", err) return fmt.Errorf("failed to refresh the access token: %v", err)
} }
@@ -228,9 +242,9 @@ func (client *OneDriveClient) ListEntries(path string) ([]OneDriveEntry, error)
entries := []OneDriveEntry{} entries := []OneDriveEntry{}
url := OneDriveAPIURL + "/drive/root:/" + path + ":/children" url := client.APIURL + "/drive/root:/" + path + ":/children"
if path == "" { if path == "" {
url = OneDriveAPIURL + "/drive/root/children" url = client.APIURL + "/drive/root/children"
} }
if client.TestMode { if client.TestMode {
url += "?top=8" url += "?top=8"
@@ -266,7 +280,7 @@ func (client *OneDriveClient) ListEntries(path string) ([]OneDriveEntry, error)
func (client *OneDriveClient) GetFileInfo(path string) (string, bool, int64, error) { func (client *OneDriveClient) GetFileInfo(path string) (string, bool, int64, error) {
url := OneDriveAPIURL + "/drive/root:/" + path url := client.APIURL + "/drive/root:/" + path
url += "?select=id,name,size,folder" url += "?select=id,name,size,folder"
readCloser, _, err := client.call(url, "GET", 0, "") readCloser, _, err := client.call(url, "GET", 0, "")
@@ -291,28 +305,95 @@ func (client *OneDriveClient) GetFileInfo(path string) (string, bool, int64, err
func (client *OneDriveClient) DownloadFile(path string) (io.ReadCloser, int64, error) { func (client *OneDriveClient) DownloadFile(path string) (io.ReadCloser, int64, error) {
url := OneDriveAPIURL + "/drive/items/root:/" + path + ":/content" url := client.APIURL + "/drive/items/root:/" + path + ":/content"
return client.call(url, "GET", 0, "") return client.call(url, "GET", 0, "")
} }
func (client *OneDriveClient) UploadFile(path string, content []byte, rateLimit int) (err error) { func (client *OneDriveClient) UploadFile(path string, content []byte, rateLimit int) (err error) {
url := OneDriveAPIURL + "/drive/root:/" + path + ":/content" // Upload file using the simple method; this is only possible for OneDrive Personal or if the file
// is smaller than 4MB for OneDrive Business
if !client.IsBusiness || len(content) < 4 * 1024 * 1024 || (client.TestMode && rand.Int() % 2 == 0) {
url := client.APIURL + "/drive/root:/" + path + ":/content"
readCloser, _, err := client.call(url, "PUT", CreateRateLimitedReader(content, rateLimit), "application/octet-stream") readCloser, _, err := client.call(url, "PUT", CreateRateLimitedReader(content, rateLimit), "application/octet-stream")
if err != nil {
return err
}
readCloser.Close()
return nil
}
// For large files, create an upload session first
uploadURL, err := client.CreateUploadSession(path)
if err != nil { if err != nil {
return err return err
} }
return client.UploadFileSession(uploadURL, content, rateLimit)
}
func (client *OneDriveClient) CreateUploadSession(path string) (uploadURL string, err error) {
type CreateUploadSessionItem struct {
ConflictBehavior string `json:"@microsoft.graph.conflictBehavior"`
Name string `json:"name"`
}
input := map[string]interface{} {
"item": CreateUploadSessionItem {
ConflictBehavior: "replace",
Name: filepath.Base(path),
},
}
readCloser, _, err := client.call(client.APIURL + "/drive/root:/" + path + ":/createUploadSession", "POST", input, "application/json")
if err != nil {
return "", err
}
type CreateUploadSessionOutput struct {
UploadURL string `json:"uploadUrl"`
}
output := &CreateUploadSessionOutput{}
if err = json.NewDecoder(readCloser).Decode(&output); err != nil {
return "", err
}
readCloser.Close()
return output.UploadURL, nil
}
func (client *OneDriveClient) UploadFileSession(uploadURL string, content []byte, rateLimit int) (err error) {
readCloser, _, err := client.call(uploadURL, "PUT", CreateRateLimitedReader(content, rateLimit), "")
if err != nil {
return err
}
type UploadFileSessionOutput struct {
Size int `json:"size"`
}
output := &UploadFileSessionOutput{}
if err = json.NewDecoder(readCloser).Decode(&output); err != nil {
return fmt.Errorf("Failed to complete the file upload session: %v", err)
}
if output.Size != len(content) {
return fmt.Errorf("Uploaded %d bytes out of %d bytes", output.Size, len(content))
}
readCloser.Close() readCloser.Close()
return nil return nil
} }
func (client *OneDriveClient) DeleteFile(path string) error { func (client *OneDriveClient) DeleteFile(path string) error {
url := OneDriveAPIURL + "/drive/root:/" + path url := client.APIURL + "/drive/root:/" + path
readCloser, _, err := client.call(url, "DELETE", 0, "") readCloser, _, err := client.call(url, "DELETE", 0, "")
if err != nil { if err != nil {
@@ -325,7 +406,7 @@ func (client *OneDriveClient) DeleteFile(path string) error {
func (client *OneDriveClient) MoveFile(path string, parent string) error { func (client *OneDriveClient) MoveFile(path string, parent string) error {
url := OneDriveAPIURL + "/drive/root:/" + path url := client.APIURL + "/drive/root:/" + path
parentReference := make(map[string]string) parentReference := make(map[string]string)
parentReference["path"] = "/drive/root:/" + parent parentReference["path"] = "/drive/root:/" + parent
@@ -335,6 +416,20 @@ func (client *OneDriveClient) MoveFile(path string, parent string) error {
readCloser, _, err := client.call(url, "PATCH", parameters, "application/json") readCloser, _, err := client.call(url, "PATCH", parameters, "application/json")
if err != nil { if err != nil {
if e, ok := err.(OneDriveError); ok && e.Status == 400 {
// The destination directory doesn't exist; trying to create it...
dir := filepath.Dir(parent)
if dir == "." {
dir = ""
}
client.CreateDirectory(dir, filepath.Base(parent))
readCloser, _, err = client.call(url, "PATCH", parameters, "application/json")
if err != nil {
return nil
}
}
return err return err
} }
@@ -344,24 +439,29 @@ func (client *OneDriveClient) MoveFile(path string, parent string) error {
func (client *OneDriveClient) CreateDirectory(path string, name string) error { func (client *OneDriveClient) CreateDirectory(path string, name string) error {
url := OneDriveAPIURL + "/root/children" url := client.APIURL + "/root/children"
if path != "" { if path != "" {
parentID, isDir, _, err := client.GetFileInfo(path) pathID, isDir, _, err := client.GetFileInfo(path)
if err != nil { if err != nil {
return err return err
} }
if parentID == "" { if pathID == "" {
return fmt.Errorf("The path '%s' does not exist", path) dir := filepath.Dir(path)
if dir != "." {
// The parent directory doesn't exist; trying to create it...
client.CreateDirectory(dir, filepath.Base(path))
isDir = true
}
} }
if !isDir { if !isDir {
return fmt.Errorf("The path '%s' is not a directory", path) return fmt.Errorf("The path '%s' is not a directory", path)
} }
url = OneDriveAPIURL + "/drive/items/" + parentID + "/children" url = client.APIURL + "/drive/root:/" + path + ":/children"
} }
parameters := make(map[string]interface{}) parameters := make(map[string]interface{})
@@ -370,6 +470,11 @@ func (client *OneDriveClient) CreateDirectory(path string, name string) error {
readCloser, _, err := client.call(url, "POST", parameters, "application/json") readCloser, _, err := client.call(url, "POST", parameters, "application/json")
if err != nil { if err != nil {
if e, ok := err.(OneDriveError); ok && e.Status == 409 {
// This error usually means the directory already exists
LOG_TRACE("ONEDRIVE_MKDIR", "The directory '%s/%s' already exists", path, name)
return nil
}
return err return err
} }
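For OneDrive for Business the simple PUT upload is limited to files under 4MB, so larger chunks go through an upload session created by the createUploadSession call above. A standalone sketch of the request body that call builds; the payload is only printed here rather than POSTed to the Graph endpoint, and the chunk path is hypothetical:

package main

import (
	"encoding/json"
	"fmt"
	"path/filepath"
)

type createUploadSessionItem struct {
	ConflictBehavior string `json:"@microsoft.graph.conflictBehavior"`
	Name             string `json:"name"`
}

func main() {
	path := "chunks/aa/bbccdd" // hypothetical chunk path
	// "replace" lets a re-uploaded chunk overwrite an existing file of the same name.
	input := map[string]interface{}{
		"item": createUploadSessionItem{
			ConflictBehavior: "replace",
			Name:             filepath.Base(path),
		},
	}
	body, _ := json.MarshalIndent(input, "", "  ")
	fmt.Println(string(body))
}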

View File

@@ -17,7 +17,7 @@ import (
 func TestOneDriveClient(t *testing.T) {
-oneDriveClient, err := NewOneDriveClient("one-token.json")
+oneDriveClient, err := NewOneDriveClient("one-token.json", false)
 if err != nil {
 t.Errorf("Failed to create the OneDrive client: %v", err)
 return

View File

@@ -19,13 +19,13 @@ type OneDriveStorage struct {
 }
 // CreateOneDriveStorage creates an OneDrive storage object.
-func CreateOneDriveStorage(tokenFile string, storagePath string, threads int) (storage *OneDriveStorage, err error) {
+func CreateOneDriveStorage(tokenFile string, isBusiness bool, storagePath string, threads int) (storage *OneDriveStorage, err error) {
 for len(storagePath) > 0 && storagePath[len(storagePath)-1] == '/' {
 storagePath = storagePath[:len(storagePath)-1]
 }
-client, err := NewOneDriveClient(tokenFile)
+client, err := NewOneDriveClient(tokenFile, isBusiness)
 if err != nil {
 return nil, err
 }
@@ -80,6 +80,7 @@ func (storage *OneDriveStorage) convertFilePath(filePath string) string {
 // ListFiles return the list of files and subdirectories under 'dir' (non-recursively)
 func (storage *OneDriveStorage) ListFiles(threadIndex int, dir string) ([]string, []int64, error) {
 for len(dir) > 0 && dir[len(dir)-1] == '/' {
 dir = dir[:len(dir)-1]
 }

View File

@@ -91,7 +91,7 @@ func CreateSFTPStorage(server string, port int, username string, storageDir stri
 storageDir: storageDir,
 minimumNesting: minimumNesting,
 numberOfThreads: threads,
-numberOfTries: 6,
+numberOfTries: 8,
 serverAddress: serverAddress,
 sftpConfig: sftpConfig,
 }
@@ -129,22 +129,19 @@ func (storage *SFTPStorage) retry(f func () error) error {
 delay *= 2
 storage.clientLock.Lock()
-if storage.client != nil {
-storage.client.Close()
-storage.client = nil
-}
 connection, err := ssh.Dial("tcp", storage.serverAddress, storage.sftpConfig)
 if err != nil {
+LOG_WARN("SFT_RECONNECT", "Failed to connect to %s: %v; retrying", storage.serverAddress, err)
 storage.clientLock.Unlock()
-return err
+continue
 }
 client, err := sftp.NewClient(connection)
 if err != nil {
+LOG_WARN("SFT_RECONNECT", "Failed to create a new SFTP client to %s: %v; retrying", storage.serverAddress, err)
 connection.Close()
 storage.clientLock.Unlock()
-return err
+continue
 }
 storage.client = client
 storage.clientLock.Unlock()
@@ -275,36 +272,19 @@ func (storage *SFTPStorage) UploadFile(threadIndex int, filePath string, content
 fullPath := path.Join(storage.storageDir, filePath)
 dirs := strings.Split(filePath, "/")
-if len(dirs) > 1 {
-fullDir := path.Dir(fullPath)
-err = storage.retry(func() error {
-_, err := storage.getSFTPClient().Stat(fullDir)
-return err
-})
-if err != nil {
-// The error may be caused by a non-existent fullDir, or a broken connection. In either case,
-// we just assume it is the former because there isn't a way to tell which is the case.
-for i := range dirs[1 : len(dirs)-1] {
-subDir := path.Join(storage.storageDir, path.Join(dirs[0:i+2]...))
-// We don't check the error; just keep going blindly but always store the last err
-err = storage.getSFTPClient().Mkdir(subDir)
-}
-// If there is an error creating the dirs, we check fullDir one more time, because another thread
-// may happen to create the same fullDir ahead of this thread
-if err != nil {
-err = storage.retry(func() error {
-_, err := storage.getSFTPClient().Stat(fullDir)
-return err
-})
-if err != nil {
-return err
-}
-}
-}
-}
-return storage.retry(func() error {
+fullDir := path.Dir(fullPath)
+return storage.retry(func() error {
+if len(dirs) > 1 {
+_, err := storage.getSFTPClient().Stat(fullDir)
+if os.IsNotExist(err) {
+for i := range dirs[1 : len(dirs)-1] {
+subDir := path.Join(storage.storageDir, path.Join(dirs[0:i+2]...))
+// We don't check the error; just keep going blindly
+storage.getSFTPClient().Mkdir(subDir)
+}
+}
+}
 letters := "abcdefghijklmnopqrstuvwxyz"
 suffix := make([]byte, 8)
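The reworked SFTP retry above keeps doubling the delay, re-establishes the connection after each failed attempt, and no longer aborts when the reconnect itself fails. A standalone sketch of that loop shape, assuming a stand-in reconnect function in place of ssh.Dial and sftp.NewClient, with the delays shortened so the sketch finishes quickly:

package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(numberOfTries int, f func() error, reconnect func() error) error {
	delay := 10 * time.Millisecond
	for i := 0; i < numberOfTries; i++ {
		err := f()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); retrying in %v\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff
		if err := reconnect(); err != nil {
			fmt.Printf("reconnect failed (%v); retrying\n", err)
			continue // keep going instead of returning the reconnect error
		}
	}
	return errors.New("all attempts failed")
}

func main() {
	attempts := 0
	err := retry(8, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("temporary failure")
		}
		return nil
	}, func() error { return nil })
	fmt.Println("result:", err)
}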

View File

@@ -91,6 +91,10 @@ func CreateSnapshotFromDirectory(id string, top string, nobackupFile string, fil
 snapshot.Files = append(snapshot.Files, directory)
 subdirectories, skipped, err := ListEntries(top, directory.Path, &snapshot.Files, patterns, nobackupFile, snapshot.discardAttributes)
 if err != nil {
+if directory.Path == "" {
+LOG_ERROR("LIST_FAILURE", "Failed to list the repository root: %v", err)
+return nil, nil, nil, err
+}
 LOG_WARN("LIST_FAILURE", "Failed to list subdirectory: %v", err)
 skippedDirectories = append(skippedDirectories, directory.Path)
 continue
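With the change above, a repository root that cannot be listed now fails the whole backup instead of producing an empty snapshot, while unreadable subdirectories are still merely skipped with a warning. A standalone sketch of that distinction, with hypothetical paths:

package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

func listEntries(top string, relativePath string) error {
	_, err := ioutil.ReadDir(filepath.Join(top, relativePath))
	if err == nil {
		return nil
	}
	if relativePath == "" {
		// the repository root is unreadable: fail the backup instead of recording an empty snapshot
		return fmt.Errorf("failed to list the repository root: %v", err)
	}
	// a subdirectory is unreadable: warn, skip it, and let the backup continue
	fmt.Printf("warning: failed to list subdirectory %q: %v; skipping\n", relativePath, err)
	return nil
}

func main() {
	fmt.Println(listEntries("/path/to/repository", ""))       // fatal if unreadable
	fmt.Println(listEntries("/path/to/repository", "subdir")) // only a warning
}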

View File

@@ -653,6 +653,51 @@ func (manager *SnapshotManager) GetSnapshotChunks(snapshot *Snapshot, keepChunkH
 return chunks
 }
+// GetSnapshotChunkHashes has an option to retrieve chunk hashes in addition to chunk ids.
+func (manager *SnapshotManager) GetSnapshotChunkHashes(snapshot *Snapshot, chunkHashes *map[string]bool, chunkIDs map[string]bool) {
+for _, chunkHash := range snapshot.FileSequence {
+if chunkHashes != nil {
+(*chunkHashes)[chunkHash] = true
+}
+chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
+}
+for _, chunkHash := range snapshot.ChunkSequence {
+if chunkHashes != nil {
+(*chunkHashes)[chunkHash] = true
+}
+chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
+}
+for _, chunkHash := range snapshot.LengthSequence {
+if chunkHashes != nil {
+(*chunkHashes)[chunkHash] = true
+}
+chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
+}
+if len(snapshot.ChunkHashes) == 0 {
+description := manager.DownloadSequence(snapshot.ChunkSequence)
+err := snapshot.LoadChunks(description)
+if err != nil {
+LOG_ERROR("SNAPSHOT_CHUNK", "Failed to load chunks for snapshot %s at revision %d: %v",
+snapshot.ID, snapshot.Revision, err)
+return
+}
+}
+for _, chunkHash := range snapshot.ChunkHashes {
+if chunkHashes != nil {
+(*chunkHashes)[chunkHash] = true
+}
+chunkIDs[manager.config.GetChunkIDFromHash(chunkHash)] = true
+}
+snapshot.ClearChunks()
+}
 // ListSnapshots shows the information about a snapshot.
 func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList []int, tag string,
 showFiles bool, showChunks bool) int {
@@ -757,7 +802,9 @@ func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList
 // ListSnapshots shows the information about a snapshot.
 func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToCheck []int, tag string, showStatistics bool, showTabular bool,
-checkFiles bool, searchFossils bool, resurrect bool) bool {
+checkFiles bool, checkChunks, searchFossils bool, resurrect bool, threads int) bool {
+manager.chunkDownloader = CreateChunkDownloader(manager.config, manager.storage, manager.snapshotCache, false, threads)
 LOG_DEBUG("LIST_PARAMETERS", "id: %s, revisions: %v, tag: %s, showStatistics: %t, showTabular: %t, checkFiles: %t, searchFossils: %t, resurrect: %t",
 snapshotID, revisionsToCheck, tag, showStatistics, showTabular, checkFiles, searchFossils, resurrect)
@@ -839,6 +886,12 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
 }
 LOG_INFO("SNAPSHOT_CHECK", "Total chunk size is %s in %d chunks", PrettyNumber(totalChunkSize), len(chunkSizeMap))
+var allChunkHashes *map[string]bool
+if checkChunks && !checkFiles {
+m := make(map[string]bool)
+allChunkHashes = &m
+}
 for snapshotID = range snapshotMap {
 for _, snapshot := range snapshotMap[snapshotID] {
@@ -850,9 +903,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
 }
 chunks := make(map[string]bool)
-for _, chunkID := range manager.GetSnapshotChunks(snapshot, false) {
-chunks[chunkID] = true
-}
+manager.GetSnapshotChunkHashes(snapshot, allChunkHashes, chunks)
 missingChunks := 0
 for chunkID := range chunks {
@@ -946,6 +997,14 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
 manager.ShowStatistics(snapshotMap, chunkSizeMap, chunkUniqueMap, chunkSnapshotMap)
 }
+if checkChunks && !checkFiles {
+LOG_INFO("SNAPSHOT_VERIFY", "Verifying %d chunks", len(*allChunkHashes))
+for chunkHash := range *allChunkHashes {
+manager.chunkDownloader.AddChunk(chunkHash)
+}
+manager.chunkDownloader.WaitForCompletion()
+LOG_INFO("SNAPSHOT_VERIFY", "All %d chunks have been successfully verified", len(*allChunkHashes))
+}
 return true
 }
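The -chunks verification above merges the chunk hashes of every selected snapshot into one set before downloading, so a chunk referenced by several snapshots is fetched and verified only once. A standalone sketch of that bookkeeping with faked snapshot data:

package main

import "fmt"

type snapshot struct {
	FileSequence, ChunkSequence, LengthSequence, ChunkHashes []string
}

// collect merges every chunk hash referenced by the snapshots into one set,
// deduplicating chunks that are shared between snapshots.
func collect(snapshots []snapshot) map[string]bool {
	all := make(map[string]bool)
	for _, s := range snapshots {
		for _, seq := range [][]string{s.FileSequence, s.ChunkSequence, s.LengthSequence, s.ChunkHashes} {
			for _, hash := range seq {
				all[hash] = true
			}
		}
	}
	return all
}

func main() {
	a := snapshot{ChunkHashes: []string{"c1", "c2"}, ChunkSequence: []string{"m1"}}
	b := snapshot{ChunkHashes: []string{"c2", "c3"}, ChunkSequence: []string{"m1"}}
	all := collect([]snapshot{a, b})
	fmt.Printf("verifying %d chunks\n", len(all)) // 4, not 6
}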

View File

@@ -620,7 +620,7 @@ func TestPruneNewSnapshots(t *testing.T) {
 // Now chunkHash1 wil be resurrected
 snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
 checkTestSnapshots(snapshotManager, 4, 0)
-snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3}, "", false, false, false, false, false)
+snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3}, "", false, false, false, false, false, false, 1)
 }
 // A fossil collection left by an aborted prune should be ignored if any supposedly deleted snapshot exists
@@ -669,7 +669,7 @@ func TestPruneGhostSnapshots(t *testing.T) {
 // Run the prune again but the fossil collection should be igored, since revision 1 still exists
 snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
 checkTestSnapshots(snapshotManager, 3, 2)
-snapshotManager.CheckSnapshots("vm1@host1", []int{1, 2, 3}, "", false, false, false, true /*searchFossils*/, false)
+snapshotManager.CheckSnapshots("vm1@host1", []int{1, 2, 3}, "", false, false, false, false, true /*searchFossils*/, false, 1)
 // Prune snapshot 1 again
 snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
@@ -683,5 +683,5 @@ func TestPruneGhostSnapshots(t *testing.T) {
 // Run the prune again and this time the fossil collection will be processed and the fossils removed
 snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false, 1)
 checkTestSnapshots(snapshotManager, 3, 0)
-snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3, 4}, "", false, false, false, false, false)
+snapshotManager.CheckSnapshots("vm1@host1", []int{2, 3, 4}, "", false, false, false, false, false, false, 1)
 }

View File

@@ -336,7 +336,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 keyFile = GetPassword(preference, "ssh_key_file", "Enter the path of the private key file:",
 true, resetPassword)
-var key ssh.Signer
+var keySigner ssh.Signer
 var err error
 if keyFile == "" {
@@ -347,7 +347,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 if err != nil {
 LOG_INFO("SSH_PUBLICKEY", "Failed to read the private key file: %v", err)
 } else {
-key, err = ssh.ParsePrivateKey(content)
+keySigner, err = ssh.ParsePrivateKey(content)
 if err != nil {
 if strings.Contains(err.Error(), "cannot decode encrypted private keys") {
 LOG_TRACE("SSH_PUBLICKEY", "The private key file is encrypted")
@@ -355,7 +355,7 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 if len(passphrase) == 0 {
 LOG_INFO("SSH_PUBLICKEY", "No passphrase to descrypt the private key file %s", keyFile)
 } else {
-key, err = ssh.ParsePrivateKeyWithPassphrase(content, []byte(passphrase))
+keySigner, err = ssh.ParsePrivateKeyWithPassphrase(content, []byte(passphrase))
 if err != nil {
 LOG_INFO("SSH_PUBLICKEY", "Failed to parse the encrypted private key file %s: %v", keyFile, err)
 }
@@ -364,11 +364,35 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 LOG_INFO("SSH_PUBLICKEY", "Failed to parse the private key file %s: %v", keyFile, err)
 }
 }
+if keySigner != nil {
+certFile := keyFile + "-cert.pub"
+if stat, err := os.Stat(certFile); err == nil && !stat.IsDir() {
+LOG_DEBUG("SSH_CERTIFICATE", "Attempting to use ssh certificate from file %s", certFile)
+var content []byte
+content, err = ioutil.ReadFile(certFile)
+if err != nil {
+LOG_INFO("SSH_CERTIFICATE", "Failed to read ssh certificate file %s: %v", certFile, err)
+} else {
+pubKey, _, _, _, err := ssh.ParseAuthorizedKey(content)
+if err != nil {
+LOG_INFO("SSH_CERTIFICATE", "Failed parse ssh certificate file %s: %v", certFile, err)
+} else {
+certSigner, err := ssh.NewCertSigner(pubKey.(*ssh.Certificate), keySigner)
+if err != nil {
+LOG_INFO("SSH_CERTIFICATE", "Failed to create certificate signer: %v", err)
+} else {
+keySigner = certSigner
+}
+}
+}
+}
+}
 }
 }
-if key != nil {
-signers = append(signers, key)
+if keySigner != nil {
+signers = append(signers, keySigner)
 }
 if len(signers) > 0 {
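The certificate handling above looks for '<keyfile>-cert.pub' next to the private key and, if present, wraps the parsed certificate around the key signer so the signed certificate is offered during authentication. A standalone sketch of the same flow using golang.org/x/crypto/ssh, with hypothetical file names:

package main

import (
	"fmt"
	"io/ioutil"

	"golang.org/x/crypto/ssh"
)

func loadSigner(keyFile string) (ssh.Signer, error) {
	keyBytes, err := ioutil.ReadFile(keyFile)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	certBytes, err := ioutil.ReadFile(keyFile + "-cert.pub")
	if err != nil {
		return signer, nil // no certificate next to the key; use the plain key signer
	}
	pubKey, _, _, _, err := ssh.ParseAuthorizedKey(certBytes)
	if err != nil {
		return nil, err
	}
	cert, ok := pubKey.(*ssh.Certificate)
	if !ok {
		return nil, fmt.Errorf("%s-cert.pub is not a certificate", keyFile)
	}
	// The cert signer presents the signed certificate while signing with the private key.
	return ssh.NewCertSigner(cert, signer)
}

func main() {
	signer, err := loadSigner("mykey") // hypothetical key file
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("public key type:", signer.PublicKey().Type())
}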
@@ -531,7 +555,25 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 accountID := GetPassword(preference, "b2_id", "Enter Backblaze account or application id:", true, resetPassword)
 applicationKey := GetPassword(preference, "b2_key", "Enter corresponding Backblaze application key:", true, resetPassword)
-b2Storage, err := CreateB2Storage(accountID, applicationKey, bucket, storageDir, threads)
+b2Storage, err := CreateB2Storage(accountID, applicationKey, "", bucket, storageDir, threads)
+if err != nil {
+LOG_ERROR("STORAGE_CREATE", "Failed to load the Backblaze B2 storage at %s: %v", storageURL, err)
+return nil
+}
+SavePassword(preference, "b2_id", accountID)
+SavePassword(preference, "b2_key", applicationKey)
+return b2Storage
+} else if matched[1] == "b2-custom" {
+b2customUrlRegex := regexp.MustCompile(`^b2-custom://([^/]+)/([^/]+)(/(.+))?`)
+matched := b2customUrlRegex.FindStringSubmatch(storageURL)
+downloadURL := "https://" + matched[1]
+bucket := matched[2]
+storageDir := matched[4]
+accountID := GetPassword(preference, "b2_id", "Enter Backblaze account or application id:", true, resetPassword)
+applicationKey := GetPassword(preference, "b2_key", "Enter corresponding Backblaze application key:", true, resetPassword)
+b2Storage, err := CreateB2Storage(accountID, applicationKey, downloadURL, bucket, storageDir, threads)
 if err != nil {
 LOG_ERROR("STORAGE_CREATE", "Failed to load the Backblaze B2 storage at %s: %v", storageURL, err)
 return nil
@@ -582,21 +624,30 @@ func CreateStorage(preference Preference, resetPassword bool, threads int) (stor
 SavePassword(preference, "gcs_token", tokenFile)
 return gcsStorage
 } else if matched[1] == "gcd" {
+// Handle writing directly to the root of the drive
+// For gcd://driveid@/, driveid@ is match[3] not match[2]
+if matched[2] == "" && strings.HasSuffix(matched[3], "@") {
+matched[2], matched[3] = matched[3], matched[2]
+}
+driveID := matched[2]
+if driveID != "" {
+driveID = driveID[:len(driveID)-1]
+}
 storagePath := matched[3] + matched[4]
 prompt := fmt.Sprintf("Enter the path of the Google Drive token file (downloadable from https://duplicacy.com/gcd_start):")
 tokenFile := GetPassword(preference, "gcd_token", prompt, true, resetPassword)
-gcdStorage, err := CreateGCDStorage(tokenFile, storagePath, threads)
+gcdStorage, err := CreateGCDStorage(tokenFile, driveID, storagePath, threads)
 if err != nil {
 LOG_ERROR("STORAGE_CREATE", "Failed to load the Google Drive storage at %s: %v", storageURL, err)
 return nil
 }
 SavePassword(preference, "gcd_token", tokenFile)
 return gcdStorage
-} else if matched[1] == "one" {
+} else if matched[1] == "one" || matched[1] == "odb" {
 storagePath := matched[3] + matched[4]
 prompt := fmt.Sprintf("Enter the path of the OneDrive token file (downloadable from https://duplicacy.com/one_start):")
-tokenFile := GetPassword(preference, "one_token", prompt, true, resetPassword)
-oneDriveStorage, err := CreateOneDriveStorage(tokenFile, storagePath, threads)
+tokenFile := GetPassword(preference, matched[1] + "_token", prompt, true, resetPassword)
+oneDriveStorage, err := CreateOneDriveStorage(tokenFile, matched[1] == "odb", storagePath, threads)
 if err != nil {
 LOG_ERROR("STORAGE_CREATE", "Failed to load the OneDrive storage at %s: %v", storageURL, err)
 return nil
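The gcd:// parsing above accepts a shared drive by id or name in the form gcd://drive@/path, including gcd://drive@/ for the drive root. A standalone sketch of pulling those two pieces out of a URL; the regular expression here is a simplified, hypothetical stand-in for Duplicacy's storage URL parser:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func parseGCD(storageURL string) (driveID string, storagePath string) {
	// hypothetical, simplified: an optional "drive@" prefix followed by the storage path
	re := regexp.MustCompile(`^gcd://(?:([^/@]+@))?(.*)$`)
	matched := re.FindStringSubmatch(storageURL)
	if matched == nil {
		return "", ""
	}
	driveID = strings.TrimSuffix(matched[1], "@")
	storagePath = matched[2]
	return
}

func main() {
	fmt.Println(parseGCD("gcd://backups/duplicacy"))        // user drive: "" "backups/duplicacy"
	fmt.Println(parseGCD("gcd://My Shared Drive@/backups")) // shared drive by name
	fmt.Println(parseGCD("gcd://0ABCdefGHIjkLUk9PVA@/"))    // shared drive by id, root path
}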

View File

@@ -109,7 +109,7 @@ func loadStorage(localStoragePath string, threads int) (Storage, error) {
 storage.SetDefaultNestingLevels([]int{2, 3}, 2)
 return storage, err
 } else if testStorageName == "b2" {
-storage, err := CreateB2Storage(config["account"], config["key"], config["bucket"], config["directory"], threads)
+storage, err := CreateB2Storage(config["account"], config["key"], "", config["bucket"], config["directory"], threads)
 storage.SetDefaultNestingLevels([]int{2, 3}, 2)
 return storage, err
 } else if testStorageName == "gcs-s3" {
@@ -133,11 +133,23 @@ func loadStorage(localStoragePath string, threads int) (Storage, error) {
 storage.SetDefaultNestingLevels([]int{2, 3}, 2)
 return storage, err
 } else if testStorageName == "gcd" {
-storage, err := CreateGCDStorage(config["token_file"], config["storage_path"], threads)
+storage, err := CreateGCDStorage(config["token_file"], "", config["storage_path"], threads)
+storage.SetDefaultNestingLevels([]int{2, 3}, 2)
+return storage, err
+} else if testStorageName == "gcd-shared" {
+storage, err := CreateGCDStorage(config["token_file"], config["drive"], config["storage_path"], threads)
 storage.SetDefaultNestingLevels([]int{2, 3}, 2)
 return storage, err
 } else if testStorageName == "one" {
-storage, err := CreateOneDriveStorage(config["token_file"], config["storage_path"], threads)
+storage, err := CreateOneDriveStorage(config["token_file"], false, config["storage_path"], threads)
+storage.SetDefaultNestingLevels([]int{2, 3}, 2)
+return storage, err
+} else if testStorageName == "odb" {
+storage, err := CreateOneDriveStorage(config["token_file"], true, config["storage_path"], threads)
+storage.SetDefaultNestingLevels([]int{2, 3}, 2)
+return storage, err
+} else if testStorageName == "one" {
+storage, err := CreateOneDriveStorage(config["token_file"], false, config["storage_path"], threads)
 storage.SetDefaultNestingLevels([]int{2, 3}, 2)
 return storage, err
 } else if testStorageName == "hubic" {